---
task_categories:
- text-generation
language:
- en
size_categories:
- 1B
---

The following data mix was used to train OLMoE-1B-7B, a Mixture-of-Experts LLM with 1B active and 7B total parameters, released in August 2024. The base version of OLMoE-1B-7B can be found at [this page](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824), the SFT version of OLMoE-1B-7B is available [here](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-SFT), and a version combining SFT and DPO is available following [this link](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-Instruct).

## Statistics

| Subset | Docs | Bytes | Words | Tokens |
|--------|:----:|:-----:|:-----:|:------:|
| [DCLM Baseline 1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) | 2.95 B | 16.7 T | 3.38 T | 3.86 T |
| [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 78.7 M | 325 B | 63.9 B | 101 B |
| [peS2o](https://huggingface.co/datasets/allenai/peS2o) ([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 38.8 M | 268 B | 51.3 B | 57.2 B |
| Algebraic Stack ([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 2.83 M | 39.3 B | 9.6 B | 12.6 B |
| Arxiv ([RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 1.55 M | 88.8 B | 23.5 B | 21.1 B |
| OpenWebMath ([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 2.91 M | 42.4 B | 10.2 B | 12.7 B |
| En Wikipedia + Wikibooks ([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 6.17 M | 16.2 B | 3.16 B | 3.69 B |
| **Total** | **3.08 B** | **17.4 T** | **3.53 T** | **4.07 T** |

## Preprocessing

All subsets were preprocessed to remove documents containing a *sequence* of 32 or more repeated *n-grams*, where:

- an *n-gram* is a span of 1 to 13 tokens, inclusive;
- *tokens* are obtained using the model tokenizer;
- a *sequence* is a contiguous span of repeated n-grams.

In addition to the above, the Starcoder subset was further filtered by removing any document matching any of the following rules:

- the document is from a repository with fewer than 2 stars on GitHub;
- the most frequent word in the document accounts for over 30% of the document;
- the two most frequent words in the document account for over 50% of the document.

## Licensing Information

This mix is licensed under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by the licenses and Terms of Service of the underlying datasets, which you can access by clicking on the links in the table above.
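As an illustration of the preprocessing rules described above, here is a minimal Python sketch of the two filters. It is not the production pipeline: the card does not specify the exact algorithm, so the function names, the interpretation of a "sequence of repeated n-grams" as back-to-back copies of the same n-gram, and the whitespace-based word splitting are all assumptions of this sketch.

```python
from collections import Counter


def has_repeated_ngram_run(tokens, max_n=13, min_repeats=32):
    """Detect a contiguous run of the same n-gram (1 <= n <= max_n)
    repeated at least `min_repeats` times back-to-back.

    Quadratic reference implementation for illustration only; a real
    pipeline would use a more efficient scan.
    """
    for n in range(1, max_n + 1):
        for start in range(len(tokens) - n + 1):
            gram = tokens[start:start + n]
            run = 1
            pos = start + n
            while tokens[pos:pos + n] == gram:
                run += 1
                if run >= min_repeats:
                    return True
                pos += n
    return False


def keep_starcoder_doc(text, repo_stars):
    """Return False if a Starcoder document trips any of the rules:
    fewer than 2 GitHub stars, most frequent word over 30% of the
    document, or two most frequent words over 50% of the document.

    Splitting on whitespace is an assumption of this sketch.
    """
    if repo_stars < 2:
        return False
    words = text.split()
    if not words:
        return True
    top_two = Counter(words).most_common(2)
    top1 = top_two[0][1] / len(words)
    top2 = sum(count for _, count in top_two) / len(words)
    return not (top1 > 0.30 or top2 > 0.50)
```

For example, `has_repeated_ngram_run(["a"] * 32)` fires (a unigram repeated 32 times), while `keep_starcoder_doc("a a a b", 5)` rejects the document because the top word accounts for 75% of it.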