Muennighoff committed
Commit fabec91 · verified · 1 Parent(s): f8b01ef

Sort table

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -19,16 +19,17 @@ The following data mix was used to train OLMoE-1B-7B, a Mixture-of-Experts LLM w
  The base version of OLMoE-1B-7B can be found at [this page](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824), the SFT of OLMoE-1B-7B is available [here](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-SFT), and a version combining SFT and DPO is available following [this link](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-Instruct).
 
  ## Statistics
- | Subset | Docs | Bytes | Words | Tokens |
+
+ | Subset | Tokens | Words | Bytes | Docs |
  |--------------------------------------------------------------|:----------:|:----------:|:----------:|:----------:|
- | [DCLM Baseline 1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) | 2.95 B | 16.7 T | 3.38 T | 3.86 T |
- | [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 78.7 M | 325 B | 63.9 B | 101 B |
- | [peS2o](https://huggingface.co/datasets/allenai/peS2o)<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 38.8 M | 268 B | 51.3 B | 57.2 B |
- | Algebraic Stack<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 2.83 M | 39.3 B | 9.6 B | 12.6 B |
- | Arxiv<br>([RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) <br>via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 1.55 M | 88.8 B | 23.5 B | 21.1 B |
- | OpenWebMath<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 2.91 M | 42.4 B | 10.2 B | 12.7 B |
- | En Wikipedia + <br>Wikibooks<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 6.17 M | 16.2 B | 3.16 B | 3.69 B |
- | **Total** | **3.08 B** | **17.4 T** | **3.53 T** | **4.07 T** |
+ | [DCLM Baseline 1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) | 3.86 T | 3.38 T | 16.7 T | 2.95 B |
+ | [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 101 B | 63.9 B | 325 B | 78.7 M |
+ | [peS2o](https://huggingface.co/datasets/allenai/peS2o)<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 57.2 B | 51.3 B | 268 B | 38.8 M |
+ | Arxiv<br>([RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) <br>via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 21.1 B | 23.5 B | 88.8 B | 1.55 M |
+ | OpenWebMath<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 12.7 B | 10.2 B | 42.4 B | 2.91 M |
+ | Algebraic Stack<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 12.6 B | 9.6 B | 39.3 B | 2.83 M |
+ | En Wikipedia + <br>Wikibooks<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 3.69 B | 3.16 B | 16.2 B | 6.17 M |
+ | **Total** | **4.07 T** | **3.53 T** | **17.4 T** | **3.08 B** |
 
  ## Preprocessing
 
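
The card above links three OLMoE-1B-7B checkpoints (base, SFT, and Instruct). As a minimal sketch, assuming a `transformers` release that includes OLMoE support and using the base-model repository ID exactly as linked in the card (it may differ from the canonical repository), loading and sampling from the base checkpoint might look like this:

```python
# Minimal sketch: load the base OLMoE-1B-7B checkpoint linked in the card.
# Assumes a recent `transformers` release with OLMoE support; the repo ID
# below is copied from the card's link and is not guaranteed to be canonical.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OLMoE/OLMoE-1B-7B-0824"  # base model link from the card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Quick generation check on the loaded checkpoint.
inputs = tokenizer("The OLMoE pretraining mix contains", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The SFT and Instruct variants linked in the card can be loaded the same way by swapping in their respective repository IDs.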