---
license: apache-2.0
datasets:
- allenai/dolmino-mix-1124
- allenai/olmo-mix-1124
- bigcode/starcoderdata
- EleutherAI/proof-pile-2
- hltcoe/megawika
- mlfoundations/dclm-baseline-1.0
- HuggingFaceTB/finemath
- marin-community/ar5iv-noproblem-markdown
- marin-community/ar5iv-warning-markdown
- marin-community/datashop-science-qa
- marin-community/stackexchange-markdown
- marin-community/wikipedia-markdown
language:
- en
tags:
- text-generation
---

<img alt="Marin Logo" src="https://huggingface.co/datasets/marin-community/blog-images/resolve/main/marin-boat.jpg" width="96" style="margin-left:auto; margin-right:auto; display:block">

# Model Card for Marin 8B

This is the model card for the Marin 8B Base model. [The Marin Project](https://marin.community) is a collaborative effort to develop open-source foundation models.

## Datasets

### Datasets used in Marin 8B Base

Marin 8B Base was trained on a variety of datasets:

- [Nemotron-CC](https://data.commoncrawl.org/contrib/Nemotron/Nemotron-CC/index.html)
- [DCLM Baseline](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0)
- [Starcoder Data](https://huggingface.co/datasets/bigcode/starcoderdata)
- [Proofpile 2](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
- [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) 3+
- [Dolma](https://huggingface.co/datasets/allenai/dolma), including their versions of:
  - [MegaWika](https://huggingface.co/datasets/hltcoe/megawika)
  - [peS2o](https://huggingface.co/datasets/allenai/peS2o)
  - (And most of the rest of it)
- [Dolmino-Mix-1124](https://huggingface.co/datasets/allenai/dolmino-mix-1124), including their versions of:
  - [FLAN](https://arxiv.org/abs/2109.01652)
  - [CodeSearchNet](https://arxiv.org/abs/1909.09436) (with OWM Filter)
  - [GSM8K](https://arxiv.org/pdf/2110.14168v1)
  - [MetaMath](https://arxiv.org/abs/2309.12284)
  - [MathCoder2 Synthetic](https://arxiv.org/abs/2310.03731)

And some new datasets:

- [Marin Markdownified StackExchange](https://huggingface.co/datasets/marin-community/stackexchange-markdown)
- [Marin Markdownified Wikipedia](https://huggingface.co/datasets/marin-community/wikipedia-markdown)
- [Marin Markdownified Ar5iv (No Problem)](https://huggingface.co/datasets/marin-community/ar5iv-noproblem-markdown)
- [Marin Markdownified Ar5iv (Warnings)](https://huggingface.co/datasets/marin-community/ar5iv-warning-markdown)
- [Marin Datashop Science QA](https://huggingface.co/datasets/marin-community/datashop-science-qa)

The first three are licensed per their original licenses. The fourth is licensed under CC-BY-SA 4.0.

### Datasets used in Marin 8B Instruct

Marin 8B Instruct is currently an SFT-only model. It was trained on the following datasets:

- [TIGER-Lab/AceCode-89K](https://huggingface.co/datasets/TIGER-Lab/AceCode-89K)
- [bespokelabs/Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)
- [cognitivecomputations/dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1) (includes both nonreasoning and reasoning subsets)
- [tuenguyen/dolphin_r1_reasoning](https://huggingface.co/datasets/tuenguyen/dolphin_r1_reasoning)
- [facebook/natural_reasoning](https://huggingface.co/datasets/facebook/natural_reasoning)
- [open-r1/OpenThoughts-114k-math](https://huggingface.co/datasets/open-r1/OpenThoughts-114k-math)
- [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk)
- [allenai/tulu-3-sft-mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture)
- [PrimeIntellect/verifiable-math-problems](https://huggingface.co/datasets/PrimeIntellect/verifiable-math-problems)

It is quite likely that we will release improved versions of this model in the future.

## Checkpoints

We release a large number of checkpoints.

### Base Model Checkpoints

Main Page: [marin-community/marin-8b-base](https://huggingface.co/marin-community/marin-8b-base)

(More checkpoints are being uploaded right now.)

| Name | Training Tokens | Link |
|------|-----------------|------|
| `deeper-starling` | 13.7T | [marin-community/marin-8b-base](https://huggingface.co/marin-community/marin-8b-base/tree/deeper-starling) |

`main` currently refers to `deeper-starling`. This may change in the future, though we will maintain model compatibility. If you require a specific checkpoint, please use the `revision` argument.

### Instruct Model Checkpoints

Main Page: [marin-community/marin-8b-instruct](https://huggingface.co/marin-community/marin-8b-instruct)

| Name | Training Tokens | Link |
|------|-----------------|------|
| `deeper-starling-05-15` | 5.3B | [marin-community/marin-8b-instruct](https://huggingface.co/marin-community/marin-8b-instruct/) |

`main` currently refers to `deeper-starling-05-15`. This may change in the future, though we will maintain model compatibility. If you require a specific checkpoint, please use the `revision` argument.

## Installation

Marin 8B uses the [Llama architecture](https://arxiv.org/abs/2302.13971) and as such should
work out-of-the-box with the [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) library
and any other library that supports the Llama architecture.

We use a variant of the Llama 3 tokenizer: [stanford-crfm/marin-tokenizer](https://huggingface.co/stanford-crfm/marin-tokenizer/).

## Inference

You can use Marin with the standard Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and its tokenizer.
marin = AutoModelForCausalLM.from_pretrained("marin-community/marin-8b-base")
tokenizer = AutoTokenizer.from_pretrained("marin-community/marin-8b-base")

# Tokenize a prompt and sample a continuation.
message = ["The Marin wind is"]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
response = marin.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```

We have released a number of checkpoints of this model. To load a specific checkpoint, simply add the `revision` argument:

```python
marin = AutoModelForCausalLM.from_pretrained("marin-community/marin-8b-base", revision="deeper-starling")
```

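Because Marin 8B uses the Llama architecture, it should also load in other Llama-compatible inference engines. As an untested, illustrative sketch, vLLM is one such library (vLLM compatibility is an assumption here, not something we have specifically validated):

```python
# Hypothetical sketch: serving Marin 8B Base with vLLM.
# Assumes vLLM's Llama support picks up this checkpoint; not an officially tested path.
from vllm import LLM, SamplingParams

llm = LLM(model="marin-community/marin-8b-base")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=100)
outputs = llm.generate(["The Marin wind is"], params)
print(outputs[0].outputs[0].text)
```
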
### Model Description

- **Developed by:** The Marin team at Stanford CRFM.
- **Model type:** a Transformer-style autoregressive language model.
- **Knowledge Cutoff:** ~July 2024
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** `dlwh at stanford.edu`

### Model Sources

- **Project Page:** https://marin.community
- **Repositories:**
  - Core repo (data and experiment management): https://github.com/marin-community/marin
  - Training code: https://github.com/stanford-crfm/levanter
- **Retrospective:** https://marin.readthedocs.io/en/latest/reports/marin-8b-retro.html
- **W&B Logs:** [Marin 8B](https://wandb.ai/stanford-mercury/marin/reports/Tootsie-8B---VmlldzoxMTY3MzU3OA)

## Evaluation

### Base Model Results

We ran a suite of standard benchmarks to compare our model with [Llama 3.1 8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) and the open-source 7-8B models [OLMo 2 7B](https://huggingface.co/allenai/OLMo-2-1124-7B) and [MAP NEO 7B](https://huggingface.co/m-a-p/neo_7b).
For all benchmarks, we used [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) with the default setup for each task. (These numbers may differ from reported results due to differences in setup; LM Eval Harness is usually somewhat stricter than other harnesses.)

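For reference, a run along the following lines approximates this setup. This is a sketch assuming lm-evaluation-harness v0.4+ and illustrative task names, not the exact command we used:

```python
# Sketch only: evaluating the base model with lm-evaluation-harness (v0.4+ assumed).
# Task names are illustrative; we used the harness defaults for each task.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=marin-community/marin-8b-base",
    tasks=["arc_easy", "arc_challenge", "hellaswag"],
    batch_size=8,
)
print(results["results"])
```
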
| | Average | AGI Eval LSAT-AR | ARC Easy | ARC Challenge | BBH | BoolQ | CommonSense QA | COPA | GPQA | HellaSwag 0-shot | HellaSwag 10-shot | lambada_openai | MMLU 5-shot | MMLU 0-shot | MMLU Pro | OpenBookQA | PIQA | WinoGrande | WSC | GSM8K |
|--------------------------|----------|------------------|----------|---------------|----------|----------|----------------|----------|----------|------------------|-------------------|----------------|-------------|-------------|----------|-----------|----------|------------|----------|----------|
| Marin 8B Base (Starling) | **66.6** | 20.9 | **86.5** | **63.1** | **50.6** | **85.9** | 79.1 | **92.0** | 30.3 | **82.3** | **83.6** | **74.7** | **67.6** | **65.9** | **36.5** | 44.2 | **84.4** | **74.5** | 82.1 | 61.3 |
| Llama 3.1 Base | 65.3 | 20.4 | 85.8 | 58.9 | 46.4 | 84.2 | 75.2 | **92.0** | **32.3** | 79.4 | 81.9 | **74.7** | 66.4 | 65.5 | 33.3 | 45.8 | 82.9 | 74.4 | 83.5 | 56.8 |
| OLMo 2 Base | 64.9 | 17.4 | 85.0 | 60.7 | 44.4 | 85.5 | 75.4 | 89.0 | 26.8 | 80.5 | 81.7 | 73.1 | 63.9 | 61.9 | 30.6 | **46.2** | 82.5 | 74.3 | **86.1** | **67.6** |
| MAP NEO 7B | 59.5 | **23.0** | 81.1 | 52.0 | 42.4 | 84.7 | **81.7** | 82.0 | 27.8 | 72.5 | 73.3 | 64.6 | 58.2 | 56.4 | 25.2 | 39.4 | 79.0 | 66.1 | 73.3 | 48.0 |

Marin 8B Base fares well on most tasks. (Bold marks the best score in each column.)

## Model Details

Please see [our technical retrospective](https://marin.readthedocs.io/en/latest/reports/marin-8b-retro.html) for more details on the pretraining process.

### Architecture Details

- **Architecture:** Llama 3 8B
- **Hidden size:** 4096
- **Feedforward size:** 14336
- **Number of layers:** 32
- **Number of attention heads:** 32
- **Number of KV heads:** 8

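For illustration, these hyperparameters correspond roughly to the following Hugging Face `LlamaConfig`. This is a sketch; fields not listed above (vocabulary size, rotary settings, etc.) come from the released `config.json` rather than this snippet:

```python
# Illustrative mapping of the hyperparameters above onto transformers' LlamaConfig.
# Fields not shown (vocab size, rope settings, ...) come from the released config.json.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=4096,          # Hidden size
    intermediate_size=14336,   # Feedforward size
    num_hidden_layers=32,      # Number of layers
    num_attention_heads=32,    # Number of attention heads
    num_key_value_heads=8,     # Number of KV heads (grouped-query attention)
)
```
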
### Tokenizer Details

Marin 8B uses a variant of the Llama 3 tokenizer: [stanford-crfm/marin-tokenizer](https://huggingface.co/stanford-crfm/marin-tokenizer/). It has the same vocabulary but bundles a chat template into the base tokenizer for convenience.

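For example, a minimal sketch of using the bundled chat template (the message content here is purely illustrative; for chat-style use, prefer the instruct model):

```python
# Minimal sketch: the bundled chat template renders a message list into a prompt string.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stanford-crfm/marin-tokenizer")
messages = [{"role": "user", "content": "Why are the Marin Headlands so windy?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
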
### Training Phases

#### Pre-training Phases

- *Kestrel (DCLM WSD-S Phase)*: DCLM+StarCoder+Proofpile2 using [WSD-S](https://arxiv.org/abs/2410.05192) (0->2.7T tokens)
- *Ocelot (DCLM WSD Phase)*: Increased batch size, using WSD (2.7T->3.78T tokens)
- *Jellyfish (First Cooldown)*: Higher-quality data (~Dolmino+FineMath) (3.78T->4.78T tokens)
- *Phoenix (Reheated)*: Rapid rewarming + [Nemotron-CC](https://arxiv.org/abs/2412.02595) (plus [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata)) (4.78T->11.1T tokens)
- *Starling (Second Cooldown)*: Another cooldown, following a similar process to the first but with a few new datasets (11.1T->12.75T tokens)
- *Deeper Starling*: Somewhat more pretraining (12.75T->13.7T tokens)

All released pre-training checkpoints except Kestrel use an exponential moving average of the model weights.

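Schematically, a weight EMA keeps a smoothed copy of the parameters alongside the raw ones. The sketch below illustrates the idea only; the decay value and the actual implementation live in Levanter, not in this snippet:

```python
# Schematic of an exponential moving average over model weights (illustrative only).
def update_ema(ema_params: dict, params: dict, decay: float = 0.999) -> dict:
    # ema <- decay * ema + (1 - decay) * current weights, applied per tensor.
    # The decay value here is hypothetical, not the one used in training.
    return {name: decay * ema_params[name] + (1 - decay) * p for name, p in params.items()}
```
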
#### SFT Phase

SFT was comparatively simple, consisting of a single phase over 5.3B tokens.

## Bias, Risks, and Limitations

Like any base or fine-tuned language model without safety filtering, these models can easily be prompted to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from Marin, as from any LLM, are often inaccurate, so responses should be verified.

Marin 8B has not undergone any safety tuning or evaluation. We strongly recommend using this model with caution and considering the risks when applying this technology.
In particular, this model is not intended for fully autonomous use.

## Model Card Contact

For errors in this model card, please open an issue in this repository. For technical inquiries, please contact `dlwh at stanford.edu`.

## Acknowledgements

The compute for this model was generously provided by Google's [TPU Research Cloud](https://sites.research.google/trc/about/).

(We based this model card on OLMo 2's.)