---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
license_link: LICENSE
base_model:
- ibm-granite/granite-3.1-2b-instruct
quantized_by: bartowski
tags:
- llamafile
- language
- granite-3.2
---

# Granite 3.2 2B Instruct - llamafile

- Model creator: [IBM](https://huggingface.co/ibm-granite)
- Original model: [ibm-granite/granite-3.2-2b-instruct](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct)

Mozilla packaged the IBM Granite 3.2 models into executable weights that we
call [llamafiles](https://github.com/Mozilla-Ocho/llamafile). This gives
you the easiest and fastest way to use the model on Linux, macOS, Windows,
FreeBSD, OpenBSD, and NetBSD systems you control, on both AMD64 and ARM64.

*Software Last Updated: 2025-03-31*

*Llamafile Version: 0.9.2*

## Quickstart

To get started, you need both the Granite 3.2 weights and the llamafile
software. Both are included in a single file, which can be downloaded
and run as follows:

```
wget https://huggingface.co/Mozilla/granite-3.2-2b-instruct-llamafile/resolve/main/granite-3.2-2b-instruct-Q6_K.llamafile
chmod +x granite-3.2-2b-instruct-Q6_K.llamafile
./granite-3.2-2b-instruct-Q6_K.llamafile
```

The default mode of operation for these llamafiles is our new command-line
chatbot interface.

## Usage

You can use triple quotes to ask questions on multiple lines. You can
pass commands like `/stats` and `/context` to see runtime status
information. You can change the system prompt by passing the `-p "new
system prompt"` flag. You can press CTRL-C to interrupt the model.
Finally, CTRL-D may be used to exit.

If you prefer a web GUI, a `--server` mode is provided that will open
a tab with a chatbot and completion interface in your browser. For
additional help on how it may be used, pass the `--help` flag. The
server also has an OpenAI-API-compatible completions endpoint that can
be accessed via Python using the `openai` pip package.

```
./granite-3.2-2b-instruct-Q6_K.llamafile --server
```
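
As a sketch of how that endpoint might be called with only the Python standard library (the local address `http://localhost:8080/v1` and the model name below are assumptions for a default local run of `--server`, not values taken from this card):

```python
import json
import urllib.request

# Build an OpenAI-style chat-completion request for the local llamafile
# server. The URL, port, and model name are assumptions for a default
# local `--server` run.
payload = {
    "model": "granite-3.2-2b-instruct",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
request = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

if __name__ == "__main__":
    # Requires the llamafile server to be running locally.
    with urllib.request.urlopen(request) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

The `openai` pip package can be pointed at the same base URL if you prefer its client interface.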

An advanced CLI mode is provided that's useful for shell scripting. You
can use it by passing the `--cli` flag. For additional help on how it
may be used, pass the `--help` flag.

```
./granite-3.2-2b-instruct-Q6_K.llamafile --cli -p 'four score and seven' --log-disable
```

## Troubleshooting

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.

On Linux, the way to avoid run-detector errors is to install the APE
interpreter.

```sh
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
```

On Windows, there's a 4GB limit on executable sizes.

## Context Window

This model has a max context window size of 128k tokens. By default, a
context window size of 8192 tokens is used. You can ask llamafile
to use the maximum context size by passing the `-c 0` flag. That's big
enough for a small book. If you want to be able to have a conversation
with your book, you can use the `-f book.txt` flag.
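
For example, the two flags can be combined in one invocation (a sketch; `book.txt` is a placeholder filename):

```shell
# Sketch: use the full 128k context window (-c 0) and load a document
# to chat about. "book.txt" is a placeholder for your own file.
./granite-3.2-2b-instruct-Q6_K.llamafile -c 0 -f book.txt
```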

## GPU Acceleration

On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, if you own an NVIDIA
GPU, only the graphics card driver needs to be installed. On Windows, if
you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass
the flags `--recompile --gpu amd` the first time you run your llamafile.

On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to
perform matrix multiplications. This is open-source software, but it
doesn't go as fast as closed-source cuBLAS. If you have the CUDA SDK
installed on your system, then you can pass the `--recompile` flag to
build a GGML CUDA library just for your system that uses cuBLAS. This
ensures you get maximum performance.
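
Put together, a first run on an NVIDIA system with the CUDA SDK installed might look like this sketch:

```shell
# Sketch: offload all layers to the GPU and, on the first run, rebuild
# the GGML CUDA module against cuBLAS (requires the CUDA SDK).
./granite-3.2-2b-instruct-Q6_K.llamafile -ngl 999 --recompile
```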

For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

## About llamafile

llamafile is a new format introduced by Mozilla on November 20th, 2023. It
uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.

---

# Granite-3.2-2B-Instruct

**Model Summary:**
Granite-3.2-2B-Instruct is a 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of [Granite-3.1-2B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-2b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.

- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: February 26th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.

**Capabilities**
* **Thinking**
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code-related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.

**Generation:**
This is a simple example of how to use the Granite-3.2-2B-Instruct model.

Install the following libraries:

```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```

Then, copy the snippet from the section that is relevant for your use case.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch

model_path = "ibm-granite/granite-3.2-2b-instruct"
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_path
)

conv = [{"role": "user", "content": "You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]

input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)

set_seed(42)
output = model.generate(
    **input_ids,
    max_new_tokens=8192,
)

prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```

**Example Outputs**
- thinking=True
```md
Here is my thought process:

1. First, let's understand the composition of the initial 10 liters of 30% acid solution:
   - Acid content = 30% of 10 liters = 0.30 * 10 = 3 liters

2. We want to end up with a 50% acid solution. Let's denote the volume of the 70% acid solution to be added as 'x' liters.

3. The amount of acid in the final mixture (50% solution) should equal the initial acid content plus the acid from the added 70% solution.
   - Total acid in final mixture = 0.50 * (10 + x)

4. The equation representing the total acid content is:
   3 (from initial solution) + 0.70 * x (from added solution) = 0.50 * (10 + x)

5. Now, we solve for 'x':
   3 + 0.70x = 5 + 0.50x
   0.70x - 0.50x = 5 - 3
   0.20x = 2
   x = 2 / 0.20
   x = 10 liters

6. Therefore, 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution to achieve a 50% acid mixture.

Here is my response:

To solve this problem, we need to determine how much of a 70% acid solution must be added to a 10-liter solution that is already 30% acid to create a new mixture that is 50% acid. Here's how to calculate it:

1. **Initial Acid Content**:
   - The initial 10-liter solution contains 3 liters of acid (since 30% of 10 liters).

2. **Desired Final Acid Content**:
   - We want the final mixture to be 50% acid.

3. **Equation Setup**:
   - Let 'x' be the volume (in liters) of the 70% acid solution to be added.
   - The equation representing the total acid content in the final mixture (which is 10 liters from the initial solution plus 'x' liters of the added solution) should equal 50% of the total volume (10 + x).
   - This gives us: 3 (acid from initial solution) + 0.70x (acid from added solution) = 0.50(10 + x)

4. **Solving the Equation**:
   - First, simplify the right side of the equation: 0.50(10 + x) = 5 + 0.50x
   - Now, the equation is: 3 + 0.70x = 5 + 0.50x
   - Subtract 0.50x from both sides: 0.20x = 2
   - Divide both sides by 0.20: x = 2 / 0.20
   - x = 10 liters

5. **Conclusion**:
   - To achieve a 50% acid mixture from the initial 30% solution, you need to add 10 liters of a 70% acid solution.

So, the final answer is that 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution.
```
- thinking=False
```md
To solve this problem, we'll use the concept of volume and concentration. Let's denote the amount of 70% acid solution we need to add as "x" liters.

First, let's find out how much acid is in the initial 10-liter solution:

Acid in initial solution = 30% of 10 liters = 0.30 * 10 = 3 liters

Now, we want to end up with a 50% acid solution in a total volume of (10 + x) liters. Let's denote the final volume as V.

Final acid concentration = 50%
Final acid amount = 50% of V = 0.50 * V

We know the initial acid amount and the final acid amount, so we can set up an equation:

Initial acid amount + Acid from added solution = Final acid amount
3 liters + (70% of x) = 0.50 * (10 + x)

Now, let's solve for x:

0.70x + 3 = 0.50 * 10 + 0.50x
0.70x - 0.50x = 0.50 * 10 - 3
0.20x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters

So, you need to add 10 liters of a 70% acid solution to the initial 10-liter 30% acid solution to achieve a 50% acid mixture.
```
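
The arithmetic in both sample answers can be verified directly (a quick check, not part of the model outputs):

```python
# Verify the worked example: adding x liters of 70% acid to 10 liters
# of 30% acid should yield a 50% mixture.
initial_volume = 10.0
initial_acid = 0.30 * initial_volume  # 3 liters of acid

# Solve 3 + 0.70*x = 0.50 * (10 + x)  =>  0.20*x = 2  =>  x = 10
x = (0.50 * initial_volume - initial_acid) / (0.70 - 0.50)

final_concentration = (initial_acid + 0.70 * x) / (initial_volume + x)
print(x, final_concentration)
```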

**Evaluation Results:**
<table>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP</th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.32</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">62.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.76</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.7</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.84</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">68.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.63</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.20</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">23.3</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.11</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.68</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.45</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">75.26</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.59</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.7</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">55.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.19</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.92</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">64.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.95</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.65</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.31</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.42</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.2-2B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">24.86</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">34.51</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.56</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.8</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.27</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.02</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.13</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.39</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.23</td>
</tr>
</tbody></table>

**Training Data:**
Overall, our training data is largely composed of two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted at enhancing reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->

**Infrastructure:**
We train Granite-3.2-2B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

**Ethical Considerations and Limitations:**
Granite-3.2-2B-Instruct builds upon Granite-3.1-2B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-2B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-2b-instruct) remain relevant.

**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources

<!-- ## Citation
```
@misc{granite-models,
  author = {author 1, author2, ...},
  title = {},
  journal = {},
  volume = {},
  year = {2024},
  url = {https://arxiv.org/abs/0000.00000},
}
``` -->