---
license: cc-by-4.0
---

<div align="center">
<h1 style="text-align: center; color: green;">Accepted at ACL Main 2025</h1>
</div>

<div align="center">
<table>
<tr>
<td>
<a href="https://arxiv.org/pdf/2503.10995">
<img src="https://img.shields.io/badge/arXiv-Read_Paper-blue?style=for-the-badge&logo=arxiv" alt="Read Paper"/>
</a>
</td>
<td>
<a href="https://huggingface.co/md-nishat-008/TigerLLM-1B-it">
<img src="https://img.shields.io/badge/HuggingFace-TigerLLM--1B--it-orange?style=for-the-badge&logo=huggingface" alt="TigerLLM-1B-it"/>
</a>
</td>
<td>
<a href="mailto:[email protected]">
<img src="https://img.shields.io/badge/Email-Contact_Us-blue?style=for-the-badge&logo=gmail" alt="Contact Us"/>
</a>
</td>
</tr>
</table>
</div>

<div align="center">

<h1 style="text-align: center; color: green;">TigerLLM - A Family of Bangla Large Language Models</h1>

<h3 style="text-align: center; color: green;">Nishat Raihan, Marcos Zampieri</h3>
<h4 style="text-align: center; color: green;">George Mason University, VA, USA</h4>
<p style="text-align: center; color: red;">[email protected]</p>

</div>

---
If you find our work helpful, please consider citing our paper:

```bibtex
@inproceedings{raihan-zampieri-2025-tigerllm,
  title     = "{T}iger{LLM} - A Family of {B}angla Large Language Models",
  author    = "Raihan, Nishat and Zampieri, Marcos",
  editor    = "Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher",
  booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
  month     = jul,
  year      = "2025",
  address   = "Vienna, Austria",
  publisher = "Association for Computational Linguistics",
  url       = "https://aclanthology.org/2025.acl-short.69/",
  doi       = "10.18653/v1/2025.acl-short.69",
  pages     = "887--896",
  ISBN      = "979-8-89176-252-7"
}
```
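
**Loading the model.** A minimal usage sketch, assuming the TigerLLM-1B-it checkpoint linked above follows the standard Hugging Face `transformers` causal-LM interface; the exact prompt/chat template may differ from what is shown here.

```python
# Hedged example (not from the paper): load TigerLLM-1B-it and generate a short Bangla completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "md-nishat-008/TigerLLM-1B-it"  # checkpoint linked in the badge above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 matches the training precision reported in Appendix B
    device_map="auto",
)

prompt = "বাংলাদেশের জাতীয় ফুল কী?"  # "What is the national flower of Bangladesh?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```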

<hr>

<h2 style="text-align: center; color: green;">Abstract</h2>
<p>
The development of Large Language Models (LLMs) remains heavily skewed towards English and a few other high-resource languages. This linguistic disparity is particularly evident for Bangla – the 5th most spoken language. A few initiatives have attempted to create open-source Bangla LLMs, but their performance still lags behind high-resource languages and they offer limited reproducibility. To address this gap, we introduce <span style="color: red;">TigerLLM</span> – a family of Bangla LLMs. Our results demonstrate that these models surpass all open-source alternatives and also outperform larger proprietary models like GPT3.5 across standard benchmarks, establishing TigerLLM as the new baseline for future Bangla language modeling.
</p>

<hr>

<h2 style="text-align: center; color: green;">1. Introduction</h2>
<p>
LLMs have fundamentally transformed NLP by achieving exceptional performance across a wide range of tasks. However, their advancements have predominantly benefited high-resource languages. Despite having about 237 million native speakers, Bangla remains underserved in modern NLP due to the lack of high-quality training data and reproducible methodologies.
</p>

<h3 style="text-align: center; color: green;">1.1 Limitations of Bangla LLM Initiatives</h3>
<p>
Recent efforts (e.g., titu-Gemma, titu-LLaMA, Bangla-LLaMA, G2B) suffer from low reproducibility, suboptimal performance, and poor documentation. Many rely on translated synthetic datasets, leading to compromised instruction quality.
</p>

<table>
<thead>
<tr>
<th style="color: green; text-align: center;">Base-LLM</th>
<th style="color: green; text-align: center;">Size</th>
<th style="color: green; text-align: center;">Pretraining<br>(pt)</th>
<th style="color: green; text-align: center;">Corpora</th>
<th style="color: green; text-align: center;">Finetuning<br>(ft)</th>
<th style="color: green; text-align: center;">Finetune Dataset</th>
<th style="color: green; text-align: center;">Paper/Report?</th>
<th style="color: green; text-align: center;">Reproducibility?</th>
</tr>
</thead>
<tbody>
<tr>
<td>titu-Gemma (Gemma-2)</td>
<td>2B</td>
<td>4.4B</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
</tr>
<tr>
<td>titu-LLaMA (LLaMA-3.1)</td>
<td>3B</td>
<td>37B</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
</tr>
<tr>
<td>Bangla-LLaMA (LLaMA-3.2)</td>
<td>3B</td>
<td>&#10003;</td>
<td>&#10005;</td>
<td>172K<br>(Orca-translated)</td>
<td>&#10003;</td>
<td>&#10005;</td>
<td>&#10005;</td>
</tr>
<tr>
<td>G2B (Gemma-2)</td>
<td>9B</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>145K<br>(Alpaca-translated)</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
</tr>
<tr>
<td>Bangla-LLaMA (LLaMA-2)</td>
<td>13B</td>
<td>&#10003;</td>
<td>&#10005;</td>
<td>145K<br>(Alpaca-translated)</td>
<td>&#10005;</td>
<td>&#10005;</td>
<td>&#10005;</td>
</tr>
<tr>
<td><span style="color:red;">TigerLLM (LLaMA-3.2)</span></td>
<td>1B</td>
<td>10M</td>
<td>Bangla-TextBook</td>
<td>100K</td>
<td>Bangla-Instruct</td>
<td>&#10003;</td>
<td>&#10003;</td>
</tr>
<tr>
<td><span style="color:red;">TigerLLM (Gemma-2)</span></td>
<td>9B</td>
<td>10M</td>
<td>Bangla-TextBook</td>
<td>100K</td>
<td>Bangla-Instruct</td>
<td>&#10003;</td>
<td>&#10003;</td>
</tr>
</tbody>
</table>

<h3 style="text-align: center; color: green;">1.2 Contributions</h3>
<ul>
<li><span style="color: red;">Bangla-TextBook Corpus</span>: A 10M-token corpus of high-quality educational texts.</li>
<li><span style="color: red;">Bangla-Instruct Dataset</span>: 100K native Bangla instruction-response pairs generated via self-instruct and advanced teacher models.</li>
<li><span style="color: red;">TigerLLM Models</span>: A family of models (1B and 9B parameters) that achieve significant performance improvements over existing alternatives.</li>
</ul>

<hr>

<h2 style="text-align: center; color: green;">2. Bangla-TextBook Corpus</h2>
<p>
The <span style="color: red;">Bangla-TextBook</span> corpus is compiled exclusively from open-source educational materials provided by the National Curriculum and Textbook Board of Bangladesh. It aggregates texts from <span style="color: red;">163 textbooks</span> for Grades 6–12, yielding <span style="color: red;">9,897,623 tokens</span> and <span style="color: red;">697,903 sentences</span>, capturing authentic academic language use.
</p>
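
Corpus statistics of this kind can be reproduced with a simple counting pass. The sketch below is illustrative only: the authors' exact tokenizer and sentence splitter are not specified here, so it assumes whitespace tokens, a one-`.txt`-file-per-textbook layout, and the Bangla danda (।) plus `?`/`!` as sentence delimiters.

```python
# Illustrative corpus statistics (not the authors' preprocessing pipeline).
import re
from pathlib import Path

def corpus_stats(corpus_dir: str) -> tuple[int, int]:
    """Count whitespace tokens and roughly-delimited sentences across .txt files."""
    tokens, sentences = 0, 0
    for path in Path(corpus_dir).glob("*.txt"):  # assumed layout: one file per textbook
        text = path.read_text(encoding="utf-8")
        tokens += len(text.split())
        sentences += len([s for s in re.split(r"[।?!]", text) if s.strip()])
    return tokens, sentences

# Example: tokens, sents = corpus_stats("bangla_textbook/")
```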

<hr>

<h2 style="text-align: center; color: green;">3. Bangla-Instruct</h2>
<p>
To overcome the limitations of translated instruction data, the <span style="color: red;">Bangla-Instruct</span> dataset provides <span style="color: red;">100,000 instruction-response pairs</span> written natively in Bangla and generated with a self-instruct framework. Key steps include:
</p>
<ol>
<li><span style="color: red;">Seed Task Generation</span>: 500 tasks curated by 50 volunteers from diverse academic backgrounds.</li>
<li>New instruction generation using GPT-4 and Claude-3.5-Sonnet as teacher models.</li>
<li>Task identification to determine the appropriate response format.</li>
<li>Multi-stage filtering to ensure linguistic quality and cultural sensitivity.</li>
</ol>
<p>
Refer to <span style="color: red;">Figure 1</span> in the paper for the full Bangla-Instruct generation pipeline; a schematic sketch of the loop follows below.
</p>
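
This sketch is illustrative only and is not the released generation code: `call_teacher`, `identify_task_type`, and `passes_filters` are hypothetical placeholders for the teacher-model call (GPT-4 or Claude-3.5-Sonnet) and the multi-stage filter detailed in Appendix A.3.

```python
# Illustrative self-instruct loop for Bangla-Instruct (placeholder helper names).
import random

def call_teacher(prompt: str) -> str:
    """Placeholder: query a teacher LLM (e.g., GPT-4 or Claude-3.5-Sonnet)."""
    raise NotImplementedError

def identify_task_type(instruction: str) -> str:
    """Placeholder: classify the task so the response can be formatted appropriately."""
    return "open-ended"

def passes_filters(instruction: str, response: str) -> bool:
    """Placeholder for the multi-stage filter in Appendix A.3
    (language adherence, cultural sensitivity, content quality, novelty)."""
    return bool(instruction.strip()) and bool(response.strip())

def generate_bangla_instruct(seed_tasks: list[str], target: int = 100_000) -> list[dict]:
    pool, dataset = list(seed_tasks), []
    while len(dataset) < target:
        demos = random.sample(pool, k=min(4, len(pool)))  # sample seed/accepted tasks as demonstrations
        instruction = call_teacher(
            "Write one new, distinct Bangla instruction similar in style to:\n" + "\n".join(demos)
        )
        task_type = identify_task_type(instruction)
        response = call_teacher(f"Answer the following {task_type} Bangla instruction:\n{instruction}")
        if passes_filters(instruction, response):
            dataset.append({"instruction": instruction, "response": response})
            pool.append(instruction)  # accepted instructions seed later rounds
    return dataset
```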

<hr>

<h2 style="text-align: center; color: green;">4. TigerLLM</h2>
<p>
TigerLLM is built by leveraging the strengths of both the Bangla-TextBook corpus and the Bangla-Instruct dataset. The training process involves:
</p>
<ul>
<li><span style="color: red;">Continual Pretraining</span> on the Bangla-TextBook corpus to capture language-specific nuances.</li>
<li><span style="color: red;">Model Distillation</span> via full-parameter finetuning (no LoRA adapters), using Flash Attention for efficient convergence.</li>
</ul>
<p>
For details on the training pipeline, please see <span style="color: red;">Figure 2</span> (overall pipeline), <span style="color: red;">Figure 3</span> (pretraining loss), and <span style="color: red;">Figure 4</span> (finetuning loss) in the paper.
</p>
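
As a rough illustration of the second bullet (a sketch under stated assumptions, not the released training script): full-parameter finetuning keeps every weight trainable, and Flash Attention 2 is enabled at load time. The base checkpoint name below is assumed from Table 1, and the `flash-attn` package is assumed to be installed.

```python
# Hedged sketch: prepare a base model for full finetuning with Flash Attention 2.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",               # base model assumed for TigerLLM (1B), per Table 1
    torch_dtype=torch.bfloat16,               # BF16 precision, as in Appendix B
    attn_implementation="flash_attention_2",  # Flash Attention instead of the default attention kernel
)

# Full finetuning: no PEFT/LoRA adapters are attached, so every parameter is updated.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```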

<hr>

<h2 style="text-align: center; color: green;">5. Evaluation</h2>
<p>
TigerLLM is evaluated on multiple Bangla-specific benchmarks, including:
</p>
<ul>
<li>MMLU-bn</li>
<li>PangBench-bn</li>
<li>BanglaQuaD</li>
<li>mHumanEval-bn</li>
<li>BEnQA</li>
<li>BanglaRQA</li>
</ul>
<p>
The performance comparison is detailed in Table 2 below:
</p>

<table>
<thead>
<tr>
<th style="color: green; text-align: center;">Model</th>
<th style="color: green; text-align: center;">MMLU-bn</th>
<th style="color: green; text-align: center;">PangBench-bn</th>
<th style="color: green; text-align: center;">BanglaQuaD</th>
<th style="color: green; text-align: center;">mHumanEval-bn</th>
<th style="color: green; text-align: center;">BEnQA</th>
<th style="color: green; text-align: center;">BanglaRQA</th>
</tr>
</thead>
<tbody>
<tr>
<td>GPT3.5</td>
<td>0.55</td>
<td>0.55</td>
<td>0.50</td>
<td>0.56</td>
<td>0.50</td>
<td>0.49</td>
</tr>
<tr>
<td>Gemini-Flash1.5</td>
<td>0.66</td>
<td>0.57</td>
<td>0.62</td>
<td>0.58</td>
<td>0.56</td>
<td>0.61</td>
</tr>
<tr>
<td>GPT4o-mini</td>
<td>0.67</td>
<td>0.62</td>
<td>0.65</td>
<td>0.56</td>
<td>0.60</td>
<td>0.60</td>
</tr>
<tr>
<td>LLaMA3.2 (11B)</td>
<td>0.22</td>
<td>0.19</td>
<td>0.21</td>
<td>0.15</td>
<td>0.18</td>
<td>0.20</td>
</tr>
<tr>
<td>Gemma 2 (27B)</td>
<td>0.35</td>
<td>0.51</td>
<td>0.43</td>
<td>0.64</td>
<td>0.50</td>
<td>0.56</td>
</tr>
<tr>
<td>Pangea (7B)</td>
<td>0.18</td>
<td>0.15</td>
<td>0.17</td>
<td>0.10</td>
<td>0.14</td>
<td>0.16</td>
</tr>
<tr>
<td><span style="color:red;">Titu-LLM</span></td>
<td>0.06</td>
<td>0.19</td>
<td>0.08</td>
<td>0.02</td>
<td>0.17</td>
<td>0.21</td>
</tr>
<tr>
<td><span style="color:red;">Bong-LLaMA</span></td>
<td>0.05</td>
<td>0.12</td>
<td>0.08</td>
<td>0.02</td>
<td>0.15</td>
<td>0.13</td>
</tr>
<tr>
<td><span style="color:red;">Bangla-LLaMA</span></td>
<td>0.02</td>
<td>0.08</td>
<td>0.05</td>
<td>0.10</td>
<td>0.11</td>
<td>0.09</td>
</tr>
<tr>
<td><span style="color:red;">Bangla-Gemma</span></td>
<td>0.18</td>
<td>0.15</td>
<td>0.12</td>
<td>0.10</td>
<td>0.22</td>
<td>0.19</td>
</tr>
<tr>
<td><span style="color:red;">TigerLLM (1B)</span></td>
<td>0.61</td>
<td>0.55</td>
<td>0.68</td>
<td>0.61</td>
<td>0.59</td>
<td>0.62</td>
</tr>
<tr>
<td><span style="color:red;">TigerLLM (9B)</span></td>
<td>0.72</td>
<td>0.68</td>
<td>0.70</td>
<td>0.63</td>
<td>0.65</td>
<td>0.68</td>
</tr>
</tbody>
</table>
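
The scores in Table 2 are accuracy-style metrics on each benchmark. As a rough illustration of how a multiple-choice item (e.g., from MMLU-bn or BEnQA) could be scored zero-shot, the snippet below shows one possible prompt format and answer-extraction rule; both are assumptions, not the authors' evaluation harness.

```python
# Illustrative zero-shot multiple-choice scoring (hypothetical helpers, not the paper's harness).
import re

def build_prompt(question: str, options: dict[str, str]) -> str:
    """Format a question plus lettered options and ask for the answer letter."""
    lines = [question] + [f"{k}. {v}" for k, v in options.items()]
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

def extract_choice(generation: str) -> str | None:
    """Pull the first standalone option letter out of the model's generation."""
    match = re.search(r"\b([ABCD])\b", generation)
    return match.group(1) if match else None

def accuracy(predictions: list, gold: list) -> float:
    """Fraction of items where the extracted letter matches the gold answer."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

print(accuracy(["A", "C", "B"], ["A", "B", "B"]))  # 0.666...
```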

<hr>

<h2 style="text-align: center; color: green;">6. Conclusion and Future Work</h2>
<p>
This paper presents <span style="color: red;">TigerLLM</span>, a family of Bangla language models that sets a new baseline by leveraging two high-quality datasets: the Bangla-TextBook corpus and the Bangla-Instruct dataset. Future work will involve qualitative analyses, expanding the corpus, scaling model sizes, and developing more sophisticated evaluation metrics.
</p>

<hr>

<h2 style="text-align: center; color: green;">Limitations</h2>
<p>
While TigerLLM demonstrates impressive performance, limitations remain. The Bangla-TextBook corpus is restricted to Grades 6–12 and may not capture broader linguistic nuances, and the Bangla-Instruct dataset covers a limited subset of instruction types. Additionally, the models are currently limited to 1B and 9B parameters due to computational constraints.
</p>

<hr>

<h2 style="text-align: center; color: green;">Ethical Considerations</h2>
<p>
Our approach emphasizes ethical practices by using open-source educational materials, ensuring cultural sensitivity via volunteer contributions, and applying rigorous filtering methods to avoid harmful biases. Users should implement further safeguards when deploying TigerLLM in sensitive applications.
</p>

<hr>

<h2 style="text-align: center; color: green;">References</h2>
<ul>
<li>Alam, F., Chowdhury, S. A., et al. (2024). LLMs for low resource languages in multilingual settings.</li>
<li>Bai, Y., Jones, A., et al. (2024). Claude 3.5 Sonnet Technical Report.</li>
<li>Bhattacharjee, A., Hasan, T., et al. (2022). BanglaBERT: Language model pretraining and benchmarks for Bangla.</li>
<li>Brown, T., Mann, B., et al. (2023). GPT-4 Technical Report.</li>
<li>Brown, T., Mann, B., et al. (2020). Language models are few-shot learners.</li>
<li>Chowdhery, A., Narang, S., et al. (2022). PaLM: Scaling language modeling with pathways.</li>
<li>Corso, F., Pierri, F., et al. (2024). TikTokenizer research.</li>
<li>Dubey, A., Jauhri, A., et al. (2024). The LLaMA 3 herd of models.</li>
<li>Ekram, S. M. S., Rahman, A. A., et al. (2022). BanglaRQA benchmark.</li>
<li>Gunasekar, S., et al. (2023). Textbooks are all you need.</li>
<li>Hinton, G., Vinyals, O., &amp; Dean, J. (2015). Distilling the knowledge in a neural network.</li>
<li>Hu, E. J., Wallis, P., et al. LoRA: Low-rank adaptation of large language models.</li>
<li>Mitra, A., Del Corro, L., et al. (2023). Orca 2: Teaching small language models how to reason.</li>
<li>Ortiz Suárez, P. J., Romary, L., &amp; Sagot, B. Contextualized word embeddings for mid-resource languages.</li>
<li>Raihan, N., Anastasopoulos, A., &amp; Zampieri, M. (2024). mHumanEval – A multilingual benchmark for code generation.</li>
<li>Rony, M. R. A. H., et al. (2024). BanglaQuaD: A Bangla open-domain question answering dataset.</li>
<li>Shafayat, S., et al. (2024). BEnQA: A benchmark for Bangla question answering and reasoning.</li>
<li>Taori, R., Gulrajani, I., et al. (2023). Alpaca: A replicable instruction-following model.</li>
<li>Team, G., et al. (2024). Gemma 2: Improving open language models at a practical size.</li>
<li>Wang, Y., et al. (2023). Self-instruct: Aligning language models with self-generated instructions.</li>
<li>Wang, Y., et al. (2024). MMLU-Pro: A robust multi-task language understanding benchmark.</li>
<li>Yue, X., et al. (2024). Pangea: A fully open multilingual multimodal LLM for 39 languages.</li>
<li>Zehady, A. K., et al. (2024). BongLLama: Llama for Bangla language.</li>
<li>Zhang, Y., et al. (2023). Llama: Open and efficient foundation language models.</li>
</ul>

<hr>

<h2 style="text-align: center; color: green;">Appendix A: Bangla-Instruct Curation</h2>

<h3 style="text-align: center; color: green;">A.1 Volunteer Information</h3>
<p>
Seed tasks were created by <span style="color: red;">50 volunteers</span> from various Bangladeshi universities:
</p>
<ul>
<li>15 from Computer Science and Engineering</li>
<li>10 from Bengali Literature</li>
<li>10 from Business Administration</li>
<li>8 from Science and Engineering</li>
<li>7 from Social Sciences</li>
</ul>
<p>
Each volunteer contributed 10 diverse instructions, resulting in 500 seed tasks.
</p>

<h3 style="text-align: center; color: green;">A.2 The Seed Dataset</h3>
<p>
The seed dataset covers 10 categories:
</p>
<ol>
<li><span style="color:red;">Cultural Knowledge and Heritage</span></li>
<li><span style="color:red;">Academic Writing</span></li>
<li><span style="color:red;">Mathematical Problem Solving</span></li>
<li><span style="color:red;">Programming and Technical</span></li>
<li><span style="color:red;">Creative Writing</span></li>
<li><span style="color:red;">Scientific Explanation</span></li>
<li><span style="color:red;">Business and Economics</span></li>
<li><span style="color:red;">Social Issues Analysis</span></li>
<li><span style="color:red;">Data Analysis and Statistics</span></li>
<li><span style="color:red;">Language and Translation</span></li>
</ol>
<p>
Each category is represented by approximately 50 tasks.
</p>

<h3 style="text-align: center; color: green;">A.3 Filtering Methodology</h3>
<p>
Filtering is based on:
</p>
<ul>
<li><span style="color:red;">Language Adherence</span>: High Bengali word ratio, Unicode consistency, and grammar score ≥ 0.8.</li>
<li><span style="color:red;">Cultural Sensitivity</span>: Ensuring religious neutrality, regional inclusivity, gender balance, and political neutrality.</li>
<li><span style="color:red;">Content Quality</span>: Minimum length, coherence between instruction and response, factual accuracy, and proper formatting.</li>
<li><span style="color:red;">Novelty Verification</span>: Ensuring low similarity with existing tasks and sufficient lexical diversity.</li>
</ul>
<p>
A pair (i, r) is accepted only if all criteria are met.
</p>
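
A hedged sketch of how such an acceptance check might be composed follows. Apart from the grammar-score threshold of 0.8 stated above, every threshold and helper name here is an assumption for illustration, not the authors' implementation.

```python
# Illustrative acceptance check for a candidate (instruction, response) pair.
import re

BENGALI_CHARS = re.compile(r"[\u0980-\u09FF]")  # Bengali Unicode block

def bengali_ratio(text: str) -> float:
    """Fraction of whitespace tokens that contain at least one Bengali character."""
    words = text.split()
    if not words:
        return 0.0
    return sum(bool(BENGALI_CHARS.search(w)) for w in words) / len(words)

def accept_pair(
    instruction: str,
    response: str,
    grammar_score: float,   # e.g., from an external Bangla grammar checker
    max_similarity: float,  # similarity to already-accepted tasks
    min_length: int = 20,   # assumed minimum response length (characters)
) -> bool:
    language_ok = bengali_ratio(instruction) > 0.9 and grammar_score >= 0.8
    quality_ok = len(response) >= min_length
    novelty_ok = max_similarity < 0.7  # assumed similarity cutoff
    # Cultural-sensitivity screening would plug in here as well (omitted).
    return language_ok and quality_ok and novelty_ok
```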

<hr>

<h2 style="text-align: center; color: green;">Appendix B: Experimentation Details</h2>

<h3 style="text-align: center; color: green;">B.1 Experimental Setup</h3>
<p>
Pretraining was conducted on a Lambda Labs cluster with 8 NVIDIA A100 GPUs (40GB each), 512GB RAM, and 2TB storage (~120 hours with gradient checkpointing). Finetuning was performed on a single NVIDIA A100 GPU via Google Colab (~96 hours).
</p>

<h3 style="text-align: center; color: green;">B.2 Pretraining Hyperparameters (Table 3)</h3>
<table>
<thead>
<tr>
<th style="color: green; text-align: center;">Hyperparameter</th>
<th style="color: green; text-align: center;">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Per device train batch size</td>
<td>64</td>
</tr>
<tr>
<td>Gradient accumulation steps</td>
<td>16</td>
</tr>
<tr>
<td>Number of training epochs</td>
<td>4</td>
</tr>
<tr>
<td>Learning rate</td>
<td>5×10<sup>-6</sup></td>
</tr>
<tr>
<td>FP16</td>
<td>False</td>
</tr>
<tr>
<td>BF16</td>
<td>True</td>
</tr>
<tr>
<td>Dataloader num workers</td>
<td>8</td>
</tr>
<tr>
<td>Gradient checkpointing</td>
<td>True</td>
</tr>
<tr>
<td>Logging steps</td>
<td>1000</td>
</tr>
<tr>
<td>DDP find unused parameters</td>
<td>False</td>
</tr>
<tr>
<td>Max gradient norm</td>
<td>1.0</td>
</tr>
<tr>
<td>Warmup steps</td>
<td>1000</td>
</tr>
<tr>
<td>Evaluation strategy</td>
<td>steps</td>
</tr>
<tr>
<td>Evaluation steps</td>
<td>1000</td>
</tr>
<tr>
<td>Save strategy</td>
<td>steps</td>
</tr>
<tr>
<td>Save steps</td>
<td>1000</td>
</tr>
<tr>
<td>Save total limit</td>
<td>3</td>
</tr>
<tr>
<td>Load best model at end</td>
<td>True</td>
</tr>
<tr>
<td>Metric for best model</td>
<td>loss</td>
</tr>
<tr>
<td>Greater is better</td>
<td>False</td>
</tr>
</tbody>
</table>
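
For readers who want to map Table 3 onto code, the sketch below expresses the same values as Hugging Face `TrainingArguments`. It is an illustration of the table, not the authors' training script; the output directory is a placeholder.

```python
# Hedged mapping of Table 3 to transformers TrainingArguments (placeholder output_dir).
from transformers import TrainingArguments

pretraining_args = TrainingArguments(
    output_dir="tigerllm-pretrain",      # placeholder path
    per_device_train_batch_size=64,
    gradient_accumulation_steps=16,
    num_train_epochs=4,
    learning_rate=5e-6,
    fp16=False,
    bf16=True,
    dataloader_num_workers=8,
    gradient_checkpointing=True,
    logging_steps=1000,
    ddp_find_unused_parameters=False,
    max_grad_norm=1.0,
    warmup_steps=1000,
    eval_strategy="steps",               # `evaluation_strategy` in older transformers releases
    eval_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    save_total_limit=3,
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    greater_is_better=False,
)
```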

<h3 style="text-align: center; color: green;">B.3 Finetuning Hyperparameters</h3>
<p>
Finetuning settings for TigerLLM (1B) and (9B) are detailed in Tables 4 and 5.
</p>

<table>
<thead>
<tr>
<th style="color: green; text-align: center;">Parameter</th>
<th style="color: green; text-align: center;">TigerLLM (1B)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Max Sequence Length</td>
<td>2048</td>
</tr>
<tr>
<td>Batch Size (Train/Eval)</td>
<td>16</td>
</tr>
<tr>
<td>Gradient Accumulation Steps</td>
<td>4</td>
</tr>
<tr>
<td>Number of Epochs</td>
<td>3</td>
</tr>
<tr>
<td>Learning Rate</td>
<td>1e-5</td>
</tr>
<tr>
<td>Weight Decay</td>
<td>0.02</td>
</tr>
<tr>
<td>Warmup Steps</td>
<td>10%</td>
</tr>
<tr>
<td>Optimizer</td>
<td>AdamW (8-bit)</td>
</tr>
<tr>
<td>LR Scheduler</td>
<td>Cosine</td>
</tr>
<tr>
<td>Precision</td>
<td>BF16</td>
</tr>
<tr>
<td>Evaluation Steps</td>
<td>50</td>
</tr>
<tr>
<td>Seed</td>
<td>42</td>
</tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th style="color: green; text-align: center;">Parameter</th>
<th style="color: green; text-align: center;">TigerLLM (9B)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Max Sequence Length</td>
<td>2048</td>
</tr>
<tr>
<td>Batch Size (Train/Eval)</td>
<td>32</td>
</tr>
<tr>
<td>Gradient Accumulation Steps</td>
<td>8</td>
</tr>
<tr>
<td>Number of Epochs</td>
<td>3</td>
</tr>
<tr>
<td>Learning Rate</td>
<td>1e-6</td>
</tr>
<tr>
<td>Weight Decay</td>
<td>0.04</td>
</tr>
<tr>
<td>Warmup Steps</td>
<td>15%</td>
</tr>
<tr>
<td>Optimizer</td>
<td>AdamW (8-bit)</td>
</tr>
<tr>
<td>LR Scheduler</td>
<td>Cosine</td>
</tr>
<tr>
<td>Precision</td>
<td>BF16</td>
</tr>
<tr>
<td>Evaluation Steps</td>
<td>250</td>
</tr>
<tr>
<td>Seed</td>
<td>42</td>
</tr>
</tbody>
</table>

<hr>

<h2 style="text-align: center; color: green;">Appendix C: TigerLLM - Training Pipeline</h2>
<p>
Figure 2 illustrates the multi-stage training pipeline for producing both TigerLLM (1B) and TigerLLM (9B). The process begins with pre-trained models (LLaMA 3.2 and Gemma-2), followed by continual pretraining on the Bangla-TextBook corpus and subsequent finetuning on the Bangla-Instruct dataset. Figures 3 and 4 depict the loss curves during the pretraining and finetuning stages, respectively.
</p>