This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X in 52 languages.
We progressively add one language at a time during instruction fine-tuning, training 52 models in total. We then evaluate these models on three multilingual benchmarks.

Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.

#### Instruction tuning details
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English
* Instruction language codes: en
* Training method: full-parameter fine-tuning.

### Usage
The model checkpoint should be loaded using the `transformers` library.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-1")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-1")
```
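Once the checkpoint is loaded, text can be generated through the standard `transformers` API. A minimal sketch is below; the prompt and generation settings are illustrative, not taken from the model card, and running it requires downloading the 7B checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaLA-LM/lucky52-bloom-7b1-no-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode an instruction-style prompt (example prompt is illustrative).
inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")

# Generate a short continuation and decode it back to text.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```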

### Citation
```
@misc{lucky52,
  title = "Lucky 52: How Many Languages Are Needed to Instruction Fine-Tune Large Language Models?",
  author = "Shaoxiong Ji and Pinzhen Chen",
  year = "2024",
  eprint = "2404.04850",
  archiveprefix = "arXiv",
  primaryclass = "cs.CL"
}
```