xianbin committed
Commit 0a4dff2 · verified · 1 Parent(s): 7f65d1d

Update README.md

Files changed (1)
  1. README.md +78 -27
README.md CHANGED
@@ -7,11 +7,11 @@ language:
  - th
  - vi
  ---
- # LLaMA3 8B SEA-LIONv2 Instruct

  SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

- LLaMA3 8B SEA-LIONv2 Instruct is a multilingual model which has been fine-tuned with **thousands of English and Indonesian instruction-completion pairs** alongside a smaller pool of instruction-completion pairs from other ASEAN languages.
  These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.

  SEA-LION stands for _Southeast Asian Languages In One Network_.
@@ -24,35 +24,91 @@ SEA-LION stands for _Southeast Asian Languages In One Network_.

  ## Model Details
  ### Base model
- We performed instruction tuning in English and Indonesian on our [continued pre-trained LLaMA3 8B SEA-LIONv2](https://huggingface.co/aisingapore/llama3-8b-cpt-sealionv2-base), a decoder model using the LLaMA3 architecture, to create LLaMA3 8B SEA-LIONv2 Instruct.

  ### Benchmark Performance
- We evaluated LLaMA3 8B SEA-LIONv2 Instruct on the BHASA benchmark ([arXiv](https://arxiv.org/abs/2309.06085v2) and [GitHub](https://github.com/aisingapore/bhasa)) across a variety of tasks.

  BHASA stands out amongst other evaluations for SEA languages for its holistic approach to evaluation, including not just traditional Natural Language Processing (NLP) benchmarking tasks (such as sentiment analysis and question answering), but also linguistic and cultural diagnostic tests which are meticulously handcrafted.

  The evaluation was done zero-shot with Indonesian prompts and only a sample of 100-1000 instances for each dataset was used as per the setting described in the BHASA paper. The scores shown in the table below have been adjusted to only consider answers provided in the appropriate language.

- | Model | QA (F1) | Sentiment (F1) | Toxicity (F1) | Eng>Indo (ChrF++) | Indo>Eng (ChrF++) | Summary (ROUGE-L) | NLI (Acc) | Causal (Acc) |
- |--------------------------------|---------|----------------|---------------|-------------------|-------------------|-------------------|-----------|--------------|
- | SEA-LION-7B-Instruct-Research | 24.86 | 76.13 | 24.45 | 52.50 | 46.82 | 15.44 | 33.20 | 23.80 |
- | SEA-LION-7B-Instruct | **68.41** | **91.45** | 17.98 | 57.48 | 58.04 | **17.54** | 53.10 | 60.80 |
- | SeaLLM 7B v1 | 30.96 | 56.29 | 22.60 | 62.23 | 41.55 | 14.03 | 26.50 | 56.60 |
- | SeaLLM 7B v2 | 44.40 | 80.13 | **55.24** | 64.01 | **63.28** | 17.31 | 43.60 | 82.00 |
- | Sailor-7B (Base) | 65.43 | 59.48 | 20.48 | **64.27** | 60.68 | 8.69 | 15.10 | 38.40 |
- | Sailor-7B-Chat | 38.02 | 87.64 | 52.07 | 64.25 | 61.87 | 15.28 | **68.30** | **85.60** |
- | Llama 2 7B Chat | 11.12 | 52.32 | 0.00 | 44.09 | 57.58 | 9.24 | 0.00 | 0.00 |
- | Mistral 7B Instruct v0.1 | 38.85 | 74.38 | 20.83 | 30.60 | 51.43 | 15.63 | 28.60 | 50.80 |
- | GPT-4 (gpt-4-0314) | 73.60 | 74.14 | 63.96 | 69.38 | 67.53 | 18.71 | 83.20 | 96.00 |
-
- - For Natural Language Understanding (NLU) tasks, we tested the model on Sentiment Analysis (`Sentiment`) using the NusaX dataset, Question Answering (`QA`) using the TyDiQA dataset, and Toxicity Detection (`Toxicity`) using the Indonesian Multi-Label Hate Speech Detection dataset. The metrics used are F1 scores for all three tasks.
- - For Natural Language Generation (NLG) tasks, we tested the model on Machine Translation from English to Indonesian (`Eng>Indo`) and from Indonesian to English (`Indo>Eng`) using the FLORES-200 dataset, and Abstractive Summarization (`Summary`) using the XLSum dataset. The metrics used for Machine Translation and Abstractive Summarization are ChrF++ and ROUGE-L respectively.
- - For Natural Language Reasoning (NLR) tasks, we tested the model on Natural Language Inference (`NLI`) using the IndoNLI lay dataset and on Causal Reasoning (`Causal`) using the XCOPA dataset. The metrics are based on accuracy for both tasks.

  ### Usage
  SEA-LION can be run using the 🤗 Transformers library
  ```python
- # Please use transformers==4.37.2

  from transformers import AutoModelForCausalLM, AutoTokenizer

@@ -82,17 +138,12 @@ It is important for users to be aware that our model exhibits certain limitation

  Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.

- ### Commercially Non-Permissive and Commercially Permissive SEA-LION Releases
-
- The previous release of the commercially non-permissive SEA-LION-Instruct-Research enabled us to explore the full research potential of SEA-LION when allowed to take full advantage of what is publicly available. In contrast, in building the commercially permissive SEA-LION-7B-Instruct, we had to leave out high-quality instruction data that was either proprietary, restricted by non-commercial licenses or in a legal gray area, leaving us with a much smaller proportion of commercially permissive data to work with — a problem that is even more pronounced for low-resource languages. We thus hope this will sound a call to action for more initiatives to create commercially viable data in the region, enabling practical benefits for all.
-
-
  ## Technical Specifications
  ### Fine-Tuning Details
- The LLaMA3 8B SEA-LIONv2 Instruct was fine-tuned using 8x A100-40GB using parameter efficient fine tuning in the form of LoRA.

  ## Data
- LLaMA3 8B SEA-LIONv2 Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair that the model sees is of a high quality and any errors were corrected and rewritten by native speakers or else dropped from our mix.

  In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.

  - th
  - vi
  ---
+ # LLaMA3 8B CPT SEA-LIONv2 Instruct

  SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

+ LLaMA3 8B CPT SEA-LIONv2 Instruct is a multilingual model which has been fine-tuned with **thousands of English and Indonesian instruction-completion pairs** alongside a smaller pool of instruction-completion pairs from other ASEAN languages.
  These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.

  SEA-LION stands for _Southeast Asian Languages In One Network_.

  ## Model Details
  ### Base model
+ We performed instruction tuning in English and Indonesian on our [continued pre-trained LLaMA3 8B CPT SEA-LIONv2](https://huggingface.co/aisingapore/llama3-8b-cpt-sealionv2-base), a decoder model using the LLaMA3 architecture, to create LLaMA3 8B CPT SEA-LIONv2 Instruct.

  ### Benchmark Performance
+ We evaluated LLaMA3 8B CPT SEA-LIONv2 Instruct on the BHASA benchmark ([arXiv](https://arxiv.org/abs/2309.06085v2) and [GitHub](https://github.com/aisingapore/bhasa)) across a variety of tasks.

  BHASA stands out amongst other evaluations for SEA languages for its holistic approach to evaluation, including not just traditional Natural Language Processing (NLP) benchmarking tasks (such as sentiment analysis and question answering), but also linguistic and cultural diagnostic tests which are meticulously handcrafted.

  The evaluation was done zero-shot with Indonesian prompts and only a sample of 100-1000 instances for each dataset was used as per the setting described in the BHASA paper. The scores shown in the table below have been adjusted to only consider answers provided in the appropriate language.
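
To make the language-consistency adjustment above concrete, the snippet below is a minimal illustrative sketch, not the BHASA implementation: it blanks out model answers that are not in the target language so that any downstream metric scores them as wrong. The `langdetect` package and the helper name are assumptions introduced only for this example.

```python
# Illustrative sketch only: discard answers that are not in the target language
# so that any downstream metric (F1, accuracy, ChrF++, ...) scores them as wrong.
# Assumes the third-party `langdetect` package; BHASA's own filtering may differ.
from langdetect import detect


def keep_only_target_language(answers, target_lang="id"):
    filtered = []
    for answer in answers:
        try:
            lang = detect(answer)
        except Exception:
            lang = None  # empty or undetectable output
        filtered.append(answer if lang == target_lang else "")
    return filtered


# Answers not detected as Indonesian (e.g. the English one) are blanked out before scoring.
print(keep_only_target_language(["Sentimennya positif.", "The sentiment is positive."]))
```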
 
+ #### General Language Capabilities (BHASA)
+ | | | | **QA** | **Sentiment** | **Toxicity** | **Eng>Lang** | **Lang>Eng** | **Summary** | **NLI** | **Causal** | **LINDSEA** |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | **Language** | **Model** | **Win-rate** | **F1** | **F1** | **Macro-F1** | **ChrF++** | **ChrF++** | **F1** | **Accuracy** | **Accuracy** | **Accuracy** |
+ | ID | llama3-8b-cpt-sealionv2-instruct | 76.39% | 72.23 | 84.72 | 54.64 | 66.71 | 65.29 | 18.70 | 68.90 | 87.40 | 39.91 |
+ | ID | gemma-2-9b-it | 76.39% | 54.77 | 78.83 | 53.37 | 66.56 | 65.15 | 18.20 | 72.00 | 94.20 | 72.14 |
+ | ID | aya-23-8B | 61.11% | 64.51 | 82.61 | 45.40 | 64.60 | 63.91 | 22.15 | 44.40 | 89.00 | 50.45 |
+ | ID | SeaLLM3-7B-Chat | 51.39% | 45.42 | 74.58 | 50.42 | 64.03 | 63.44 | 17.44 | 58.20 | 92.00 | 65.22 |
+ | ID | Qwen2-7B-Instruct | 45.83% | 45.77 | 81.97 | 42.92 | 58.83 | 62.79 | 13.66 | 63.70 | 90.80 | 65.32 |
+ | ID | Meta-Llama-3.1-8B-Instruct | 41.67% | 63.98 | 61.34 | 37.10 | 63.90 | **65.35** | 19.44 | 29.40 | 83.20 | 57.12 |
+ | ID | Sailor-7B-Chat | 41.67% | 36.93 | **85.17** | 42.67 | 66.61 | 63.34 | 14.16 | 59.50 | 85.20 | 54.10 |
+ | ID | Meta-Llama-3-8B-Instruct | 36.11% | 55.49 | 72.27 | 44.68 | 56.54 | 55.63 | 15.35 | 71.80 | 82.40 | 59.25 |
+ | ID | Mistral-7B-Instruct-v0.3 | 19.44% | 40.69 | 78.84 | 40.33 | 49.88 | 57.89 | 15.74 | 59.60 | 71.80 | 34.48 |
+ | | | | | | | | | | | | |
+ | VI | gemma-2-9b-it | 78.91% | 48.11 | 64.23 | **50.08** | 57.21 | 59.20 | 17.18 | 52.40 | **92.60** | - |
+ | VI | llama3-8b-cpt-sealionv2-instruct | 64.84% | 57.05 | 54.09 | 21.99 | 58.60 | 58.97 | 18.28 | 52.40 | 87.80 | - |
+ | VI | SeaLLM3-7B-Chat | 57.81% | 48.71 | 51.36 | 27.60 | 55.05 | 57.64 | 16.40 | 54.50 | 89.40 | - |
+ | VI | Qwen2-7B-Instruct | 54.69% | 43.21 | 61.94 | 38.44 | 52.02 | 56.99 | 13.10 | **60.00** | 88.60 | - |
+ | VI | aya-23-8B | 54.69% | **73.69** | 42.14 | 21.17 | 56.70 | 57.02 | **22.40** | 50.80 | 86.80 | - |
+ | VI | Meta-Llama-3.1-8B-Instruct | 50.00% | 63.49 | 61.43 | 7.02 | 55.91 | **60.07** | 18.78 | 33.20 | 78.40 | - |
+ | VI | Sailor-7B-Chat | 40.62% | 31.00 | 13.13 | 30.66 | **58.85** | 59.02 | 11.85 | 49.20 | 85.80 | - |
+ | VI | Meta-Llama-3-8B-Instruct | 25.00% | 35.42 | **70.44** | 20.91 | 48.42 | 52.90 | 9.65 | 41.10 | 83.00 | - |
+ | VI | Mistral-7B-Instruct-v0.3 | 23.44% | 36.13 | 51.01 | 41.30 | 36.89 | 49.06 | 13.22 | 34.70 | 69.60 | - |
+ | | | | | | | | | | | | |
+ | TH | gemma-2-9b-it | 82.81% | 76.33 | 49.01 | 65.49 | 43.49 | **56.48** | **25.79** | 38.90 | **90.40** | - |
+ | TH | llama3-8b-cpt-sealionv2-instruct | 73.44% | 72.41 | **52.51** | 38.25 | **44.84** | 56.05 | 18.73 | 48.80 | 85.80 | - |
+ | TH | Qwen2-7B-Instruct | 62.50% | 39.47 | 50.85 | **65.89** | 36.99 | 52.58 | 21.32 | 47.40 | 88.00 | - |
+ | TH | SeaLLM3-7B-Chat | 56.25% | 45.01 | 40.24 | 55.48 | 41.80 | 54.58 | 23.33 | 36.40 | 90.20 | - |
+ | TH | Sailor-7B-Chat | 48.44% | 31.44 | 48.11 | 33.10 | 44.26 | 56.03 | 15.24 | 45.30 | 85.60 | - |
+ | TH | Meta-Llama-3.1-8B-Instruct | 42.19% | **82.16** | 32.46 | 25.48 | 39.65 | 55.47 | 24.92 | 6.20 | 73.40 | - |
+ | TH | Meta-Llama-3-8B-Instruct | 40.62% | 68.57 | 38.80 | 48.63 | 35.03 | 47.74 | 14.21 | **54.30** | 78.20 | - |
+ | TH | Mistral-7B-Instruct-v0.3 | 29.69% | 29.78 | 45.91 | 55.58 | 22.90 | 41.85 | 18.65 | 41.70 | 59.20 | - |
+ | TH | aya-23-8B | 14.06% | 43.29 | 28.84 | 27.64 | 19.10 | 40.29 | 19.53 | 33.60 | 50.60 | - |
+ | | | | | | | | | | | | |
+ | TA | gemma-2-9b-it | 81.84% | 39.04 | **97.70** | 0.85 | 0.86 | 11.98 | 89.20 | - | 38.30 | - |
+ | TA | llama3-8b-cpt-sealionv2-instruct | 70.51% | 29.35 | 97.19 | 0.87 | 0.86 | 6.80 | 76.80 | - | 34.50 | - |
+ | TA | SeaLLM3-7B-Chat | 56.25% | 31.79 | 91.69 | 0.69 | 0.78 | 11.88 | 51.80 | - | 34.60 | - |
+ | TA | Qwen2-7B-Instruct | 53.12% | 25.13 | 86.39 | 0.47 | 0.71 | 7.49 | 57.60 | - | 37.20 | - |
+ | TA | Meta-Llama-3.1-8B-Instruct | 48.83% | **51.86** | 88.51 | 0.81 | 0.85 | 9.34 | 56.60 | - | 30.80 | - |
+ | TA | aya-23-8B | 43.75% | 41.89 | 41.71 | 0.47 | 0.74 | 6.47 | 43.40 | - | 40.60 | - |
+ | TA | Sailor-7B-Chat | 37.50% | 17.46 | 32.65 | 0.46 | 0.70 | 5.60 | 11.00 | - | 0.00 | - |
+ | TA | Meta-Llama-3-8B-Instruct | 37.50% | 20.88 | 67.40 | 0.71 | 0.70 | 0.74 | 58.60 | - | 41.30 | - |
+ | TA | Mistral-7B-Instruct-v0.3 | 20.70% | 13.85 | 0.00 | 0.37 | 0.52 | 5.31 | 14.20 | - | 0.80 | - |
+
+ #### Instruction-following Capabilities (IFEval)
+ | | **Indonesian** | **Vietnamese** | **English** |
+ |---|:---:|:---:|:---:|
+ | **Model** | **Lang normalised score** | **Lang normalised score** | **Lang normalised score** |
+ | gemma-2-9b-it | 0.88 | 0.77 | 0.85 |
+ | Meta-Llama-3.1-8B-Instruct | 0.68 | 0.68 | 0.85 |
+ | Qwen2-7B-Instruct | 0.63 | 0.65 | 0.70 |
+ | llama3-8b-cpt-sealionv2-instruct | 0.61 | 0.66 | 0.70 |
+ | aya-23-8B | 0.58 | 0.56 | 0.67 |
+ | SeaLLMs-v3-7B-Chat | 0.55 | 0.52 | 0.67 |
+ | Mistral-7B-Instruct-v0.3 | 0.43 | 0.39 | 0.70 |
+ | Meta-Llama-3-8B-Instruct | 0.27 | 0.21 | 0.80 |
+ | Sailor-7B-Chat | 0.26 | 0.25 | 0.42 |
+
+ #### Multi-turn Capabilities (MT-Bench)
+ | | **Indonesian** | **Vietnamese** | **English** |
+ |---|:---:|:---:|:---:|
+ | **Model** | **Weighted Win Rate** | **Weighted Win Rate** | **Weighted Win Rate** |
+ | gemma-2-9b-it | 0.684 | 0.674 | 0.638 |
+ | SeaLLMs-v3-7B-Chat | 0.583 | 0.656 | 0.429 |
+ | Qwen2-7B-Instruct | 0.498 | 0.556 | 0.597 |
+ | llama3-8b-cpt-sealionv2-instruct | 0.531 | 0.517 | 0.510 |
+ | Meta-Llama-3.1-8B-Instruct | 0.411 | 0.477 | 0.618 |
+ | aya-23-8B | 0.499 | 0.546 | 0.416 |
+ | Meta-Llama-3-8B-Instruct | 0.403 | 0.437 | 0.564 |
+ | Mistral-7B-Instruct-v0.3 | 0.347 | 0.202 | 0.524 |
+ | Sailor-7B-Chat | 0.290 | 0.314 | 0.190 |

  ### Usage
  SEA-LION can be run using the 🤗 Transformers library
  ```python
+ # Please use transformers==4.43.2

  from transformers import AutoModelForCausalLM, AutoTokenizer
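# --- Illustrative sketch (not part of this commit): the diff truncates the snippet
# --- here, so the lines below show one plausible continuation, assuming the standard
# --- Transformers causal-LM API. The repository id follows the base-model naming
# --- shown in this card and may need adjusting.
tokenizer = AutoTokenizer.from_pretrained("aisingapore/llama3-8b-cpt-sealionv2-instruct")
model = AutoModelForCausalLM.from_pretrained("aisingapore/llama3-8b-cpt-sealionv2-instruct")

# Build a single-turn prompt with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Apa sentimen dari kalimat berikut ini: Saya suka makanan ini!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))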
 
 

  Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.

  ## Technical Specifications
  ### Fine-Tuning Details
+ LLaMA3 8B CPT SEA-LIONv2 Instruct was fine-tuned on 8x A100-40GB GPUs using parameter-efficient fine-tuning (PEFT) in the form of LoRA.
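
As an illustration of what parameter-efficient fine-tuning with LoRA typically looks like, the sketch below uses the Hugging Face `peft` library. It is a minimal sketch under assumed settings: the rank, scaling, dropout, and target modules are placeholders, not the values used to train this model, and the actual training stack is not documented here.

```python
# Illustrative only: a generic LoRA (PEFT) setup with Hugging Face `peft`.
# The hyperparameters below are placeholders, not the values used for SEA-LION.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("aisingapore/llama3-8b-cpt-sealionv2-base")

lora_config = LoraConfig(
    r=16,                                  # placeholder adapter rank
    lora_alpha=32,                         # placeholder scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # typical attention projections for LLaMA-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```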
 
  ## Data
+ LLaMA3 8B CPT SEA-LIONv2 Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair the model sees is of high quality; any errors were corrected and rewritten by native speakers or else dropped from our mix.

  In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.