Commit a94b4f4 (verified) · committed by lbourdois · 1 Parent(s): 7da45e4

Improve language tag

Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve how the model is referenced on the Hub. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
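For context, a minimal sketch of what the richer tags enable, assuming the `huggingface_hub` client (whose `HfApi.list_models` exposes a `language` filter over exactly this card metadata):

```python
# Minimal sketch: filtering Hub models by the language tags this PR extends.
# Assumes the huggingface_hub client; HfApi.list_models accepts a `language` filter.
from huggingface_hub import HfApi

api = HfApi()
# Only models whose card metadata declares Italian (e.g. the "ita" code added here)
# are returned by this query.
for model in api.list_models(language="ita", limit=5):
    print(model.id)
```

Each code added here becomes a filterable tag, so queries for languages beyond English and Italian can now surface the model as well.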

Files changed (1)

README.md  +221 -210
README.md CHANGED
@@ -1,211 +1,222 @@
- ---
- library_name: transformers
- license: apache-2.0
- base_model: Qwen/Qwen2.5-0.5B-Instruct
- tags:
- - generated_from_trainer
- - axolotl
- language:
- - it
- - en
- pipeline_tag: text-generation
- datasets:
- - ReDiX/everyday-conversations-ita
- - ReDiX/dataforge-cleaned
- ---
-
- # Qwen2.5-0.5B-Instruct-ITA
-
- This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [ReDiX/DataForge](https://huggingface.co/datasets/ReDiX/DataForge) dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.4100
-
- ## Model description
-
- This model is an example of fine-tuning an sLLM. Italian evals improved and the model learned as expected from the training data.
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
-
- | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
- |------------|------:|------|-----:|--------|---|-----:|---|-----:|
- |arc_it | 2|none | 0|acc |↑ |0.2378|± |0.0125|
- | | |none | 0|acc_norm|↑ |0.2823|± |0.0132|
- |hellaswag_it| 1|none | 0|acc |↑ |0.3163|± |0.0049|
- | | |none | 0|acc_norm|↑ |0.3800|± |0.0051|
- |m_mmlu_it | 0|none | 5|acc |↑ |0.381 |± |0.0042|
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 4
- - eval_batch_size: 4
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 16
- - optimizer: adamw_bnb_8bit (betas=(0.9,0.999), epsilon=1e-08, no additional optimizer arguments)
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 10
- - num_epochs: 2
-
-
- [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
- <details><summary>See axolotl config</summary>
-
- axolotl version: `0.5.0`
- ```yaml
- base_model: Qwen/Qwen2.5-0.5B-Instruct
-
- load_in_8bit: false
- load_in_4bit: false
- strict: false
-
- datasets:
- - path: ./dataforge
- type: chat_template
-
- field_messages: conversations
- message_field_role: from
- message_field_content: value
-
- # chat_template: chatml
- dataset_prepared_path: last_run_prepared
- val_set_size: 0.1
- output_dir: ./outputs/qwen05B
-
- unfrozen_parameters:
- - ^lm_head.weight$
- - ^model.embed_tokens.weight$
- # mlp.down_proj layers
- - model.layers.0.mlp.down_proj
- - model.layers.23.mlp.down_proj
- - model.layers.1.mlp.down_proj
- - model.layers.16.mlp.down_proj
- - model.layers.4.mlp.down_proj
- - model.layers.17.mlp.down_proj
- # mlp.gate_proj layers
- - model.layers.0.mlp.gate_proj
- - model.layers.1.mlp.gate_proj
- - model.layers.2.mlp.gate_proj
- - model.layers.3.mlp.gate_proj
- - model.layers.4.mlp.gate_proj
- - model.layers.7.mlp.gate_proj
- # mlp.up_proj layers
- - model.layers.1.mlp.up_proj
- - model.layers.0.mlp.up_proj
- - model.layers.3.mlp.up_proj
- - model.layers.4.mlp.up_proj
- - model.layers.7.mlp.up_proj
- - model.layers.9.mlp.up_proj
- # self_attn.k_proj layers
- - model.layers.18.self_attn.k_proj
- - model.layers.7.self_attn.k_proj
- - model.layers.19.self_attn.k_proj
- - model.layers.2.self_attn.k_proj
- - model.layers.6.self_attn.k_proj
- - model.layers.9.self_attn.k_proj
- # self_attn.o_proj layers
- - model.layers.16.self_attn.o_proj
- - model.layers.19.self_attn.o_proj
- - model.layers.0.self_attn.o_proj
- - model.layers.20.self_attn.o_proj
- - model.layers.4.self_attn.o_proj
- - model.layers.3.self_attn.o_proj
- # self_attn.q_proj layers
- - model.layers.13.self_attn.q_proj
- - model.layers.16.self_attn.q_proj
- - model.layers.21.self_attn.q_proj
- - model.layers.11.self_attn.q_proj
- - model.layers.15.self_attn.q_proj
- - model.layers.6.self_attn.q_proj
- # self_attn.v_proj layers
- - model.layers.2.self_attn.v_proj
- - model.layers.3.self_attn.v_proj
- - model.layers.4.self_attn.v_proj
- - model.layers.5.self_attn.v_proj
- - model.layers.7.self_attn.v_proj
- - model.layers.8.self_attn.v_proj
-
-
-
- sequence_len: 4096
- sample_packing: true
- eval_sample_packing: true
- pad_to_sequence_len: true
-
-
- wandb_project: axolotl
- wandb_entity:
- wandb_watch:
- wandb_name: qwen2.5-0.5B
- wandb_log_model:
-
- gradient_accumulation_steps: 4
- micro_batch_size: 4
- num_epochs: 2
- optimizer: adamw_bnb_8bit
- lr_scheduler: cosine
- learning_rate: 1.0e-04
-
- train_on_inputs: false
- group_by_length: false
- bf16: true
- fp16:
- tf32: false
-
- gradient_checkpointing: true
- early_stopping_patience:
- resume_from_checkpoint:
- local_rank:
- logging_steps: 5
- xformers_attention:
- flash_attention: true
-
-
- warmup_steps: 10
- evals_per_epoch: 4
- eval_table_size:
- eval_max_new_tokens: 128
- saves_per_epoch: 1
- debug:
- deepspeed:
- weight_decay: 0.0
- fsdp:
- fsdp_config:
- special_tokens:
- pad_token: "<|im_end|>"
- eos_token: "<|im_end|>"
-
-
- ```
-
- </details><br>
-
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | No log | 0.0013 | 1 | 1.7855 |
- | 1.2567 | 0.2504 | 194 | 1.5639 |
- | 1.2551 | 0.5008 | 388 | 1.4980 |
- | 1.1845 | 0.7512 | 582 | 1.4501 |
- | 1.3178 | 1.0019 | 776 | 1.4252 |
- | 1.06 | 1.2523 | 970 | 1.4187 |
- | 1.0697 | 1.5027 | 1164 | 1.4116 |
- | 1.0362 | 1.7531 | 1358 | 1.4100 |
-
-
- ### Framework versions
-
- - Transformers 4.46.2
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
+ tags:
+ - generated_from_trainer
+ - axolotl
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ datasets:
+ - ReDiX/everyday-conversations-ita
+ - ReDiX/dataforge-cleaned
+ ---
+
+ # Qwen2.5-0.5B-Instruct-ITA
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [ReDiX/DataForge](https://huggingface.co/datasets/ReDiX/DataForge) dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.4100
+
+ ## Model description
+
+ This model is an example of fine-tuning an sLLM. Italian evals improved and the model learned as expected from the training data.
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+
+ | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
+ |------------|------:|------|-----:|--------|---|-----:|---|-----:|
+ |arc_it | 2|none | 0|acc |↑ |0.2378|± |0.0125|
+ | | |none | 0|acc_norm|↑ |0.2823|± |0.0132|
+ |hellaswag_it| 1|none | 0|acc |↑ |0.3163|± |0.0049|
+ | | |none | 0|acc_norm|↑ |0.3800|± |0.0051|
+ |m_mmlu_it | 0|none | 5|acc |↑ |0.381 |± |0.0042|
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: adamw_bnb_8bit (betas=(0.9,0.999), epsilon=1e-08, no additional optimizer arguments)
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 2
+
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.5.0`
+ ```yaml
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
+
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+
+ datasets:
+ - path: ./dataforge
+ type: chat_template
+
+ field_messages: conversations
+ message_field_role: from
+ message_field_content: value
+
+ # chat_template: chatml
+ dataset_prepared_path: last_run_prepared
+ val_set_size: 0.1
+ output_dir: ./outputs/qwen05B
+
+ unfrozen_parameters:
+ - ^lm_head.weight$
+ - ^model.embed_tokens.weight$
+ # mlp.down_proj layers
+ - model.layers.0.mlp.down_proj
+ - model.layers.23.mlp.down_proj
+ - model.layers.1.mlp.down_proj
+ - model.layers.16.mlp.down_proj
+ - model.layers.4.mlp.down_proj
+ - model.layers.17.mlp.down_proj
+ # mlp.gate_proj layers
+ - model.layers.0.mlp.gate_proj
+ - model.layers.1.mlp.gate_proj
+ - model.layers.2.mlp.gate_proj
+ - model.layers.3.mlp.gate_proj
+ - model.layers.4.mlp.gate_proj
+ - model.layers.7.mlp.gate_proj
+ # mlp.up_proj layers
+ - model.layers.1.mlp.up_proj
+ - model.layers.0.mlp.up_proj
+ - model.layers.3.mlp.up_proj
+ - model.layers.4.mlp.up_proj
+ - model.layers.7.mlp.up_proj
+ - model.layers.9.mlp.up_proj
+ # self_attn.k_proj layers
+ - model.layers.18.self_attn.k_proj
+ - model.layers.7.self_attn.k_proj
+ - model.layers.19.self_attn.k_proj
+ - model.layers.2.self_attn.k_proj
+ - model.layers.6.self_attn.k_proj
+ - model.layers.9.self_attn.k_proj
+ # self_attn.o_proj layers
+ - model.layers.16.self_attn.o_proj
+ - model.layers.19.self_attn.o_proj
+ - model.layers.0.self_attn.o_proj
+ - model.layers.20.self_attn.o_proj
+ - model.layers.4.self_attn.o_proj
+ - model.layers.3.self_attn.o_proj
+ # self_attn.q_proj layers
+ - model.layers.13.self_attn.q_proj
+ - model.layers.16.self_attn.q_proj
+ - model.layers.21.self_attn.q_proj
+ - model.layers.11.self_attn.q_proj
+ - model.layers.15.self_attn.q_proj
+ - model.layers.6.self_attn.q_proj
+ # self_attn.v_proj layers
+ - model.layers.2.self_attn.v_proj
+ - model.layers.3.self_attn.v_proj
+ - model.layers.4.self_attn.v_proj
+ - model.layers.5.self_attn.v_proj
+ - model.layers.7.self_attn.v_proj
+ - model.layers.8.self_attn.v_proj
+
+
+
+ sequence_len: 4096
+ sample_packing: true
+ eval_sample_packing: true
+ pad_to_sequence_len: true
+
+
+ wandb_project: axolotl
+ wandb_entity:
+ wandb_watch:
+ wandb_name: qwen2.5-0.5B
+ wandb_log_model:
+
+ gradient_accumulation_steps: 4
+ micro_batch_size: 4
+ num_epochs: 2
+ optimizer: adamw_bnb_8bit
+ lr_scheduler: cosine
+ learning_rate: 1.0e-04
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: true
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 5
+ xformers_attention:
+ flash_attention: true
+
+
+ warmup_steps: 10
+ evals_per_epoch: 4
+ eval_table_size:
+ eval_max_new_tokens: 128
+ saves_per_epoch: 1
+ debug:
+ deepspeed:
+ weight_decay: 0.0
+ fsdp:
+ fsdp_config:
+ special_tokens:
+ pad_token: "<|im_end|>"
+ eos_token: "<|im_end|>"
+
+
+ ```
+
+ </details><br>
+
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | No log | 0.0013 | 1 | 1.7855 |
+ | 1.2567 | 0.2504 | 194 | 1.5639 |
+ | 1.2551 | 0.5008 | 388 | 1.4980 |
+ | 1.1845 | 0.7512 | 582 | 1.4501 |
+ | 1.3178 | 1.0019 | 776 | 1.4252 |
+ | 1.06 | 1.2523 | 970 | 1.4187 |
+ | 1.0697 | 1.5027 | 1164 | 1.4116 |
+ | 1.0362 | 1.7531 | 1358 | 1.4100 |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.2
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.1.0
  - Tokenizers 0.20.3
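The card shown in this diff does not include a usage snippet. A minimal sketch of loading the checkpoint with `transformers`, assuming the repository id `ReDiX/Qwen2.5-0.5B-Instruct-ITA` (not stated in the diff):

```python
# Minimal sketch, not part of this PR: loading the fine-tuned checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ReDiX/Qwen2.5-0.5B-Instruct-ITA"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The base model ships a ChatML-style template, so apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Ciao! Presentati in una frase."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```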