neginashz committed
Commit 84aa516 · verified · 1 parent: 60897cd

End of training

Files changed (3)
  1. README.md +199 -0
  2. adapter_model.bin +3 -0
  3. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,199 @@
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- medalpaca/medical_meadow_medqa
model-index:
- name: lora-qwen-25-7b-instruct
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
trust_remote_code: true
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit:
load_in_4bit:
strict: false

datasets:
- path: medalpaca/medical_meadow_medqa
  type: alpaca
dataset_prepared_path:
val_set_size: 0.1
output_dir: ./lora-qwen25

sequence_len: 8192
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_r: 256
lora_alpha: 128
lora_dropout: 0.05
#lora_target_modules:
# - q_proj
# - v_proj
# - k_proj
# - o_proj
# - gate_proj
# - down_proj
# - up_proj
lora_target_linear: true

wandb_project: lora-qwen-25-7b-instruct
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true

logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps:
eval_steps:
save_steps:

evals_per_epoch: 16
saves_per_epoch: 2

debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay:
fsdp:
fsdp_config:
special_tokens:

hub_model_id: neginashz/lora-qwen-25-7b-instruct
hub_strategy:
early_stopping_patience:

resume_from_checkpoint:
auto_resume_from_checkpoints: true
```

</details><br>
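
For orientation, the LoRA settings in the config above map roughly onto a PEFT `LoraConfig`. The sketch below is an approximation only: Axolotl constructs its own adapter config internally, and `target_modules="all-linear"` is assumed here as the counterpart of `lora_target_linear: true`.

```python
from peft import LoraConfig

# Approximate PEFT equivalent of the Axolotl LoRA block above (sketch only).
lora_config = LoraConfig(
    r=256,                        # lora_r
    lora_alpha=128,               # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # assumed stand-in for lora_target_linear: true
    task_type="CAUSAL_LM",
)
```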

# lora-qwen-25-7b-instruct

This model is a LoRA adapter fine-tuned from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the medalpaca/medical_meadow_medqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1181

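To try the adapter, it can be loaded on top of the base model with PEFT. A minimal inference sketch follows; the prompt text and generation settings are illustrative only and not part of the training setup.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "neginashz/lora-qwen-25-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# medical_meadow_medqa is Alpaca-formatted medical exam QA, so a chat-style
# prompt is a reasonable (purely illustrative) way to query the model.
messages = [{"role": "user", "content": "A 54-year-old presents with crushing chest pain. What is the most likely diagnosis?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
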

## Model description

This is a LoRA adapter (r=256, alpha=128, dropout 0.05, applied to all linear projections via `lora_target_linear: true`) for Qwen2.5-7B-Instruct, trained with Axolotl and PEFT. Training used bf16, FlashAttention, sample packing at a sequence length of 8192, and DeepSpeed ZeRO-2 across 4 GPUs.

## Intended uses & limitations

More information needed

## Training and evaluation data

The adapter was trained on the medalpaca/medical_meadow_medqa dataset in Alpaca format, with 10% of the examples held out as the evaluation set (`val_set_size: 0.1`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 3

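The total train batch size of 4 follows directly from micro_batch_size 1 × gradient_accumulation_steps 1 × 4 GPUs under DeepSpeed ZeRO-2 data parallelism.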

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.774 | 0.0741 | 6 | 2.5571 |
| 1.4649 | 0.1481 | 12 | 1.3144 |
| 0.649 | 0.2222 | 18 | 0.4603 |
| 0.1557 | 0.2963 | 24 | 0.1620 |
| 0.1792 | 0.3704 | 30 | 0.1539 |
| 0.1432 | 0.4444 | 36 | 0.1422 |
| 0.1393 | 0.5185 | 42 | 0.1385 |
| 0.1137 | 0.5926 | 48 | 0.1340 |
| 0.1246 | 0.6667 | 54 | 0.1317 |
| 0.1235 | 0.7407 | 60 | 0.1313 |
| 0.123 | 0.8148 | 66 | 0.1293 |
| 0.1413 | 0.8889 | 72 | 0.1277 |
| 0.1338 | 0.9630 | 78 | 0.1268 |
| 0.1093 | 1.0247 | 84 | 0.1263 |
| 0.1442 | 1.0988 | 90 | 0.1265 |
| 0.1127 | 1.1728 | 96 | 0.1244 |
| 0.137 | 1.2469 | 102 | 0.1231 |
| 0.1098 | 1.3210 | 108 | 0.1224 |
| 0.1276 | 1.3951 | 114 | 0.1223 |
| 0.102 | 1.4691 | 120 | 0.1215 |
| 0.1208 | 1.5432 | 126 | 0.1217 |
| 0.1143 | 1.6173 | 132 | 0.1211 |
| 0.1315 | 1.6914 | 138 | 0.1204 |
| 0.1166 | 1.7654 | 144 | 0.1200 |
| 0.1055 | 1.8395 | 150 | 0.1200 |
| 0.1235 | 1.9136 | 156 | 0.1194 |
| 0.12 | 1.9877 | 162 | 0.1193 |
| 0.0982 | 2.0494 | 168 | 0.1193 |
| 0.1129 | 2.1235 | 174 | 0.1188 |
| 0.1094 | 2.1975 | 180 | 0.1190 |
| 0.1216 | 2.2716 | 186 | 0.1191 |
| 0.1387 | 2.3457 | 192 | 0.1187 |
| 0.1001 | 2.4198 | 198 | 0.1184 |
| 0.1031 | 2.4938 | 204 | 0.1185 |
| 0.0818 | 2.5679 | 210 | 0.1183 |
| 0.126 | 2.6420 | 216 | 0.1185 |
| 0.124 | 2.7160 | 222 | 0.1183 |
| 0.1193 | 2.7901 | 228 | 0.1184 |
| 0.1082 | 2.8642 | 234 | 0.1183 |
| 0.1181 | 2.9383 | 240 | 0.1181 |

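Validation loss drops from 2.56 at step 6 to roughly 0.118 by the end of training; most of the improvement happens within the first third of an epoch, with only marginal gains after epoch 2.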

### Framework versions

- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c90de92acd966ca4f3068f556ef2dce57c5638fd3b95fa595a34492824fc6119
size 1291908410
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dac39fd59001679f1ee320263b8b1ed793c91fa43fb227339a1fd8b92aff1840
+ oid sha256:b5012cddc306bbf88d7340ed0762acfc5b94792bc2904854403bde5cb5edbc87
  size 1291899552
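
Both adapter files are about 1.29 GB, which is roughly what rank-256 LoRA weights on all linear projections of a 7B-parameter model come to in 16-bit precision (on the order of 650M adapter parameters).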