TIM2177 committed on
Commit 6284e11 · verified · 1 parent: 6037af2

Model save
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: peft
+ license: apache-2.0
+ base_model: mistralai/Mistral-7B-v0.1
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ datasets:
+ - generator
+ model-index:
+ - name: zephyr-7b-sft-qlora
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-sft-qlora
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9562
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - gradient_accumulation_steps: 64
+ - total_train_batch_size: 128
+ - optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
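The hyperparameters above can be cross-checked against the learning rates logged in `trainer_state.json`. A minimal sketch, assuming the scheduler shape used by `transformers`' `get_cosine_schedule_with_warmup` (linear warm-up, then cosine decay) with `warmup_steps = ceil(0.1 * 1083) = 109`:

```python
import math

def lr_at(step, base_lr=2e-4, warmup_steps=109, total_steps=1083):
    """Linear warm-up followed by cosine decay (assumed to match
    transformers' get_cosine_schedule_with_warmup)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Effective batch size: per-device batch * grad-accum steps * world size.
total_train_batch_size = 2 * 64 * 1  # = 128, matching the card

# First logged step and the first post-warm-up step, to compare with
# trainer_state.json (1.8348623853211011e-06 and 0.00019999947982262415):
print(lr_at(1))
print(lr_at(110))
```

The close match at both points suggests the warm-up length and cosine schedule are as assumed; the `* 1` world-size factor is an inference from `total_train_batch_size: 128 = 2 * 64`.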
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.9468        | 0.9995 | 1083 | 0.9562          |
+
+
+ ### Framework versions
+
+ - PEFT 0.14.0
+ - Transformers 4.48.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.3.0
+ - Tokenizers 0.21.0
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9995385325334564,
+     "total_flos": 1.2183828414859313e+19,
+     "train_loss": 0.9649847933019847,
+     "train_runtime": 330634.4933,
+     "train_samples": 207864,
+     "train_samples_per_second": 0.419,
+     "train_steps_per_second": 0.003
+ }
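The throughput figures above are internally consistent: the trainer processed `global_step * total_train_batch_size` samples, and dividing by the runtime reproduces the reported rates. A quick check (step count 1083 and batch size 128 are taken from the model card and trainer state in this commit):

```python
train_runtime = 330634.4933   # seconds, from all_results.json
global_step = 1083            # from trainer_state.json
total_train_batch_size = 128  # from the model card

samples_seen = global_step * total_train_batch_size  # 138,624
samples_per_second = samples_seen / train_runtime
steps_per_second = global_step / train_runtime

print(round(samples_per_second, 3))  # matches the reported 0.419
print(round(steps_per_second, 3))    # matches the reported 0.003
```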
runs/Feb17_00-03-01_t25033090/events.out.tfevents.1739721843.t25033090.109000.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d678659d9ed8204e1445bbac4ff7e2527c18df0ac1b80b0979540361d719ab0d
- size 52302
+ oid sha256:4ffdf755459794452d7b45bef8439b93f1edd24c9623e9bb4cc7fef7a3c0e7c6
+ size 52927
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9995385325334564,
+     "total_flos": 1.2183828414859313e+19,
+     "train_loss": 0.9649847933019847,
+     "train_runtime": 330634.4933,
+     "train_samples": 207864,
+     "train_samples_per_second": 0.419,
+     "train_steps_per_second": 0.003
+ }
trainer_state.json ADDED
@@ -0,0 +1,1569 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9995385325334564,
+ "eval_steps": 500,
+ "global_step": 1083,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0009229349330872173,
+ "grad_norm": 0.3153176009654999,
+ "learning_rate": 1.8348623853211011e-06,
+ "loss": 1.16,
+ "step": 1
+ },
+ {
+ "epoch": 0.0046146746654360865,
+ "grad_norm": 0.30069586634635925,
+ "learning_rate": 9.174311926605506e-06,
+ "loss": 1.157,
+ "step": 5
+ },
+ {
+ "epoch": 0.009229349330872173,
+ "grad_norm": 0.24939967691898346,
+ "learning_rate": 1.834862385321101e-05,
+ "loss": 1.1423,
+ "step": 10
+ },
+ {
+ "epoch": 0.01384402399630826,
+ "grad_norm": 0.1986856758594513,
+ "learning_rate": 2.7522935779816515e-05,
+ "loss": 1.1082,
+ "step": 15
+ },
+ {
+ "epoch": 0.018458698661744346,
+ "grad_norm": 0.16805031895637512,
+ "learning_rate": 3.669724770642202e-05,
+ "loss": 1.0915,
+ "step": 20
+ },
+ {
+ "epoch": 0.023073373327180433,
+ "grad_norm": 0.16181856393814087,
+ "learning_rate": 4.587155963302753e-05,
+ "loss": 1.0639,
+ "step": 25
+ },
+ {
+ "epoch": 0.02768804799261652,
+ "grad_norm": 0.13395144045352936,
+ "learning_rate": 5.504587155963303e-05,
+ "loss": 1.021,
+ "step": 30
+ },
+ {
+ "epoch": 0.032302722658052604,
+ "grad_norm": 0.09093035757541656,
+ "learning_rate": 6.422018348623854e-05,
+ "loss": 1.0403,
+ "step": 35
+ },
+ {
+ "epoch": 0.03691739732348869,
+ "grad_norm": 0.08925778418779373,
+ "learning_rate": 7.339449541284404e-05,
+ "loss": 1.0398,
+ "step": 40
+ },
+ {
+ "epoch": 0.04153207198892478,
+ "grad_norm": 0.10212849825620651,
+ "learning_rate": 8.256880733944955e-05,
+ "loss": 1.0122,
+ "step": 45
+ },
+ {
+ "epoch": 0.046146746654360866,
+ "grad_norm": 0.0883544310927391,
+ "learning_rate": 9.174311926605506e-05,
+ "loss": 1.003,
+ "step": 50
+ },
+ {
+ "epoch": 0.050761421319796954,
+ "grad_norm": 0.09017772227525711,
+ "learning_rate": 0.00010091743119266055,
+ "loss": 0.9996,
+ "step": 55
+ },
+ {
+ "epoch": 0.05537609598523304,
+ "grad_norm": 0.08947195112705231,
+ "learning_rate": 0.00011009174311926606,
+ "loss": 1.0092,
+ "step": 60
+ },
+ {
+ "epoch": 0.05999077065066913,
+ "grad_norm": 0.09361358731985092,
+ "learning_rate": 0.00011926605504587157,
+ "loss": 1.0053,
+ "step": 65
+ },
+ {
+ "epoch": 0.06460544531610521,
+ "grad_norm": 0.08942877501249313,
+ "learning_rate": 0.00012844036697247707,
+ "loss": 0.979,
+ "step": 70
+ },
+ {
+ "epoch": 0.0692201199815413,
+ "grad_norm": 0.08816999942064285,
+ "learning_rate": 0.00013761467889908258,
+ "loss": 0.9817,
+ "step": 75
+ },
+ {
+ "epoch": 0.07383479464697738,
+ "grad_norm": 0.10292733460664749,
+ "learning_rate": 0.0001467889908256881,
+ "loss": 0.972,
+ "step": 80
+ },
+ {
+ "epoch": 0.07844946931241348,
+ "grad_norm": 0.08368176966905594,
+ "learning_rate": 0.0001559633027522936,
+ "loss": 0.9869,
+ "step": 85
+ },
+ {
+ "epoch": 0.08306414397784956,
+ "grad_norm": 0.08516489714384079,
+ "learning_rate": 0.0001651376146788991,
+ "loss": 0.9841,
+ "step": 90
+ },
+ {
+ "epoch": 0.08767881864328565,
+ "grad_norm": 0.08280258625745773,
+ "learning_rate": 0.00017431192660550458,
+ "loss": 0.9859,
+ "step": 95
+ },
+ {
+ "epoch": 0.09229349330872173,
+ "grad_norm": 0.08506108820438385,
+ "learning_rate": 0.00018348623853211012,
+ "loss": 0.9913,
+ "step": 100
+ },
+ {
+ "epoch": 0.09690816797415783,
+ "grad_norm": 0.0873042568564415,
+ "learning_rate": 0.0001926605504587156,
+ "loss": 1.014,
+ "step": 105
+ },
+ {
+ "epoch": 0.10152284263959391,
+ "grad_norm": 0.09446421265602112,
+ "learning_rate": 0.00019999947982262415,
+ "loss": 0.9959,
+ "step": 110
+ },
+ {
+ "epoch": 0.10613751730503,
+ "grad_norm": 0.0792991891503334,
+ "learning_rate": 0.00019998127418269004,
+ "loss": 0.9976,
+ "step": 115
+ },
+ {
+ "epoch": 0.11075219197046608,
+ "grad_norm": 0.08645275980234146,
+ "learning_rate": 0.00019993706508539968,
+ "loss": 0.9774,
+ "step": 120
+ },
+ {
+ "epoch": 0.11536686663590216,
+ "grad_norm": 0.07994881272315979,
+ "learning_rate": 0.0001998668640288,
+ "loss": 0.9962,
+ "step": 125
+ },
+ {
+ "epoch": 0.11998154130133826,
+ "grad_norm": 0.07281699776649475,
+ "learning_rate": 0.0001997706892710117,
+ "loss": 0.9682,
+ "step": 130
+ },
+ {
+ "epoch": 0.12459621596677434,
+ "grad_norm": 0.0806204155087471,
+ "learning_rate": 0.00019964856582548092,
+ "loss": 0.9973,
+ "step": 135
+ },
+ {
+ "epoch": 0.12921089063221042,
+ "grad_norm": 0.08152524381875992,
+ "learning_rate": 0.00019950052545447352,
+ "loss": 0.9926,
+ "step": 140
+ },
+ {
+ "epoch": 0.13382556529764653,
+ "grad_norm": 0.07916297018527985,
+ "learning_rate": 0.0001993266066608142,
+ "loss": 0.9681,
+ "step": 145
+ },
+ {
+ "epoch": 0.1384402399630826,
+ "grad_norm": 0.07692616432905197,
+ "learning_rate": 0.00019912685467787257,
+ "loss": 0.9552,
+ "step": 150
+ },
+ {
+ "epoch": 0.1430549146285187,
+ "grad_norm": 0.07867322117090225,
+ "learning_rate": 0.00019890132145779885,
+ "loss": 1.0056,
+ "step": 155
+ },
+ {
+ "epoch": 0.14766958929395477,
+ "grad_norm": 0.07720212638378143,
+ "learning_rate": 0.0001986500656580118,
+ "loss": 0.9603,
+ "step": 160
+ },
+ {
+ "epoch": 0.15228426395939088,
+ "grad_norm": 0.0785161629319191,
+ "learning_rate": 0.00019837315262594306,
+ "loss": 0.9693,
+ "step": 165
+ },
+ {
+ "epoch": 0.15689893862482696,
+ "grad_norm": 0.0887812152504921,
+ "learning_rate": 0.00019807065438204118,
+ "loss": 0.9713,
+ "step": 170
+ },
+ {
+ "epoch": 0.16151361329026304,
+ "grad_norm": 0.07461399585008621,
+ "learning_rate": 0.00019774264960104057,
+ "loss": 0.9561,
+ "step": 175
+ },
+ {
+ "epoch": 0.16612828795569912,
+ "grad_norm": 0.07251787185668945,
+ "learning_rate": 0.00019738922359149926,
+ "loss": 0.9701,
+ "step": 180
+ },
+ {
+ "epoch": 0.1707429626211352,
+ "grad_norm": 0.07772507518529892,
+ "learning_rate": 0.00019701046827361177,
+ "loss": 0.9775,
+ "step": 185
+ },
+ {
+ "epoch": 0.1753576372865713,
+ "grad_norm": 0.07318438589572906,
+ "learning_rate": 0.00019660648215530206,
+ "loss": 0.9585,
+ "step": 190
+ },
+ {
+ "epoch": 0.17997231195200739,
+ "grad_norm": 0.07861252874135971,
+ "learning_rate": 0.00019617737030660338,
+ "loss": 0.9642,
+ "step": 195
+ },
+ {
+ "epoch": 0.18458698661744347,
+ "grad_norm": 0.07000173628330231,
+ "learning_rate": 0.0001957232443323312,
+ "loss": 0.9679,
+ "step": 200
+ },
+ {
+ "epoch": 0.18920166128287955,
+ "grad_norm": 0.07377786934375763,
+ "learning_rate": 0.00019524422234305677,
+ "loss": 0.9579,
+ "step": 205
+ },
+ {
+ "epoch": 0.19381633594831565,
+ "grad_norm": 0.07430287450551987,
+ "learning_rate": 0.0001947404289243885,
+ "loss": 0.9385,
+ "step": 210
+ },
+ {
+ "epoch": 0.19843101061375173,
+ "grad_norm": 0.07597630470991135,
+ "learning_rate": 0.0001942119951045692,
+ "loss": 0.9522,
+ "step": 215
+ },
+ {
+ "epoch": 0.20304568527918782,
+ "grad_norm": 0.07650153338909149,
+ "learning_rate": 0.00019365905832039815,
+ "loss": 0.9766,
+ "step": 220
+ },
+ {
+ "epoch": 0.2076603599446239,
+ "grad_norm": 0.07343017309904099,
+ "learning_rate": 0.00019308176238148564,
+ "loss": 0.979,
+ "step": 225
+ },
+ {
+ "epoch": 0.21227503461006,
+ "grad_norm": 0.0811387151479721,
+ "learning_rate": 0.0001924802574328509,
+ "loss": 0.9575,
+ "step": 230
+ },
+ {
+ "epoch": 0.21688970927549608,
+ "grad_norm": 0.0797664150595665,
+ "learning_rate": 0.00019185469991587166,
+ "loss": 0.9746,
+ "step": 235
+ },
+ {
+ "epoch": 0.22150438394093216,
+ "grad_norm": 0.07318708300590515,
+ "learning_rate": 0.00019120525252759647,
+ "loss": 0.9659,
+ "step": 240
+ },
+ {
+ "epoch": 0.22611905860636825,
+ "grad_norm": 0.07425623387098312,
+ "learning_rate": 0.00019053208417842978,
+ "loss": 0.9644,
+ "step": 245
+ },
+ {
+ "epoch": 0.23073373327180433,
+ "grad_norm": 0.07410068064928055,
+ "learning_rate": 0.0001898353699482014,
+ "loss": 0.9652,
+ "step": 250
+ },
+ {
+ "epoch": 0.23534840793724043,
+ "grad_norm": 0.07423646003007889,
+ "learning_rate": 0.0001891152910406309,
+ "loss": 0.9691,
+ "step": 255
+ },
+ {
+ "epoch": 0.23996308260267651,
+ "grad_norm": 0.06945409625768661,
+ "learning_rate": 0.00018837203473619978,
+ "loss": 0.9739,
+ "step": 260
+ },
+ {
+ "epoch": 0.2445777572681126,
+ "grad_norm": 0.0697169378399849,
+ "learning_rate": 0.0001876057943434428,
+ "loss": 0.9772,
+ "step": 265
+ },
+ {
+ "epoch": 0.24919243193354867,
+ "grad_norm": 0.07200105488300323,
+ "learning_rate": 0.00018681676914867175,
+ "loss": 0.9742,
+ "step": 270
+ },
+ {
+ "epoch": 0.25380710659898476,
+ "grad_norm": 0.0744839757680893,
+ "learning_rate": 0.0001860051643641443,
+ "loss": 0.9613,
+ "step": 275
+ },
+ {
+ "epoch": 0.25842178126442084,
+ "grad_norm": 0.08401075750589371,
+ "learning_rate": 0.0001851711910746919,
+ "loss": 0.9721,
+ "step": 280
+ },
+ {
+ "epoch": 0.26303645592985697,
+ "grad_norm": 0.06904800236225128,
+ "learning_rate": 0.00018431506618282,
+ "loss": 0.9572,
+ "step": 285
+ },
+ {
+ "epoch": 0.26765113059529305,
+ "grad_norm": 0.07013043761253357,
+ "learning_rate": 0.0001834370123522954,
+ "loss": 0.9706,
+ "step": 290
+ },
+ {
+ "epoch": 0.27226580526072913,
+ "grad_norm": 0.07055062055587769,
+ "learning_rate": 0.00018253725795023504,
+ "loss": 0.9612,
+ "step": 295
+ },
+ {
+ "epoch": 0.2768804799261652,
+ "grad_norm": 0.07045809924602509,
+ "learning_rate": 0.0001816160369877117,
+ "loss": 0.9707,
+ "step": 300
+ },
+ {
+ "epoch": 0.2814951545916013,
+ "grad_norm": 0.06949017941951752,
+ "learning_rate": 0.00018067358905889146,
+ "loss": 0.9405,
+ "step": 305
+ },
+ {
+ "epoch": 0.2861098292570374,
+ "grad_norm": 0.06768112629652023,
+ "learning_rate": 0.00017971015927871942,
+ "loss": 0.97,
+ "step": 310
+ },
+ {
+ "epoch": 0.29072450392247345,
+ "grad_norm": 0.06964079290628433,
+ "learning_rate": 0.0001787259982191692,
+ "loss": 0.9581,
+ "step": 315
+ },
+ {
+ "epoch": 0.29533917858790953,
+ "grad_norm": 0.07223650813102722,
+ "learning_rate": 0.00017772136184407365,
+ "loss": 0.9611,
+ "step": 320
+ },
+ {
+ "epoch": 0.2999538532533456,
+ "grad_norm": 0.06840556859970093,
+ "learning_rate": 0.00017669651144255265,
+ "loss": 0.9631,
+ "step": 325
+ },
+ {
+ "epoch": 0.30456852791878175,
+ "grad_norm": 0.07011925429105759,
+ "learning_rate": 0.00017565171356105627,
+ "loss": 0.9817,
+ "step": 330
+ },
+ {
+ "epoch": 0.30918320258421783,
+ "grad_norm": 0.06768331676721573,
+ "learning_rate": 0.00017458723993404065,
+ "loss": 0.9796,
+ "step": 335
+ },
+ {
+ "epoch": 0.3137978772496539,
+ "grad_norm": 0.06805837899446487,
+ "learning_rate": 0.00017350336741329413,
+ "loss": 0.9683,
+ "step": 340
+ },
+ {
+ "epoch": 0.31841255191509,
+ "grad_norm": 0.07038763910531998,
+ "learning_rate": 0.00017240037789593307,
+ "loss": 0.9762,
+ "step": 345
+ },
+ {
+ "epoch": 0.3230272265805261,
+ "grad_norm": 0.0689588338136673,
+ "learning_rate": 0.0001712785582510848,
+ "loss": 0.9537,
+ "step": 350
+ },
+ {
+ "epoch": 0.32764190124596215,
+ "grad_norm": 0.06688909232616425,
+ "learning_rate": 0.00017013820024527798,
+ "loss": 0.9643,
+ "step": 355
+ },
+ {
+ "epoch": 0.33225657591139823,
+ "grad_norm": 0.073676697909832,
+ "learning_rate": 0.00016897960046655886,
+ "loss": 0.9611,
+ "step": 360
+ },
+ {
+ "epoch": 0.3368712505768343,
+ "grad_norm": 0.07165560126304626,
+ "learning_rate": 0.00016780306024735382,
+ "loss": 0.957,
+ "step": 365
+ },
+ {
+ "epoch": 0.3414859252422704,
+ "grad_norm": 0.07094935327768326,
+ "learning_rate": 0.00016660888558609773,
+ "loss": 0.9854,
+ "step": 370
+ },
+ {
+ "epoch": 0.34610059990770653,
+ "grad_norm": 0.06721330434083939,
+ "learning_rate": 0.00016539738706764894,
+ "loss": 0.9648,
+ "step": 375
+ },
+ {
+ "epoch": 0.3507152745731426,
+ "grad_norm": 0.0699782595038414,
+ "learning_rate": 0.00016416887978251135,
+ "loss": 0.9587,
+ "step": 380
+ },
+ {
+ "epoch": 0.3553299492385787,
+ "grad_norm": 0.06915416568517685,
+ "learning_rate": 0.00016292368324488462,
+ "loss": 0.9395,
+ "step": 385
+ },
+ {
+ "epoch": 0.35994462390401477,
+ "grad_norm": 0.06952769309282303,
+ "learning_rate": 0.00016166212130956382,
+ "loss": 0.9476,
+ "step": 390
+ },
+ {
+ "epoch": 0.36455929856945085,
+ "grad_norm": 0.06895878911018372,
+ "learning_rate": 0.00016038452208771037,
+ "loss": 0.9725,
+ "step": 395
+ },
+ {
+ "epoch": 0.36917397323488693,
+ "grad_norm": 0.06924083083868027,
+ "learning_rate": 0.00015909121786151568,
+ "loss": 0.9541,
+ "step": 400
+ },
+ {
+ "epoch": 0.373788647900323,
+ "grad_norm": 0.06652365624904633,
+ "learning_rate": 0.00015778254499778006,
+ "loss": 0.9592,
+ "step": 405
+ },
+ {
+ "epoch": 0.3784033225657591,
+ "grad_norm": 0.06965645402669907,
+ "learning_rate": 0.00015645884386042958,
+ "loss": 0.9484,
+ "step": 410
+ },
+ {
+ "epoch": 0.3830179972311952,
+ "grad_norm": 0.0711289793252945,
+ "learning_rate": 0.00015512045872199276,
+ "loss": 0.9395,
+ "step": 415
+ },
+ {
+ "epoch": 0.3876326718966313,
+ "grad_norm": 0.0690288171172142,
+ "learning_rate": 0.00015376773767406142,
+ "loss": 0.983,
+ "step": 420
+ },
+ {
+ "epoch": 0.3922473465620674,
+ "grad_norm": 0.0725955069065094,
+ "learning_rate": 0.00015240103253675756,
+ "loss": 0.956,
+ "step": 425
+ },
+ {
+ "epoch": 0.39686202122750347,
+ "grad_norm": 0.07096763700246811,
+ "learning_rate": 0.00015102069876723098,
+ "loss": 0.9638,
+ "step": 430
+ },
+ {
+ "epoch": 0.40147669589293955,
+ "grad_norm": 0.06781638413667679,
+ "learning_rate": 0.00014962709536721087,
+ "loss": 0.9413,
+ "step": 435
+ },
+ {
+ "epoch": 0.40609137055837563,
+ "grad_norm": 0.06911034137010574,
+ "learning_rate": 0.00014822058478963532,
+ "loss": 0.9576,
+ "step": 440
+ },
+ {
+ "epoch": 0.4107060452238117,
+ "grad_norm": 0.07025574147701263,
+ "learning_rate": 0.00014680153284438345,
+ "loss": 0.961,
+ "step": 445
+ },
+ {
+ "epoch": 0.4153207198892478,
+ "grad_norm": 0.07122451812028885,
+ "learning_rate": 0.00014537030860313442,
+ "loss": 0.9678,
+ "step": 450
+ },
+ {
+ "epoch": 0.41993539455468387,
+ "grad_norm": 0.06810062378644943,
+ "learning_rate": 0.000143927284303378,
+ "loss": 0.9497,
+ "step": 455
+ },
+ {
+ "epoch": 0.42455006922012,
+ "grad_norm": 0.06889387220144272,
+ "learning_rate": 0.00014247283525160178,
+ "loss": 0.9461,
+ "step": 460
+ },
+ {
+ "epoch": 0.4291647438855561,
+ "grad_norm": 0.06921575218439102,
+ "learning_rate": 0.00014100733972568038,
+ "loss": 0.9625,
+ "step": 465
+ },
+ {
+ "epoch": 0.43377941855099217,
+ "grad_norm": 0.06750404834747314,
+ "learning_rate": 0.00013953117887649153,
+ "loss": 0.9678,
+ "step": 470
+ },
+ {
+ "epoch": 0.43839409321642825,
+ "grad_norm": 0.07006347179412842,
+ "learning_rate": 0.00013804473662878519,
+ "loss": 0.9646,
+ "step": 475
+ },
+ {
+ "epoch": 0.44300876788186433,
+ "grad_norm": 0.06842590123414993,
+ "learning_rate": 0.00013654839958133117,
+ "loss": 0.956,
+ "step": 480
+ },
+ {
+ "epoch": 0.4476234425473004,
+ "grad_norm": 0.07282765209674835,
+ "learning_rate": 0.0001350425569063712,
+ "loss": 0.9787,
+ "step": 485
+ },
+ {
+ "epoch": 0.4522381172127365,
+ "grad_norm": 0.06680766493082047,
+ "learning_rate": 0.00013352760024840175,
+ "loss": 0.9544,
+ "step": 490
+ },
+ {
+ "epoch": 0.45685279187817257,
+ "grad_norm": 0.0665644034743309,
+ "learning_rate": 0.00013200392362231383,
+ "loss": 0.9397,
+ "step": 495
+ },
+ {
+ "epoch": 0.46146746654360865,
+ "grad_norm": 0.06751461327075958,
+ "learning_rate": 0.00013047192331091636,
+ "loss": 0.974,
+ "step": 500
+ },
+ {
+ "epoch": 0.4660821412090448,
+ "grad_norm": 0.06943210959434509,
+ "learning_rate": 0.00012893199776186956,
+ "loss": 0.9487,
+ "step": 505
+ },
+ {
+ "epoch": 0.47069681587448087,
+ "grad_norm": 0.06830769777297974,
+ "learning_rate": 0.0001273845474840555,
+ "loss": 0.9614,
+ "step": 510
+ },
+ {
+ "epoch": 0.47531149053991695,
+ "grad_norm": 0.06783465296030045,
+ "learning_rate": 0.0001258299749434123,
+ "loss": 0.9716,
+ "step": 515
+ },
+ {
+ "epoch": 0.47992616520535303,
+ "grad_norm": 0.07035640627145767,
+ "learning_rate": 0.00012426868445825954,
+ "loss": 0.9639,
+ "step": 520
+ },
+ {
+ "epoch": 0.4845408398707891,
+ "grad_norm": 0.06989864259958267,
+ "learning_rate": 0.00012270108209414186,
+ "loss": 0.9483,
+ "step": 525
+ },
+ {
+ "epoch": 0.4891555145362252,
+ "grad_norm": 0.06680671125650406,
+ "learning_rate": 0.00012112757555821797,
+ "loss": 0.9596,
+ "step": 530
+ },
+ {
+ "epoch": 0.49377018920166127,
+ "grad_norm": 0.07067760825157166,
+ "learning_rate": 0.00011954857409322302,
+ "loss": 0.9449,
+ "step": 535
+ },
+ {
+ "epoch": 0.49838486386709735,
+ "grad_norm": 0.06822020560503006,
+ "learning_rate": 0.00011796448837103129,
+ "loss": 0.9558,
+ "step": 540
+ },
+ {
+ "epoch": 0.5029995385325334,
+ "grad_norm": 0.06769659370183945,
+ "learning_rate": 0.00011637573038584729,
+ "loss": 0.9458,
+ "step": 545
+ },
+ {
+ "epoch": 0.5076142131979695,
+ "grad_norm": 0.06936723738908768,
+ "learning_rate": 0.00011478271334705302,
+ "loss": 0.9504,
+ "step": 550
+ },
+ {
+ "epoch": 0.5122288878634056,
+ "grad_norm": 0.07083772867918015,
+ "learning_rate": 0.00011318585157173913,
+ "loss": 0.9628,
+ "step": 555
+ },
+ {
+ "epoch": 0.5168435625288417,
+ "grad_norm": 0.07207299768924713,
+ "learning_rate": 0.0001115855603769479,
+ "loss": 0.9492,
+ "step": 560
+ },
+ {
+ "epoch": 0.5214582371942778,
+ "grad_norm": 0.06986602395772934,
+ "learning_rate": 0.00010998225597165628,
+ "loss": 0.9507,
+ "step": 565
+ },
+ {
+ "epoch": 0.5260729118597139,
+ "grad_norm": 0.07052252441644669,
+ "learning_rate": 0.00010837635534852686,
+ "loss": 0.9497,
+ "step": 570
+ },
+ {
+ "epoch": 0.53068758652515,
+ "grad_norm": 0.07066074013710022,
+ "learning_rate": 0.00010676827617545511,
+ "loss": 0.9506,
+ "step": 575
+ },
+ {
+ "epoch": 0.5353022611905861,
+ "grad_norm": 0.07176294177770615,
+ "learning_rate": 0.00010515843668694085,
+ "loss": 0.9501,
+ "step": 580
+ },
+ {
+ "epoch": 0.5399169358560222,
+ "grad_norm": 0.07183931022882462,
+ "learning_rate": 0.00010354725557531257,
+ "loss": 0.9514,
+ "step": 585
+ },
+ {
+ "epoch": 0.5445316105214583,
+ "grad_norm": 0.06932860612869263,
+ "learning_rate": 0.00010193515188183245,
+ "loss": 0.9452,
+ "step": 590
+ },
+ {
+ "epoch": 0.5491462851868943,
+ "grad_norm": 0.06957747042179108,
+ "learning_rate": 0.0001003225448877108,
+ "loss": 0.9649,
+ "step": 595
+ },
+ {
+ "epoch": 0.5537609598523304,
+ "grad_norm": 0.06706763803958893,
+ "learning_rate": 9.870985400505804e-05,
+ "loss": 0.9468,
+ "step": 600
+ },
+ {
+ "epoch": 0.5583756345177665,
+ "grad_norm": 0.06936302781105042,
+ "learning_rate": 9.709749866780248e-05,
+ "loss": 0.9569,
+ "step": 605
+ },
+ {
+ "epoch": 0.5629903091832026,
+ "grad_norm": 0.06822703033685684,
+ "learning_rate": 9.548589822260281e-05,
+ "loss": 0.9672,
+ "step": 610
+ },
+ {
+ "epoch": 0.5676049838486387,
+ "grad_norm": 0.06881389766931534,
+ "learning_rate": 9.387547181978291e-05,
+ "loss": 0.9493,
+ "step": 615
+ },
+ {
+ "epoch": 0.5722196585140747,
+ "grad_norm": 0.06797238439321518,
+ "learning_rate": 9.226663830431777e-05,
+ "loss": 0.9658,
+ "step": 620
+ },
+ {
+ "epoch": 0.5768343331795108,
+ "grad_norm": 0.06790480017662048,
+ "learning_rate": 9.065981610689914e-05,
+ "loss": 0.9506,
+ "step": 625
+ },
+ {
+ "epoch": 0.5814490078449469,
+ "grad_norm": 0.06801264733076096,
+ "learning_rate": 8.905542313510846e-05,
+ "loss": 0.9552,
+ "step": 630
+ },
+ {
+ "epoch": 0.586063682510383,
+ "grad_norm": 0.07247908413410187,
+ "learning_rate": 8.745387666472637e-05,
+ "loss": 0.9632,
+ "step": 635
+ },
+ {
+ "epoch": 0.5906783571758191,
+ "grad_norm": 0.06877297908067703,
+ "learning_rate": 8.58555932312059e-05,
+ "loss": 0.9652,
+ "step": 640
+ },
+ {
+ "epoch": 0.5952930318412551,
+ "grad_norm": 0.06959784030914307,
+ "learning_rate": 8.426098852133892e-05,
+ "loss": 0.9436,
+ "step": 645
+ },
+ {
+ "epoch": 0.5999077065066912,
+ "grad_norm": 0.06700550764799118,
+ "learning_rate": 8.267047726514278e-05,
+ "loss": 0.9644,
+ "step": 650
+ },
+ {
+ "epoch": 0.6045223811721273,
+ "grad_norm": 0.06979133933782578,
+ "learning_rate": 8.108447312799587e-05,
+ "loss": 0.9603,
+ "step": 655
+ },
+ {
+ "epoch": 0.6091370558375635,
+ "grad_norm": 0.06916554272174835,
+ "learning_rate": 7.950338860305048e-05,
+ "loss": 0.9479,
+ "step": 660
+ },
+ {
+ "epoch": 0.6137517305029996,
+ "grad_norm": 0.0692073404788971,
+ "learning_rate": 7.792763490394984e-05,
+ "loss": 0.9583,
+ "step": 665
+ },
+ {
+ "epoch": 0.6183664051684357,
+ "grad_norm": 0.06973334401845932,
+ "learning_rate": 7.635762185787868e-05,
+ "loss": 0.9598,
+ "step": 670
+ },
+ {
+ "epoch": 0.6229810798338717,
+ "grad_norm": 0.069077268242836,
+ "learning_rate": 7.479375779897379e-05,
+ "loss": 0.9508,
+ "step": 675
+ },
+ {
+ "epoch": 0.6275957544993078,
+ "grad_norm": 0.06755080074071884,
+ "learning_rate": 7.323644946212331e-05,
+ "loss": 0.9538,
+ "step": 680
+ },
+ {
+ "epoch": 0.6322104291647439,
+ "grad_norm": 0.06810387223958969,
+ "learning_rate": 7.168610187718164e-05,
+ "loss": 0.9563,
+ "step": 685
+ },
+ {
+ "epoch": 0.63682510383018,
+ "grad_norm": 0.06826294958591461,
+ "learning_rate": 7.014311826362804e-05,
+ "loss": 0.951,
+ "step": 690
+ },
+ {
+ "epoch": 0.6414397784956161,
+ "grad_norm": 0.07014641910791397,
+ "learning_rate": 6.8607899925696e-05,
+ "loss": 0.9669,
+ "step": 695
+ },
+ {
+ "epoch": 0.6460544531610521,
+ "grad_norm": 0.0688079446554184,
+ "learning_rate": 6.708084614800064e-05,
+ "loss": 0.9361,
+ "step": 700
+ },
+ {
+ "epoch": 0.6506691278264882,
+ "grad_norm": 0.07005127519369125,
+ "learning_rate": 6.556235409169154e-05,
+ "loss": 0.9337,
+ "step": 705
+ },
+ {
+ "epoch": 0.6552838024919243,
+ "grad_norm": 0.06724046170711517,
+ "learning_rate": 6.405281869115768e-05,
+ "loss": 0.9404,
+ "step": 710
+ },
+ {
+ "epoch": 0.6598984771573604,
+ "grad_norm": 0.07361240684986115,
+ "learning_rate": 6.255263255131172e-05,
+ "loss": 0.9527,
+ "step": 715
+ },
+ {
+ "epoch": 0.6645131518227965,
+ "grad_norm": 0.06800534576177597,
+ "learning_rate": 6.106218584547991e-05,
+ "loss": 0.9531,
+ "step": 720
+ },
+ {
+ "epoch": 0.6691278264882325,
+ "grad_norm": 0.0687314122915268,
+ "learning_rate": 5.9581866213924656e-05,
+ "loss": 0.9421,
+ "step": 725
+ },
+ {
+ "epoch": 0.6737425011536686,
+ "grad_norm": 0.06845971941947937,
+ "learning_rate": 5.8112058663025706e-05,
+ "loss": 0.9455,
+ "step": 730
+ },
+ {
+ "epoch": 0.6783571758191047,
+ "grad_norm": 0.06654047966003418,
+ "learning_rate": 5.665314546514633e-05,
+ "loss": 0.9514,
+ "step": 735
+ },
+ {
+ "epoch": 0.6829718504845408,
+ "grad_norm": 0.069442018866539,
+ "learning_rate": 5.520550605921091e-05,
+ "loss": 0.9537,
+ "step": 740
+ },
+ {
+ "epoch": 0.687586525149977,
+ "grad_norm": 0.0674603134393692,
+ "learning_rate": 5.376951695201894e-05,
+ "loss": 0.9536,
+ "step": 745
+ },
+ {
+ "epoch": 0.6922011998154131,
+ "grad_norm": 0.06752391904592514,
+ "learning_rate": 5.234555162032221e-05,
+ "loss": 0.9477,
+ "step": 750
+ },
+ {
+ "epoch": 0.6968158744808491,
+ "grad_norm": 0.07034114748239517,
+ "learning_rate": 5.093398041368942e-05,
+ "loss": 0.9637,
+ "step": 755
+ },
+ {
+ "epoch": 0.7014305491462852,
+ "grad_norm": 0.06883241981267929,
+ "learning_rate": 4.953517045818473e-05,
+ "loss": 0.9556,
+ "step": 760
+ },
+ {
+ "epoch": 0.7060452238117213,
+ "grad_norm": 0.06869496405124664,
+ "learning_rate": 4.81494855608843
1086
+ "loss": 0.9581,
1087
+ "step": 765
1088
+ },
1089
+ {
1090
+ "epoch": 0.7106598984771574,
1091
+ "grad_norm": 0.07270540297031403,
1092
+ "learning_rate": 4.677728611525605e-05,
1093
+ "loss": 0.9503,
1094
+ "step": 770
1095
+ },
1096
+ {
1097
+ "epoch": 0.7152745731425935,
1098
+ "grad_norm": 0.06813713908195496,
1099
+ "learning_rate": 4.541892900742757e-05,
1100
+ "loss": 0.9428,
1101
+ "step": 775
1102
+ },
1103
+ {
1104
+ "epoch": 0.7198892478080295,
1105
+ "grad_norm": 0.06918053328990936,
1106
+ "learning_rate": 4.407476752336576e-05,
1107
+ "loss": 0.9433,
1108
+ "step": 780
1109
+ },
1110
+ {
1111
+ "epoch": 0.7245039224734656,
1112
+ "grad_norm": 0.06645094603300095,
1113
+ "learning_rate": 4.274515125699332e-05,
1114
+ "loss": 0.9444,
1115
+ "step": 785
1116
+ },
1117
+ {
1118
+ "epoch": 0.7291185971389017,
1119
+ "grad_norm": 0.06784369051456451,
1120
+ "learning_rate": 4.1430426019264924e-05,
1121
+ "loss": 0.9558,
1122
+ "step": 790
1123
+ },
1124
+ {
1125
+ "epoch": 0.7337332718043378,
1126
+ "grad_norm": 0.0678802952170372,
1127
+ "learning_rate": 4.0130933748227885e-05,
1128
+ "loss": 0.938,
1129
+ "step": 795
1130
+ },
1131
+ {
1132
+ "epoch": 0.7383479464697739,
1133
+ "grad_norm": 0.06705697625875473,
1134
+ "learning_rate": 3.884701242008949e-05,
1135
+ "loss": 0.9309,
1136
+ "step": 800
1137
+ },
1138
+ {
1139
+ "epoch": 0.7429626211352099,
1140
+ "grad_norm": 0.06846518814563751,
1141
+ "learning_rate": 3.757899596131529e-05,
1142
+ "loss": 0.9602,
1143
+ "step": 805
1144
+ },
1145
+ {
1146
+ "epoch": 0.747577295800646,
1147
+ "grad_norm": 0.07015591859817505,
1148
+ "learning_rate": 3.632721416178029e-05,
1149
+ "loss": 0.9831,
1150
+ "step": 810
1151
+ },
1152
+ {
1153
+ "epoch": 0.7521919704660821,
1154
+ "grad_norm": 0.06770645827054977,
1155
+ "learning_rate": 3.509199258899603e-05,
1156
+ "loss": 0.9638,
1157
+ "step": 815
1158
+ },
1159
+ {
1160
+ "epoch": 0.7568066451315182,
1161
+ "grad_norm": 0.07250072807073593,
1162
+ "learning_rate": 3.387365250343615e-05,
1163
+ "loss": 0.9607,
1164
+ "step": 820
1165
+ },
1166
+ {
1167
+ "epoch": 0.7614213197969543,
1168
+ "grad_norm": 0.06818177551031113,
1169
+ "learning_rate": 3.267251077498169e-05,
1170
+ "loss": 0.9445,
1171
+ "step": 825
1172
+ },
1173
+ {
1174
+ "epoch": 0.7660359944623903,
1175
+ "grad_norm": 0.06822264194488525,
1176
+ "learning_rate": 3.148887980050872e-05,
1177
+ "loss": 0.9521,
1178
+ "step": 830
1179
+ },
1180
+ {
1181
+ "epoch": 0.7706506691278265,
1182
+ "grad_norm": 0.06892835348844528,
1183
+ "learning_rate": 3.0323067422638908e-05,
1184
+ "loss": 0.9624,
1185
+ "step": 835
1186
+ },
1187
+ {
1188
+ "epoch": 0.7752653437932626,
1189
+ "grad_norm": 0.06881006807088852,
1190
+ "learning_rate": 2.9175376849675073e-05,
1191
+ "loss": 0.9773,
1192
+ "step": 840
1193
+ },
1194
+ {
1195
+ "epoch": 0.7798800184586987,
1196
+ "grad_norm": 0.06809905171394348,
1197
+ "learning_rate": 2.8046106576741605e-05,
1198
+ "loss": 0.9462,
1199
+ "step": 845
1200
+ },
1201
+ {
1202
+ "epoch": 0.7844946931241348,
1203
+ "grad_norm": 0.06759600341320038,
1204
+ "learning_rate": 2.6935550308150847e-05,
1205
+ "loss": 0.9488,
1206
+ "step": 850
1207
+ },
1208
+ {
1209
+ "epoch": 0.7891093677895709,
1210
+ "grad_norm": 0.06885316222906113,
1211
+ "learning_rate": 2.5843996881015676e-05,
1212
+ "loss": 0.9526,
1213
+ "step": 855
1214
+ },
1215
+ {
1216
+ "epoch": 0.7937240424550069,
1217
+ "grad_norm": 0.06799903512001038,
1218
+ "learning_rate": 2.4771730190127618e-05,
1219
+ "loss": 0.9446,
1220
+ "step": 860
1221
+ },
1222
+ {
1223
+ "epoch": 0.798338717120443,
1224
+ "grad_norm": 0.06789611279964447,
1225
+ "learning_rate": 2.3719029114120716e-05,
1226
+ "loss": 0.949,
1227
+ "step": 865
1228
+ },
1229
+ {
1230
+ "epoch": 0.8029533917858791,
1231
+ "grad_norm": 0.06740868836641312,
1232
+ "learning_rate": 2.268616744293973e-05,
1233
+ "loss": 0.9531,
1234
+ "step": 870
1235
+ },
1236
+ {
1237
+ "epoch": 0.8075680664513152,
1238
+ "grad_norm": 0.06866241991519928,
1239
+ "learning_rate": 2.1673413806632102e-05,
1240
+ "loss": 0.9534,
1241
+ "step": 875
1242
+ },
1243
+ {
1244
+ "epoch": 0.8121827411167513,
1245
+ "grad_norm": 0.06787573546171188,
1246
+ "learning_rate": 2.068103160548156e-05,
1247
+ "loss": 0.9473,
1248
+ "step": 880
1249
+ },
1250
+ {
1251
+ "epoch": 0.8167974157821873,
1252
+ "grad_norm": 0.07063360512256622,
1253
+ "learning_rate": 1.9709278941502363e-05,
1254
+ "loss": 0.9486,
1255
+ "step": 885
1256
+ },
1257
+ {
1258
+ "epoch": 0.8214120904476234,
1259
+ "grad_norm": 0.07002748548984528,
1260
+ "learning_rate": 1.8758408551311047e-05,
1261
+ "loss": 0.943,
1262
+ "step": 890
1263
+ },
1264
+ {
1265
+ "epoch": 0.8260267651130595,
1266
+ "grad_norm": 0.07106012105941772,
1267
+ "learning_rate": 1.7828667740394044e-05,
1268
+ "loss": 0.9662,
1269
+ "step": 895
1270
+ },
1271
+ {
1272
+ "epoch": 0.8306414397784956,
1273
+ "grad_norm": 0.06852705031633377,
1274
+ "learning_rate": 1.692029831878753e-05,
1275
+ "loss": 0.9505,
1276
+ "step": 900
1277
+ },
1278
+ {
1279
+ "epoch": 0.8352561144439317,
1280
+ "grad_norm": 0.07118076831102371,
1281
+ "learning_rate": 1.6033536538186778e-05,
1282
+ "loss": 0.9553,
1283
+ "step": 905
1284
+ },
1285
+ {
1286
+ "epoch": 0.8398707891093677,
1287
+ "grad_norm": 0.06763019412755966,
1288
+ "learning_rate": 1.5168613030500923e-05,
1289
+ "loss": 0.9445,
1290
+ "step": 910
1291
+ },
1292
+ {
1293
+ "epoch": 0.8444854637748038,
1294
+ "grad_norm": 0.06864377856254578,
1295
+ "learning_rate": 1.4325752747869626e-05,
1296
+ "loss": 0.9553,
1297
+ "step": 915
1298
+ },
1299
+ {
1300
+ "epoch": 0.84910013844024,
1301
+ "grad_norm": 0.06862477213144302,
1302
+ "learning_rate": 1.3505174904156593e-05,
1303
+ "loss": 0.948,
1304
+ "step": 920
1305
+ },
1306
+ {
1307
+ "epoch": 0.8537148131056761,
1308
+ "grad_norm": 0.06800781190395355,
1309
+ "learning_rate": 1.2707092917935914e-05,
1310
+ "loss": 0.9604,
1311
+ "step": 925
1312
+ },
1313
+ {
1314
+ "epoch": 0.8583294877711122,
1315
+ "grad_norm": 0.06934888660907745,
1316
+ "learning_rate": 1.1931714356985257e-05,
1317
+ "loss": 0.9487,
1318
+ "step": 930
1319
+ },
1320
+ {
1321
+ "epoch": 0.8629441624365483,
1322
+ "grad_norm": 0.06795363873243332,
1323
+ "learning_rate": 1.1179240884301156e-05,
1324
+ "loss": 0.94,
1325
+ "step": 935
1326
+ },
1327
+ {
1328
+ "epoch": 0.8675588371019843,
1329
+ "grad_norm": 0.06862738728523254,
1330
+ "learning_rate": 1.0449868205649649e-05,
1331
+ "loss": 0.9314,
1332
+ "step": 940
1333
+ },
1334
+ {
1335
+ "epoch": 0.8721735117674204,
1336
+ "grad_norm": 0.06817808747291565,
1337
+ "learning_rate": 9.74378601866669e-06,
1338
+ "loss": 0.9552,
1339
+ "step": 945
1340
+ },
1341
+ {
1342
+ "epoch": 0.8767881864328565,
1343
+ "grad_norm": 0.06724893301725388,
1344
+ "learning_rate": 9.061177963520751e-06,
1345
+ "loss": 0.9624,
1346
+ "step": 950
1347
+ },
1348
+ {
1349
+ "epoch": 0.8814028610982926,
1350
+ "grad_norm": 0.06659146398305893,
1351
+ "learning_rate": 8.402221575151238e-06,
1352
+ "loss": 0.9275,
1353
+ "step": 955
1354
+ },
1355
+ {
1356
+ "epoch": 0.8860175357637287,
1357
+ "grad_norm": 0.06741462647914886,
1358
+ "learning_rate": 7.767088237094577e-06,
1359
+ "loss": 0.9576,
1360
+ "step": 960
1361
+ },
1362
+ {
1363
+ "epoch": 0.8906322104291647,
1364
+ "grad_norm": 0.06838846951723099,
1365
+ "learning_rate": 7.155943136910193e-06,
1366
+ "loss": 0.9583,
1367
+ "step": 965
1368
+ },
1369
+ {
1370
+ "epoch": 0.8952468850946008,
1371
+ "grad_norm": 0.06980287283658981,
1372
+ "learning_rate": 6.5689452232180485e-06,
1373
+ "loss": 0.9594,
1374
+ "step": 970
1375
+ },
1376
+ {
1377
+ "epoch": 0.8998615597600369,
1378
+ "grad_norm": 0.0681188777089119,
1379
+ "learning_rate": 6.00624716435868e-06,
1380
+ "loss": 0.9361,
1381
+ "step": 975
1382
+ },
1383
+ {
1384
+ "epoch": 0.904476234425473,
1385
+ "grad_norm": 0.06910879909992218,
1386
+ "learning_rate": 5.467995308686813e-06,
1387
+ "loss": 0.9675,
1388
+ "step": 980
1389
+ },
1390
+ {
1391
+ "epoch": 0.9090909090909091,
1392
+ "grad_norm": 0.07297144830226898,
1393
+ "learning_rate": 4.954329646508505e-06,
1394
+ "loss": 0.9525,
1395
+ "step": 985
1396
+ },
1397
+ {
1398
+ "epoch": 0.9137055837563451,
1399
+ "grad_norm": 0.06912152469158173,
1400
+ "learning_rate": 4.465383773672127e-06,
1401
+ "loss": 0.9597,
1402
+ "step": 990
1403
+ },
1404
+ {
1405
+ "epoch": 0.9183202584217812,
1406
+ "grad_norm": 0.06780077517032623,
1407
+ "learning_rate": 4.001284856822174e-06,
1408
+ "loss": 0.9572,
1409
+ "step": 995
1410
+ },
1411
+ {
1412
+ "epoch": 0.9229349330872173,
1413
+ "grad_norm": 0.0673639103770256,
1414
+ "learning_rate": 3.562153600325491e-06,
1415
+ "loss": 0.9394,
1416
+ "step": 1000
1417
+ },
1418
+ {
1419
+ "epoch": 0.9275496077526535,
1420
+ "grad_norm": 0.06862404197454453,
1421
+ "learning_rate": 3.1481042148779672e-06,
1422
+ "loss": 0.9654,
1423
+ "step": 1005
1424
+ },
1425
+ {
1426
+ "epoch": 0.9321642824180896,
1427
+ "grad_norm": 0.06896745413541794,
1428
+ "learning_rate": 2.7592443878003195e-06,
1429
+ "loss": 0.963,
1430
+ "step": 1010
1431
+ },
1432
+ {
1433
+ "epoch": 0.9367789570835257,
1434
+ "grad_norm": 0.06962341070175171,
1435
+ "learning_rate": 2.395675255030383e-06,
1436
+ "loss": 0.9473,
1437
+ "step": 1015
1438
+ },
1439
+ {
1440
+ "epoch": 0.9413936317489617,
1441
+ "grad_norm": 0.06696717441082001,
1442
+ "learning_rate": 2.0574913748193647e-06,
1443
+ "loss": 0.9493,
1444
+ "step": 1020
1445
+ },
1446
+ {
1447
+ "epoch": 0.9460083064143978,
1448
+ "grad_norm": 0.06831862032413483,
1449
+ "learning_rate": 1.7447807031388264e-06,
1450
+ "loss": 0.9504,
1451
+ "step": 1025
1452
+ },
1453
+ {
1454
+ "epoch": 0.9506229810798339,
1455
+ "grad_norm": 0.07115928083658218,
1456
+ "learning_rate": 1.457624570804772e-06,
1457
+ "loss": 0.943,
1458
+ "step": 1030
1459
+ },
1460
+ {
1461
+ "epoch": 0.95523765574527,
1462
+ "grad_norm": 0.06887007504701614,
1463
+ "learning_rate": 1.196097662324902e-06,
1464
+ "loss": 0.9679,
1465
+ "step": 1035
1466
+ },
1467
+ {
1468
+ "epoch": 0.9598523304107061,
1469
+ "grad_norm": 0.06956353038549423,
1470
+ "learning_rate": 9.602679964744288e-07,
1471
+ "loss": 0.9413,
1472
+ "step": 1040
1473
+ },
1474
+ {
1475
+ "epoch": 0.9644670050761421,
1476
+ "grad_norm": 0.06954739987850189,
1477
+ "learning_rate": 7.501969086054717e-07,
1478
+ "loss": 0.9497,
1479
+ "step": 1045
1480
+ },
1481
+ {
1482
+ "epoch": 0.9690816797415782,
1483
+ "grad_norm": 0.06847439706325531,
1484
+ "learning_rate": 5.659390346948179e-07,
1485
+ "loss": 0.9641,
1486
+ "step": 1050
1487
+ },
1488
+ {
1489
+ "epoch": 0.9736963544070143,
1490
+ "grad_norm": 0.06772764027118683,
1491
+ "learning_rate": 4.075422971340115e-07,
1492
+ "loss": 0.9616,
1493
+ "step": 1055
1494
+ },
1495
+ {
1496
+ "epoch": 0.9783110290724504,
1497
+ "grad_norm": 0.06802436709403992,
1498
+ "learning_rate": 2.7504789226548977e-07,
1499
+ "loss": 0.953,
1500
+ "step": 1060
1501
+ },
1502
+ {
1503
+ "epoch": 0.9829257037378865,
1504
+ "grad_norm": 0.06744276732206345,
1505
+ "learning_rate": 1.6849027966816532e-07,
1506
+ "loss": 0.9489,
1507
+ "step": 1065
1508
+ },
1509
+ {
1510
+ "epoch": 0.9875403784033225,
1511
+ "grad_norm": 0.06742815673351288,
1512
+ "learning_rate": 8.789717319505065e-08,
1513
+ "loss": 0.9593,
1514
+ "step": 1070
1515
+ },
1516
+ {
1517
+ "epoch": 0.9921550530687586,
1518
+ "grad_norm": 0.07070456445217133,
1519
+ "learning_rate": 3.328953376530164e-08,
1520
+ "loss": 0.9528,
1521
+ "step": 1075
1522
+ },
1523
+ {
1524
+ "epoch": 0.9967697277341947,
1525
+ "grad_norm": 0.06779249012470245,
1526
+ "learning_rate": 4.6815639127006925e-09,
1527
+ "loss": 0.9468,
1528
+ "step": 1080
1529
+ },
1530
+ {
1531
+ "epoch": 0.9995385325334564,
1532
+ "eval_loss": 0.9562498331069946,
1533
+ "eval_runtime": 12105.6627,
1534
+ "eval_samples_per_second": 1.268,
1535
+ "eval_steps_per_second": 1.268,
1536
+ "step": 1083
1537
+ },
1538
+ {
1539
+ "epoch": 0.9995385325334564,
1540
+ "step": 1083,
1541
+ "total_flos": 1.2183828414859313e+19,
1542
+ "train_loss": 0.9649847933019847,
1543
+ "train_runtime": 330634.4933,
1544
+ "train_samples_per_second": 0.419,
1545
+ "train_steps_per_second": 0.003
1546
+ }
1547
+ ],
1548
+ "logging_steps": 5,
1549
+ "max_steps": 1083,
1550
+ "num_input_tokens_seen": 0,
1551
+ "num_train_epochs": 1,
1552
+ "save_steps": 100,
1553
+ "stateful_callbacks": {
1554
+ "TrainerControl": {
1555
+ "args": {
1556
+ "should_epoch_stop": false,
1557
+ "should_evaluate": false,
1558
+ "should_log": false,
1559
+ "should_save": true,
1560
+ "should_training_stop": true
1561
+ },
1562
+ "attributes": {}
1563
+ }
1564
+ },
1565
+ "total_flos": 1.2183828414859313e+19,
1566
+ "train_batch_size": 2,
1567
+ "trial_name": null,
1568
+ "trial_params": null
1569
+ }