qingyangzhang committed on
Commit 0491de0 · verified · 1 Parent(s): ea7d1c4

Model save

README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: transformers
+ model_name: Qwen2.5-3B-GRPO-Natural-Reasoning
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-3B-GRPO-Natural-Reasoning
+
+ This model is a fine-tuned version of [None](https://huggingface.co/None).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="qingyangzhang/Qwen2.5-3B-GRPO-Natural-Reasoning", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zqyoung1127-tianjin-university/huggingface/runs/o8d2wzv6)
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
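The core of GRPO is a group-relative advantage: each prompt's sampled completions are scored, and rewards are standardized within that group. A toy illustration of that normalization step (not this repository's actual training code):

```python
# Toy illustration of GRPO's group-relative advantage (an assumption-level
# sketch, not this repository's training code): rewards for a group of
# completions sampled from the same prompt are standardized within the group.
def group_relative_advantages(rewards):
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    if std == 0:
        # All completions scored identically: no learning signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# A group where one of four sampled answers earned the accuracy reward:
advantages = group_relative_advantages([1.0, 0.0, 0.0, 0.0])
```

The correct completion gets a positive advantage and the rest share a matching negative one, so the group's advantages sum to zero.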
+
+ ### Framework versions
+
+ - TRL: 0.14.0
+ - Transformers: 4.48.3
+ - Pytorch: 2.5.1
+ - Datasets: 3.1.0
+ - Tokenizers: 0.21.0
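Assuming the usual PyPI package names (PyTorch ships as `torch`), the versions above correspond to a requirements pin along these lines:

```
trl==0.14.0
transformers==4.48.3
torch==2.5.1
datasets==3.1.0
tokenizers==0.21.0
```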
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+ title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+ author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+ year = 2024,
+ eprint = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 2.3229669920965534e-08,
+ "train_runtime": 35950.9286,
+ "train_samples": 12058,
+ "train_samples_per_second": 0.335,
+ "train_steps_per_second": 0.003
+ }
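The reported throughput is consistent with the sample count and runtime in this file; a quick sanity check:

```python
# Sanity check: train_samples_per_second in all_results.json follows directly
# from the logged sample count and runtime.
train_samples = 12058
train_runtime = 35950.9286  # seconds, i.e. roughly a 10-hour run
throughput = train_samples / train_runtime
```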
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "max_new_tokens": 2048,
+ "transformers_version": "4.48.3"
+ }
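These generation defaults can be inspected as ordinary JSON (the literal below mirrors the file's contents); note that this config uses the same token id for both `bos_token_id` and `eos_token_id`:

```python
import json

# Parse the generation defaults; the string literal mirrors
# generation_config.json as committed above.
config = json.loads("""
{
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.48.3"
}
""")
```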
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 2.3229669920965534e-08,
+ "train_runtime": 35950.9286,
+ "train_samples": 12058,
+ "train_samples_per_second": 0.335,
+ "train_steps_per_second": 0.003
+ }
trainer_state.json ADDED
@@ -0,0 +1,1417 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9950248756218906,
+ "eval_steps": 100,
+ "global_step": 125,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "completion_length": 376.33854484558105,
+ "epoch": 0.007960199004975124,
+ "grad_norm": 0.21353177726268768,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.2578125004656613,
+ "reward_std": 0.2655366016551852,
+ "rewards/accuracy_reward": 0.2578125004656613,
+ "step": 1
+ },
+ {
+ "completion_length": 359.9583339691162,
+ "epoch": 0.015920398009950248,
+ "grad_norm": 0.02424778789281845,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.2942708353511989,
+ "reward_std": 0.26902162190526724,
+ "rewards/accuracy_reward": 0.2942708353511989,
+ "step": 2
+ },
+ {
+ "completion_length": 372.2769145965576,
+ "epoch": 0.023880597014925373,
+ "grad_norm": 0.08308012783527374,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.25000000349245965,
+ "reward_std": 0.2666853931732476,
+ "rewards/accuracy_reward": 0.25000000349245965,
+ "step": 3
+ },
+ {
+ "completion_length": 366.7196235656738,
+ "epoch": 0.031840796019900496,
+ "grad_norm": 0.035361409187316895,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.30295139644294977,
+ "reward_std": 0.2662010188214481,
+ "rewards/accuracy_reward": 0.30295139644294977,
+ "step": 4
+ },
+ {
+ "completion_length": 358.4774341583252,
+ "epoch": 0.03980099502487562,
+ "grad_norm": 0.05183703452348709,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.3211805585306138,
+ "reward_std": 0.3101580459624529,
+ "rewards/accuracy_reward": 0.3211805585306138,
+ "step": 5
+ },
+ {
+ "completion_length": 359.42882347106934,
+ "epoch": 0.04776119402985075,
+ "grad_norm": 0.024956537410616875,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.37065972574055195,
+ "reward_std": 0.3160043712705374,
+ "rewards/accuracy_reward": 0.37065972574055195,
+ "step": 6
+ },
+ {
+ "completion_length": 386.2057285308838,
+ "epoch": 0.05572139303482587,
+ "grad_norm": 0.02574196085333824,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.33159722574055195,
+ "reward_std": 0.30505089182406664,
+ "rewards/accuracy_reward": 0.33159722574055195,
+ "step": 7
+ },
+ {
+ "completion_length": 374.69618797302246,
+ "epoch": 0.06368159203980099,
+ "grad_norm": 0.03463900834321976,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.3211805559694767,
+ "reward_std": 0.2687861230224371,
+ "rewards/accuracy_reward": 0.3211805559694767,
+ "step": 8
+ },
+ {
+ "completion_length": 366.6770896911621,
+ "epoch": 0.07164179104477612,
+ "grad_norm": 0.09361224621534348,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.41319445334374905,
+ "reward_std": 0.3116005442570895,
+ "rewards/accuracy_reward": 0.41319445334374905,
+ "step": 9
+ },
+ {
+ "completion_length": 391.65625381469727,
+ "epoch": 0.07960199004975124,
+ "grad_norm": 0.01774750091135502,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.37326389690861106,
+ "reward_std": 0.3113201856613159,
+ "rewards/accuracy_reward": 0.37326389690861106,
+ "step": 10
+ },
+ {
+ "completion_length": 406.9496593475342,
+ "epoch": 0.08756218905472637,
+ "grad_norm": 0.1700211465358734,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.36979166616220027,
+ "reward_std": 0.3051337222568691,
+ "rewards/accuracy_reward": 0.36979166616220027,
+ "step": 11
+ },
+ {
+ "completion_length": 407.82986640930176,
+ "epoch": 0.0955223880597015,
+ "grad_norm": 0.01841021701693535,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.3515625,
+ "reward_std": 0.27109134290367365,
+ "rewards/accuracy_reward": 0.3515625,
+ "step": 12
+ },
+ {
+ "completion_length": 438.81944847106934,
+ "epoch": 0.10348258706467661,
+ "grad_norm": 0.014034698717296124,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4001736156642437,
+ "reward_std": 0.29678336903452873,
+ "rewards/accuracy_reward": 0.4001736156642437,
+ "step": 13
+ },
+ {
+ "completion_length": 419.9982662200928,
+ "epoch": 0.11144278606965174,
+ "grad_norm": 0.02144305780529976,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.33767361380159855,
+ "reward_std": 0.2604841380380094,
+ "rewards/accuracy_reward": 0.33767361380159855,
+ "step": 14
+ },
+ {
+ "completion_length": 425.06771659851074,
+ "epoch": 0.11940298507462686,
+ "grad_norm": 0.010681218467652798,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.40451388992369175,
+ "reward_std": 0.25912062590941787,
+ "rewards/accuracy_reward": 0.40451388992369175,
+ "step": 15
+ },
+ {
+ "completion_length": 431.0529499053955,
+ "epoch": 0.12736318407960198,
+ "grad_norm": 0.009904686361551285,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4123263955116272,
+ "reward_std": 0.254815224558115,
+ "rewards/accuracy_reward": 0.4123263955116272,
+ "step": 16
+ },
+ {
+ "completion_length": 431.28211784362793,
+ "epoch": 0.13532338308457711,
+ "grad_norm": 0.010264623910188675,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4027777807787061,
+ "reward_std": 0.22909809951670468,
+ "rewards/accuracy_reward": 0.4027777807787061,
+ "step": 17
+ },
+ {
+ "completion_length": 410.7699718475342,
+ "epoch": 0.14328358208955225,
+ "grad_norm": 0.012061933986842632,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.44531250186264515,
+ "reward_std": 0.258999613346532,
+ "rewards/accuracy_reward": 0.44531250186264515,
+ "step": 18
+ },
+ {
+ "completion_length": 461.3133716583252,
+ "epoch": 0.15124378109452735,
+ "grad_norm": 0.008844373747706413,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.3836805624887347,
+ "reward_std": 0.232815052382648,
+ "rewards/accuracy_reward": 0.3836805624887347,
+ "step": 19
+ },
+ {
+ "completion_length": 434.8680610656738,
+ "epoch": 0.15920398009950248,
+ "grad_norm": 0.008117014542222023,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4652777798473835,
+ "reward_std": 0.24379803240299225,
+ "rewards/accuracy_reward": 0.4652777798473835,
+ "step": 20
+ },
+ {
+ "completion_length": 432.0590362548828,
+ "epoch": 0.16716417910447762,
+ "grad_norm": 0.020449288189411163,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4314236156642437,
+ "reward_std": 0.26850314904004335,
+ "rewards/accuracy_reward": 0.4314236156642437,
+ "step": 21
+ },
+ {
+ "completion_length": 444.558162689209,
+ "epoch": 0.17512437810945275,
+ "grad_norm": 0.008622939698398113,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4166666679084301,
+ "reward_std": 0.22840349189937115,
+ "rewards/accuracy_reward": 0.4166666679084301,
+ "step": 22
+ },
+ {
+ "completion_length": 468.4470462799072,
+ "epoch": 0.18308457711442785,
+ "grad_norm": 0.008969324640929699,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4010416716337204,
+ "reward_std": 0.2720437094103545,
+ "rewards/accuracy_reward": 0.4010416716337204,
+ "step": 23
+ },
+ {
+ "completion_length": 432.6501770019531,
+ "epoch": 0.191044776119403,
+ "grad_norm": 0.009219987317919731,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.493055559694767,
+ "reward_std": 0.277205478399992,
+ "rewards/accuracy_reward": 0.493055559694767,
+ "step": 24
+ },
+ {
+ "completion_length": 462.31423568725586,
+ "epoch": 0.19900497512437812,
+ "grad_norm": 0.00965342577546835,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.45052083767950535,
+ "reward_std": 0.27437643241137266,
+ "rewards/accuracy_reward": 0.45052083767950535,
+ "step": 25
+ },
+ {
+ "completion_length": 483.3194465637207,
+ "epoch": 0.20696517412935322,
+ "grad_norm": 0.006745666265487671,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4088541737291962,
+ "reward_std": 0.1909226190764457,
+ "rewards/accuracy_reward": 0.4088541737291962,
+ "step": 26
+ },
+ {
+ "completion_length": 457.39236640930176,
+ "epoch": 0.21492537313432836,
+ "grad_norm": 0.008856388740241528,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5269097276031971,
+ "reward_std": 0.23024124139919877,
+ "rewards/accuracy_reward": 0.5269097276031971,
+ "step": 27
+ },
+ {
+ "completion_length": 474.4479236602783,
+ "epoch": 0.2228855721393035,
+ "grad_norm": 0.007453792728483677,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.43750000558793545,
+ "reward_std": 0.231502128764987,
+ "rewards/accuracy_reward": 0.43750000558793545,
+ "step": 28
+ },
+ {
+ "completion_length": 485.6154537200928,
+ "epoch": 0.2308457711442786,
+ "grad_norm": 0.009123899042606354,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.41059028171002865,
+ "reward_std": 0.2188729103654623,
+ "rewards/accuracy_reward": 0.41059028171002865,
+ "step": 29
+ },
+ {
+ "completion_length": 510.6953182220459,
+ "epoch": 0.23880597014925373,
+ "grad_norm": 0.008295816369354725,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.39930556435137987,
+ "reward_std": 0.2771527715958655,
+ "rewards/accuracy_reward": 0.39930556435137987,
+ "step": 30
+ },
+ {
+ "completion_length": 480.7777843475342,
+ "epoch": 0.24676616915422886,
+ "grad_norm": 0.007505806162953377,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4817708386108279,
+ "reward_std": 0.23485751589760184,
+ "rewards/accuracy_reward": 0.4817708386108279,
+ "step": 31
+ },
+ {
+ "completion_length": 515.5642375946045,
+ "epoch": 0.25472636815920396,
+ "grad_norm": 0.008023527450859547,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4557291707023978,
+ "reward_std": 0.24294046964496374,
+ "rewards/accuracy_reward": 0.4557291707023978,
+ "step": 32
+ },
+ {
+ "completion_length": 483.67101097106934,
+ "epoch": 0.2626865671641791,
+ "grad_norm": 0.008072705008089542,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.48437500558793545,
+ "reward_std": 0.2514351741410792,
+ "rewards/accuracy_reward": 0.48437500558793545,
+ "step": 33
+ },
+ {
+ "completion_length": 479.74914169311523,
+ "epoch": 0.27064676616915423,
+ "grad_norm": 0.007777619641274214,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.49305556155741215,
+ "reward_std": 0.20784300100058317,
+ "rewards/accuracy_reward": 0.49305556155741215,
+ "step": 34
+ },
+ {
+ "completion_length": 487.4548645019531,
+ "epoch": 0.27860696517412936,
+ "grad_norm": 0.007740811910480261,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.44878472946584225,
+ "reward_std": 0.21900415536947548,
+ "rewards/accuracy_reward": 0.44878472946584225,
+ "step": 35
+ },
+ {
+ "completion_length": 494.6788215637207,
+ "epoch": 0.2865671641791045,
+ "grad_norm": 0.007440795190632343,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4652777789160609,
+ "reward_std": 0.20190619095228612,
+ "rewards/accuracy_reward": 0.4652777789160609,
+ "step": 36
+ },
+ {
+ "completion_length": 503.41319274902344,
+ "epoch": 0.2945273631840796,
+ "grad_norm": 0.007378404960036278,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4782986221835017,
+ "reward_std": 0.21211031870916486,
+ "rewards/accuracy_reward": 0.4782986221835017,
+ "step": 37
+ },
+ {
+ "completion_length": 520.353307723999,
+ "epoch": 0.3024875621890547,
+ "grad_norm": 0.01668449677526951,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4401041737291962,
+ "reward_std": 0.22917978325858712,
+ "rewards/accuracy_reward": 0.4401041737291962,
+ "step": 38
+ },
+ {
+ "completion_length": 483.45833587646484,
+ "epoch": 0.31044776119402984,
+ "grad_norm": 0.006350772920995951,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4539930587634444,
+ "reward_std": 0.18883798900060356,
+ "rewards/accuracy_reward": 0.4539930587634444,
+ "step": 39
+ },
+ {
+ "completion_length": 514.628475189209,
+ "epoch": 0.31840796019900497,
+ "grad_norm": 0.007026695180684328,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5251736100763083,
+ "reward_std": 0.23443278204649687,
+ "rewards/accuracy_reward": 0.5251736100763083,
+ "step": 40
+ },
+ {
+ "completion_length": 518.1093788146973,
+ "epoch": 0.3263681592039801,
+ "grad_norm": 0.008940315805375576,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.44010417396202683,
+ "reward_std": 0.17670530593022704,
+ "rewards/accuracy_reward": 0.44010417396202683,
+ "step": 41
+ },
+ {
+ "completion_length": 477.1041679382324,
+ "epoch": 0.33432835820895523,
+ "grad_norm": 0.006643439643085003,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5312499990686774,
+ "reward_std": 0.20215208712033927,
+ "rewards/accuracy_reward": 0.5312499990686774,
+ "step": 42
+ },
+ {
+ "completion_length": 507.63542556762695,
+ "epoch": 0.34228855721393037,
+ "grad_norm": 0.007287212181836367,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5182291716337204,
+ "reward_std": 0.2360649723559618,
+ "rewards/accuracy_reward": 0.5182291716337204,
+ "step": 43
+ },
+ {
+ "completion_length": 496.23959159851074,
+ "epoch": 0.3502487562189055,
+ "grad_norm": 0.007122528273612261,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.45312500931322575,
+ "reward_std": 0.20658620377071202,
+ "rewards/accuracy_reward": 0.45312500931322575,
+ "step": 44
+ },
+ {
+ "completion_length": 485.8533020019531,
+ "epoch": 0.3582089552238806,
+ "grad_norm": 0.007156004197895527,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.530381953343749,
+ "reward_std": 0.24199844780378044,
+ "rewards/accuracy_reward": 0.530381953343749,
+ "step": 45
+ },
+ {
+ "completion_length": 502.89844512939453,
+ "epoch": 0.3661691542288557,
+ "grad_norm": 0.0069762468338012695,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4661458395421505,
+ "reward_std": 0.22123132436536252,
+ "rewards/accuracy_reward": 0.4661458395421505,
+ "step": 46
+ },
+ {
+ "completion_length": 500.3958339691162,
+ "epoch": 0.37412935323383084,
+ "grad_norm": 0.006156955845654011,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5052083404734731,
+ "reward_std": 0.20680285105481744,
+ "rewards/accuracy_reward": 0.5052083404734731,
+ "step": 47
+ },
+ {
+ "completion_length": 494.1701469421387,
+ "epoch": 0.382089552238806,
+ "grad_norm": 0.005795426666736603,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.46527777798473835,
+ "reward_std": 0.14974681939929724,
+ "rewards/accuracy_reward": 0.46527777798473835,
+ "step": 48
+ },
+ {
+ "completion_length": 489.7239627838135,
+ "epoch": 0.3900497512437811,
+ "grad_norm": 0.006671064533293247,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.44010416977107525,
+ "reward_std": 0.2161168558523059,
+ "rewards/accuracy_reward": 0.44010416977107525,
+ "step": 49
+ },
+ {
+ "completion_length": 519.2543487548828,
+ "epoch": 0.39800995024875624,
+ "grad_norm": 0.025584502145648003,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.42795139038935304,
+ "reward_std": 0.20397222600877285,
+ "rewards/accuracy_reward": 0.42795139038935304,
+ "step": 50
+ },
+ {
+ "completion_length": 500.26996994018555,
+ "epoch": 0.4059701492537313,
+ "grad_norm": 0.008419223129749298,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.45399306155741215,
+ "reward_std": 0.23704699613153934,
+ "rewards/accuracy_reward": 0.45399306155741215,
+ "step": 51
+ },
+ {
+ "completion_length": 537.690107345581,
+ "epoch": 0.41393034825870645,
+ "grad_norm": 0.0072258333675563335,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.461805553175509,
+ "reward_std": 0.2027341139037162,
+ "rewards/accuracy_reward": 0.461805553175509,
+ "step": 52
+ },
+ {
+ "completion_length": 513.0434074401855,
+ "epoch": 0.4218905472636816,
+ "grad_norm": 0.006870542652904987,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.471354172565043,
+ "reward_std": 0.20462280698120594,
+ "rewards/accuracy_reward": 0.471354172565043,
+ "step": 53
+ },
+ {
+ "completion_length": 499.86979484558105,
+ "epoch": 0.4298507462686567,
+ "grad_norm": 0.007057087495923042,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4947916716337204,
+ "reward_std": 0.2298571434803307,
+ "rewards/accuracy_reward": 0.4947916716337204,
+ "step": 54
+ },
+ {
+ "completion_length": 499.2500057220459,
+ "epoch": 0.43781094527363185,
+ "grad_norm": 0.008050983771681786,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.49826389364898205,
+ "reward_std": 0.21837170561775565,
+ "rewards/accuracy_reward": 0.49826389364898205,
+ "step": 55
+ },
+ {
+ "completion_length": 485.90799140930176,
+ "epoch": 0.445771144278607,
+ "grad_norm": 0.006851210258901119,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.47569445241242647,
+ "reward_std": 0.18343511526472867,
+ "rewards/accuracy_reward": 0.47569445241242647,
+ "step": 56
+ },
+ {
+ "completion_length": 473.76736068725586,
+ "epoch": 0.4537313432835821,
+ "grad_norm": 0.006152069661766291,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5147569477558136,
+ "reward_std": 0.18534683482721448,
+ "rewards/accuracy_reward": 0.5147569477558136,
+ "step": 57
+ },
+ {
+ "completion_length": 509.58506965637207,
+ "epoch": 0.4616915422885572,
+ "grad_norm": 0.010075108148157597,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5260416697710752,
+ "reward_std": 0.18553904327563941,
+ "rewards/accuracy_reward": 0.5260416697710752,
+ "step": 58
+ },
+ {
+ "completion_length": 522.5798645019531,
+ "epoch": 0.4696517412935323,
+ "grad_norm": 0.006305539049208164,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4357638917863369,
+ "reward_std": 0.19109400524757802,
+ "rewards/accuracy_reward": 0.4357638917863369,
+ "step": 59
+ },
+ {
+ "completion_length": 488.6762180328369,
+ "epoch": 0.47761194029850745,
+ "grad_norm": 0.007000816985964775,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4236111119389534,
+ "reward_std": 0.18203387153334916,
+ "rewards/accuracy_reward": 0.4236111119389534,
+ "step": 60
+ },
+ {
+ "completion_length": 513.9618091583252,
+ "epoch": 0.4855721393034826,
+ "grad_norm": 0.006217929068952799,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4435763927176595,
+ "reward_std": 0.18900510389357805,
+ "rewards/accuracy_reward": 0.4435763927176595,
+ "step": 61
+ },
+ {
+ "completion_length": 519.9574699401855,
+ "epoch": 0.4935323383084577,
+ "grad_norm": 0.006258189212530851,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4739583358168602,
+ "reward_std": 0.20483243186026812,
+ "rewards/accuracy_reward": 0.4739583358168602,
+ "step": 62
+ },
+ {
+ "completion_length": 491.98351287841797,
+ "epoch": 0.5014925373134328,
+ "grad_norm": 0.006038163788616657,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4887152789160609,
+ "reward_std": 0.16513075795955956,
+ "rewards/accuracy_reward": 0.4887152789160609,
+ "step": 63
+ },
+ {
+ "completion_length": 497.38976097106934,
+ "epoch": 0.5094527363184079,
+ "grad_norm": 0.007668279577046633,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.509548619389534,
+ "reward_std": 0.22911294596269727,
+ "rewards/accuracy_reward": 0.509548619389534,
+ "step": 64
+ },
+ {
+ "completion_length": 493.03125190734863,
+ "epoch": 0.5174129353233831,
+ "grad_norm": 0.03523726761341095,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5390625074505806,
+ "reward_std": 0.15510419360361993,
+ "rewards/accuracy_reward": 0.5390625074505806,
+ "step": 65
+ },
+ {
+ "completion_length": 501.24392890930176,
+ "epoch": 0.5253731343283582,
+ "grad_norm": 0.005824146326631308,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.45659722574055195,
+ "reward_std": 0.18801256455481052,
+ "rewards/accuracy_reward": 0.45659722574055195,
+ "step": 66
+ },
+ {
+ "completion_length": 507.4540042877197,
+ "epoch": 0.5333333333333333,
+ "grad_norm": 0.018976733088493347,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.46180555783212185,
+ "reward_std": 0.16340018948540092,
+ "rewards/accuracy_reward": 0.46180555783212185,
+ "step": 67
+ },
+ {
+ "completion_length": 521.8541736602783,
+ "epoch": 0.5412935323383085,
+ "grad_norm": 0.006117444485425949,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5338541683740914,
+ "reward_std": 0.21227112039923668,
+ "rewards/accuracy_reward": 0.5338541683740914,
+ "step": 68
+ },
+ {
+ "completion_length": 518.9166679382324,
+ "epoch": 0.5492537313432836,
+ "grad_norm": 0.006700050085783005,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4574652770534158,
+ "reward_std": 0.22972481418401003,
+ "rewards/accuracy_reward": 0.4574652770534158,
+ "step": 69
+ },
+ {
+ "completion_length": 498.01736068725586,
+ "epoch": 0.5572139303482587,
+ "grad_norm": 0.005739975720643997,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4522569514811039,
+ "reward_std": 0.19061035430058837,
+ "rewards/accuracy_reward": 0.4522569514811039,
+ "step": 70
+ },
+ {
+ "completion_length": 537.0442790985107,
+ "epoch": 0.5651741293532339,
+ "grad_norm": 0.00948350690305233,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.42708334140479565,
+ "reward_std": 0.15355130773968995,
+ "rewards/accuracy_reward": 0.42708334140479565,
+ "step": 71
+ },
+ {
+ "completion_length": 474.55295753479004,
+ "epoch": 0.573134328358209,
+ "grad_norm": 0.007795177400112152,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.543402774259448,
+ "reward_std": 0.19562758482061327,
+ "rewards/accuracy_reward": 0.543402774259448,
+ "step": 72
+ },
+ {
+ "completion_length": 525.5338649749756,
+ "epoch": 0.5810945273631841,
+ "grad_norm": 0.005621105432510376,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4956597238779068,
+ "reward_std": 0.207592043094337,
+ "rewards/accuracy_reward": 0.4956597238779068,
+ "step": 73
+ },
+ {
+ "completion_length": 473.47743797302246,
+ "epoch": 0.5890547263681593,
+ "grad_norm": 0.0060116685926914215,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.599826393648982,
+ "reward_std": 0.18716048658825457,
+ "rewards/accuracy_reward": 0.599826393648982,
+ "step": 74
+ },
+ {
+ "completion_length": 493.51215744018555,
+ "epoch": 0.5970149253731343,
+ "grad_norm": 0.005957755260169506,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5390625055879354,
+ "reward_std": 0.18902742909267545,
+ "rewards/accuracy_reward": 0.5390625055879354,
+ "step": 75
+ },
+ {
+ "completion_length": 501.5486183166504,
+ "epoch": 0.6049751243781094,
+ "grad_norm": 0.007860912010073662,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4644097303971648,
+ "reward_std": 0.19782971846871078,
+ "rewards/accuracy_reward": 0.4644097303971648,
+ "step": 76
+ },
+ {
+ "completion_length": 488.24045181274414,
+ "epoch": 0.6129353233830845,
+ "grad_norm": 0.005519668105989695,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5008680624887347,
+ "reward_std": 0.15759161300957203,
+ "rewards/accuracy_reward": 0.5008680624887347,
+ "step": 77
+ },
+ {
+ "completion_length": 499.68663787841797,
+ "epoch": 0.6208955223880597,
+ "grad_norm": 0.006401211954653263,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5434027835726738,
+ "reward_std": 0.2069430819246918,
+ "rewards/accuracy_reward": 0.5434027835726738,
+ "step": 78
+ },
+ {
+ "completion_length": 494.0677146911621,
+ "epoch": 0.6288557213930348,
+ "grad_norm": 0.006467628292739391,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4635416744276881,
+ "reward_std": 0.20270957378670573,
+ "rewards/accuracy_reward": 0.4635416744276881,
+ "step": 79
+ },
+ {
+ "completion_length": 509.86806297302246,
+ "epoch": 0.6368159203980099,
+ "grad_norm": 0.005684923380613327,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.469618059694767,
+ "reward_std": 0.19833743665367365,
+ "rewards/accuracy_reward": 0.469618059694767,
+ "step": 80
+ },
+ {
+ "completion_length": 502.8680648803711,
+ "epoch": 0.6447761194029851,
+ "grad_norm": 0.005998033564537764,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5711805634200573,
+ "reward_std": 0.18520855018869042,
+ "rewards/accuracy_reward": 0.5711805634200573,
+ "step": 81
+ },
+ {
+ "completion_length": 508.05382347106934,
+ "epoch": 0.6527363184079602,
+ "grad_norm": 0.005817387718707323,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5017361203208566,
+ "reward_std": 0.19694075919687748,
+ "rewards/accuracy_reward": 0.5017361203208566,
+ "step": 82
912
+ },
913
+ {
914
+ "completion_length": 529.6744899749756,
915
+ "epoch": 0.6606965174129353,
916
+ "grad_norm": 0.005400301422923803,
917
+ "learning_rate": 1e-06,
918
+ "loss": 0.0,
919
+ "reward": 0.5060763908550143,
920
+ "reward_std": 0.17399667436257005,
921
+ "rewards/accuracy_reward": 0.5060763908550143,
922
+ "step": 83
923
+ },
924
+ {
925
+ "completion_length": 502.5468807220459,
926
+ "epoch": 0.6686567164179105,
927
+ "grad_norm": 0.0058842068538069725,
928
+ "learning_rate": 1e-06,
929
+ "loss": 0.0,
930
+ "reward": 0.4739583395421505,
931
+ "reward_std": 0.16876469319686294,
932
+ "rewards/accuracy_reward": 0.4739583395421505,
933
+ "step": 84
934
+ },
935
+ {
936
+ "completion_length": 518.564245223999,
937
+ "epoch": 0.6766169154228856,
938
+ "grad_norm": 0.005277169402688742,
939
+ "learning_rate": 1e-06,
940
+ "loss": 0.0,
941
+ "reward": 0.47916667512618005,
942
+ "reward_std": 0.19327577715739608,
943
+ "rewards/accuracy_reward": 0.47916667512618005,
944
+ "step": 85
945
+ },
946
+ {
947
+ "completion_length": 475.6883716583252,
948
+ "epoch": 0.6845771144278607,
949
+ "grad_norm": 0.0056811547838151455,
950
+ "learning_rate": 1e-06,
951
+ "loss": 0.0,
952
+ "reward": 0.45746528450399637,
953
+ "reward_std": 0.15142568410374224,
954
+ "rewards/accuracy_reward": 0.45746528450399637,
955
+ "step": 86
956
+ },
957
+ {
958
+ "completion_length": 540.3324699401855,
959
+ "epoch": 0.6925373134328359,
960
+ "grad_norm": 0.009126154705882072,
961
+ "learning_rate": 1e-06,
962
+ "loss": 0.0,
963
+ "reward": 0.42795139411464334,
964
+ "reward_std": 0.19471946358680725,
965
+ "rewards/accuracy_reward": 0.42795139411464334,
966
+ "step": 87
967
+ },
968
+ {
969
+ "completion_length": 505.52084159851074,
970
+ "epoch": 0.700497512437811,
971
+ "grad_norm": 0.0066405292600393295,
972
+ "learning_rate": 1e-06,
973
+ "loss": 0.0,
974
+ "reward": 0.5182291679084301,
975
+ "reward_std": 0.2177837509661913,
976
+ "rewards/accuracy_reward": 0.5182291679084301,
977
+ "step": 88
978
+ },
979
+ {
980
+ "completion_length": 506.095495223999,
981
+ "epoch": 0.708457711442786,
982
+ "grad_norm": 0.005975022446364164,
983
+ "learning_rate": 1e-06,
984
+ "loss": 0.0,
985
+ "reward": 0.5199652845039964,
986
+ "reward_std": 0.17908447259105742,
987
+ "rewards/accuracy_reward": 0.5199652845039964,
988
+ "step": 89
989
+ },
990
+ {
991
+ "completion_length": 510.58246994018555,
992
+ "epoch": 0.7164179104477612,
993
+ "grad_norm": 0.006494089029729366,
994
+ "learning_rate": 1e-06,
995
+ "loss": 0.0,
996
+ "reward": 0.4973958358168602,
997
+ "reward_std": 0.1795428330078721,
998
+ "rewards/accuracy_reward": 0.4973958358168602,
999
+ "step": 90
1000
+ },
1001
+ {
1002
+ "completion_length": 487.8923625946045,
1003
+ "epoch": 0.7243781094527363,
1004
+ "grad_norm": 0.0053886245004832745,
1005
+ "learning_rate": 1e-06,
1006
+ "loss": 0.0,
1007
+ "reward": 0.5616319570690393,
1008
+ "reward_std": 0.16185199399478734,
1009
+ "rewards/accuracy_reward": 0.5616319570690393,
1010
+ "step": 91
1011
+ },
1012
+ {
1013
+ "completion_length": 505.83073234558105,
1014
+ "epoch": 0.7323383084577114,
1015
+ "grad_norm": 0.006866929121315479,
1016
+ "learning_rate": 1e-06,
1017
+ "loss": 0.0,
1018
+ "reward": 0.5407986268401146,
1019
+ "reward_std": 0.22892741695977747,
1020
+ "rewards/accuracy_reward": 0.5407986268401146,
1021
+ "step": 92
1022
+ },
1023
+ {
1024
+ "completion_length": 510.71788787841797,
1025
+ "epoch": 0.7402985074626866,
1026
+ "grad_norm": 0.005727425683289766,
1027
+ "learning_rate": 1e-06,
1028
+ "loss": 0.0,
1029
+ "reward": 0.4331597206182778,
1030
+ "reward_std": 0.16079934942536056,
1031
+ "rewards/accuracy_reward": 0.4331597206182778,
1032
+ "step": 93
1033
+ },
1034
+ {
1035
+ "completion_length": 528.9479293823242,
1036
+ "epoch": 0.7482587064676617,
1037
+ "grad_norm": 0.005505099426954985,
1038
+ "learning_rate": 1e-06,
1039
+ "loss": 0.0,
1040
+ "reward": 0.5173611119389534,
1041
+ "reward_std": 0.16449494729749858,
1042
+ "rewards/accuracy_reward": 0.5173611119389534,
1043
+ "step": 94
1044
+ },
1045
+ {
1046
+ "completion_length": 467.2135429382324,
1047
+ "epoch": 0.7562189054726368,
1048
+ "grad_norm": 0.005926135461777449,
1049
+ "learning_rate": 1e-06,
1050
+ "loss": 0.0,
1051
+ "reward": 0.5173611175268888,
1052
+ "reward_std": 0.16620213235728443,
1053
+ "rewards/accuracy_reward": 0.5173611175268888,
1054
+ "step": 95
1055
+ },
1056
+ {
1057
+ "completion_length": 490.4392375946045,
1058
+ "epoch": 0.764179104477612,
1059
+ "grad_norm": 0.0067347330041229725,
1060
+ "learning_rate": 1e-06,
1061
+ "loss": 0.0,
1062
+ "reward": 0.4739583395421505,
1063
+ "reward_std": 0.15511562791652977,
1064
+ "rewards/accuracy_reward": 0.4739583395421505,
1065
+ "step": 96
1066
+ },
1067
+ {
1068
+ "completion_length": 540.5928859710693,
1069
+ "epoch": 0.7721393034825871,
1070
+ "grad_norm": 0.005511025432497263,
1071
+ "learning_rate": 1e-06,
1072
+ "loss": 0.0,
1073
+ "reward": 0.41319444216787815,
1074
+ "reward_std": 0.14969537244178355,
1075
+ "rewards/accuracy_reward": 0.41319444216787815,
1076
+ "step": 97
1077
+ },
1078
+ {
1079
+ "completion_length": 494.7031364440918,
1080
+ "epoch": 0.7800995024875622,
1081
+ "grad_norm": 0.005637906491756439,
1082
+ "learning_rate": 1e-06,
1083
+ "loss": 0.0,
1084
+ "reward": 0.5633680630126037,
1085
+ "reward_std": 0.13407049677334726,
1086
+ "rewards/accuracy_reward": 0.5633680630126037,
1087
+ "step": 98
1088
+ },
1089
+ {
1090
+ "completion_length": 478.24132537841797,
1091
+ "epoch": 0.7880597014925373,
1092
+ "grad_norm": 0.005772520788013935,
1093
+ "learning_rate": 1e-06,
1094
+ "loss": 0.0,
1095
+ "reward": 0.5373263955116272,
1096
+ "reward_std": 0.1734980179462582,
1097
+ "rewards/accuracy_reward": 0.5373263955116272,
1098
+ "step": 99
1099
+ },
1100
+ {
1101
+ "completion_length": 513.6310844421387,
1102
+ "epoch": 0.7960199004975125,
1103
+ "grad_norm": 0.006371782626956701,
1104
+ "learning_rate": 1e-06,
1105
+ "loss": 0.0,
1106
+ "reward": 0.42968750186264515,
1107
+ "reward_std": 0.20266994088888168,
1108
+ "rewards/accuracy_reward": 0.42968750186264515,
1109
+ "step": 100
1110
+ },
1111
+ {
1112
+ "completion_length": 488.7838592529297,
1113
+ "epoch": 0.8039800995024876,
1114
+ "grad_norm": 0.006290055345743895,
1115
+ "learning_rate": 1e-06,
1116
+ "loss": 0.0,
1117
+ "reward": 0.5668402779847383,
1118
+ "reward_std": 0.20122395176440477,
1119
+ "rewards/accuracy_reward": 0.5668402779847383,
1120
+ "step": 101
1121
+ },
1122
+ {
1123
+ "completion_length": 504.9401111602783,
1124
+ "epoch": 0.8119402985074626,
1125
+ "grad_norm": 0.005973901599645615,
1126
+ "learning_rate": 1e-06,
1127
+ "loss": 0.0,
1128
+ "reward": 0.5269097238779068,
1129
+ "reward_std": 0.1675855671055615,
1130
+ "rewards/accuracy_reward": 0.5269097238779068,
1131
+ "step": 102
1132
+ },
1133
+ {
1134
+ "completion_length": 514.7942790985107,
1135
+ "epoch": 0.8199004975124378,
1136
+ "grad_norm": 0.006263998337090015,
1137
+ "learning_rate": 1e-06,
1138
+ "loss": 0.0,
1139
+ "reward": 0.511284725740552,
1140
+ "reward_std": 0.17091166111640632,
1141
+ "rewards/accuracy_reward": 0.511284725740552,
1142
+ "step": 103
1143
+ },
1144
+ {
1145
+ "completion_length": 497.96963119506836,
1146
+ "epoch": 0.8278606965174129,
1147
+ "grad_norm": 0.006026262417435646,
1148
+ "learning_rate": 1e-06,
1149
+ "loss": 0.0,
1150
+ "reward": 0.5069444456603378,
1151
+ "reward_std": 0.18491672072559595,
1152
+ "rewards/accuracy_reward": 0.5069444456603378,
1153
+ "step": 104
1154
+ },
1155
+ {
1156
+ "completion_length": 513.2239627838135,
1157
+ "epoch": 0.835820895522388,
1158
+ "grad_norm": 0.005511322058737278,
1159
+ "learning_rate": 1e-06,
1160
+ "loss": 0.0,
1161
+ "reward": 0.5008680615574121,
1162
+ "reward_std": 0.155679541407153,
1163
+ "rewards/accuracy_reward": 0.5008680615574121,
1164
+ "step": 105
1165
+ },
1166
+ {
1167
+ "completion_length": 506.9704837799072,
1168
+ "epoch": 0.8437810945273632,
1169
+ "grad_norm": 0.005759282503277063,
1170
+ "learning_rate": 1e-06,
1171
+ "loss": 0.0,
1172
+ "reward": 0.5190972294658422,
1173
+ "reward_std": 0.15153015381656587,
1174
+ "rewards/accuracy_reward": 0.5190972294658422,
1175
+ "step": 106
1176
+ },
1177
+ {
1178
+ "completion_length": 508.5946216583252,
1179
+ "epoch": 0.8517412935323383,
1180
+ "grad_norm": 0.005872826091945171,
1181
+ "learning_rate": 1e-06,
1182
+ "loss": 0.0,
1183
+ "reward": 0.4861111119389534,
1184
+ "reward_std": 0.17534176167100668,
1185
+ "rewards/accuracy_reward": 0.4861111119389534,
1186
+ "step": 107
1187
+ },
1188
+ {
1189
+ "completion_length": 513.0425434112549,
1190
+ "epoch": 0.8597014925373134,
1191
+ "grad_norm": 0.006046103313565254,
1192
+ "learning_rate": 1e-06,
1193
+ "loss": 0.0,
1194
+ "reward": 0.507812506519258,
1195
+ "reward_std": 0.18332828022539616,
1196
+ "rewards/accuracy_reward": 0.507812506519258,
1197
+ "step": 108
1198
+ },
1199
+ {
1200
+ "completion_length": 515.2300395965576,
1201
+ "epoch": 0.8676616915422886,
1202
+ "grad_norm": 0.0062822867184877396,
1203
+ "learning_rate": 1e-06,
1204
+ "loss": 0.0,
1205
+ "reward": 0.4765625046566129,
1206
+ "reward_std": 0.19052056316286325,
1207
+ "rewards/accuracy_reward": 0.4765625046566129,
1208
+ "step": 109
1209
+ },
1210
+ {
1211
+ "completion_length": 500.3810749053955,
1212
+ "epoch": 0.8756218905472637,
1213
+ "grad_norm": 0.005929249804466963,
1214
+ "learning_rate": 1e-06,
1215
+ "loss": 0.0,
1216
+ "reward": 0.5277777798473835,
1217
+ "reward_std": 0.17764607537537813,
1218
+ "rewards/accuracy_reward": 0.5277777798473835,
1219
+ "step": 110
1220
+ },
1221
+ {
1222
+ "completion_length": 519.638017654419,
1223
+ "epoch": 0.8835820895522388,
1224
+ "grad_norm": 0.0055696116760373116,
1225
+ "learning_rate": 1e-06,
1226
+ "loss": 0.0,
1227
+ "reward": 0.5164930606260896,
1228
+ "reward_std": 0.2034295415505767,
1229
+ "rewards/accuracy_reward": 0.5164930606260896,
1230
+ "step": 111
1231
+ },
1232
+ {
1233
+ "completion_length": 507.8932342529297,
1234
+ "epoch": 0.891542288557214,
1235
+ "grad_norm": 0.00611339695751667,
1236
+ "learning_rate": 1e-06,
1237
+ "loss": 0.0,
1238
+ "reward": 0.5225694514811039,
1239
+ "reward_std": 0.18155742809176445,
1240
+ "rewards/accuracy_reward": 0.5225694514811039,
1241
+ "step": 112
1242
+ },
1243
+ {
1244
+ "completion_length": 478.3941020965576,
1245
+ "epoch": 0.8995024875621891,
1246
+ "grad_norm": 0.005754369776695967,
1247
+ "learning_rate": 1e-06,
1248
+ "loss": 0.0,
1249
+ "reward": 0.5572916716337204,
1250
+ "reward_std": 0.15763491089455783,
1251
+ "rewards/accuracy_reward": 0.5572916716337204,
1252
+ "step": 113
1253
+ },
1254
+ {
1255
+ "completion_length": 503.7673645019531,
1256
+ "epoch": 0.9074626865671642,
1257
+ "grad_norm": 0.005771811120212078,
1258
+ "learning_rate": 1e-06,
1259
+ "loss": 0.0,
1260
+ "reward": 0.5703125055879354,
1261
+ "reward_std": 0.17914820974692702,
1262
+ "rewards/accuracy_reward": 0.5703125055879354,
1263
+ "step": 114
1264
+ },
1265
+ {
1266
+ "completion_length": 484.3923645019531,
1267
+ "epoch": 0.9154228855721394,
1268
+ "grad_norm": 0.0070279622450470924,
1269
+ "learning_rate": 1e-06,
1270
+ "loss": 0.0,
1271
+ "reward": 0.5347222220152617,
1272
+ "reward_std": 0.19823500025086105,
1273
+ "rewards/accuracy_reward": 0.5347222220152617,
1274
+ "step": 115
1275
+ },
1276
+ {
1277
+ "completion_length": 516.4401035308838,
1278
+ "epoch": 0.9233830845771144,
1279
+ "grad_norm": 0.006754903122782707,
1280
+ "learning_rate": 1e-06,
1281
+ "loss": 0.0,
1282
+ "reward": 0.49045139644294977,
1283
+ "reward_std": 0.2149109251331538,
1284
+ "rewards/accuracy_reward": 0.49045139644294977,
1285
+ "step": 116
1286
+ },
1287
+ {
1288
+ "completion_length": 501.6666736602783,
1289
+ "epoch": 0.9313432835820895,
1290
+ "grad_norm": 0.006792979780584574,
1291
+ "learning_rate": 1e-06,
1292
+ "loss": 0.0,
1293
+ "reward": 0.49392362032085657,
1294
+ "reward_std": 0.19255853188224137,
1295
+ "rewards/accuracy_reward": 0.49392362032085657,
1296
+ "step": 117
1297
+ },
1298
+ {
1299
+ "completion_length": 483.9678840637207,
1300
+ "epoch": 0.9393034825870646,
1301
+ "grad_norm": 0.005810786038637161,
1302
+ "learning_rate": 1e-06,
1303
+ "loss": 0.0,
1304
+ "reward": 0.5112847303971648,
1305
+ "reward_std": 0.18844515248201787,
1306
+ "rewards/accuracy_reward": 0.5112847303971648,
1307
+ "step": 118
1308
+ },
1309
+ {
1310
+ "completion_length": 510.7786464691162,
1311
+ "epoch": 0.9472636815920398,
1312
+ "grad_norm": 0.006106381770223379,
1313
+ "learning_rate": 1e-06,
1314
+ "loss": 0.0,
1315
+ "reward": 0.42881944589316845,
1316
+ "reward_std": 0.17682828847318888,
1317
+ "rewards/accuracy_reward": 0.42881944589316845,
1318
+ "step": 119
1319
+ },
1320
+ {
1321
+ "completion_length": 495.50608253479004,
1322
+ "epoch": 0.9552238805970149,
1323
+ "grad_norm": 0.00595560297369957,
1324
+ "learning_rate": 1e-06,
1325
+ "loss": 0.0,
1326
+ "reward": 0.5217013895162381,
1327
+ "reward_std": 0.16389768570661545,
1328
+ "rewards/accuracy_reward": 0.5217013895162381,
1329
+ "step": 120
1330
+ },
1331
+ {
1332
+ "completion_length": 535.8029499053955,
1333
+ "epoch": 0.96318407960199,
1334
+ "grad_norm": 0.0052213650196790695,
1335
+ "learning_rate": 1e-06,
1336
+ "loss": 0.0,
1337
+ "reward": 0.48871528543531895,
1338
+ "reward_std": 0.17809172347187996,
1339
+ "rewards/accuracy_reward": 0.48871528543531895,
1340
+ "step": 121
1341
+ },
1342
+ {
1343
+ "completion_length": 516.939245223999,
1344
+ "epoch": 0.9711442786069652,
1345
+ "grad_norm": 0.00579698896035552,
1346
+ "learning_rate": 1e-06,
1347
+ "loss": 0.0,
1348
+ "reward": 0.4652777789160609,
1349
+ "reward_std": 0.2078009396791458,
1350
+ "rewards/accuracy_reward": 0.4652777789160609,
1351
+ "step": 122
1352
+ },
1353
+ {
1354
+ "completion_length": 497.8142395019531,
1355
+ "epoch": 0.9791044776119403,
1356
+ "grad_norm": 0.005309565458446741,
1357
+ "learning_rate": 1e-06,
1358
+ "loss": 0.0,
1359
+ "reward": 0.49652778543531895,
1360
+ "reward_std": 0.1809900659136474,
1361
+ "rewards/accuracy_reward": 0.49652778543531895,
1362
+ "step": 123
1363
+ },
1364
+ {
1365
+ "completion_length": 486.0590305328369,
1366
+ "epoch": 0.9870646766169154,
1367
+ "grad_norm": 0.005974112078547478,
1368
+ "learning_rate": 1e-06,
1369
+ "loss": 0.0,
1370
+ "reward": 0.5312500074505806,
1371
+ "reward_std": 0.16324150539003313,
1372
+ "rewards/accuracy_reward": 0.5312500074505806,
1373
+ "step": 124
1374
+ },
1375
+ {
1376
+ "completion_length": 469.81510734558105,
1377
+ "epoch": 0.9950248756218906,
1378
+ "grad_norm": 0.005214582197368145,
1379
+ "learning_rate": 1e-06,
1380
+ "loss": 0.0,
1381
+ "reward": 0.6154513941146433,
1382
+ "reward_std": 0.13105975766666234,
1383
+ "rewards/accuracy_reward": 0.6154513941146433,
1384
+ "step": 125
1385
+ },
1386
+ {
1387
+ "epoch": 0.9950248756218906,
1388
+ "step": 125,
1389
+ "total_flos": 0.0,
1390
+ "train_loss": 2.3229669920965534e-08,
1391
+ "train_runtime": 35950.9286,
1392
+ "train_samples_per_second": 0.335,
1393
+ "train_steps_per_second": 0.003
1394
+ }
1395
+ ],
1396
+ "logging_steps": 1,
1397
+ "max_steps": 125,
1398
+ "num_input_tokens_seen": 0,
1399
+ "num_train_epochs": 1,
1400
+ "save_steps": 10,
1401
+ "stateful_callbacks": {
1402
+ "TrainerControl": {
1403
+ "args": {
1404
+ "should_epoch_stop": false,
1405
+ "should_evaluate": false,
1406
+ "should_log": false,
1407
+ "should_save": true,
1408
+ "should_training_stop": true
1409
+ },
1410
+ "attributes": {}
1411
+ }
1412
+ },
1413
+ "total_flos": 0.0,
1414
+ "train_batch_size": 1,
1415
+ "trial_name": null,
1416
+ "trial_params": null
1417
+ }