YeongminKim committed on
Commit 23028f3 · verified · 1 Parent(s): 12c7cb8

Model save
README.md ADDED
---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-dpo-full

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7707
- Rewards/chosen: -1.5789
- Rewards/rejected: -2.6963
- Rewards/accuracies: 0.7857
- Rewards/margins: 1.1174
- Logps/rejected: -529.8352
- Logps/chosen: -439.8712
- Logits/rejected: 1.9213
- Logits/chosen: 0.5837
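The reward metrics above are DPO's implicit rewards: beta-scaled log-probability ratios of the policy against the reference (SFT) model. As a rough illustration only (not the exact TRL implementation; the `beta` value below is an assumption, since the card does not record it), the per-pair objective can be sketched as:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sketch of the per-pair DPO objective.

    Implicit rewards are beta-scaled log-prob ratios against the
    reference model; the loss is -log(sigmoid(reward margin)).
    beta=0.1 is an assumed value, not taken from this card.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    loss = -math.log(1.0 / (1.0 + math.exp(-margin)))  # -logsigmoid(margin)
    return loss, chosen_reward, rejected_reward

# Before the policy drifts from the reference, both rewards are 0,
# the margin is 0, and the loss is log(2) ~= 0.693; a positive margin
# (chosen preferred over rejected) drives the loss down.
```

Under this reading, "Rewards/margins: 1.1174" means the chosen response's implicit reward exceeds the rejected one's by about 1.12 on average over the eval set.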
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
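The effective batch size and step count follow directly from the settings above; a quick consistency check against the 61,134 train samples reported in train_results.json:

```python
# Effective (total) train batch size from the listed hyperparameters.
per_device_batch = 8    # train_batch_size
num_devices = 4
grad_accum_steps = 2    # gradient_accumulation_steps
total_train_batch_size = per_device_batch * num_devices * grad_accum_steps
# -> 64, matching total_train_batch_size above

# Optimizer steps in one epoch over the reported training set.
train_samples = 61134   # from train_results.json
steps_per_epoch = train_samples // total_train_batch_size
# -> 955, matching global_step in trainer_state.json
```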
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.9314        | 0.1047 | 100  | 0.9248          | -0.1341        | -0.3626          | 0.7103             | 0.2286          | -296.4655      | -295.3837    | -2.3941         | -2.4608       |
| 0.8722        | 0.2093 | 200  | 0.8865          | -0.5236        | -1.0726          | 0.7520             | 0.5489          | -367.4581      | -334.3411    | -1.6165         | -1.9828       |
| 0.8208        | 0.3140 | 300  | 0.8215          | -0.8865        | -1.6789          | 0.7639             | 0.7924          | -428.0927      | -370.6316    | 0.3061          | -0.4089       |
| 0.8208        | 0.4186 | 400  | 0.7982          | -1.1907        | -1.9986          | 0.7718             | 0.8079          | -460.0637      | -401.0516    | 0.5905          | -0.4262       |
| 0.7826        | 0.5233 | 500  | 0.7799          | -1.3975        | -2.4383          | 0.7758             | 1.0408          | -504.0349      | -421.7270    | 2.2339          | 1.0156        |
| 0.7546        | 0.6279 | 600  | 0.7723          | -1.5567        | -2.6664          | 0.7837             | 1.1097          | -526.8406      | -437.6459    | 1.6798          | 0.3290        |
| 0.7533        | 0.7326 | 700  | 0.7732          | -1.6247        | -2.7103          | 0.7837             | 1.0856          | -531.2306      | -444.4420    | 2.0190          | 0.6982        |
| 0.7498        | 0.8373 | 800  | 0.7710          | -1.5564        | -2.6320          | 0.7857             | 1.0756          | -523.4053      | -437.6152    | 1.7010          | 0.4246        |
| 0.7471        | 0.9419 | 900  | 0.7707          | -1.5789        | -2.6963          | 0.7857             | 1.1174          | -529.8352      | -439.8712    | 1.9213          | 0.5837        |


### Framework versions

- Transformers 4.44.2
- Pytorch 2.2.1+cu118
- Datasets 2.14.7
- Tokenizers 0.19.1
all_results.json ADDED
{
  "epoch": 0.9994767137624281,
  "total_flos": 0.0,
  "train_loss": 0.8142244146756477,
  "train_runtime": 34283.0174,
  "train_samples": 61134,
  "train_samples_per_second": 1.783,
  "train_steps_per_second": 0.028
}
generation_config.json ADDED
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.44.2"
}
train_results.json ADDED
{
  "epoch": 0.9994767137624281,
  "total_flos": 0.0,
  "train_loss": 0.8142244146756477,
  "train_runtime": 34283.0174,
  "train_samples": 61134,
  "train_samples_per_second": 1.783,
  "train_steps_per_second": 0.028
}
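The throughput figures in this file are internally consistent with the runtime and step count; a quick sanity check:

```python
# Sanity-check the reported throughput in train_results.json.
train_samples = 61134
train_runtime = 34283.0174  # seconds
steps = 955                 # global_step from trainer_state.json

samples_per_second = train_samples / train_runtime
steps_per_second = steps / train_runtime
# round(samples_per_second, 3) -> 1.783, as reported
# round(steps_per_second, 3)   -> 0.028, as reported
```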
trainer_state.json ADDED
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 0.9994767137624281,
  "eval_steps": 100,
  "global_step": 955,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.0010465724751439038,
      "grad_norm": 12.684210249452411,
      "learning_rate": 5.208333333333333e-09,
      "logits/chosen": -2.9233341217041016,
      "logits/rejected": -2.7917747497558594,
      "logps/chosen": -380.82366943359375,
      "logps/rejected": -358.487060546875,
      "loss": 1.0,
      "rewards/accuracies": 0.0,
      "rewards/chosen": 0.0,
      "rewards/margins": 0.0,
      "rewards/rejected": 0.0,
      "step": 1
    },
    {
      "epoch": 0.010465724751439037,
      "grad_norm": 10.879665963860996,
      "learning_rate": 5.208333333333333e-08,
      "logits/chosen": -2.5956761837005615,
      "logits/rejected": -2.569300413131714,
      "logps/chosen": -256.6047058105469,
      "logps/rejected": -234.93711853027344,
      "loss": 0.9998,
      "rewards/accuracies": 0.4722222089767456,
      "rewards/chosen": 0.00017333232972305268,
      "rewards/margins": 0.0007588959997519851,
      "rewards/rejected": -0.0005855637136846781,
      "step": 10
    },
    {
      "epoch": 0.020931449502878074,
      "grad_norm": 12.660694439558682,
      "learning_rate": 1.0416666666666667e-07,
      "logits/chosen": -2.613247871398926,
      "logits/rejected": -2.5758299827575684,
      "logps/chosen": -283.08770751953125,
      "logps/rejected": -282.29901123046875,
      "loss": 0.9998,
      "rewards/accuracies": 0.5,
      "rewards/chosen": 0.00026421304210089147,
      "rewards/margins": 0.00011151668149977922,
      "rewards/rejected": 0.00015269630239345133,
      "step": 20
    },
    {
      "epoch": 0.03139717425431711,
      "grad_norm": 12.337952256825348,
      "learning_rate": 1.5624999999999999e-07,
      "logits/chosen": -2.691951036453247,
      "logits/rejected": -2.667614698410034,
      "logps/chosen": -270.21954345703125,
      "logps/rejected": -276.70257568359375,
      "loss": 0.9992,
      "rewards/accuracies": 0.53125,
      "rewards/chosen": 0.00038629298796877265,
      "rewards/margins": 0.0007968698628246784,
      "rewards/rejected": -0.00041057687485590577,
      "step": 30
    },
    {
      "epoch": 0.04186289900575615,
      "grad_norm": 11.876881014853103,
      "learning_rate": 2.0833333333333333e-07,
      "logits/chosen": -2.664839267730713,
      "logits/rejected": -2.589832067489624,
      "logps/chosen": -290.53656005859375,
      "logps/rejected": -282.1867980957031,
      "loss": 0.9971,
      "rewards/accuracies": 0.6312500238418579,
      "rewards/chosen": 0.004319709725677967,
      "rewards/margins": 0.005323711317032576,
      "rewards/rejected": -0.0010040017077699304,
      "step": 40
    },
    {
      "epoch": 0.052328623757195186,
      "grad_norm": 13.270256617763115,
      "learning_rate": 2.604166666666667e-07,
      "logits/chosen": -2.6715073585510254,
      "logits/rejected": -2.5877907276153564,
      "logps/chosen": -266.10845947265625,
      "logps/rejected": -236.5567626953125,
      "loss": 0.9929,
      "rewards/accuracies": 0.6937500238418579,
      "rewards/chosen": 0.015197384171187878,
      "rewards/margins": 0.015182172879576683,
      "rewards/rejected": 1.5211687241389882e-05,
      "step": 50
    },
    {
      "epoch": 0.06279434850863422,
      "grad_norm": 11.894246837548359,
      "learning_rate": 3.1249999999999997e-07,
      "logits/chosen": -2.6269617080688477,
      "logits/rejected": -2.5917298793792725,
      "logps/chosen": -299.65716552734375,
      "logps/rejected": -274.43682861328125,
      "loss": 0.9862,
      "rewards/accuracies": 0.6937500238418579,
      "rewards/chosen": 0.04350100830197334,
      "rewards/margins": 0.029931485652923584,
      "rewards/rejected": 0.013569521717727184,
      "step": 60
    },
    {
      "epoch": 0.07326007326007326,
      "grad_norm": 11.94629840214185,
      "learning_rate": 3.645833333333333e-07,
      "logits/chosen": -2.52927827835083,
      "logits/rejected": -2.521303653717041,
      "logps/chosen": -257.636474609375,
      "logps/rejected": -262.7804260253906,
      "loss": 0.9774,
      "rewards/accuracies": 0.6875,
      "rewards/chosen": 0.019934870302677155,
      "rewards/margins": 0.0652446299791336,
      "rewards/rejected": -0.04530975967645645,
      "step": 70
    },
    {
      "epoch": 0.0837257980115123,
      "grad_norm": 13.797354905847879,
      "learning_rate": 4.1666666666666667e-07,
      "logits/chosen": -2.543499708175659,
      "logits/rejected": -2.4710183143615723,
      "logps/chosen": -274.4893493652344,
      "logps/rejected": -261.1844482421875,
      "loss": 0.9567,
      "rewards/accuracies": 0.706250011920929,
      "rewards/chosen": 0.015994269400835037,
      "rewards/margins": 0.11483701318502426,
      "rewards/rejected": -0.09884275496006012,
      "step": 80
    },
    {
      "epoch": 0.09419152276295134,
      "grad_norm": 15.051298123886353,
      "learning_rate": 4.6874999999999996e-07,
      "logits/chosen": -2.483623504638672,
      "logits/rejected": -2.4481821060180664,
      "logps/chosen": -260.9582824707031,
      "logps/rejected": -278.18963623046875,
      "loss": 0.9481,
      "rewards/accuracies": 0.668749988079071,
      "rewards/chosen": -0.05396522209048271,
      "rewards/margins": 0.08080872148275375,
      "rewards/rejected": -0.13477393984794617,
      "step": 90
    },
    {
      "epoch": 0.10465724751439037,
      "grad_norm": 13.876516756747606,
      "learning_rate": 4.999732492681437e-07,
      "logits/chosen": -2.4570467472076416,
      "logits/rejected": -2.381937026977539,
      "logps/chosen": -317.0618591308594,
      "logps/rejected": -315.6556396484375,
      "loss": 0.9314,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -0.17215368151664734,
      "rewards/margins": 0.18427203595638275,
      "rewards/rejected": -0.35642576217651367,
      "step": 100
    },
    {
      "epoch": 0.10465724751439037,
      "eval_logits/chosen": -2.4607555866241455,
      "eval_logits/rejected": -2.3940787315368652,
      "eval_logps/chosen": -295.3836669921875,
      "eval_logps/rejected": -296.4655456542969,
      "eval_loss": 0.9247981905937195,
      "eval_rewards/accuracies": 0.7103174328804016,
      "eval_rewards/chosen": -0.1340663731098175,
      "eval_rewards/margins": 0.22856087982654572,
      "eval_rewards/rejected": -0.3626272976398468,
      "eval_runtime": 410.2228,
      "eval_samples_per_second": 4.875,
      "eval_steps_per_second": 0.154,
      "step": 100
    },
    {
      "epoch": 0.1151229722658294,
      "grad_norm": 20.34786259548405,
      "learning_rate": 4.996723692767926e-07,
      "logits/chosen": -2.4748752117156982,
      "logits/rejected": -2.420341730117798,
      "logps/chosen": -263.1013488769531,
      "logps/rejected": -280.06256103515625,
      "loss": 0.9303,
      "rewards/accuracies": 0.6812499761581421,
      "rewards/chosen": -0.21624498069286346,
      "rewards/margins": 0.21136274933815002,
      "rewards/rejected": -0.4276077151298523,
      "step": 110
    },
    {
      "epoch": 0.12558869701726844,
      "grad_norm": 18.011468034651415,
      "learning_rate": 4.990375746213598e-07,
      "logits/chosen": -2.4168341159820557,
      "logits/rejected": -2.353346109390259,
      "logps/chosen": -284.01434326171875,
      "logps/rejected": -333.86517333984375,
      "loss": 0.9134,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -0.2143399715423584,
      "rewards/margins": 0.23967032134532928,
      "rewards/rejected": -0.4540103077888489,
      "step": 120
    },
    {
      "epoch": 0.1360544217687075,
      "grad_norm": 20.20595442453097,
      "learning_rate": 4.980697142834314e-07,
      "logits/chosen": -2.333416223526001,
      "logits/rejected": -2.272174835205078,
      "logps/chosen": -326.86810302734375,
      "logps/rejected": -334.0741271972656,
      "loss": 0.9077,
      "rewards/accuracies": 0.6937500238418579,
      "rewards/chosen": -0.45602983236312866,
      "rewards/margins": 0.34080833196640015,
      "rewards/rejected": -0.7968382239341736,
      "step": 130
    },
    {
      "epoch": 0.14652014652014653,
      "grad_norm": 25.245651601287875,
      "learning_rate": 4.967700826904229e-07,
      "logits/chosen": -1.9765796661376953,
      "logits/rejected": -1.9096320867538452,
      "logps/chosen": -344.718505859375,
      "logps/rejected": -383.64544677734375,
      "loss": 0.8734,
      "rewards/accuracies": 0.737500011920929,
      "rewards/chosen": -0.5360048413276672,
      "rewards/margins": 0.41050204634666443,
      "rewards/rejected": -0.9465070962905884,
      "step": 140
    },
    {
      "epoch": 0.15698587127158556,
      "grad_norm": 21.269931947577092,
      "learning_rate": 4.951404179843962e-07,
      "logits/chosen": -2.0397307872772217,
      "logits/rejected": -1.8855133056640625,
      "logps/chosen": -348.33074951171875,
      "logps/rejected": -355.48370361328125,
      "loss": 0.8763,
      "rewards/accuracies": 0.6937500238418579,
      "rewards/chosen": -0.6048094034194946,
      "rewards/margins": 0.4798543453216553,
      "rewards/rejected": -1.0846636295318604,
      "step": 150
    },
    {
      "epoch": 0.1674515960230246,
      "grad_norm": 33.533617020269915,
      "learning_rate": 4.931828996974498e-07,
      "logits/chosen": -2.1307215690612793,
      "logits/rejected": -1.8285764455795288,
      "logps/chosen": -352.92236328125,
      "logps/rejected": -371.9589538574219,
      "loss": 0.8427,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -0.5708774924278259,
      "rewards/margins": 0.5884283781051636,
      "rewards/rejected": -1.1593058109283447,
      "step": 160
    },
    {
      "epoch": 0.17791732077446362,
      "grad_norm": 32.757872064459335,
      "learning_rate": 4.909001458367866e-07,
      "logits/chosen": -1.2587175369262695,
      "logits/rejected": -0.8609746098518372,
      "logps/chosen": -408.7366027832031,
      "logps/rejected": -442.3734436035156,
      "loss": 0.8506,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -1.3020718097686768,
      "rewards/margins": 0.713103175163269,
      "rewards/rejected": -2.015174627304077,
      "step": 170
    },
    {
      "epoch": 0.18838304552590268,
      "grad_norm": 39.16606354469095,
      "learning_rate": 4.882952093833627e-07,
      "logits/chosen": -1.010235071182251,
      "logits/rejected": -0.6893966794013977,
      "logps/chosen": -386.0320739746094,
      "logps/rejected": -459.80596923828125,
      "loss": 0.8371,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -1.4610180854797363,
      "rewards/margins": 0.6689072847366333,
      "rewards/rejected": -2.12992525100708,
      "step": 180
    },
    {
      "epoch": 0.1988487702773417,
      "grad_norm": 38.44785852081829,
      "learning_rate": 4.853715742087946e-07,
      "logits/chosen": -1.110741138458252,
      "logits/rejected": -0.910930335521698,
      "logps/chosen": -359.8037414550781,
      "logps/rejected": -432.5801696777344,
      "loss": 0.8426,
      "rewards/accuracies": 0.7749999761581421,
      "rewards/chosen": -1.2385684251785278,
      "rewards/margins": 0.5880638360977173,
      "rewards/rejected": -1.8266319036483765,
      "step": 190
    },
    {
      "epoch": 0.20931449502878074,
      "grad_norm": 29.500721336346615,
      "learning_rate": 4.821331504159906e-07,
      "logits/chosen": -1.7968833446502686,
      "logits/rejected": -1.3632351160049438,
      "logps/chosen": -372.0254211425781,
      "logps/rejected": -369.52032470703125,
      "loss": 0.8722,
      "rewards/accuracies": 0.6875,
      "rewards/chosen": -0.6137913465499878,
      "rewards/margins": 0.5101683139801025,
      "rewards/rejected": -1.1239596605300903,
      "step": 200
    },
    {
      "epoch": 0.20931449502878074,
      "eval_logits/chosen": -1.9828360080718994,
      "eval_logits/rejected": -1.6164638996124268,
      "eval_logps/chosen": -334.34112548828125,
      "eval_logps/rejected": -367.4581298828125,
      "eval_loss": 0.8864663243293762,
      "eval_rewards/accuracies": 0.7519841194152832,
      "eval_rewards/chosen": -0.5236411094665527,
      "eval_rewards/margins": 0.5489121079444885,
      "eval_rewards/rejected": -1.072553277015686,
      "eval_runtime": 405.4284,
      "eval_samples_per_second": 4.933,
      "eval_steps_per_second": 0.155,
      "step": 200
    },
    {
      "epoch": 0.21978021978021978,
      "grad_norm": 33.64481516454493,
      "learning_rate": 4.785842691097342e-07,
      "logits/chosen": -1.8171882629394531,
      "logits/rejected": -1.3618426322937012,
      "logps/chosen": -348.45599365234375,
      "logps/rejected": -389.9690856933594,
      "loss": 0.8788,
      "rewards/accuracies": 0.643750011920929,
      "rewards/chosen": -0.6564497351646423,
      "rewards/margins": 0.5203536152839661,
      "rewards/rejected": -1.1768033504486084,
      "step": 210
    },
    {
      "epoch": 0.2302459445316588,
      "grad_norm": 40.4541290817884,
      "learning_rate": 4.7472967660421603e-07,
      "logits/chosen": -1.0906522274017334,
      "logits/rejected": -0.35459861159324646,
      "logps/chosen": -383.7257080078125,
      "logps/rejected": -398.504150390625,
      "loss": 0.8394,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -0.965796172618866,
      "rewards/margins": 0.657662570476532,
      "rewards/rejected": -1.6234586238861084,
      "step": 220
    },
    {
      "epoch": 0.24071166928309787,
      "grad_norm": 35.38638904783933,
      "learning_rate": 4.705745280752585e-07,
      "logits/chosen": -1.0554046630859375,
      "logits/rejected": -0.6607178449630737,
      "logps/chosen": -392.6568603515625,
      "logps/rejected": -424.40814208984375,
      "loss": 0.8436,
      "rewards/accuracies": 0.7250000238418579,
      "rewards/chosen": -1.0391901731491089,
      "rewards/margins": 0.5193502902984619,
      "rewards/rejected": -1.5585404634475708,
      "step": 230
    },
    {
      "epoch": 0.25117739403453687,
      "grad_norm": 51.716672818453596,
      "learning_rate": 4.6612438066572555e-07,
      "logits/chosen": -0.8230894207954407,
      "logits/rejected": -0.2124086320400238,
      "logps/chosen": -360.98162841796875,
      "logps/rejected": -425.2185974121094,
      "loss": 0.8129,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -0.9339190721511841,
      "rewards/margins": 0.7763990163803101,
      "rewards/rejected": -1.7103179693222046,
      "step": 240
    },
    {
      "epoch": 0.2616431187859759,
      "grad_norm": 33.945957236730834,
      "learning_rate": 4.6138518605333664e-07,
      "logits/chosen": -0.6045997142791748,
      "logits/rejected": -0.11657794564962387,
      "logps/chosen": -377.04498291015625,
      "logps/rejected": -422.51678466796875,
      "loss": 0.7922,
      "rewards/accuracies": 0.699999988079071,
      "rewards/chosen": -0.9926928281784058,
      "rewards/margins": 0.6563337445259094,
      "rewards/rejected": -1.64902663230896,
      "step": 250
    },
    {
      "epoch": 0.272108843537415,
      "grad_norm": 34.96189412822079,
      "learning_rate": 4.5636328249082514e-07,
      "logits/chosen": -0.0778447613120079,
      "logits/rejected": 0.5611258149147034,
      "logps/chosen": -424.87054443359375,
      "logps/rejected": -477.55487060546875,
      "loss": 0.8389,
      "rewards/accuracies": 0.6875,
      "rewards/chosen": -1.358944296836853,
      "rewards/margins": 0.5935989618301392,
      "rewards/rejected": -1.9525432586669922,
      "step": 260
    },
    {
      "epoch": 0.282574568288854,
      "grad_norm": 31.898898777717502,
      "learning_rate": 4.510653863290871e-07,
      "logits/chosen": -1.0117652416229248,
      "logits/rejected": -0.5077033042907715,
      "logps/chosen": -378.0097961425781,
      "logps/rejected": -400.56414794921875,
      "loss": 0.798,
      "rewards/accuracies": 0.75,
      "rewards/chosen": -0.7833204865455627,
      "rewards/margins": 0.6868009567260742,
      "rewards/rejected": -1.4701216220855713,
      "step": 270
    },
    {
      "epoch": 0.29304029304029305,
      "grad_norm": 32.916951386686485,
      "learning_rate": 4.4549858303465737e-07,
      "logits/chosen": -0.3384867310523987,
      "logits/rejected": 0.2598015367984772,
      "logps/chosen": -389.6136474609375,
      "logps/rejected": -451.6962890625,
      "loss": 0.846,
      "rewards/accuracies": 0.762499988079071,
      "rewards/chosen": -0.9335886240005493,
      "rewards/margins": 0.7509840130805969,
      "rewards/rejected": -1.6845725774765015,
      "step": 280
    },
    {
      "epoch": 0.3035060177917321,
      "grad_norm": 37.72765713788291,
      "learning_rate": 4.396703177135261e-07,
      "logits/chosen": -0.23731651902198792,
      "logits/rejected": 0.5401536822319031,
      "logps/chosen": -377.9930114746094,
      "logps/rejected": -415.17022705078125,
      "loss": 0.8217,
      "rewards/accuracies": 0.737500011920929,
      "rewards/chosen": -1.052441120147705,
      "rewards/margins": 0.7287407517433167,
      "rewards/rejected": -1.7811816930770874,
      "step": 290
    },
    {
      "epoch": 0.3139717425431711,
      "grad_norm": 32.913268476105344,
      "learning_rate": 4.335883851539693e-07,
      "logits/chosen": -0.5817835330963135,
      "logits/rejected": -0.08701565116643906,
      "logps/chosen": -362.9393615722656,
      "logps/rejected": -424.712158203125,
      "loss": 0.8208,
      "rewards/accuracies": 0.699999988079071,
      "rewards/chosen": -0.9393421411514282,
      "rewards/margins": 0.6302340030670166,
      "rewards/rejected": -1.5695761442184448,
      "step": 300
    },
    {
      "epoch": 0.3139717425431711,
      "eval_logits/chosen": -0.4089333117008209,
      "eval_logits/rejected": 0.30610814690589905,
      "eval_logps/chosen": -370.631591796875,
      "eval_logps/rejected": -428.09271240234375,
      "eval_loss": 0.8214874267578125,
      "eval_rewards/accuracies": 0.7638888955116272,
      "eval_rewards/chosen": -0.8865460753440857,
      "eval_rewards/margins": 0.792353093624115,
      "eval_rewards/rejected": -1.6788992881774902,
      "eval_runtime": 394.7768,
      "eval_samples_per_second": 5.066,
      "eval_steps_per_second": 0.16,
      "step": 300
    },
    {
      "epoch": 0.32443746729461015,
      "grad_norm": 35.677999783537,
      "learning_rate": 4.272609194017105e-07,
      "logits/chosen": -0.5777407884597778,
      "logits/rejected": 0.6635278463363647,
      "logps/chosen": -369.7853088378906,
      "logps/rejected": -404.04779052734375,
      "loss": 0.7973,
      "rewards/accuracies": 0.7562500238418579,
      "rewards/chosen": -0.8606454133987427,
      "rewards/margins": 0.8526867628097534,
      "rewards/rejected": -1.713331937789917,
      "step": 310
    },
    {
      "epoch": 0.3349031920460492,
      "grad_norm": 32.27841917992221,
      "learning_rate": 4.2069638288135547e-07,
      "logits/chosen": 0.1319276988506317,
      "logits/rejected": 0.9114134907722473,
      "logps/chosen": -353.5772399902344,
      "logps/rejected": -430.6465759277344,
      "loss": 0.8394,
      "rewards/accuracies": 0.71875,
      "rewards/chosen": -1.0053209066390991,
      "rewards/margins": 0.7951760292053223,
      "rewards/rejected": -1.8004968166351318,
      "step": 320
    },
    {
      "epoch": 0.3453689167974882,
      "grad_norm": 36.77279969745553,
      "learning_rate": 4.139035550786494e-07,
      "logits/chosen": 0.24769747257232666,
      "logits/rejected": 0.9449674487113953,
      "logps/chosen": -359.6546936035156,
      "logps/rejected": -403.91668701171875,
      "loss": 0.807,
      "rewards/accuracies": 0.7250000238418579,
      "rewards/chosen": -1.0324417352676392,
      "rewards/margins": 0.6857098937034607,
      "rewards/rejected": -1.7181516885757446,
      "step": 330
    },
    {
      "epoch": 0.35583464154892724,
      "grad_norm": 37.42709989098109,
      "learning_rate": 4.0689152079869306e-07,
      "logits/chosen": 0.4464758336544037,
      "logits/rejected": 1.1670682430267334,
      "logps/chosen": -367.1394348144531,
      "logps/rejected": -414.1123046875,
      "loss": 0.8191,
      "rewards/accuracies": 0.706250011920929,
      "rewards/chosen": -0.9761883020401001,
      "rewards/margins": 0.8046461343765259,
      "rewards/rejected": -1.7808345556259155,
      "step": 340
    },
    {
      "epoch": 0.3663003663003663,
      "grad_norm": 36.82109712593396,
      "learning_rate": 3.99669658015821e-07,
      "logits/chosen": 0.28100523352622986,
      "logits/rejected": 1.06730055809021,
      "logps/chosen": -388.6217041015625,
      "logps/rejected": -429.86822509765625,
      "loss": 0.7899,
      "rewards/accuracies": 0.737500011920929,
      "rewards/chosen": -0.9743186831474304,
      "rewards/margins": 0.7633944749832153,
      "rewards/rejected": -1.7377132177352905,
      "step": 350
    },
    {
      "epoch": 0.37676609105180536,
      "grad_norm": 45.084347178428885,
      "learning_rate": 3.92247625331392e-07,
      "logits/chosen": 0.001661926507949829,
      "logits/rejected": 0.8640968203544617,
      "logps/chosen": -382.6606750488281,
      "logps/rejected": -439.5569763183594,
      "loss": 0.7609,
      "rewards/accuracies": 0.8062499761581421,
      "rewards/chosen": -0.967884361743927,
      "rewards/margins": 0.8003349304199219,
      "rewards/rejected": -1.7682193517684937,
      "step": 360
    },
    {
      "epoch": 0.3872318158032444,
      "grad_norm": 47.00073205229117,
      "learning_rate": 3.846353490562664e-07,
      "logits/chosen": -0.05083969235420227,
      "logits/rejected": 0.8022430539131165,
      "logps/chosen": -356.86920166015625,
      "logps/rejected": -404.42755126953125,
      "loss": 0.8,
      "rewards/accuracies": 0.731249988079071,
      "rewards/chosen": -0.9355790019035339,
      "rewards/margins": 0.7523115873336792,
      "rewards/rejected": -1.687890648841858,
      "step": 370
    },
    {
      "epoch": 0.3976975405546834,
      "grad_norm": 54.67041616108794,
      "learning_rate": 3.768430099352445e-07,
      "logits/chosen": 0.15414004027843475,
      "logits/rejected": 1.1920832395553589,
      "logps/chosen": -355.2337951660156,
      "logps/rejected": -432.7115173339844,
      "loss": 0.8138,
      "rewards/accuracies": 0.762499988079071,
      "rewards/chosen": -1.0056902170181274,
      "rewards/margins": 0.8578664064407349,
      "rewards/rejected": -1.8635566234588623,
      "step": 380
    },
    {
      "epoch": 0.40816326530612246,
      "grad_norm": 36.59955698934101,
      "learning_rate": 3.6888102953122304e-07,
      "logits/chosen": -0.0811910480260849,
      "logits/rejected": 0.8333920240402222,
      "logps/chosen": -386.803955078125,
      "logps/rejected": -451.043701171875,
      "loss": 0.8125,
      "rewards/accuracies": 0.737500011920929,
      "rewards/chosen": -1.0521795749664307,
      "rewards/margins": 0.7749984264373779,
      "rewards/rejected": -1.8271780014038086,
      "step": 390
    },
    {
      "epoch": 0.4186289900575615,
      "grad_norm": 44.12526971490869,
      "learning_rate": 3.607600562872785e-07,
      "logits/chosen": -0.16013869643211365,
      "logits/rejected": 0.554747462272644,
      "logps/chosen": -390.5782775878906,
      "logps/rejected": -457.771484375,
      "loss": 0.8208,
      "rewards/accuracies": 0.7124999761581421,
      "rewards/chosen": -1.205999732017517,
      "rewards/margins": 0.694991946220398,
      "rewards/rejected": -1.9009917974472046,
      "step": 400
    },
    {
      "epoch": 0.4186289900575615,
      "eval_logits/chosen": -0.42617201805114746,
      "eval_logits/rejected": 0.5905421376228333,
      "eval_logps/chosen": -401.0516052246094,
      "eval_logps/rejected": -460.0636901855469,
      "eval_loss": 0.7982370257377625,
      "eval_rewards/accuracies": 0.77182537317276,
      "eval_rewards/chosen": -1.1907460689544678,
      "eval_rewards/margins": 0.8078626394271851,
      "eval_rewards/rejected": -1.9986087083816528,
      "eval_runtime": 407.5508,
      "eval_samples_per_second": 4.907,
      "eval_steps_per_second": 0.155,
      "step": 400
    },
    {
      "epoch": 0.4290947148090005,
      "grad_norm": 27.932485684518436,
      "learning_rate": 3.5249095128531856e-07,
      "logits/chosen": 0.006985366344451904,
      "logits/rejected": 0.9441334009170532,
      "logps/chosen": -385.29931640625,
      "logps/rejected": -432.65802001953125,
      "loss": 0.8168,
      "rewards/accuracies": 0.75,
      "rewards/chosen": -1.135567307472229,
      "rewards/margins": 0.7227233648300171,
      "rewards/rejected": -1.858290672302246,
      "step": 410
    },
    {
      "epoch": 0.43956043956043955,
      "grad_norm": 81.65756808013388,
      "learning_rate": 3.4408477372034736e-07,
      "logits/chosen": -0.2533782124519348,
      "logits/rejected": 1.4089046716690063,
      "logps/chosen": -373.7265625,
      "logps/rejected": -432.09588623046875,
      "loss": 0.8162,
      "rewards/accuracies": 0.78125,
      "rewards/chosen": -0.915477454662323,
      "rewards/margins": 0.8855735659599304,
      "rewards/rejected": -1.8010507822036743,
      "step": 420
    },
    {
      "epoch": 0.4500261643118786,
      "grad_norm": 64.08330230011423,
      "learning_rate": 3.3555276610977276e-07,
      "logits/chosen": 0.9608128666877747,
      "logits/rejected": 2.096973180770874,
      "logps/chosen": -359.41241455078125,
      "logps/rejected": -445.40631103515625,
      "loss": 0.7851,
      "rewards/accuracies": 0.75,
      "rewards/chosen": -1.056495189666748,
      "rewards/margins": 0.8955538868904114,
      "rewards/rejected": -1.952048897743225,
      "step": 430
    },
    {
      "epoch": 0.4604918890633176,
      "grad_norm": 45.75501310686222,
738
+ "learning_rate": 3.269063392575352e-07,
739
+ "logits/chosen": 0.31944066286087036,
740
+ "logits/rejected": 1.1516392230987549,
741
+ "logps/chosen": -387.1063537597656,
742
+ "logps/rejected": -415.3641052246094,
743
+ "loss": 0.8046,
744
+ "rewards/accuracies": 0.7250000238418579,
745
+ "rewards/chosen": -0.9187418818473816,
746
+ "rewards/margins": 0.680588960647583,
747
+ "rewards/rejected": -1.5993306636810303,
748
+ "step": 440
749
+ },
750
+ {
751
+ "epoch": 0.47095761381475665,
752
+ "grad_norm": 40.092859728853355,
753
+ "learning_rate": 3.1815705699316964e-07,
754
+ "logits/chosen": -0.16375192999839783,
755
+ "logits/rejected": 0.5893057584762573,
756
+ "logps/chosen": -385.47412109375,
757
+ "logps/rejected": -477.24847412109375,
758
+ "loss": 0.797,
759
+ "rewards/accuracies": 0.7875000238418579,
760
+ "rewards/chosen": -1.030901551246643,
761
+ "rewards/margins": 0.9582377672195435,
762
+ "rewards/rejected": -1.989139199256897,
763
+ "step": 450
764
+ },
765
+ {
766
+ "epoch": 0.48142333856619574,
767
+ "grad_norm": 41.71685104223482,
768
+ "learning_rate": 3.0931662070620794e-07,
769
+ "logits/chosen": 0.22953590750694275,
770
+ "logits/rejected": 1.0651460886001587,
771
+ "logps/chosen": -399.14300537109375,
772
+ "logps/rejected": -465.94744873046875,
773
+ "loss": 0.8182,
774
+ "rewards/accuracies": 0.75,
775
+ "rewards/chosen": -1.4160016775131226,
776
+ "rewards/margins": 0.7932834029197693,
777
+ "rewards/rejected": -2.209285259246826,
778
+ "step": 460
779
+ },
780
+ {
781
+ "epoch": 0.49188906331763477,
782
+ "grad_norm": 45.00081885012515,
783
+ "learning_rate": 3.003968536966078e-07,
784
+ "logits/chosen": 0.9409812092781067,
785
+ "logits/rejected": 1.9240789413452148,
786
+ "logps/chosen": -418.9644470214844,
787
+ "logps/rejected": -467.4033203125,
788
+ "loss": 0.7631,
789
+ "rewards/accuracies": 0.731249988079071,
790
+ "rewards/chosen": -1.3515257835388184,
791
+ "rewards/margins": 0.8228591084480286,
792
+ "rewards/rejected": -2.1743850708007812,
793
+ "step": 470
794
+ },
795
+ {
796
+ "epoch": 0.5023547880690737,
797
+ "grad_norm": 51.675134784509765,
798
+ "learning_rate": 2.9140968536213693e-07,
799
+ "logits/chosen": 1.075547456741333,
800
+ "logits/rejected": 2.493340015411377,
801
+ "logps/chosen": -390.1001892089844,
802
+ "logps/rejected": -482.7947692871094,
803
+ "loss": 0.7996,
804
+ "rewards/accuracies": 0.768750011920929,
805
+ "rewards/chosen": -1.228891134262085,
806
+ "rewards/margins": 1.0860393047332764,
807
+ "rewards/rejected": -2.3149304389953613,
808
+ "step": 480
809
+ },
810
+ {
811
+ "epoch": 0.5128205128205128,
812
+ "grad_norm": 36.865587472816905,
813
+ "learning_rate": 2.823671352438608e-07,
814
+ "logits/chosen": -0.14976122975349426,
815
+ "logits/rejected": 1.63918137550354,
816
+ "logps/chosen": -416.5582580566406,
817
+ "logps/rejected": -461.94415283203125,
818
+ "loss": 0.7795,
819
+ "rewards/accuracies": 0.8374999761581421,
820
+ "rewards/chosen": -1.046267032623291,
821
+ "rewards/margins": 1.0088865756988525,
822
+ "rewards/rejected": -2.0551538467407227,
823
+ "step": 490
824
+ },
825
+ {
826
+ "epoch": 0.5232862375719518,
827
+ "grad_norm": 41.30030890040398,
828
+ "learning_rate": 2.73281296951072e-07,
829
+ "logits/chosen": 1.0807673931121826,
830
+ "logits/rejected": 1.9634937047958374,
831
+ "logps/chosen": -394.7601318359375,
832
+ "logps/rejected": -450.50628662109375,
833
+ "loss": 0.7826,
834
+ "rewards/accuracies": 0.768750011920929,
835
+ "rewards/chosen": -1.3611440658569336,
836
+ "rewards/margins": 0.870000958442688,
837
+ "rewards/rejected": -2.231145143508911,
838
+ "step": 500
839
+ },
840
+ {
841
+ "epoch": 0.5232862375719518,
842
+ "eval_logits/chosen": 1.0156385898590088,
843
+ "eval_logits/rejected": 2.233916997909546,
844
+ "eval_logps/chosen": -421.72698974609375,
845
+ "eval_logps/rejected": -504.034912109375,
846
+ "eval_loss": 0.7799234390258789,
847
+ "eval_rewards/accuracies": 0.7757936716079712,
848
+ "eval_rewards/chosen": -1.397499680519104,
849
+ "eval_rewards/margins": 1.0408216714859009,
850
+ "eval_rewards/rejected": -2.438321590423584,
851
+ "eval_runtime": 404.3457,
852
+ "eval_samples_per_second": 4.946,
853
+ "eval_steps_per_second": 0.156,
854
+ "step": 500
855
+ },
856
+ {
857
+ "epoch": 0.533751962323391,
858
+ "grad_norm": 54.334410731861475,
859
+ "learning_rate": 2.641643219871597e-07,
860
+ "logits/chosen": 1.0641226768493652,
861
+ "logits/rejected": 2.259822130203247,
862
+ "logps/chosen": -399.194580078125,
863
+ "logps/rejected": -516.8468017578125,
864
+ "loss": 0.7452,
865
+ "rewards/accuracies": 0.793749988079071,
866
+ "rewards/chosen": -1.1524865627288818,
867
+ "rewards/margins": 1.3779429197311401,
868
+ "rewards/rejected": -2.5304293632507324,
869
+ "step": 510
870
+ },
871
+ {
872
+ "epoch": 0.54421768707483,
873
+ "grad_norm": 88.58051947279036,
874
+ "learning_rate": 2.550284034980507e-07,
875
+ "logits/chosen": 2.1809422969818115,
876
+ "logits/rejected": 3.113351345062256,
877
+ "logps/chosen": -471.63287353515625,
878
+ "logps/rejected": -541.3626708984375,
879
+ "loss": 0.8148,
880
+ "rewards/accuracies": 0.706250011920929,
881
+ "rewards/chosen": -1.9956514835357666,
882
+ "rewards/margins": 0.9180843234062195,
883
+ "rewards/rejected": -2.913735866546631,
884
+ "step": 520
885
+ },
886
+ {
887
+ "epoch": 0.554683411826269,
888
+ "grad_norm": 39.757328239690565,
889
+ "learning_rate": 2.4588575996495794e-07,
890
+ "logits/chosen": 0.6403165459632874,
891
+ "logits/rejected": 1.7489306926727295,
892
+ "logps/chosen": -415.21319580078125,
893
+ "logps/rejected": -499.917236328125,
894
+ "loss": 0.7868,
895
+ "rewards/accuracies": 0.7250000238418579,
896
+ "rewards/chosen": -1.4644101858139038,
897
+ "rewards/margins": 0.9843127131462097,
898
+ "rewards/rejected": -2.4487228393554688,
899
+ "step": 530
900
+ },
901
+ {
902
+ "epoch": 0.565149136577708,
903
+ "grad_norm": 40.25852990858294,
904
+ "learning_rate": 2.367486188632446e-07,
905
+ "logits/chosen": -0.26262250542640686,
906
+ "logits/rejected": 0.8144134283065796,
907
+ "logps/chosen": -404.18792724609375,
908
+ "logps/rejected": -461.5171813964844,
909
+ "loss": 0.8012,
910
+ "rewards/accuracies": 0.737500011920929,
911
+ "rewards/chosen": -1.1386909484863281,
912
+ "rewards/margins": 0.8705164194107056,
913
+ "rewards/rejected": -2.009207248687744,
914
+ "step": 540
915
+ },
916
+ {
917
+ "epoch": 0.5756148613291471,
918
+ "grad_norm": 39.67017653274985,
919
+ "learning_rate": 2.276292003092593e-07,
920
+ "logits/chosen": 0.2649956941604614,
921
+ "logits/rejected": 1.6869157552719116,
922
+ "logps/chosen": -412.3680114746094,
923
+ "logps/rejected": -480.45892333984375,
924
+ "loss": 0.8035,
925
+ "rewards/accuracies": 0.768750011920929,
926
+ "rewards/chosen": -1.410407543182373,
927
+ "rewards/margins": 1.1013845205307007,
928
+ "rewards/rejected": -2.511791944503784,
929
+ "step": 550
930
+ },
931
+ {
932
+ "epoch": 0.5860805860805861,
933
+ "grad_norm": 48.8987231552702,
934
+ "learning_rate": 2.185397007170141e-07,
935
+ "logits/chosen": 0.45904502272605896,
936
+ "logits/rejected": 2.004812717437744,
937
+ "logps/chosen": -432.9220275878906,
938
+ "logps/rejected": -502.25054931640625,
939
+ "loss": 0.8156,
940
+ "rewards/accuracies": 0.75,
941
+ "rewards/chosen": -1.5010192394256592,
942
+ "rewards/margins": 0.9719829559326172,
943
+ "rewards/rejected": -2.4730021953582764,
944
+ "step": 560
945
+ },
946
+ {
947
+ "epoch": 0.5965463108320251,
948
+ "grad_norm": 46.52679933808478,
949
+ "learning_rate": 2.094922764865619e-07,
950
+ "logits/chosen": 0.06010497733950615,
951
+ "logits/rejected": 1.337308645248413,
952
+ "logps/chosen": -434.7491760253906,
953
+ "logps/rejected": -508.05413818359375,
954
+ "loss": 0.7681,
955
+ "rewards/accuracies": 0.737500011920929,
956
+ "rewards/chosen": -1.5524450540542603,
957
+ "rewards/margins": 0.8663631677627563,
958
+ "rewards/rejected": -2.4188084602355957,
959
+ "step": 570
960
+ },
961
+ {
962
+ "epoch": 0.6070120355834642,
963
+ "grad_norm": 49.619200203200066,
964
+ "learning_rate": 2.0049902774588797e-07,
965
+ "logits/chosen": 0.18441572785377502,
966
+ "logits/rejected": 1.5482475757598877,
967
+ "logps/chosen": -395.56878662109375,
968
+ "logps/rejected": -450.8467712402344,
969
+ "loss": 0.7832,
970
+ "rewards/accuracies": 0.78125,
971
+ "rewards/chosen": -1.4222776889801025,
972
+ "rewards/margins": 0.9578974843025208,
973
+ "rewards/rejected": -2.3801751136779785,
974
+ "step": 580
975
+ },
976
+ {
977
+ "epoch": 0.6174777603349032,
978
+ "grad_norm": 38.27541061990721,
979
+ "learning_rate": 1.9157198216806238e-07,
980
+ "logits/chosen": 0.08942364156246185,
981
+ "logits/rejected": 1.5633885860443115,
982
+ "logps/chosen": -431.2159118652344,
983
+ "logps/rejected": -519.74560546875,
984
+ "loss": 0.7602,
985
+ "rewards/accuracies": 0.699999988079071,
986
+ "rewards/chosen": -1.5320241451263428,
987
+ "rewards/margins": 0.9783884882926941,
988
+ "rewards/rejected": -2.5104126930236816,
989
+ "step": 590
990
+ },
991
+ {
992
+ "epoch": 0.6279434850863422,
993
+ "grad_norm": 43.240477495824976,
994
+ "learning_rate": 1.8272307888529274e-07,
995
+ "logits/chosen": 0.5950905084609985,
996
+ "logits/rejected": 1.837579369544983,
997
+ "logps/chosen": -435.34698486328125,
998
+ "logps/rejected": -513.4215698242188,
999
+ "loss": 0.7546,
1000
+ "rewards/accuracies": 0.7749999761581421,
1001
+ "rewards/chosen": -1.5277029275894165,
1002
+ "rewards/margins": 1.0060127973556519,
1003
+ "rewards/rejected": -2.5337159633636475,
1004
+ "step": 600
1005
+ },
1006
+ {
1007
+ "epoch": 0.6279434850863422,
1008
+ "eval_logits/chosen": 0.3289608955383301,
1009
+ "eval_logits/rejected": 1.6797810792922974,
1010
+ "eval_logps/chosen": -437.6458740234375,
1011
+ "eval_logps/rejected": -526.840576171875,
1012
+ "eval_loss": 0.7723253965377808,
1013
+ "eval_rewards/accuracies": 0.783730149269104,
1014
+ "eval_rewards/chosen": -1.5566877126693726,
1015
+ "eval_rewards/margins": 1.1096901893615723,
1016
+ "eval_rewards/rejected": -2.6663780212402344,
1017
+ "eval_runtime": 437.2513,
1018
+ "eval_samples_per_second": 4.574,
1019
+ "eval_steps_per_second": 0.144,
1020
+ "step": 600
1021
+ },
1022
+ {
1023
+ "epoch": 0.6384092098377813,
1024
+ "grad_norm": 52.91138230105047,
1025
+ "learning_rate": 1.7396415252139288e-07,
1026
+ "logits/chosen": 0.2616492509841919,
1027
+ "logits/rejected": 2.0162370204925537,
1028
+ "logps/chosen": -463.6256408691406,
1029
+ "logps/rejected": -519.8214111328125,
1030
+ "loss": 0.7861,
1031
+ "rewards/accuracies": 0.7875000238418579,
1032
+ "rewards/chosen": -1.580159306526184,
1033
+ "rewards/margins": 1.1713967323303223,
1034
+ "rewards/rejected": -2.751555919647217,
1035
+ "step": 610
1036
+ },
1037
+ {
1038
+ "epoch": 0.6488749345892203,
1039
+ "grad_norm": 48.31537624152789,
1040
+ "learning_rate": 1.6530691736402316e-07,
1041
+ "logits/chosen": 0.6659356951713562,
1042
+ "logits/rejected": 1.7698688507080078,
1043
+ "logps/chosen": -429.9051818847656,
1044
+ "logps/rejected": -497.07025146484375,
1045
+ "loss": 0.7441,
1046
+ "rewards/accuracies": 0.7749999761581421,
1047
+ "rewards/chosen": -1.5888570547103882,
1048
+ "rewards/margins": 1.0183988809585571,
1049
+ "rewards/rejected": -2.607255697250366,
1050
+ "step": 620
1051
+ },
1052
+ {
1053
+ "epoch": 0.6593406593406593,
1054
+ "grad_norm": 236.736070461938,
1055
+ "learning_rate": 1.5676295169786864e-07,
1056
+ "logits/chosen": 0.7470348477363586,
1057
+ "logits/rejected": 2.425363302230835,
1058
+ "logps/chosen": -437.38818359375,
1059
+ "logps/rejected": -515.9216918945312,
1060
+ "loss": 0.7726,
1061
+ "rewards/accuracies": 0.800000011920929,
1062
+ "rewards/chosen": -1.6273107528686523,
1063
+ "rewards/margins": 1.1723902225494385,
1064
+ "rewards/rejected": -2.799700975418091,
1065
+ "step": 630
1066
+ },
1067
+ {
1068
+ "epoch": 0.6698063840920984,
1069
+ "grad_norm": 52.12999592819006,
1070
+ "learning_rate": 1.483436823197092e-07,
1071
+ "logits/chosen": 1.6778907775878906,
1072
+ "logits/rejected": 3.183229446411133,
1073
+ "logps/chosen": -463.07952880859375,
1074
+ "logps/rejected": -530.0252685546875,
1075
+ "loss": 0.7562,
1076
+ "rewards/accuracies": 0.7875000238418579,
1077
+ "rewards/chosen": -1.960402488708496,
1078
+ "rewards/margins": 1.0529009103775024,
1079
+ "rewards/rejected": -3.013303279876709,
1080
+ "step": 640
1081
+ },
1082
+ {
1083
+ "epoch": 0.6802721088435374,
1084
+ "grad_norm": 41.836074119152094,
1085
+ "learning_rate": 1.4006036925609243e-07,
1086
+ "logits/chosen": 0.887537956237793,
1087
+ "logits/rejected": 2.033219337463379,
1088
+ "logps/chosen": -427.80706787109375,
1089
+ "logps/rejected": -527.1192626953125,
1090
+ "loss": 0.7609,
1091
+ "rewards/accuracies": 0.731249988079071,
1092
+ "rewards/chosen": -1.601722002029419,
1093
+ "rewards/margins": 0.9513872861862183,
1094
+ "rewards/rejected": -2.5531094074249268,
1095
+ "step": 650
1096
+ },
1097
+ {
1098
+ "epoch": 0.6907378335949764,
1099
+ "grad_norm": 54.26043387708765,
1100
+ "learning_rate": 1.319240907040458e-07,
1101
+ "logits/chosen": 0.6521838903427124,
1102
+ "logits/rejected": 2.2099432945251465,
1103
+ "logps/chosen": -436.31256103515625,
1104
+ "logps/rejected": -529.8990478515625,
1105
+ "loss": 0.7569,
1106
+ "rewards/accuracies": 0.762499988079071,
1107
+ "rewards/chosen": -1.5409561395645142,
1108
+ "rewards/margins": 1.183617353439331,
1109
+ "rewards/rejected": -2.7245736122131348,
1110
+ "step": 660
1111
+ },
1112
+ {
1113
+ "epoch": 0.7012035583464155,
1114
+ "grad_norm": 51.79386928760391,
1115
+ "learning_rate": 1.239457282149695e-07,
1116
+ "logits/chosen": 0.2191120684146881,
1117
+ "logits/rejected": 1.4961636066436768,
1118
+ "logps/chosen": -430.20184326171875,
1119
+ "logps/rejected": -532.6630859375,
1120
+ "loss": 0.7425,
1121
+ "rewards/accuracies": 0.7562500238418579,
1122
+ "rewards/chosen": -1.416535496711731,
1123
+ "rewards/margins": 1.0790010690689087,
1124
+ "rewards/rejected": -2.4955363273620605,
1125
+ "step": 670
1126
+ },
1127
+ {
1128
+ "epoch": 0.7116692830978545,
1129
+ "grad_norm": 39.78182707020652,
1130
+ "learning_rate": 1.1613595214152711e-07,
1131
+ "logits/chosen": 0.693936288356781,
1132
+ "logits/rejected": 1.9294044971466064,
1133
+ "logps/chosen": -457.75494384765625,
1134
+ "logps/rejected": -548.4839477539062,
1135
+ "loss": 0.7871,
1136
+ "rewards/accuracies": 0.762499988079071,
1137
+ "rewards/chosen": -1.460933804512024,
1138
+ "rewards/margins": 1.144832968711853,
1139
+ "rewards/rejected": -2.605767011642456,
1140
+ "step": 680
1141
+ },
1142
+ {
1143
+ "epoch": 0.7221350078492935,
1144
+ "grad_norm": 39.2241916837222,
1145
+ "learning_rate": 1.0850520736699362e-07,
1146
+ "logits/chosen": 0.6683017015457153,
1147
+ "logits/rejected": 2.4072530269622803,
1148
+ "logps/chosen": -429.7069396972656,
1149
+ "logps/rejected": -508.4949645996094,
1150
+ "loss": 0.7309,
1151
+ "rewards/accuracies": 0.8062499761581421,
1152
+ "rewards/chosen": -1.590958833694458,
1153
+ "rewards/margins": 1.1668089628219604,
1154
+ "rewards/rejected": -2.757768154144287,
1155
+ "step": 690
1156
+ },
1157
+ {
1158
+ "epoch": 0.7326007326007326,
1159
+ "grad_norm": 48.29061638514223,
1160
+ "learning_rate": 1.0106369933615042e-07,
1161
+ "logits/chosen": 0.9930335283279419,
1162
+ "logits/rejected": 2.0346710681915283,
1163
+ "logps/chosen": -419.7799377441406,
1164
+ "logps/rejected": -497.60980224609375,
1165
+ "loss": 0.7533,
1166
+ "rewards/accuracies": 0.7562500238418579,
1167
+ "rewards/chosen": -1.6569970846176147,
1168
+ "rewards/margins": 0.933269202709198,
1169
+ "rewards/rejected": -2.590266466140747,
1170
+ "step": 700
1171
+ },
1172
+ {
1173
+ "epoch": 0.7326007326007326,
1174
+ "eval_logits/chosen": 0.698245108127594,
1175
+ "eval_logits/rejected": 2.0190064907073975,
1176
+ "eval_logps/chosen": -444.4420166015625,
1177
+ "eval_logps/rejected": -531.2305908203125,
1178
+ "eval_loss": 0.7731993198394775,
1179
+ "eval_rewards/accuracies": 0.783730149269104,
1180
+ "eval_rewards/chosen": -1.624650239944458,
1181
+ "eval_rewards/margins": 1.0856273174285889,
1182
+ "eval_rewards/rejected": -2.710277795791626,
1183
+ "eval_runtime": 434.4645,
1184
+ "eval_samples_per_second": 4.603,
1185
+ "eval_steps_per_second": 0.145,
1186
+ "step": 700
1187
+ },
1188
+ {
1189
+ "epoch": 0.7430664573521716,
1190
+ "grad_norm": 53.90683395989049,
1191
+ "learning_rate": 9.382138040640714e-08,
1192
+ "logits/chosen": 0.8914741277694702,
1193
+ "logits/rejected": 2.3093373775482178,
1194
+ "logps/chosen": -432.30670166015625,
1195
+ "logps/rejected": -514.6910400390625,
1196
+ "loss": 0.8039,
1197
+ "rewards/accuracies": 0.737500011920929,
1198
+ "rewards/chosen": -1.7386293411254883,
1199
+ "rewards/margins": 0.9785627126693726,
1200
+ "rewards/rejected": -2.7171919345855713,
1201
+ "step": 710
1202
+ },
1203
+ {
1204
+ "epoch": 0.7535321821036107,
1205
+ "grad_norm": 42.68152796542752,
1206
+ "learning_rate": 8.678793653740632e-08,
1207
+ "logits/chosen": 0.6328099370002747,
1208
+ "logits/rejected": 1.9483941793441772,
1209
+ "logps/chosen": -434.7218322753906,
1210
+ "logps/rejected": -515.1497192382812,
1211
+ "loss": 0.7468,
1212
+ "rewards/accuracies": 0.75,
1213
+ "rewards/chosen": -1.5094399452209473,
1214
+ "rewards/margins": 1.1674511432647705,
1215
+ "rewards/rejected": -2.6768908500671387,
1216
+ "step": 720
1217
+ },
1218
+ {
1219
+ "epoch": 0.7639979068550498,
1220
+ "grad_norm": 127.83230371998283,
1221
+ "learning_rate": 7.997277433690983e-08,
1222
+ "logits/chosen": 0.7204136848449707,
1223
+ "logits/rejected": 1.8154442310333252,
1224
+ "logps/chosen": -442.3927307128906,
1225
+ "logps/rejected": -500.26715087890625,
1226
+ "loss": 0.7779,
1227
+ "rewards/accuracies": 0.75,
1228
+ "rewards/chosen": -1.7704966068267822,
1229
+ "rewards/margins": 0.8622214198112488,
1230
+ "rewards/rejected": -2.6327178478240967,
1231
+ "step": 730
1232
+ },
1233
+ {
1234
+ "epoch": 0.7744636316064888,
1235
+ "grad_norm": 47.565336938926386,
1236
+ "learning_rate": 7.338500848029602e-08,
1237
+ "logits/chosen": 0.9852989912033081,
1238
+ "logits/rejected": 2.030043601989746,
1239
+ "logps/chosen": -424.19757080078125,
1240
+ "logps/rejected": -495.47698974609375,
1241
+ "loss": 0.7867,
1242
+ "rewards/accuracies": 0.731249988079071,
1243
+ "rewards/chosen": -1.7129170894622803,
1244
+ "rewards/margins": 0.9195534586906433,
1245
+ "rewards/rejected": -2.6324708461761475,
1246
+ "step": 740
1247
+ },
1248
+ {
1249
+ "epoch": 0.7849293563579278,
1250
+ "grad_norm": 44.5701608685198,
1251
+ "learning_rate": 6.70334495204884e-08,
1252
+ "logits/chosen": 0.9759708642959595,
1253
+ "logits/rejected": 2.1048731803894043,
1254
+ "logps/chosen": -432.25665283203125,
1255
+ "logps/rejected": -540.9370727539062,
1256
+ "loss": 0.7385,
1257
+ "rewards/accuracies": 0.71875,
1258
+ "rewards/chosen": -1.8538835048675537,
1259
+ "rewards/margins": 1.118869662284851,
1260
+ "rewards/rejected": -2.9727535247802734,
1261
+ "step": 750
1262
+ },
1263
+ {
1264
+ "epoch": 0.7953950811093669,
1265
+ "grad_norm": 41.44381898960517,
1266
+ "learning_rate": 6.092659210462231e-08,
1267
+ "logits/chosen": 1.2645413875579834,
1268
+ "logits/rejected": 2.389688014984131,
1269
+ "logps/chosen": -435.0619201660156,
1270
+ "logps/rejected": -540.019287109375,
1271
+ "loss": 0.7955,
1272
+ "rewards/accuracies": 0.7562500238418579,
1273
+ "rewards/chosen": -1.9370710849761963,
1274
+ "rewards/margins": 1.0329408645629883,
1275
+ "rewards/rejected": -2.9700117111206055,
1276
+ "step": 760
1277
+ },
1278
+ {
1279
+ "epoch": 0.8058608058608059,
1280
+ "grad_norm": 30.769792434058846,
1281
+ "learning_rate": 5.507260361320737e-08,
1282
+ "logits/chosen": 0.6200178861618042,
1283
+ "logits/rejected": 1.4624145030975342,
1284
+ "logps/chosen": -481.1289978027344,
1285
+ "logps/rejected": -598.737060546875,
1286
+ "loss": 0.7364,
1287
+ "rewards/accuracies": 0.731249988079071,
1288
+ "rewards/chosen": -1.7775561809539795,
1289
+ "rewards/margins": 0.9905630350112915,
1290
+ "rewards/rejected": -2.7681193351745605,
1291
+ "step": 770
1292
+ },
1293
+ {
1294
+ "epoch": 0.8163265306122449,
1295
+ "grad_norm": 41.661722006018714,
1296
+ "learning_rate": 4.947931323697982e-08,
1297
+ "logits/chosen": 0.542585015296936,
1298
+ "logits/rejected": 1.6623198986053467,
1299
+ "logps/chosen": -482.098388671875,
1300
+ "logps/rejected": -534.7088623046875,
1301
+ "loss": 0.7831,
1302
+ "rewards/accuracies": 0.737500011920929,
1303
+ "rewards/chosen": -1.63140869140625,
1304
+ "rewards/margins": 1.0396373271942139,
1305
+ "rewards/rejected": -2.671046257019043,
1306
+ "step": 780
1307
+ },
1308
+ {
1309
+ "epoch": 0.826792255363684,
1310
+ "grad_norm": 58.38987202011464,
1311
+ "learning_rate": 4.415420150605398e-08,
1312
+ "logits/chosen": 0.6806478500366211,
1313
+ "logits/rejected": 2.5227348804473877,
1314
+ "logps/chosen": -446.99407958984375,
1315
+ "logps/rejected": -531.7085571289062,
1316
+ "loss": 0.8047,
1317
+ "rewards/accuracies": 0.8062499761581421,
1318
+ "rewards/chosen": -1.6840887069702148,
1319
+ "rewards/margins": 1.1803264617919922,
1320
+ "rewards/rejected": -2.864415407180786,
1321
+ "step": 790
1322
+ },
1323
+ {
1324
+ "epoch": 0.837257980115123,
1325
+ "grad_norm": 54.92650887953424,
1326
+ "learning_rate": 3.9104390285376374e-08,
1327
+ "logits/chosen": 0.9064415097236633,
1328
+ "logits/rejected": 2.0099093914031982,
1329
+ "logps/chosen": -438.2958068847656,
1330
+ "logps/rejected": -536.7066650390625,
1331
+ "loss": 0.7498,
1332
+ "rewards/accuracies": 0.737500011920929,
1333
+ "rewards/chosen": -1.7221300601959229,
1334
+ "rewards/margins": 1.0116078853607178,
1335
+ "rewards/rejected": -2.7337381839752197,
1336
+ "step": 800
1337
+ },
1338
+ {
1339
+ "epoch": 0.837257980115123,
1340
+ "eval_logits/chosen": 0.42459264397621155,
1341
+ "eval_logits/rejected": 1.701023817062378,
1342
+ "eval_logps/chosen": -437.61517333984375,
1343
+ "eval_logps/rejected": -523.4053344726562,
1344
+ "eval_loss": 0.770976722240448,
1345
+ "eval_rewards/accuracies": 0.7857142686843872,
1346
+ "eval_rewards/chosen": -1.5563815832138062,
1347
+ "eval_rewards/margins": 1.07564377784729,
1348
+ "eval_rewards/rejected": -2.6320252418518066,
1349
+ "eval_runtime": 433.8899,
1350
+ "eval_samples_per_second": 4.609,
1351
+ "eval_steps_per_second": 0.145,
1352
+ "step": 800
1353
+ },
1354
+ {
1355
+ "epoch": 0.847723704866562,
1356
+ "grad_norm": 43.87215946696443,
1357
+ "learning_rate": 3.433663324986208e-08,
1358
+ "logits/chosen": 0.48925551772117615,
1359
+ "logits/rejected": 1.9570659399032593,
1360
+ "logps/chosen": -447.86956787109375,
1361
+ "logps/rejected": -505.90460205078125,
1362
+ "loss": 0.7762,
1363
+ "rewards/accuracies": 0.78125,
1364
+ "rewards/chosen": -1.5664446353912354,
1365
+ "rewards/margins": 1.0283935070037842,
1366
+ "rewards/rejected": -2.5948386192321777,
1367
+ "step": 810
1368
+ },
1369
+ {
1370
+ "epoch": 0.858189429618001,
1371
+ "grad_norm": 51.45847719694428,
1372
+ "learning_rate": 2.9857306851953897e-08,
1373
+ "logits/chosen": 0.5297014117240906,
1374
+ "logits/rejected": 1.7246748208999634,
1375
+ "logps/chosen": -460.77532958984375,
1376
+ "logps/rejected": -539.4356689453125,
1377
+ "loss": 0.7816,
1378
+ "rewards/accuracies": 0.7437499761581421,
1379
+ "rewards/chosen": -1.5598516464233398,
1380
+ "rewards/margins": 1.0026460886001587,
1381
+ "rewards/rejected": -2.562497615814209,
1382
+ "step": 820
1383
+ },
1384
+ {
1385
+ "epoch": 0.8686551543694401,
1386
+ "grad_norm": 42.163832663541584,
1387
+ "learning_rate": 2.567240179368185e-08,
1388
+ "logits/chosen": 0.6047491431236267,
1389
+ "logits/rejected": 1.7334954738616943,
1390
+ "logps/chosen": -419.39410400390625,
1391
+ "logps/rejected": -516.76171875,
1392
+ "loss": 0.7858,
1393
+ "rewards/accuracies": 0.706250011920929,
1394
+ "rewards/chosen": -1.612444281578064,
1395
+ "rewards/margins": 1.0573890209197998,
1396
+ "rewards/rejected": -2.669833183288574,
1397
+ "step": 830
1398
+ },
1399
+ {
1400
+ "epoch": 0.8791208791208791,
1401
+ "grad_norm": 39.25207064131512,
1402
+ "learning_rate": 2.1787515014630357e-08,
1403
+ "logits/chosen": 0.570409893989563,
1404
+ "logits/rejected": 2.050328493118286,
1405
+ "logps/chosen": -446.327392578125,
1406
+ "logps/rejected": -537.5885009765625,
1407
+ "loss": 0.7303,
1408
+ "rewards/accuracies": 0.762499988079071,
1409
+ "rewards/chosen": -1.6405481100082397,
1410
+ "rewards/margins": 1.0744378566741943,
1411
+ "rewards/rejected": -2.7149858474731445,
1412
+ "step": 840
1413
+ },
1414
+ {
1415
+ "epoch": 0.8895866038723181,
1416
+ "grad_norm": 38.654218714989135,
1417
+ "learning_rate": 1.820784220652766e-08,
1418
+ "logits/chosen": 0.9261777997016907,
1419
+ "logits/rejected": 2.4772746562957764,
1420
+ "logps/chosen": -441.26593017578125,
1421
+ "logps/rejected": -515.775634765625,
1422
+ "loss": 0.7198,
1423
+ "rewards/accuracies": 0.8062499761581421,
1424
+ "rewards/chosen": -1.6039543151855469,
1425
+ "rewards/margins": 1.2189046144485474,
1426
+ "rewards/rejected": -2.822859048843384,
1427
+ "step": 850
1428
+ },
1429
+ {
1430
+ "epoch": 0.9000523286237572,
1431
+ "grad_norm": 40.349585259848205,
1432
+ "learning_rate": 1.4938170864468636e-08,
1433
+ "logits/chosen": 0.8169757127761841,
1434
+ "logits/rejected": 2.297128200531006,
1435
+ "logps/chosen": -451.08831787109375,
1436
+ "logps/rejected": -525.7896118164062,
1437
+ "loss": 0.7621,
1438
+ "rewards/accuracies": 0.7749999761581421,
1439
+ "rewards/chosen": -1.6278215646743774,
1440
+ "rewards/margins": 1.0165573358535767,
1441
+ "rewards/rejected": -2.644378662109375,
1442
+ "step": 860
1443
+ },
1444
+ {
1445
+ "epoch": 0.9105180533751962,
1446
+ "grad_norm": 51.17316772784962,
1447
+ "learning_rate": 1.1982873884064465e-08,
1448
+ "logits/chosen": 0.41630926728248596,
1449
+ "logits/rejected": 1.8632510900497437,
1450
+ "logps/chosen": -424.0140686035156,
1451
+ "logps/rejected": -504.92547607421875,
1452
+ "loss": 0.7708,
1453
+ "rewards/accuracies": 0.7562500238418579,
1454
+ "rewards/chosen": -1.5223877429962158,
1455
+ "rewards/margins": 1.0619755983352661,
1456
+ "rewards/rejected": -2.5843632221221924,
1457
+ "step": 870
1458
+ },
1459
+ {
1460
+ "epoch": 0.9209837781266352,
1461
+ "grad_norm": 101.12783488833479,
1462
+ "learning_rate": 9.345903713082304e-09,
1463
+ "logits/chosen": 0.549272894859314,
1464
+ "logits/rejected": 1.6597293615341187,
1465
+ "logps/chosen": -451.48077392578125,
1466
+ "logps/rejected": -547.0665893554688,
1467
+ "loss": 0.765,
1468
+ "rewards/accuracies": 0.75,
1469
+ "rewards/chosen": -1.473332166671753,
1470
+ "rewards/margins": 1.0507396459579468,
1471
+ "rewards/rejected": -2.52407169342041,
1472
+ "step": 880
1473
+ },
1474
+ {
1475
+ "epoch": 0.9314495028780743,
1476
+ "grad_norm": 38.221448327358026,
1477
+ "learning_rate": 7.030787065396865e-09,
1478
+ "logits/chosen": 0.5618971586227417,
1479
+ "logits/rejected": 1.767998456954956,
1480
+ "logps/chosen": -447.612060546875,
1481
+ "logps/rejected": -547.6981201171875,
1482
+ "loss": 0.763,
1483
+ "rewards/accuracies": 0.7250000238418579,
1484
+ "rewards/chosen": -1.5987756252288818,
1485
+ "rewards/margins": 1.075731873512268,
1486
+ "rewards/rejected": -2.6745076179504395,
1487
+ "step": 890
1488
+ },
1489
+ {
1490
+ "epoch": 0.9419152276295133,
1491
+ "grad_norm": 44.6705960527571,
1492
+ "learning_rate": 5.04062020432286e-09,
1493
+ "logits/chosen": 0.39258813858032227,
1494
+ "logits/rejected": 1.4880520105361938,
1495
+ "logps/chosen": -429.32666015625,
1496
+ "logps/rejected": -534.9125366210938,
1497
+ "loss": 0.7471,
1498
+ "rewards/accuracies": 0.7749999761581421,
1499
+ "rewards/chosen": -1.4897866249084473,
1500
+ "rewards/margins": 1.0451949834823608,
1501
+ "rewards/rejected": -2.5349814891815186,
1502
+ "step": 900
1503
+ },
1504
+ {
1505
+ "epoch": 0.9419152276295133,
1506
+ "eval_logits/chosen": 0.5837222337722778,
1507
+ "eval_logits/rejected": 1.9212509393692017,
+ "eval_logps/chosen": -439.8711853027344,
+ "eval_logps/rejected": -529.835205078125,
+ "eval_loss": 0.7707245945930481,
+ "eval_rewards/accuracies": 0.7857142686843872,
+ "eval_rewards/chosen": -1.5789415836334229,
+ "eval_rewards/margins": 1.1173815727233887,
+ "eval_rewards/rejected": -2.6963229179382324,
+ "eval_runtime": 434.4266,
+ "eval_samples_per_second": 4.604,
+ "eval_steps_per_second": 0.145,
+ "step": 900
+ },
+ {
+ "epoch": 0.9523809523809523,
+ "grad_norm": 67.71518744315276,
+ "learning_rate": 3.3780648016376866e-09,
+ "logits/chosen": 0.7767223119735718,
+ "logits/rejected": 1.9748680591583252,
+ "logps/chosen": -378.9118347167969,
+ "logps/rejected": -469.43475341796875,
+ "loss": 0.7758,
+ "rewards/accuracies": 0.78125,
+ "rewards/chosen": -1.5541870594024658,
+ "rewards/margins": 1.01143479347229,
+ "rewards/rejected": -2.565621852874756,
+ "step": 910
+ },
+ {
+ "epoch": 0.9628466771323915,
+ "grad_norm": 41.38648210926084,
+ "learning_rate": 2.0453443778310766e-09,
+ "logits/chosen": 0.674950897693634,
+ "logits/rejected": 1.9640384912490845,
+ "logps/chosen": -441.17425537109375,
+ "logps/rejected": -512.7354736328125,
+ "loss": 0.7815,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -1.5449568033218384,
+ "rewards/margins": 1.0664222240447998,
+ "rewards/rejected": -2.6113791465759277,
+ "step": 920
+ },
+ {
+ "epoch": 0.9733124018838305,
+ "grad_norm": 44.32409040550577,
+ "learning_rate": 1.0442413283435758e-09,
+ "logits/chosen": 0.3402867913246155,
+ "logits/rejected": 2.12355375289917,
+ "logps/chosen": -458.4666442871094,
+ "logps/rejected": -520.189697265625,
+ "loss": 0.7506,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -1.4366506338119507,
+ "rewards/margins": 1.1228129863739014,
+ "rewards/rejected": -2.5594632625579834,
+ "step": 930
+ },
+ {
+ "epoch": 0.9837781266352695,
+ "grad_norm": 37.11757505544597,
+ "learning_rate": 3.760945397705828e-10,
+ "logits/chosen": 0.5025678873062134,
+ "logits/rejected": 1.861249327659607,
+ "logps/chosen": -410.0967712402344,
+ "logps/rejected": -514.7454223632812,
+ "loss": 0.7376,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -1.468609094619751,
+ "rewards/margins": 1.0813006162643433,
+ "rewards/rejected": -2.549909830093384,
+ "step": 940
+ },
+ {
+ "epoch": 0.9942438513867086,
+ "grad_norm": 43.09246402370499,
+ "learning_rate": 4.17975992204056e-11,
+ "logits/chosen": 0.6304216384887695,
+ "logits/rejected": 1.4559619426727295,
+ "logps/chosen": -462.8529357910156,
+ "logps/rejected": -536.662109375,
+ "loss": 0.757,
+ "rewards/accuracies": 0.706250011920929,
+ "rewards/chosen": -1.6002047061920166,
+ "rewards/margins": 0.9165245890617371,
+ "rewards/rejected": -2.5167293548583984,
+ "step": 950
+ },
+ {
+ "epoch": 0.9994767137624281,
+ "step": 955,
+ "total_flos": 0.0,
+ "train_loss": 0.8142244146756477,
+ "train_runtime": 34283.0174,
+ "train_samples_per_second": 1.783,
+ "train_steps_per_second": 0.028
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 955,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }