Add files using upload-large-folder tool
- .gitattributes +2 -0
- checkpoint-540/README.md +202 -0
- checkpoint-540/adapter_config.json +34 -0
- checkpoint-540/adapter_model.safetensors +3 -0
- checkpoint-540/added_tokens.json +24 -0
- checkpoint-540/merges.txt +0 -0
- checkpoint-540/optimizer.pt +3 -0
- checkpoint-540/rng_state_0.pth +3 -0
- checkpoint-540/rng_state_1.pth +3 -0
- checkpoint-540/scheduler.pt +3 -0
- checkpoint-540/special_tokens_map.json +31 -0
- checkpoint-540/tokenizer.json +3 -0
- checkpoint-540/tokenizer_config.json +209 -0
- checkpoint-540/trainer_state.json +0 -0
- checkpoint-540/training_args.bin +3 -0
- checkpoint-540/vocab.json +0 -0
- checkpoint-600/README.md +202 -0
- checkpoint-600/adapter_config.json +34 -0
- checkpoint-600/adapter_model.safetensors +3 -0
- checkpoint-600/added_tokens.json +24 -0
- checkpoint-600/merges.txt +0 -0
- checkpoint-600/optimizer.pt +3 -0
- checkpoint-600/rng_state_0.pth +3 -0
- checkpoint-600/rng_state_1.pth +3 -0
- checkpoint-600/scheduler.pt +3 -0
- checkpoint-600/special_tokens_map.json +31 -0
- checkpoint-600/tokenizer.json +3 -0
- checkpoint-600/tokenizer_config.json +209 -0
- checkpoint-600/trainer_state.json +0 -0
- checkpoint-600/training_args.bin +3 -0
- checkpoint-600/vocab.json +0 -0
- trainer_log.jsonl +117 -0
.gitattributes
CHANGED
@@ -41,3 +41,5 @@ checkpoint-60/tokenizer.json filter=lfs diff=lfs merge=lfs -text
 checkpoint-120/tokenizer.json filter=lfs diff=lfs merge=lfs -text
 checkpoint-240/tokenizer.json filter=lfs diff=lfs merge=lfs -text
 checkpoint-180/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+checkpoint-540/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+checkpoint-600/tokenizer.json filter=lfs diff=lfs merge=lfs -text
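The two added lines route both new `tokenizer.json` files through the Git LFS filter, so the repository stores small pointer files instead of the multi-megabyte blobs. A stdlib-only sketch of how such attribute lines select paths (note: real `.gitattributes` matching uses gitignore-style rules, which `fnmatch` only approximates; the helper names are hypothetical):

```python
from fnmatch import fnmatch

# Two lines from the .gitattributes hunk above.
ATTR_LINES = [
    "checkpoint-540/tokenizer.json filter=lfs diff=lfs merge=lfs -text",
    "checkpoint-600/tokenizer.json filter=lfs diff=lfs merge=lfs -text",
]

def lfs_patterns(lines):
    """Collect patterns whose attributes route the file through the LFS filter."""
    return [ln.split()[0] for ln in lines if "filter=lfs" in ln.split()[1:]]

def is_lfs_tracked(path, lines):
    """True if `path` matches any LFS-tracked pattern."""
    return any(fnmatch(path, pat) for pat in lfs_patterns(lines))

print(is_lfs_tracked("checkpoint-540/tokenizer.json", ATTR_LINES))  # True
```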
checkpoint-540/README.md
ADDED
@@ -0,0 +1,202 @@
+---
+base_model: Qwen/Qwen2.5-Coder-7B-Instruct
+library_name: peft
+---
+
+# Model Card for Model ID
+
+<!-- Provide a quick summary of what the model is/does. -->
+
+
+
+## Model Details
+
+### Model Description
+
+<!-- Provide a longer summary of what this model is. -->
+
+
+
+- **Developed by:** [More Information Needed]
+- **Funded by [optional]:** [More Information Needed]
+- **Shared by [optional]:** [More Information Needed]
+- **Model type:** [More Information Needed]
+- **Language(s) (NLP):** [More Information Needed]
+- **License:** [More Information Needed]
+- **Finetuned from model [optional]:** [More Information Needed]
+
+### Model Sources [optional]
+
+<!-- Provide the basic links for the model. -->
+
+- **Repository:** [More Information Needed]
+- **Paper [optional]:** [More Information Needed]
+- **Demo [optional]:** [More Information Needed]
+
+## Uses
+
+<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+### Direct Use
+
+<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+[More Information Needed]
+
+### Downstream Use [optional]
+
+<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+[More Information Needed]
+
+### Out-of-Scope Use
+
+<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+[More Information Needed]
+
+## Bias, Risks, and Limitations
+
+<!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+[More Information Needed]
+
+### Recommendations
+
+<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+## How to Get Started with the Model
+
+Use the code below to get started with the model.
+
+[More Information Needed]
+
+## Training Details
+
+### Training Data
+
+<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+[More Information Needed]
+
+### Training Procedure
+
+<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+#### Preprocessing [optional]
+
+[More Information Needed]
+
+
+#### Training Hyperparameters
+
+- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+#### Speeds, Sizes, Times [optional]
+
+<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+[More Information Needed]
+
+## Evaluation
+
+<!-- This section describes the evaluation protocols and provides the results. -->
+
+### Testing Data, Factors & Metrics
+
+#### Testing Data
+
+<!-- This should link to a Dataset Card if possible. -->
+
+[More Information Needed]
+
+#### Factors
+
+<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+[More Information Needed]
+
+#### Metrics
+
+<!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+[More Information Needed]
+
+### Results
+
+[More Information Needed]
+
+#### Summary
+
+
+
+## Model Examination [optional]
+
+<!-- Relevant interpretability work for the model goes here -->
+
+[More Information Needed]
+
+## Environmental Impact
+
+<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+- **Hardware Type:** [More Information Needed]
+- **Hours used:** [More Information Needed]
+- **Cloud Provider:** [More Information Needed]
+- **Compute Region:** [More Information Needed]
+- **Carbon Emitted:** [More Information Needed]
+
+## Technical Specifications [optional]
+
+### Model Architecture and Objective
+
+[More Information Needed]
+
+### Compute Infrastructure
+
+[More Information Needed]
+
+#### Hardware
+
+[More Information Needed]
+
+#### Software
+
+[More Information Needed]
+
+## Citation [optional]
+
+<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+**BibTeX:**
+
+[More Information Needed]
+
+**APA:**
+
+[More Information Needed]
+
+## Glossary [optional]
+
+<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+[More Information Needed]
+
+## More Information [optional]
+
+[More Information Needed]
+
+## Model Card Authors [optional]
+
+[More Information Needed]
+
+## Model Card Contact
+
+[More Information Needed]
+### Framework versions
+
+- PEFT 0.12.0
checkpoint-540/adapter_config.json
ADDED
@@ -0,0 +1,34 @@
+{
+  "alpha_pattern": {},
+  "auto_mapping": null,
+  "base_model_name_or_path": "Qwen/Qwen2.5-Coder-7B-Instruct",
+  "bias": "none",
+  "fan_in_fan_out": false,
+  "inference_mode": true,
+  "init_lora_weights": true,
+  "layer_replication": null,
+  "layers_pattern": null,
+  "layers_to_transform": null,
+  "loftq_config": {},
+  "lora_alpha": 32,
+  "lora_dropout": 0.1,
+  "megatron_config": null,
+  "megatron_core": "megatron.core",
+  "modules_to_save": null,
+  "peft_type": "LORA",
+  "r": 16,
+  "rank_pattern": {},
+  "revision": null,
+  "target_modules": [
+    "down_proj",
+    "k_proj",
+    "v_proj",
+    "q_proj",
+    "o_proj",
+    "gate_proj",
+    "up_proj"
+  ],
+  "task_type": "CAUSAL_LM",
+  "use_dora": false,
+  "use_rslora": false
+}
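The adapter config applies rank-16 LoRA to every attention and MLP projection of the base model. As a rough sanity check (a sketch only: the Qwen2.5-Coder-7B shapes below are assumed from the base model's published config and are not part of this diff), the adapter's parameter count follows from `r` and the projection shapes, and it lines up with the 161,533,192-byte `adapter_model.safetensors` (fp32 weights plus a small header):

```python
# Assumed Qwen2.5-Coder-7B geometry: hidden 3584, GQA KV width 512,
# MLP intermediate 18944, 28 layers -- taken from the base model config,
# not from this diff.
hidden, kv_dim, inter, layers, r = 3584, 512, 18944, 28, 16

def lora_params(n_in, n_out, rank=r):
    """LoRA adds A (rank x n_in) and B (n_out x rank) per wrapped projection."""
    return rank * (n_in + n_out)

per_layer = (
    lora_params(hidden, hidden)    # q_proj
    + lora_params(hidden, kv_dim)  # k_proj
    + lora_params(hidden, kv_dim)  # v_proj
    + lora_params(hidden, hidden)  # o_proj
    + lora_params(hidden, inter)   # gate_proj
    + lora_params(hidden, inter)   # up_proj
    + lora_params(inter, hidden)   # down_proj
)
total = per_layer * layers
print(total, total * 4)  # ~40.4M params, ~161.5 MB in fp32
```

The fp32 byte count (161,480,704) falls just under the pointer file's 161,533,192-byte size, consistent with a small safetensors header on top of the weights.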
checkpoint-540/adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:789721ab573afe48eb4cc305162e5970fd0d201c7686a7e3da69098e4447b72c
+size 161533192
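The three-line blocks like the one above are Git LFS pointer files: the actual blob lives in LFS storage, and the checked-in file records only its SHA-256 and byte size. A stdlib-only sketch (the `parse_pointer`/`verify` helpers are hypothetical, not part of git-lfs) of checking a downloaded blob against its pointer:

```python
import hashlib

def parse_pointer(text):
    """Parse a git-lfs pointer file into a {key: value} dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

def verify(blob: bytes, pointer_text: str) -> bool:
    """Check that `blob` matches the oid and size recorded in the pointer."""
    p = parse_pointer(pointer_text)
    digest = hashlib.sha256(blob).hexdigest()
    return p["oid"] == f"sha256:{digest}" and int(p["size"]) == len(blob)

# Synthetic round-trip (not the real adapter blob):
blob = b"dummy"
ptr = ("version https://git-lfs.github.com/spec/v1\n"
       f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
       f"size {len(blob)}\n")
print(verify(blob, ptr))  # True
```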
checkpoint-540/added_tokens.json
ADDED
@@ -0,0 +1,24 @@
+{
+  "</tool_call>": 151658,
+  "<tool_call>": 151657,
+  "<|box_end|>": 151649,
+  "<|box_start|>": 151648,
+  "<|endoftext|>": 151643,
+  "<|file_sep|>": 151664,
+  "<|fim_middle|>": 151660,
+  "<|fim_pad|>": 151662,
+  "<|fim_prefix|>": 151659,
+  "<|fim_suffix|>": 151661,
+  "<|im_end|>": 151645,
+  "<|im_start|>": 151644,
+  "<|image_pad|>": 151655,
+  "<|object_ref_end|>": 151647,
+  "<|object_ref_start|>": 151646,
+  "<|quad_end|>": 151651,
+  "<|quad_start|>": 151650,
+  "<|repo_name|>": 151663,
+  "<|video_pad|>": 151656,
+  "<|vision_end|>": 151653,
+  "<|vision_pad|>": 151654,
+  "<|vision_start|>": 151652
+}
checkpoint-540/merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
checkpoint-540/optimizer.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b758a34f7290e6023dec62bb924e657c20e42d5eaa8ab24899e3636e07a208f2
+size 323290986
checkpoint-540/rng_state_0.pth
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff9e2109e68c63080b51d3466b3565bd54a973efd2a8f1d5681d2f4d63782fcf
+size 14512
checkpoint-540/rng_state_1.pth
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cbce134e1e36c3a83c568a5dd61abf816262614c1239961633851c5cf4d2525
+size 14512
checkpoint-540/scheduler.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3770fccb602058943b601a095f565a67edc10b6816e97c5ed1b34bc740c2184
+size 1064
checkpoint-540/special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
+{
+  "additional_special_tokens": [
+    "<|im_start|>",
+    "<|im_end|>",
+    "<|object_ref_start|>",
+    "<|object_ref_end|>",
+    "<|box_start|>",
+    "<|box_end|>",
+    "<|quad_start|>",
+    "<|quad_end|>",
+    "<|vision_start|>",
+    "<|vision_end|>",
+    "<|vision_pad|>",
+    "<|image_pad|>",
+    "<|video_pad|>"
+  ],
+  "eos_token": {
+    "content": "<|im_end|>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
checkpoint-540/tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+size 11421896
checkpoint-540/tokenizer_config.json
ADDED
@@ -0,0 +1,209 @@
+{
+  "add_bos_token": false,
+  "add_prefix_space": false,
+  "added_tokens_decoder": {
+    "151643": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151644": {
+      "content": "<|im_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151645": {
+      "content": "<|im_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151646": {
+      "content": "<|object_ref_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151647": {
+      "content": "<|object_ref_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151648": {
+      "content": "<|box_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151649": {
+      "content": "<|box_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151650": {
+      "content": "<|quad_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151651": {
+      "content": "<|quad_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151652": {
+      "content": "<|vision_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151653": {
+      "content": "<|vision_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151654": {
+      "content": "<|vision_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151655": {
+      "content": "<|image_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151656": {
+      "content": "<|video_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151657": {
+      "content": "<tool_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151658": {
+      "content": "</tool_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151659": {
+      "content": "<|fim_prefix|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151660": {
+      "content": "<|fim_middle|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151661": {
+      "content": "<|fim_suffix|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151662": {
+      "content": "<|fim_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151663": {
+      "content": "<|repo_name|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151664": {
+      "content": "<|file_sep|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    }
+  },
+  "additional_special_tokens": [
+    "<|im_start|>",
+    "<|im_end|>",
+    "<|object_ref_start|>",
+    "<|object_ref_end|>",
+    "<|box_start|>",
+    "<|box_end|>",
+    "<|quad_start|>",
+    "<|quad_end|>",
+    "<|vision_start|>",
+    "<|vision_end|>",
+    "<|vision_pad|>",
+    "<|image_pad|>",
+    "<|video_pad|>"
+  ],
+  "bos_token": null,
+  "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- messages[0]['content'] }}\n    {%- else %}\n        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n    {%- endif %}\n    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n    {%- else %}\n        {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {{- '<|im_start|>' + message.role }}\n        {%- if message.content %}\n            {{- '\\n' + message.content }}\n        {%- endif %}\n        {%- for tool_call in message.tool_calls %}\n            {%- if tool_call.function is defined %}\n                {%- set tool_call = tool_call.function %}\n            {%- endif %}\n            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n            {{- tool_call.name }}\n            {{- '\", \"arguments\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- '}\\n</tool_call>' }}\n        {%- endfor %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "<|im_end|>",
+  "errors": "replace",
+  "extra_special_tokens": {},
+  "model_max_length": 17000,
+  "pad_token": "<|endoftext|>",
+  "padding_side": "right",
+  "split_special_tokens": false,
+  "tokenizer_class": "Qwen2Tokenizer",
+  "unk_token": null
+}
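The `chat_template` in this tokenizer config is a Jinja template implementing Qwen's ChatML format. As a sketch of what its no-tools branch renders (a hand-rolled stand-in for illustration, not the template engine or `tokenizer.apply_chat_template` itself):

```python
# Default system prompt taken from the chat_template string above.
DEFAULT_SYSTEM = "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."

def render_chatml(messages, add_generation_prompt=True):
    """Mimic the no-tools branch of the chat_template above."""
    if messages and messages[0]["role"] == "system":
        out = f"<|im_start|>system\n{messages[0]['content']}<|im_end|>\n"
        messages = messages[1:]
    else:
        # The template falls back to Qwen's default system prompt.
        out = f"<|im_start|>system\n{DEFAULT_SYSTEM}<|im_end|>\n"
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

print(render_chatml([{"role": "user", "content": "Hi"}]))
```

Together with `eos_token: <|im_end|>` and `pad_token: <|endoftext|>`, this is the prompt format the checkpoints in this commit were presumably trained against.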
checkpoint-540/trainer_state.json
ADDED
The diff for this file is too large to render.
See raw diff
checkpoint-540/training_args.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9746167dc82a516aeee83c5731c4b250770d2cfb0fb68a7cd590dc02bd3eb0d6
+size 5688
checkpoint-540/vocab.json
ADDED
The diff for this file is too large to render.
See raw diff
checkpoint-600/README.md
ADDED
@@ -0,0 +1,202 @@
Content identical to checkpoint-540/README.md above (default PEFT model card template; PEFT 0.12.0).
checkpoint-600/adapter_config.json
ADDED
@@ -0,0 +1,34 @@
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "Qwen/Qwen2.5-Coder-7B-Instruct",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_dropout": 0.1,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "down_proj",
    "k_proj",
    "v_proj",
    "q_proj",
    "o_proj",
    "gate_proj",
    "up_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
}
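As a quick sanity check, the adapter file size can be reproduced from this config: each wrapped linear layer gains two low-rank factors, so LoRA adds r·(d_in + d_out) parameters per target module. A minimal sketch, assuming the published Qwen2.5-Coder-7B shapes (hidden size 3584, 28 layers, GQA k/v projection width 512, MLP width 18944) — these dimensions come from the base model's config, not from this file:

```python
# LoRA wraps each target linear layer with factors A (r x d_in) and
# B (d_out x r), adding r * (d_in + d_out) trainable parameters per layer.
r = 16                                # "r" from adapter_config.json
hidden, kv, mlp = 3584, 512, 18944    # assumed Qwen2.5-Coder-7B shapes
layers = 28

shapes = {                            # (d_in, d_out) of each target module
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv),
    "v_proj": (hidden, kv),
    "o_proj": (hidden, hidden),
    "gate_proj": (hidden, mlp),
    "up_proj": (hidden, mlp),
    "down_proj": (mlp, hidden),
}

per_layer = sum(r * (d_in + d_out) for d_in, d_out in shapes.values())
total = per_layer * layers            # 40,370,176 trainable parameters
fp32_bytes = total * 4                # 161,480,704 bytes in fp32
print(total, fp32_bytes)
```

The fp32 byte count lands within ~52 KB of the 161,533,192-byte `adapter_model.safetensors` below; the remainder is the safetensors header and tensor metadata.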
checkpoint-600/adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:815082e993e8c2d076ad5e658ce05abf45dcab9d5afaf46d014d6ffdfe0ee16b
size 161533192
checkpoint-600/added_tokens.json
ADDED
@@ -0,0 +1,24 @@
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
checkpoint-600/merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
checkpoint-600/optimizer.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:635465c3fd959877cfbd396709b6e9927168530ca35feea757fa8b8a1414dac8
size 323290986
checkpoint-600/rng_state_0.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52f687941cc0293476b1271eeb3c2677694b45291d5a3627003b99d2ca9a8475
size 14512
checkpoint-600/rng_state_1.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e7728009555a33e57a0eebd5e73d257873a8e653645e86edc1efeae6387167a
size 14512
checkpoint-600/scheduler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70a44dfc67248bf4884129caa15d933366f4614c987d3628dc98f01aef78a6e0
size 1064
checkpoint-600/special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
checkpoint-600/tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
size 11421896
checkpoint-600/tokenizer_config.json
ADDED
@@ -0,0 +1,209 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 17000,
  "pad_token": "<|endoftext|>",
  "padding_side": "right",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
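The chat_template above renders conversations in Qwen's ChatML format. As an illustration, the non-tool branch of the template can be mimicked in plain Python (a simplified sketch — the real Jinja template also handles tool calls and tool responses):

```python
def format_chatml(messages, add_generation_prompt=True):
    """Mimic the non-tool branch of the chat template: wrap each turn in
    <|im_start|>role ... <|im_end|>, injecting the default system prompt
    when the first message is not a system message."""
    out = []
    if messages and messages[0]["role"] == "system":
        out.append("<|im_start|>system\n" + messages[0]["content"] + "<|im_end|>\n")
        rest = messages[1:]
    else:
        out.append("<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. "
                   "You are a helpful assistant.<|im_end|>\n")
        rest = messages
    for m in rest:
        out.append("<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n")
    if add_generation_prompt:
        out.append("<|im_start|>assistant\n")
    return "".join(out)

prompt = format_chatml([{"role": "user", "content": "hi"}])
print(prompt)
```

In practice one would call `tokenizer.apply_chat_template(...)` instead, which executes the template string stored in this file.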
checkpoint-600/trainer_state.json
ADDED
The diff for this file is too large to render.
See raw diff
checkpoint-600/training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9746167dc82a516aeee83c5731c4b250770d2cfb0fb68a7cd590dc02bd3eb0d6
size 5688
checkpoint-600/vocab.json
ADDED
The diff for this file is too large to render.
See raw diff
trainer_log.jsonl
CHANGED
@@ -506,3 +506,120 @@
{"current_steps": 506, "total_steps": 1200, "loss": 0.1297, "lr": 3.109037529465056e-05, "epoch": 8.299065420560748, "percentage": 42.17, "elapsed_time": "3:06:03", "remaining_time": "4:15:11", "throughput": 3298.86, "total_tokens": 36827816}
{"current_steps": 507, "total_steps": 1200, "loss": 0.1158, "lr": 3.102687652082597e-05, "epoch": 8.315680166147455, "percentage": 42.25, "elapsed_time": "3:06:36", "remaining_time": "4:15:04", "throughput": 3298.47, "total_tokens": 36931424}
{"current_steps": 508, "total_steps": 1200, "loss": 0.1146, "lr": 3.0963336439464526e-05, "epoch": 8.332294911734165, "percentage": 42.33, "elapsed_time": "3:06:54", "remaining_time": "4:14:36", "throughput": 3298.51, "total_tokens": 36991464}
{"current_steps": 509, "total_steps": 1200, "loss": 0.1044, "lr": 3.089975548606283e-05, "epoch": 8.348909657320872, "percentage": 42.42, "elapsed_time": "3:07:27", "remaining_time": "4:14:29", "throughput": 3297.79, "total_tokens": 37092928}
{"current_steps": 510, "total_steps": 1200, "loss": 0.1192, "lr": 3.083613409639764e-05, "epoch": 8.36552440290758, "percentage": 42.5, "elapsed_time": "3:07:46", "remaining_time": "4:14:03", "throughput": 3298.95, "total_tokens": 37168792}
{"current_steps": 511, "total_steps": 1200, "loss": 0.1197, "lr": 3.0772472706522806e-05, "epoch": 8.38213914849429, "percentage": 42.58, "elapsed_time": "3:08:20", "remaining_time": "4:13:56", "throughput": 3297.1, "total_tokens": 37258864}
{"current_steps": 512, "total_steps": 1200, "loss": 0.1351, "lr": 3.0708771752766394e-05, "epoch": 8.398753894080997, "percentage": 42.67, "elapsed_time": "3:08:42", "remaining_time": "4:13:34", "throughput": 3298.09, "total_tokens": 37343224}
{"current_steps": 513, "total_steps": 1200, "loss": 0.1336, "lr": 3.06450316717276e-05, "epoch": 8.415368639667705, "percentage": 42.75, "elapsed_time": "3:08:56", "remaining_time": "4:13:02", "throughput": 3298.6, "total_tokens": 37395488}
{"current_steps": 514, "total_steps": 1200, "loss": 0.1057, "lr": 3.0581252900273786e-05, "epoch": 8.431983385254414, "percentage": 42.83, "elapsed_time": "3:09:21", "remaining_time": "4:12:43", "throughput": 3298.28, "total_tokens": 37473248}
{"current_steps": 515, "total_steps": 1200, "loss": 0.1101, "lr": 3.0517435875537536e-05, "epoch": 8.448598130841122, "percentage": 42.92, "elapsed_time": "3:09:35", "remaining_time": "4:12:09", "throughput": 3299.51, "total_tokens": 37532096}
{"current_steps": 516, "total_steps": 1200, "loss": 0.1079, "lr": 3.045358103491357e-05, "epoch": 8.46521287642783, "percentage": 43.0, "elapsed_time": "3:10:04", "remaining_time": "4:11:57", "throughput": 3298.96, "total_tokens": 37622328}
{"current_steps": 517, "total_steps": 1200, "loss": 0.1245, "lr": 3.038968881605583e-05, "epoch": 8.481827622014539, "percentage": 43.08, "elapsed_time": "3:10:22", "remaining_time": "4:11:29", "throughput": 3299.42, "total_tokens": 37686304}
{"current_steps": 518, "total_steps": 1200, "loss": 0.1275, "lr": 3.0325759656874418e-05, "epoch": 8.498442367601246, "percentage": 43.17, "elapsed_time": "3:10:49", "remaining_time": "4:11:14", "throughput": 3298.97, "total_tokens": 37770856}
{"current_steps": 519, "total_steps": 1200, "loss": 0.1123, "lr": 3.026179399553264e-05, "epoch": 8.515057113187954, "percentage": 43.25, "elapsed_time": "3:11:09", "remaining_time": "4:10:49", "throughput": 3298.77, "total_tokens": 37834072}
{"current_steps": 520, "total_steps": 1200, "loss": 0.112, "lr": 3.0197792270443982e-05, "epoch": 8.531671858774663, "percentage": 43.33, "elapsed_time": "3:11:24", "remaining_time": "4:10:17", "throughput": 3299.33, "total_tokens": 37889928}
{"current_steps": 521, "total_steps": 1200, "loss": 0.2376, "lr": 3.0133754920269103e-05, "epoch": 8.54828660436137, "percentage": 43.42, "elapsed_time": "3:11:50", "remaining_time": "4:10:01", "throughput": 3298.7, "total_tokens": 37971296}
{"current_steps": 522, "total_steps": 1200, "loss": 0.123, "lr": 3.0069682383912813e-05, "epoch": 8.564901349948078, "percentage": 43.5, "elapsed_time": "3:12:14", "remaining_time": "4:09:42", "throughput": 3298.61, "total_tokens": 38049288}
{"current_steps": 523, "total_steps": 1200, "loss": 0.1386, "lr": 3.0005575100521118e-05, "epoch": 8.581516095534788, "percentage": 43.58, "elapsed_time": "3:12:40", "remaining_time": "4:09:24", "throughput": 3297.75, "total_tokens": 38123392}
{"current_steps": 524, "total_steps": 1200, "loss": 0.1194, "lr": 2.9941433509478156e-05, "epoch": 8.598130841121495, "percentage": 43.67, "elapsed_time": "3:13:06", "remaining_time": "4:09:07", "throughput": 3297.63, "total_tokens": 38208264}
{"current_steps": 525, "total_steps": 1200, "loss": 0.126, "lr": 2.9877258050403212e-05, "epoch": 8.614745586708203, "percentage": 43.75, "elapsed_time": "3:13:20", "remaining_time": "4:08:35", "throughput": 3297.84, "total_tokens": 38258192}
{"current_steps": 526, "total_steps": 1200, "loss": 0.1295, "lr": 2.9813049163147688e-05, "epoch": 8.631360332294912, "percentage": 43.83, "elapsed_time": "3:13:41", "remaining_time": "4:08:11", "throughput": 3298.31, "total_tokens": 38332408}
{"current_steps": 527, "total_steps": 1200, "loss": 0.1035, "lr": 2.974880728779212e-05, "epoch": 8.64797507788162, "percentage": 43.92, "elapsed_time": "3:14:08", "remaining_time": "4:07:54", "throughput": 3297.12, "total_tokens": 38404960}
{"current_steps": 528, "total_steps": 1200, "loss": 0.1347, "lr": 2.9684532864643122e-05, "epoch": 8.664589823468328, "percentage": 44.0, "elapsed_time": "3:14:26", "remaining_time": "4:07:28", "throughput": 3298.39, "total_tokens": 38481704}
{"current_steps": 529, "total_steps": 1200, "loss": 0.1076, "lr": 2.9620226334230388e-05, "epoch": 8.681204569055037, "percentage": 44.08, "elapsed_time": "3:14:44", "remaining_time": "4:07:01", "throughput": 3298.85, "total_tokens": 38546304}
{"current_steps": 530, "total_steps": 1200, "loss": 0.1514, "lr": 2.9555888137303695e-05, "epoch": 8.697819314641745, "percentage": 44.17, "elapsed_time": "3:15:08", "remaining_time": "4:06:41", "throughput": 3298.58, "total_tokens": 38621024}
{"current_steps": 531, "total_steps": 1200, "loss": 0.1119, "lr": 2.949151871482982e-05, "epoch": 8.714434060228452, "percentage": 44.25, "elapsed_time": "3:15:23", "remaining_time": "4:06:09", "throughput": 3299.38, "total_tokens": 38679368}
{"current_steps": 532, "total_steps": 1200, "loss": 0.1331, "lr": 2.9427118507989586e-05, "epoch": 8.731048805815162, "percentage": 44.33, "elapsed_time": "3:15:46", "remaining_time": "4:05:48", "throughput": 3299.31, "total_tokens": 38753984}
{"current_steps": 533, "total_steps": 1200, "loss": 0.1158, "lr": 2.93626879581748e-05, "epoch": 8.74766355140187, "percentage": 44.42, "elapsed_time": "3:16:00", "remaining_time": "4:05:17", "throughput": 3299.92, "total_tokens": 38808336}
{"current_steps": 534, "total_steps": 1200, "loss": 0.2268, "lr": 2.929822750698524e-05, "epoch": 8.764278296988577, "percentage": 44.5, "elapsed_time": "3:16:23", "remaining_time": "4:04:56", "throughput": 3299.25, "total_tokens": 38876624}
{"current_steps": 535, "total_steps": 1200, "loss": 0.1155, "lr": 2.9233737596225613e-05, "epoch": 8.780893042575286, "percentage": 44.58, "elapsed_time": "3:16:38", "remaining_time": "4:04:25", "throughput": 3299.89, "total_tokens": 38933576}
{"current_steps": 536, "total_steps": 1200, "loss": 0.114, "lr": 2.916921866790256e-05, "epoch": 8.797507788161994, "percentage": 44.67, "elapsed_time": "3:17:14", "remaining_time": "4:04:20", "throughput": 3299.85, "total_tokens": 39050816}
{"current_steps": 537, "total_steps": 1200, "loss": 0.119, "lr": 2.9104671164221576e-05, "epoch": 8.814122533748701, "percentage": 44.75, "elapsed_time": "3:17:27", "remaining_time": "4:03:47", "throughput": 3300.33, "total_tokens": 39101856}
{"current_steps": 538, "total_steps": 1200, "loss": 0.115, "lr": 2.9040095527584032e-05, "epoch": 8.83073727933541, "percentage": 44.83, "elapsed_time": "3:17:42", "remaining_time": "4:03:16", "throughput": 3301.39, "total_tokens": 39161928}
{"current_steps": 539, "total_steps": 1200, "loss": 0.1312, "lr": 2.897549220058411e-05, "epoch": 8.847352024922118, "percentage": 44.92, "elapsed_time": "3:17:56", "remaining_time": "4:02:45", "throughput": 3301.9, "total_tokens": 39216048}
{"current_steps": 540, "total_steps": 1200, "loss": 0.1107, "lr": 2.8910861626005776e-05, "epoch": 8.863966770508826, "percentage": 45.0, "elapsed_time": "3:18:29", "remaining_time": "4:02:36", "throughput": 3301.26, "total_tokens": 39317320}
{"current_steps": 541, "total_steps": 1200, "loss": 0.119, "lr": 2.884620424681976e-05, "epoch": 8.880581516095535, "percentage": 45.08, "elapsed_time": "3:18:50", "remaining_time": "4:02:12", "throughput": 3301.06, "total_tokens": 39383120}
{"current_steps": 542, "total_steps": 1200, "loss": 0.1212, "lr": 2.8781520506180486e-05, "epoch": 8.897196261682243, "percentage": 45.17, "elapsed_time": "3:19:14", "remaining_time": "4:01:53", "throughput": 3300.59, "total_tokens": 39458584}
{"current_steps": 543, "total_steps": 1200, "loss": 0.0999, "lr": 2.871681084742308e-05, "epoch": 8.91381100726895, "percentage": 45.25, "elapsed_time": "3:19:37", "remaining_time": "4:01:32", "throughput": 3300.79, "total_tokens": 39535152}
{"current_steps": 544, "total_steps": 1200, "loss": 0.1297, "lr": 2.8652075714060295e-05, "epoch": 8.93042575285566, "percentage": 45.33, "elapsed_time": "3:19:51", "remaining_time": "4:01:00", "throughput": 3301.56, "total_tokens": 39590360}
{"current_steps": 545, "total_steps": 1200, "loss": 0.1159, "lr": 2.858731554977948e-05, "epoch": 8.947040498442368, "percentage": 45.42, "elapsed_time": "3:20:16", "remaining_time": "4:00:41", "throughput": 3301.36, "total_tokens": 39669984}
{"current_steps": 546, "total_steps": 1200, "loss": 0.1139, "lr": 2.8522530798439567e-05, "epoch": 8.963655244029075, "percentage": 45.5, "elapsed_time": "3:20:42", "remaining_time": "4:00:24", "throughput": 3301.48, "total_tokens": 39757392}
{"current_steps": 547, "total_steps": 1200, "loss": 0.1266, "lr": 2.845772190406798e-05, "epoch": 8.980269989615785, "percentage": 45.58, "elapsed_time": "3:21:14", "remaining_time": "4:00:14", "throughput": 3300.19, "total_tokens": 39848064}
{"current_steps": 548, "total_steps": 1200, "loss": 0.1199, "lr": 2.8392889310857612e-05, "epoch": 8.996884735202492, "percentage": 45.67, "elapsed_time": "3:21:36", "remaining_time": "3:59:52", "throughput": 3300.19, "total_tokens": 39922288}
{"current_steps": 549, "total_steps": 1200, "loss": 0.1246, "lr": 2.832803346316381e-05, "epoch": 9.0, "percentage": 45.75, "elapsed_time": "3:21:39", "remaining_time": "3:59:07", "throughput": 3300.27, "total_tokens": 39932640}
{"current_steps": 550, "total_steps": 1200, "loss": 0.0949, "lr": 2.8263154805501297e-05, "epoch": 9.016614745586708, "percentage": 45.83, "elapsed_time": "3:21:59", "remaining_time": "3:58:43", "throughput": 3300.89, "total_tokens": 40005688}
{"current_steps": 551, "total_steps": 1200, "loss": 0.1086, "lr": 2.819825378254111e-05, "epoch": 9.033229491173417, "percentage": 45.92, "elapsed_time": "3:22:13", "remaining_time": "3:58:11", "throughput": 3301.32, "total_tokens": 40057120}
{"current_steps": 552, "total_steps": 1200, "loss": 0.1098, "lr": 2.8133330839107608e-05, "epoch": 9.049844236760125, "percentage": 46.0, "elapsed_time": "3:22:39", "remaining_time": "3:57:54", "throughput": 3300.71, "total_tokens": 40135992}
{"current_steps": 553, "total_steps": 1200, "loss": 0.1297, "lr": 2.8068386420175375e-05, "epoch": 9.066458982346832, "percentage": 46.08, "elapsed_time": "3:22:53", "remaining_time": "3:57:22", "throughput": 3302.03, "total_tokens": 40196928}
{"current_steps": 554, "total_steps": 1200, "loss": 0.1113, "lr": 2.8003420970866177e-05, "epoch": 9.083073727933542, "percentage": 46.17, "elapsed_time": "3:23:15", "remaining_time": "3:57:00", "throughput": 3302.04, "total_tokens": 40269392}
{"current_steps": 555, "total_steps": 1200, "loss": 0.1145, "lr": 2.7938434936445945e-05, "epoch": 9.09968847352025, "percentage": 46.25, "elapsed_time": "3:23:37", "remaining_time": "3:56:38", "throughput": 3302.65, "total_tokens": 40350080}
{"current_steps": 556, "total_steps": 1200, "loss": 0.123, "lr": 2.787342876232167e-05, "epoch": 9.116303219106957, "percentage": 46.33, "elapsed_time": "3:23:55", "remaining_time": "3:56:12", "throughput": 3303.11, "total_tokens": 40416360}
{"current_steps": 557, "total_steps": 1200, "loss": 0.0915, "lr": 2.780840289403839e-05, "epoch": 9.132917964693666, "percentage": 46.42, "elapsed_time": "3:24:15", "remaining_time": "3:55:47", "throughput": 3303.99, "total_tokens": 40490432}
{"current_steps": 558, "total_steps": 1200, "loss": 0.1149, "lr": 2.774335777727613e-05, "epoch": 9.149532710280374, "percentage": 46.5, "elapsed_time": "3:24:35", "remaining_time": "3:55:23", "throughput": 3304.32, "total_tokens": 40561784}
{"current_steps": 559, "total_steps": 1200, "loss": 0.109, "lr": 2.7678293857846844e-05, "epoch": 9.166147455867081, "percentage": 46.58, "elapsed_time": "3:24:57", "remaining_time": "3:55:01", "throughput": 3304.87, "total_tokens": 40641728}
{"current_steps": 560, "total_steps": 1200, "loss": 0.1246, "lr": 2.761321158169134e-05, "epoch": 9.18276220145379, "percentage": 46.67, "elapsed_time": "3:25:12", "remaining_time": "3:54:30", "throughput": 3305.75, "total_tokens": 40700744}
{"current_steps": 561, "total_steps": 1200, "loss": 0.1251, "lr": 2.754811139487625e-05, "epoch": 9.199376947040498, "percentage": 46.75, "elapsed_time": "3:25:24", "remaining_time": "3:53:57", "throughput": 3306.32, "total_tokens": 40748048}
{"current_steps": 562, "total_steps": 1200, "loss": 0.0937, "lr": 2.7482993743590978e-05, "epoch": 9.215991692627206, "percentage": 46.83, "elapsed_time": "3:25:41", "remaining_time": "3:53:30", "throughput": 3306.79, "total_tokens": 40810104}
{"current_steps": 563, "total_steps": 1200, "loss": 0.1241, "lr": 2.7417859074144604e-05, "epoch": 9.232606438213915, "percentage": 46.92, "elapsed_time": "3:26:12", "remaining_time": "3:53:19", "throughput": 3305.59, "total_tokens": 40899480}
{"current_steps": 564, "total_steps": 1200, "loss": 0.103, "lr": 2.7352707832962865e-05, "epoch": 9.249221183800623, "percentage": 47.0, "elapsed_time": "3:26:41", "remaining_time": "3:53:04", "throughput": 3305.49, "total_tokens": 40993536}
{"current_steps": 565, "total_steps": 1200, "loss": 0.0912, "lr": 2.7287540466585065e-05, "epoch": 9.26583592938733, "percentage": 47.08, "elapsed_time": "3:26:59", "remaining_time": "3:52:38", "throughput": 3306.16, "total_tokens": 41060848}
{"current_steps": 566, "total_steps": 1200, "loss": 0.119, "lr": 2.7222357421661042e-05, "epoch": 9.28245067497404, "percentage": 47.17, "elapsed_time": "3:27:22", "remaining_time": "3:52:16", "throughput": 3306.39, "total_tokens": 41138352}
{"current_steps": 567, "total_steps": 1200, "loss": 0.1273, "lr": 2.7157159144948092e-05, "epoch": 9.299065420560748, "percentage": 47.25, "elapsed_time": "3:27:44", "remaining_time": "3:51:55", "throughput": 3306.41, "total_tokens": 41212624}
{"current_steps": 568, "total_steps": 1200, "loss": 0.1002, "lr": 2.7091946083307896e-05, "epoch": 9.315680166147455, "percentage": 47.33, "elapsed_time": "3:28:03", "remaining_time": "3:51:30", "throughput": 3306.67, "total_tokens": 41279472}
{"current_steps": 569, "total_steps": 1200, "loss": 0.118, "lr": 2.7026718683703473e-05, "epoch": 9.332294911734165, "percentage": 47.42, "elapsed_time": "3:28:30", "remaining_time": "3:51:13", "throughput": 3305.46, "total_tokens": 41353544}
{"current_steps": 570, "total_steps": 1200, "loss": 0.0949, "lr": 2.6961477393196126e-05, "epoch": 9.348909657320872, "percentage": 47.5, "elapsed_time": "3:28:56", "remaining_time": "3:50:56", "throughput": 3305.81, "total_tokens": 41444896}
{"current_steps": 571, "total_steps": 1200, "loss": 0.1177, "lr": 2.6896222658942348e-05, "epoch": 9.36552440290758, "percentage": 47.58, "elapsed_time": "3:29:13", "remaining_time": "3:50:28", "throughput": 3306.23, "total_tokens": 41505152}
{"current_steps": 572, "total_steps": 1200, "loss": 0.1371, "lr": 2.6830954928190794e-05, "epoch": 9.38213914849429, "percentage": 47.67, "elapsed_time": "3:29:31", "remaining_time": "3:50:02", "throughput": 3306.31, "total_tokens": 41566696}
{"current_steps": 573, "total_steps": 1200, "loss": 0.1241, "lr": 2.6765674648279172e-05, "epoch": 9.398753894080997, "percentage": 47.75, "elapsed_time": "3:29:55", "remaining_time": "3:49:41", "throughput": 3306.21, "total_tokens": 41641736}
{"current_steps": 574, "total_steps": 1200, "loss": 0.0956, "lr": 2.6700382266631206e-05, "epoch": 9.415368639667705, "percentage": 47.83, "elapsed_time": "3:30:32", "remaining_time": "3:49:36", "throughput": 3304.2, "total_tokens": 41740008}
{"current_steps": 575, "total_steps": 1200, "loss": 0.1068, "lr": 2.663507823075358e-05, "epoch": 9.431983385254414, "percentage": 47.92, "elapsed_time": "3:31:01", "remaining_time": "3:49:22", "throughput": 3304.63, "total_tokens": 41842808}
{"current_steps": 576, "total_steps": 1200, "loss": 0.1019, "lr": 2.656976298823284e-05, "epoch": 9.448598130841122, "percentage": 48.0, "elapsed_time": "3:31:19", "remaining_time": "3:48:56", "throughput": 3305.9, "total_tokens": 41917128}
{"current_steps": 577, "total_steps": 1200, "loss": 0.1116, "lr": 2.6504436986732338e-05, "epoch": 9.46521287642783, "percentage": 48.08, "elapsed_time": "3:31:41", "remaining_time": "3:48:33", "throughput": 3305.56, "total_tokens": 41984232}
{"current_steps": 578, "total_steps": 1200, "loss": 0.11, "lr": 2.6439100673989187e-05, "epoch": 9.481827622014539, "percentage": 48.17, "elapsed_time": "3:31:57", "remaining_time": "3:48:05", "throughput": 3306.23, "total_tokens": 42047216}
{"current_steps": 579, "total_steps": 1200, "loss": 0.1044, "lr": 2.637375449781115e-05, "epoch": 9.498442367601246, "percentage": 48.25, "elapsed_time": "3:32:18", "remaining_time": "3:47:42", "throughput": 3306.65, "total_tokens": 42122072}
{"current_steps": 580, "total_steps": 1200, "loss": 0.0985, "lr": 2.63083989060736e-05, "epoch": 9.515057113187954, "percentage": 48.33, "elapsed_time": "3:32:38", "remaining_time": "3:47:18", "throughput": 3307.36, "total_tokens": 42198480}
{"current_steps": 581, "total_steps": 1200, "loss": 0.1046, "lr": 2.624303434671645e-05, "epoch": 9.531671858774663, "percentage": 48.42, "elapsed_time": "3:33:09", "remaining_time": "3:47:06", "throughput": 3306.55, "total_tokens": 42289336}
{"current_steps": 582, "total_steps": 1200, "loss": 0.114, "lr": 2.6177661267741065e-05, "epoch": 9.54828660436137, "percentage": 48.5, "elapsed_time": "3:33:28", "remaining_time": "3:46:40", "throughput": 3306.56, "total_tokens": 42352288}
{"current_steps": 583, "total_steps": 1200, "loss": 0.1117, "lr": 2.611228011720722e-05, "epoch": 9.564901349948078, "percentage": 48.58, "elapsed_time": "3:33:59", "remaining_time": "3:46:28", "throughput": 3306.57, "total_tokens": 42453832}
{"current_steps": 584, "total_steps": 1200, "loss": 0.1148, "lr": 2.604689134322999e-05, "epoch": 9.581516095534788, "percentage": 48.67, "elapsed_time": "3:34:15", "remaining_time": "3:46:00", "throughput": 3307.1, "total_tokens": 42514704}
{"current_steps": 585, "total_steps": 1200, "loss": 0.2187, "lr": 2.598149539397672e-05, "epoch": 9.598130841121495, "percentage": 48.75, "elapsed_time": "3:34:33", "remaining_time": "3:45:33", "throughput": 3307.22, "total_tokens": 42576344}
{"current_steps": 586, "total_steps": 1200, "loss": 0.1046, "lr": 2.591609271766391e-05, "epoch": 9.614745586708203, "percentage": 48.83, "elapsed_time": "3:35:04", "remaining_time": "3:45:21", "throughput": 3305.98, "total_tokens": 42662824}
{"current_steps": 587, "total_steps": 1200, "loss": 0.0848, "lr": 2.5850683762554184e-05, "epoch": 9.631360332294912, "percentage": 48.92, "elapsed_time": "3:35:38", "remaining_time": "3:45:11", "throughput": 3304.23, "total_tokens": 42752496}
{"current_steps": 588, "total_steps": 1200, "loss": 0.0886, "lr": 2.578526897695321e-05, "epoch": 9.64797507788162, "percentage": 49.0, "elapsed_time": "3:36:01", "remaining_time": "3:44:50", "throughput": 3304.19, "total_tokens": 42826064}
{"current_steps": 589, "total_steps": 1200, "loss": 0.1989, "lr": 2.5719848809206586e-05, "epoch": 9.664589823468328, "percentage": 49.08, "elapsed_time": "3:36:19", "remaining_time": "3:44:24", "throughput": 3304.82, "total_tokens": 42895808}
{"current_steps": 590, "total_steps": 1200, "loss": 0.1156, "lr": 2.5654423707696833e-05, "epoch": 9.681204569055037, "percentage": 49.17, "elapsed_time": "3:36:34", "remaining_time": "3:43:54", "throughput": 3305.54, "total_tokens": 42952408}
{"current_steps": 591, "total_steps": 1200, "loss": 0.1053, "lr": 2.558899412084026e-05, "epoch": 9.697819314641745, "percentage": 49.25, "elapsed_time": "3:36:57", "remaining_time": "3:43:33", "throughput": 3305.28, "total_tokens": 43025536}
{"current_steps": 592, "total_steps": 1200, "loss": 0.084, "lr": 2.5523560497083926e-05, "epoch": 9.714434060228452, "percentage": 49.33, "elapsed_time": "3:37:26", "remaining_time": "3:43:19", "throughput": 3304.95, "total_tokens": 43118024}
{"current_steps": 593, "total_steps": 1200, "loss": 0.1033, "lr": 2.5458123284902573e-05, "epoch": 9.731048805815162, "percentage": 49.42, "elapsed_time": "3:37:53", "remaining_time": "3:43:02", "throughput": 3304.3, "total_tokens": 43198360}
{"current_steps": 594, "total_steps": 1200, "loss": 0.0881, "lr": 2.539268293279552e-05, "epoch": 9.74766355140187, "percentage": 49.5, "elapsed_time": "3:38:17", "remaining_time": "3:42:41", "throughput": 3303.32, "total_tokens": 43264072}
{"current_steps": 595, "total_steps": 1200, "loss": 0.1286, "lr": 2.5327239889283612e-05, "epoch": 9.764278296988577, "percentage": 49.58, "elapsed_time": "3:38:39", "remaining_time": "3:42:20", "throughput": 3303.37, "total_tokens": 43339600}
{"current_steps": 596, "total_steps": 1200, "loss": 0.1113, "lr": 2.5261794602906145e-05, "epoch": 9.780893042575286, "percentage": 49.67, "elapsed_time": "3:38:58", "remaining_time": "3:41:54", "throughput": 3303.46, "total_tokens": 43401136}
{"current_steps": 597, "total_steps": 1200, "loss": 0.1126, "lr": 2.5196347522217784e-05, "epoch": 9.797507788161994, "percentage": 49.75, "elapsed_time": "3:39:23", "remaining_time": "3:41:36", "throughput": 3302.83, "total_tokens": 43477528}
{"current_steps": 598, "total_steps": 1200, "loss": 0.11, "lr": 2.513089909578549e-05, "epoch": 9.814122533748701, "percentage": 49.83, "elapsed_time": "3:39:48", "remaining_time": "3:41:16", "throughput": 3302.62, "total_tokens": 43557352}
{"current_steps": 599, "total_steps": 1200, "loss": 0.1259, "lr": 2.5065449772185456e-05, "epoch": 9.83073727933541, "percentage": 49.92, "elapsed_time": "3:40:23", "remaining_time": "3:41:07", "throughput": 3300.46, "total_tokens": 43643104}
{"current_steps": 600, "total_steps": 1200, "loss": 0.1101, "lr": 2.5e-05, "epoch": 9.847352024922118, "percentage": 50.0, "elapsed_time": "3:40:45", "remaining_time": "3:40:45", "throughput": 3300.8, "total_tokens": 43721408}
{"current_steps": 601, "total_steps": 1200, "loss": 0.0984, "lr": 2.4934550227814553e-05, "epoch": 9.863966770508826, "percentage": 50.08, "elapsed_time": "3:41:14", "remaining_time": "3:40:30", "throughput": 3299.67, "total_tokens": 43802136}
{"current_steps": 602, "total_steps": 1200, "loss": 0.1168, "lr": 2.486910090421451e-05, "epoch": 9.880581516095535, "percentage": 50.17, "elapsed_time": "3:41:31", "remaining_time": "3:40:03", "throughput": 3300.04, "total_tokens": 43862904}
{"current_steps": 603, "total_steps": 1200, "loss": 0.1232, "lr": 2.480365247778223e-05, "epoch": 9.897196261682243, "percentage": 50.25, "elapsed_time": "3:41:56", "remaining_time": "3:39:44", "throughput": 3299.7, "total_tokens": 43942056}
{"current_steps": 604, "total_steps": 1200, "loss": 0.182, "lr": 2.4738205397093864e-05, "epoch": 9.91381100726895, "percentage": 50.33, "elapsed_time": "3:42:16", "remaining_time": "3:39:20", "throughput": 3300.33, "total_tokens": 44016096}
{"current_steps": 605, "total_steps": 1200, "loss": 0.1335, "lr": 2.4672760110716394e-05, "epoch": 9.93042575285566, "percentage": 50.42, "elapsed_time": "3:42:29", "remaining_time": "3:38:48", "throughput": 3300.98, "total_tokens": 44065504}
{"current_steps": 606, "total_steps": 1200, "loss": 0.1064, "lr": 2.460731706720449e-05, "epoch": 9.947040498442368, "percentage": 50.5, "elapsed_time": "3:42:41", "remaining_time": "3:38:16", "throughput": 3301.75, "total_tokens": 44114776}
{"current_steps": 607, "total_steps": 1200, "loss": 0.1359, "lr": 2.4541876715097432e-05, "epoch": 9.963655244029075, "percentage": 50.58, "elapsed_time": "3:42:57", "remaining_time": "3:37:48", "throughput": 3302.29, "total_tokens": 44175184}
{"current_steps": 608, "total_steps": 1200, "loss": 0.0915, "lr": 2.447643950291608e-05, "epoch": 9.980269989615785, "percentage": 50.67, "elapsed_time": "3:43:30", "remaining_time": "3:37:37", "throughput": 3300.68, "total_tokens": 44265024}
{"current_steps": 609, "total_steps": 1200, "loss": 0.1024, "lr": 2.4411005879159753e-05, "epoch": 9.996884735202492, "percentage": 50.75, "elapsed_time": "3:44:00", "remaining_time": "3:37:23", "throughput": 3300.05, "total_tokens": 44355400}
{"current_steps": 610, "total_steps": 1200, "loss": 0.087, "lr": 2.4345576292303176e-05, "epoch": 10.0, "percentage": 50.83, "elapsed_time": "3:44:04", "remaining_time": "3:36:43", "throughput": 3300.32, "total_tokens": 44370360}
{"current_steps": 611, "total_steps": 1200, "loss": 0.0902, "lr": 2.4280151190793417e-05, "epoch": 10.016614745586708, "percentage": 50.92, "elapsed_time": "3:44:30", "remaining_time": "3:36:25", "throughput": 3299.69, "total_tokens": 44446816}
{"current_steps": 612, "total_steps": 1200, "loss": 0.1038, "lr": 2.4214731023046793e-05, "epoch": 10.033229491173417, "percentage": 51.0, "elapsed_time": "3:44:44", "remaining_time": "3:35:55", "throughput": 3300.39, "total_tokens": 44503632}
{"current_steps": 613, "total_steps": 1200, "loss": 0.1036, "lr": 2.4149316237445812e-05, "epoch": 10.049844236760125, "percentage": 51.08, "elapsed_time": "3:45:11", "remaining_time": "3:35:38", "throughput": 3300.2, "total_tokens": 44590320}
{"current_steps": 614, "total_steps": 1200, "loss": 0.1014, "lr": 2.408390728233609e-05, "epoch": 10.066458982346832, "percentage": 51.17, "elapsed_time": "3:45:27", "remaining_time": "3:35:10", "throughput": 3301.03, "total_tokens": 44655224}
{"current_steps": 615, "total_steps": 1200, "loss": 0.1802, "lr": 2.4018504606023293e-05, "epoch": 10.083073727933542, "percentage": 51.25, "elapsed_time": "3:45:53", "remaining_time": "3:34:52", "throughput": 3300.63, "total_tokens": 44736200}
{"current_steps": 616, "total_steps": 1200, "loss": 0.0907, "lr": 2.3953108656770016e-05, "epoch": 10.09968847352025, "percentage": 51.33, "elapsed_time": "3:46:15", "remaining_time": "3:34:30", "throughput": 3300.44, "total_tokens": 44804416}
{"current_steps": 617, "total_steps": 1200, "loss": 0.0943, "lr": 2.3887719882792785e-05, "epoch": 10.116303219106957, "percentage": 51.42, "elapsed_time": "3:46:34", "remaining_time": "3:34:05", "throughput": 3300.81, "total_tokens": 44873864}
{"current_steps": 618, "total_steps": 1200, "loss": 0.1052, "lr": 2.3822338732258937e-05, "epoch": 10.132917964693666, "percentage": 51.5, "elapsed_time": "3:46:51", "remaining_time": "3:33:39", "throughput": 3301.26, "total_tokens": 44936736}
{"current_steps": 619, "total_steps": 1200, "loss": 0.0812, "lr": 2.3756965653283557e-05, "epoch": 10.149532710280374, "percentage": 51.58, "elapsed_time": "3:47:18", "remaining_time": "3:33:21", "throughput": 3301.38, "total_tokens": 45026952}
{"current_steps": 620, "total_steps": 1200, "loss": 0.1003, "lr": 2.3691601093926404e-05, "epoch": 10.166147455867081, "percentage": 51.67, "elapsed_time": "3:47:45", "remaining_time": "3:33:04", "throughput": 3300.57, "total_tokens": 45104816}
{"current_steps": 621, "total_steps": 1200, "loss": 0.0952, "lr": 2.3626245502188864e-05, "epoch": 10.18276220145379, "percentage": 51.75, "elapsed_time": "3:48:10", "remaining_time": "3:32:44", "throughput": 3300.02, "total_tokens": 45177392}
{"current_steps": 622, "total_steps": 1200, "loss": 0.1141, "lr": 2.3560899326010822e-05, "epoch": 10.199376947040498, "percentage": 51.83, "elapsed_time": "3:48:29", "remaining_time": "3:32:19", "throughput": 3299.64, "total_tokens": 45237200}
{"current_steps": 623, "total_steps": 1200, "loss": 0.1083, "lr": 2.3495563013267664e-05, "epoch": 10.215991692627206, "percentage": 51.92, "elapsed_time": "3:48:45", "remaining_time": "3:31:51", "throughput": 3300.01, "total_tokens": 45293376}
{"current_steps": 624, "total_steps": 1200, "loss": 0.1032, "lr": 2.3430237011767167e-05, "epoch": 10.232606438213915, "percentage": 52.0, "elapsed_time": "3:49:08", "remaining_time": "3:31:30", "throughput": 3299.98, "total_tokens": 45369376}
{"current_steps": 625, "total_steps": 1200, "loss": 0.0921, "lr": 2.3364921769246423e-05, "epoch": 10.249221183800623, "percentage": 52.08, "elapsed_time": "3:49:33", "remaining_time": "3:31:11", "throughput": 3299.08, "total_tokens": 45439920}