wandermay committed (verified)
Commit 89d9072 · Parent: 28dc8a1

Update README.md

Files changed (1): README.md (+53, -3)

README.md:
---
license: apache-2.0
---
## Achieving Superior Performance over Qwen3-32B and QwQ-32B Using Only 800 Strategically Curated Samples

## Codemath400
[\[🤗 Codemath400\]](https://huggingface.co/datasets/ZTE-AIM/NTele-R1-Data)

### Model description
NTele-R1-32B-V1 is the continuation of [NTele-R1-32B-Preview](https://huggingface.co/ZTE-AIM/NTele-R1-32B-Preview); please visit that page for more information. Building on the base model, we achieved large improvements with a small corpus of **mathematics and code** data (only **800 samples: 400 math and 400 code**) and surpassed the advanced industry models **Qwen3-32B and QwQ-32B**.
| Model | Trained From | Release Date | AIME2024 | AIME2025 | MATH500 | GPQA-Diamond | LiveCodeBench (24.08-25.02) |
|-------|-------|-------|-------|-------|-------|-------|-------|
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B-Instruct | 25.1.20 | 64.17 | 55.21 | 89.8 | 62.1 | 50.26 |
| QwQ-32B | - | 25.3.6 | 76.25 | 67.30 | 94.6 | 63.6 | 60.94 |
| Qwen3-32B (think) | - | 25.4.29 | 78.75 | 73.33 | 95 | **69.7** | 53.24 |
| NTele-R1-32B-V1 (ours) | DeepSeek-R1-Distill-Qwen-32B | 25.5.10 | **82.5** | **74.49** | **95.2** | 67.17 | **63.69** |

### Data
[\[🤗 Codemath400\]](https://huggingface.co/datasets/ZTE-AIM/NTele-R1-Data)

We start from the S1 dataset and apply the following procedure:
1. QwQ-32B as a Better Teacher:
   - We find that QwQ-32B, with its smoother flow of CoT reasoning, serves as a better teacher than DeepSeek-R1. For each question in the S1 dataset, we sampled 50 responses from QwQ-32B.
2. Focusing on Harder Questions:
   - We evaluated the correctness of the responses to each question, then filtered out the easier questions whose pass rate exceeded 0.6.
3. Diverse Reasoning Paths Break the Limitation of Distillation:
   - To maximize the diversity of reasoning paths, we computed the Levenshtein distance between all answers to each question and selected up to 5 answers per question with the greatest distances, resulting in a final dataset of 965 samples (see the sketch after this list).
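
As a rough illustration of steps 2 and 3, the sketch below filters questions by pass rate and then greedily picks up to 5 mutually distant answers per question. This is a minimal sketch, not the released pipeline: the `Sample` container, the `select_diverse_answers` helper, the greedy max-min criterion, and the choice to keep only correct responses are assumptions made here for concreteness.

```python
from dataclasses import dataclass
from typing import Dict, List


def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (two-row dynamic programming)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


@dataclass
class Sample:
    question: str
    answers: List[str]   # e.g. the 50 QwQ-32B completions for this question
    correct: List[bool]  # correctness verdict for each completion


def select_diverse_answers(answers: List[str], k: int = 5) -> List[str]:
    """Greedy max-min selection: repeatedly add the answer that is farthest
    (in Levenshtein distance) from everything selected so far."""
    if not answers:
        return []
    selected = [answers[0]]  # arbitrary seed; the card does not say how the first answer is chosen
    remaining = list(answers[1:])
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda cand: min(levenshtein(cand, s) for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected


def curate(samples: List[Sample], max_pass_rate: float = 0.6, k: int = 5) -> List[Dict[str, str]]:
    dataset = []
    for s in samples:
        pass_rate = sum(s.correct) / len(s.correct)
        if pass_rate > max_pass_rate:  # step 2: drop questions that are too easy
            continue
        # Assumption: only correct responses are kept for distillation.
        keep = [a for a, ok in zip(s.answers, s.correct) if ok]
        for answer in select_diverse_answers(keep, k=k):  # step 3: diversity selection
            dataset.append({"question": s.question, "answer": answer})
    return dataset
```

The released Codemath400 data remains the authoritative output of this process.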

You can access our [dataset](https://huggingface.co/datasets/ZTE-AIM/NTele-R1-Data) to get the 800 training samples.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/67ff7f05a93c489f94a58c74/pOg0t34yxTmrL158xsX1Y.png)

### Evaluation
We evaluate models with [SkyThought](https://github.com/NovaSky-AI/SkyThought).

### Training Details
NTele-R1-32B-V1 was trained from DeepSeek-R1-Distill-Qwen-32B on 8x H800 GPUs.

#### Training hyperparameters
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- total_eval_batch_size: 48
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
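
The card does not state which training framework was used. Purely as an illustration, the hyperparameters above map onto a Hugging Face `TrainingArguments` object roughly as follows; `output_dir` and the bf16 setting are assumptions, not values from the card.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# output_dir and bf16 are assumptions, everything else is taken from the card.
args = TrainingArguments(
    output_dir="ntele-r1-32b-v1-sft",  # illustrative path
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=6,
    num_train_epochs=10.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: bf16 mixed precision on H800
)

# Effective train batch size: 1 per device x 8 devices x 6 accumulation steps = 48,
# matching total_train_batch_size above.
```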