---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# Light-R1-32B-DS: near-SOTA 32B Math Model with Only 3K Data

Paper: https://huggingface.co/papers/2503.10460

|Model|Trained From|Release Date|AIME24|AIME25|GPQA|
| ---- | ---- | ---- | ---- | ---- | ---- |
|DeepSeek-R1-Distill-Qwen-32B|Qwen2.5-32B|25.1.20|72.6|54.9|62.1|
|TinyR1-32B-Preview|DeepSeek-R1-Distill-Qwen-32B|25.2.25|77.1|65.9|65.0|
| [**Light-R1-32B-DS (ours)** 🤗](https://huggingface.co/qihoo360/Light-R1-32B-DS) |DeepSeek-R1-Distill-Qwen-32B|25.3.12|**78.1**|**65.9**|**68.0**|
| [Light-R1-32B (ours) 🤗](https://huggingface.co/qihoo360/Light-R1-32B) |Qwen2.5-32B-Instruct|25.3.4|76.6|64.6|61.8|
| QwQ-32B |N/A|25.3.6|78.5|69.3|67.7|

[technical report](https://arxiv.org/abs/2503.10460)

[GitHub page](https://github.com/Qihoo360/Light-R1)


Light-R1-32B-DS is a near-SOTA 32B math model, scoring 78.1 on AIME24 and 65.9 on AIME25.

Starting from DeepSeek-R1-Distill-Qwen-32B, Light-R1-32B-DS was further trained with only [3K SFT data](https://huggingface.co/datasets/qihoo360/Light-R1-SFTData), which we have open-sourced, demonstrating the strong applicability of the released data.

We are excited to release this model along with the [technical report](https://arxiv.org/abs/2503.10460).

## Usage
Usage is the same as for DeepSeek-R1-Distill-Qwen-32B.
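A minimal inference sketch with 🤗 Transformers. Assumptions: the checkpoint ships a standard chat template, and the sampling settings (temperature 0.6, top-p 0.95) mirror the DeepSeek-R1-Distill recommendation rather than anything stated in this card.

```python
# Sketch of inference with Hugging Face Transformers; sampling settings
# (temperature 0.6, top_p 0.95) follow the DeepSeek-R1-Distill
# recommendation and are an assumption for this model.
MODEL_ID = "qihoo360/Light-R1-32B-DS"

GEN_KWARGS = {
    "max_new_tokens": 8192,   # long chain-of-thought needs a generous budget
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
}

def build_inputs(tokenizer, problem: str):
    """Format a single math problem as one user turn via the chat template."""
    messages = [{"role": "user", "content": problem}]
    return tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )

# On a machine with enough GPU memory (a 32B model needs roughly 64+ GB in bf16):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForCausalLM.from_pretrained(
#     MODEL_ID, torch_dtype="auto", device_map="auto"
# )
# inputs = build_inputs(tokenizer, "Compute 1 + 2 + ... + 100.").to(model.device)
# out = model.generate(inputs, **GEN_KWARGS)
# print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The model emits its chain of thought before the final answer, so leave `max_new_tokens` high.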

## Data Decontamination

We carefully evaluated data contamination in several open-sourced datasets.
While some contamination may be [inevitable during pre-training](https://x.com/DimitrisPapail/status/1888325914603516214),
contamination introduced during post-training is unacceptable when comparing models on benchmarks.
MATH-500 is somewhat compromised: tens of its questions appear in training sets either verbatim or with only the numbers changed. AIME 24 and 25 remain intact, but special care is needed when incorporating AIME data up to 2023.

Light-R1 therefore performed thorough decontamination using exact matching (with digits removed) and 32-gram matching.
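The two checks can be sketched roughly as follows. This is a simplified illustration under stated assumptions, not the project's actual pipeline: tokenization here is naive whitespace splitting, and digit removal stands in for whatever normalization the real decontamination used.

```python
import re

def normalize(text: str) -> str:
    # Lowercase, drop digits, and collapse whitespace so that
    # "sum of the first 10 integers" matches "sum of the first 25 integers".
    return re.sub(r"\s+", " ", re.sub(r"\d+", "", text.lower())).strip()

def ngrams(text: str, n: int = 32) -> set:
    # All contiguous n-token windows of a whitespace-tokenized string.
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(train_q: str, bench_qs: list, n: int = 32) -> bool:
    """Flag a training question that matches any benchmark question,
    either exactly after digit removal or via a shared n-gram."""
    tq = normalize(train_q)
    for bq in bench_qs:
        nbq = normalize(bq)
        if tq == nbq:                        # exact match, digits excluded
            return True
        if ngrams(tq, n) & ngrams(nbq, n):   # shared 32-gram
            return True
    return False
```

With this check, a training question that duplicates a benchmark question with only the numbers changed is caught by the exact-match branch, while long verbatim overlaps are caught by the 32-gram branch.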

## Citation
```bibtex
@misc{lightr1proj,
      title={Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond},
      author={Liang Wen and Yunke Cai and Fenrui Xiao and Xin He and Qi An and Zhenyu Duan and Yimin Du and Junchen Liu and Lifu Tang and Xiaowei Lv and Haosheng Zou and Yongchao Deng and Shousheng Jia and Xiangzheng Zhang},
      year={2025},
      eprint={2503.10460},
      archivePrefix={arXiv},
      url={https://github.com/Qihoo360/Light-R1},
}
```