---
datasets:
- Skywork/Skywork-OR1-RL-Data
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---

# 🤔 Skywork-OR1 (Open Reasoner 1)

<div align="center">

[![Models](https://img.shields.io/badge/Models-4d5eff?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor)](https://huggingface.co/collections/Skywork/skywork-or1-67fa1bcb41b436ef2def76b9)
[![Data](https://img.shields.io/badge/Data-4d5eff?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor)](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
[![Github](https://img.shields.io/badge/Code-000000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/SkyworkAI/Skywork-OR1)
[![Notion](https://img.shields.io/badge/Notion_Blog-000000?style=for-the-badge&logo=notion&logoColor=white)](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680)

[![GitHub Stars](https://img.shields.io/github/stars/SkyworkAI/Skywork-OR1?style=for-the-badge&logo=github&logoColor=white&label=Stars&color=000000)](https://github.com/SkyworkAI/Skywork-OR1/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/SkyworkAI/Skywork-OR1?style=for-the-badge&logo=github&logoColor=white&label=Forks&color=000000)](https://github.com/SkyworkAI/Skywork-OR1/fork)

</div>

## 🔥 News

- **May 13, 2025**: We release the final version of the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-32B`** and **`Skywork-OR1-7B`**. We open-source:
  - 🤗 Model weights: [`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B), [`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)
  - 🤗 Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
  - 🧑‍💻 Code: [`Skywork-OR1`](https://github.com/SkyworkAI/Skywork-OR1)

The complete technical report is coming soon. See our [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680) for training recipes and preliminary experimental results. More analysis and insights will be included in the final technical report, which is dedicated to helping the community better research, understand, and push the frontier of open reasoning models.

## 📖 Overview

<div align="center">
<img src="./assets/figure_1.jpeg" width="70%"/>

<sub>AIME24 and AIME25 scores of Skywork-OR1-32B over the course of our training pipeline.</sub>
</div>

The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained with large-scale rule-based reinforcement learning on carefully designed datasets and training recipes. The series includes two general-purpose reasoning models, **`Skywork-OR1-7B`** and **`Skywork-OR1-32B`**.

- **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** matches the performance of the 671B-parameter DeepSeek-R1 on math tasks (AIME24 and AIME25) and coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** is competitive with similarly sized models in both math and coding scenarios.

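Both models are standard causal language models and can be loaded with 🤗 Transformers. Below is a minimal usage sketch; the prompt and sampling settings (temperature, `top_p`, token budget) are illustrative assumptions, not our official evaluation configuration.

```python
# Minimal usage sketch; sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Skywork/Skywork-OR1-7B"  # or "Skywork/Skywork-OR1-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Find the remainder when 7^100 is divided by 13."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so allow a generous token budget.
outputs = model.generate(
    inputs, max_new_tokens=8192, do_sample=True, temperature=0.6, top_p=0.95
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
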
## 📊 Evaluation

<div align="center">
<img src="./assets/figure_3.jpeg" width="75%"/>
<img src="./assets/figure_2.jpeg" width="75%"/>
</div>

We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of Pass@1, which is common in prior work, we report Avg@K as the primary metric: a model's average score over K independent attempts per problem. Averaging over multiple attempts reduces the impact of sampling randomness and gives a more reliable picture of a model's stability and reasoning consistency.

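Concretely, Avg@K is the mean per-attempt score, averaged over all problems. A minimal sketch of the computation (function and variable names here are illustrative, not from our evaluation code):

```python
def avg_at_k(scores: list[list[float]]) -> float:
    """Avg@K: mean score over K attempts per problem, averaged over problems.

    scores[i] holds the K per-attempt scores (e.g., 0/1 correctness)
    for problem i.
    """
    per_problem = [sum(attempts) / len(attempts) for attempts in scores]
    return sum(per_problem) / len(per_problem)

# Two problems, K = 4 attempts each: (3/4 + 1/4) / 2 = 0.5
print(avg_at_k([[1, 0, 1, 1], [0, 0, 1, 0]]))
```
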
We include the detailed results in the following table.

| Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24–2/1/25, Avg@4) |
|-------|-----------------|-----------------|--------------------------------------|
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 37.6 |
| Light-R1-7B-DS | 59.1 | 44.3 | 39.5 |
| DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0 | 57.2 |
| TinyR1-32B-Preview | 78.1 | 65.3 | 61.6 |
| QwQ-32B | 79.5 | 65.3 | 61.6 |
| DeepSeek-R1 | 79.8 | 70.0 | 65.9 |
| **Skywork-OR1-Math-7B** | 69.8 | 52.3 | 43.6 |
| **Skywork-OR1-7B** | 70.2 | 54.6 | 47.6 |
| **Skywork-OR1-32B** | 82.2 | 73.3 | 63.0 |

## ⚙️ Training Recipe

We offer a brief overview of our data and training pipeline below. For more details, please refer to our [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680).

### Data

- We select, clean, and curate **a dataset of 110K verifiable, challenging, and diverse math problems and 14K coding questions** from open-source datasets.
- We perform **model-aware difficulty estimation** for each problem and model and conduct **rigorous quality assessment prior to training** to ensure training efficiency and effectiveness (a sketch of what such an estimate can look like follows this list).

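One simple way to realize model-aware difficulty estimation is to sample several solutions per problem from the model being trained and use the empirical pass rate as a difficulty score. The sketch below is hypothetical: `sample_solution`, `verify`, and the keep-thresholds are assumptions for illustration, not our pipeline's API; the actual recipe is described in the Notion blog.

```python
# Hypothetical sketch of model-aware difficulty estimation.
# `sample_solution` and `verify` are assumed interfaces, and the
# keep-thresholds are illustrative.
from typing import Callable

def pass_rate(problem: str,
              sample_solution: Callable[[str], str],
              verify: Callable[[str, str], bool],
              n_samples: int = 16) -> float:
    """Empirical pass rate of the current model on one problem."""
    hits = sum(verify(problem, sample_solution(problem)) for _ in range(n_samples))
    return hits / n_samples

def filter_by_difficulty(problems: list[str],
                         sample_solution: Callable[[str], str],
                         verify: Callable[[str, str], bool],
                         lo: float = 0.05, hi: float = 0.95) -> list[str]:
    """Keep problems the model sometimes (but not always) solves;
    the extremes carry little learning signal for rule-based RL."""
    return [p for p in problems
            if lo <= pass_rate(p, sample_solution, verify) <= hi]
```
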
### Training

We develop a customized version of GRPO that leverages both data-wise and training-wise improvements:

- We perform both **offline and online difficulty-based filtering** and **rejection sampling** to improve training efficiency.
- We incorporate a **multi-stage training pipeline** coupled with **adaptive entropy control** and other techniques to enhance exploration and stability (a sketch of one such mechanism follows this list).

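As one illustration of what adaptive entropy control can mean in practice, the controller below adjusts an entropy-bonus coefficient so that the measured policy entropy tracks a target value. This is a hypothetical sketch, not our implementation; the function name, update rule, and constants are assumptions.

```python
# Hypothetical sketch of adaptive entropy control: keep policy entropy near
# a target by adjusting the entropy-bonus coefficient after each update.
def update_entropy_coef(coef: float,
                        measured_entropy: float,
                        target_entropy: float,
                        step_size: float = 1e-3,
                        min_coef: float = 0.0) -> float:
    # Entropy below target -> raise the bonus (more exploration);
    # entropy above target -> lower it (less random policy).
    coef += step_size * (target_entropy - measured_entropy)
    return max(coef, min_coef)

# In the RL objective, the coefficient weights the entropy bonus, e.g.
#   loss = policy_loss - coef * entropy   (illustrative)
```
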
## 📄 Technical Report

Our technical report will be released soon. Stay tuned!

## 🙏 Acknowledgements

- Our models are trained on top of [`DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [`DeepSeek-R1-Distill-Qwen-32B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), respectively.
- Both models are trained using [a custom fork](https://github.com/SkyworkAI/Skywork-OR1) of the wonderful [`verl`](https://github.com/volcengine/verl) project.

## 📚 Citation

We will update the citation once the technical report is released. In the meantime, please cite the following:

```bibtex
@misc{skywork-or1-2025,
  title        = {Skywork Open Reasoner Series},
  author       = {He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and Liu, Yang and Zhou, Yahui},
  howpublished = {\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
  note         = {Notion Blog},
  year         = {2025}
}
```