Is it possible to open-source the 2k+ difficult samples from math stage3 separately, as well as the code training data?
Great work! We are currently reproducing your approach following the staged methodology described in your paper. For math stage1 and stage2, we successfully reached pass@64 = 61.8 on AIME2024. However, when proceeding to stage3 RL, performance has stagnated at around 62 with no further improvement. Would it be possible for you to open-source the 2k+ difficult samples from math stage3? It would be even better if you could also release the code RL training data.
I adopted the same approach and observed the training reward decrease from 0.2 to 0.05. Unfortunately, my score only reached 62.6 and was difficult to improve further.
@Suu
In our experiment (DeepSeek-R1-distilled-7B), during stage 3 training (24K length) with hard prompts, the training reward dropped from 90% to 20%, while pass@1 (averaged over 64 runs) improved to 65+ after 300 steps.
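For anyone comparing numbers: a minimal sketch of how "pass@1 averaged over 64 runs" can be computed, assuming it means sampling k generations per problem, scoring each as correct/incorrect, averaging per problem, then averaging over problems (the data and function name below are illustrative, not from the paper):

```python
def avg_pass_at_1(results):
    """results: list of per-problem lists of 0/1 correctness over k samples.

    Returns the mean per-sample accuracy, averaged across problems.
    """
    per_problem = [sum(r) / len(r) for r in results]
    return sum(per_problem) / len(per_problem)

# Hypothetical example: 2 problems, 4 samples each (k would be 64 above).
print(avg_pass_at_1([[1, 0, 1, 1], [0, 0, 1, 0]]))  # 0.5
```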