Is it possible to open-source the 2k+ difficult samples from math stage3 separately, as well as the code training data?

#2
by Suu - opened

Great work! Currently, we are reproducing your approach based on the phased methodology described in your paper. For math stage1 and stage2, we successfully achieved pass@64=61.8 on AIME2024. However, when proceeding to stage3 RL, our performance has remained stagnant at around 62, with no further improvement. Would it be possible for you to open-source the 2k+ difficult samples from math stage3? It would be even better if you could also open-source the code RL training data.

NVIDIA org

Hi @Suu ,

The 2K+ difficult samples are filtered from the current full dataset based on the model's pass rate.
We used the stage-2 model to generate 16 responses per problem and kept the questions with a pass rate <= 6/16.
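A minimal sketch of that filtering step, assuming each question already has 16 boolean correctness judgments from the stage-2 model (the `rollouts` structure and function name here are illustrative, not from the released code):

```python
def filter_hard_questions(rollouts, max_passes=6, n_samples=16):
    """Keep questions the stage-2 model solves at most max_passes/n_samples times."""
    hard = []
    for question, correct_flags in rollouts.items():
        assert len(correct_flags) == n_samples, "expected one flag per sampled response"
        if sum(correct_flags) <= max_passes:  # pass rate <= 6/16 by default
            hard.append(question)
    return hard

# Example: a question passing 4/16 times is kept; 10/16 is dropped.
rollouts = {
    "q1": [True] * 4 + [False] * 12,
    "q2": [True] * 10 + [False] * 6,
}
print(filter_hard_questions(rollouts))  # → ['q1']
```

The 6/16 threshold matches the filter described above; with a different sampling budget you would scale `max_passes` and `n_samples` accordingly.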

I adopted the same approach and observed the training reward decrease from 0.2 to 0.05. Unfortunately, my results only reached 62.6 and were difficult to improve further.

@Suu
In our experiment (DeepSeek-R1-Distill-7B), during stage-3 training (24K context length) with hard prompts, the training reward dropped from 90% to 20%, and pass@1 (averaged over 64 runs) improved to 65+ after 300 steps.
