Paper: ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning

Code: https://github.com/QizhiPei/ScaleDiff

ScaleDiff-7B

This model is a fine-tuned version of QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k on the ScaleDiff-Math dataset.

Model description

ScaleDiff-7B is a Large Reasoning Model (LRM) developed as part of the ScaleDiff pipeline, which is designed to scale the creation of challenging mathematical problems. This model, fine-tuned on the novel ScaleDiff-Math dataset, aims to enhance advanced mathematical reasoning capabilities by addressing the scarcity of high-quality, difficult training data. It leverages an adaptive thinking model for problem identification and a specialized generator (DiffGen-8B) for large-scale problem synthesis.

Intended uses & limitations

ScaleDiff-7B is intended for advanced mathematical reasoning, especially difficult competition-style problems, where training on ScaleDiff-Math is designed to improve complex problem-solving. It is particularly useful for researchers and practitioners who want to benchmark and develop LRMs on difficult mathematical challenges.
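
A minimal inference sketch with the Hugging Face Transformers chat API is shown below; the example prompt, sampling settings, and token budget are illustrative assumptions, not settings prescribed by the paper.

```python
# Minimal inference sketch for ScaleDiff-7B using the Transformers chat API.
# The prompt and generation settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QizhiPei/ScaleDiff-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "How many positive integers n <= 1000 make n^2 + 1 divisible by 5?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Long chain-of-thought models typically need a generous generation budget.
output_ids = model.generate(input_ids, max_new_tokens=4096, do_sample=True, temperature=0.6)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```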

Limitations: As with any language model, performance depends on the quality and scope of the training data. Although the model is designed for difficult problems, it may still struggle in highly novel or out-of-distribution mathematical contexts, and further research is needed to understand how well it generalizes beyond the benchmarks used in its evaluation.

Training and evaluation data

ScaleDiff-7B was fine-tuned on the custom-created ScaleDiff-Math dataset. This dataset is generated through a three-step pipeline:

  1. Problem Selection: Difficult problems are identified from the AM-Distilled-Dataset using AdaptThink, an adaptive thinking model.
  2. Problem Generation: A dedicated problem generator, DiffGen-8B, is trained on these selected difficult problems to produce new, challenging problems.
  3. Solution Distillation and Filtration: Long Chain-of-Thought (CoT) solutions for the newly generated problems are distilled using Qwen3-8B as a teacher model and then filtered for quality and relevance (see the sketch below).

The final ScaleDiff-Math dataset combines these new problem-solution pairs with the original dataset to provide a more effective training signal. Evaluation was conducted on a suite of difficult mathematical benchmarks, including AIME'24, AIME'25, HMMT-Feb'25, BRUMO'25, and MATH500.
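
The card does not spell out the distillation and filtration details. The sketch below illustrates one plausible shape of step 3: sampling long CoT solutions from the Qwen3-8B teacher and keeping only those that pass a simple quality check. The teacher repository id, helper functions, and filtering rule are all assumptions for illustration, not the exact procedure used to build ScaleDiff-Math.

```python
# Hypothetical sketch of step 3 (solution distillation and filtration).
# Repository id, helpers, and the filtering rule are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_id = "Qwen/Qwen3-8B"  # teacher model named in the pipeline above
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(teacher_id, torch_dtype="auto", device_map="auto")

def distill_solution(problem: str, max_new_tokens: int = 8192) -> str:
    """Sample one long chain-of-thought solution from the teacher."""
    messages = [{"role": "user", "content": problem}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(teacher.device)
    output_ids = teacher.generate(input_ids, max_new_tokens=max_new_tokens,
                                  do_sample=True, temperature=0.6)
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

def keep_solution(solution: str) -> bool:
    """Toy quality filter: require a final boxed answer and a non-trivial derivation."""
    return "\\boxed{" in solution and len(solution.split()) > 50

# In practice the problems come from DiffGen-8B; a placeholder is used here.
generated_problems = ["Example problem text produced by DiffGen-8B."]

new_pairs = []
for problem in generated_problems:
    solution = distill_solution(problem)
    if keep_solution(solution):
        new_pairs.append({"problem": problem, "solution": solution})
```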

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 32
  • total_train_batch_size: 32
  • total_eval_batch_size: 256
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3.0
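
For orientation, the hyperparameters above can be expressed as a Transformers TrainingArguments sketch. The output path, the bf16 flag, and the multi-GPU launch details (32 devices via torchrun or accelerate) are assumptions, since the card only reports the effective batch sizes.

```python
# Minimal TrainingArguments sketch mirroring the hyperparameters listed above.
# With 32 GPUs and a per-device batch size of 1, the effective train batch size
# is the reported 32; model/dataset wiring and the launch command are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scalediff-7b",   # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    bf16=True,                   # assumed precision, not stated in the card
)
```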

Training results

Framework versions

  • Transformers 4.46.1
  • PyTorch 2.4.0+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3