Commit 7936e94 (verified) · 1 Parent(s): 614e122
Committed by QizhiPei and nielsr (HF Staff)

Improve model card: Update metadata and enrich content for DiffGen-8B (#1)


- Improve model card: Update metadata and enrich content for DiffGen-8B (cd84ff27a5f48b9d3e6e59db9e7cf15269a17947)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+11 -10)
README.md CHANGED
@@ -1,38 +1,39 @@
 ---
-library_name: transformers
-license: other
 base_model: Qwen/Qwen3-8B-Base
+library_name: transformers
+license: apache-2.0
 tags:
 - llama-factory
 - full
 - generated_from_trainer
+- math-reasoning
+pipeline_tag: text-generation
 model-index:
 - name: DiffGen-8B
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 Paper: [ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning](https://arxiv.org/abs/2509.21070)
 
 Code: https://github.com/QizhiPei/ScaleDiff
 
 # DiffGen-8B
 
-This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) on the difficult problems from [AM-Qwen3-Distilled](https://huggingface.co/datasets/a-m-team/AM-Qwen3-Distilled) dataset.
+This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
 
 ## Model description
 
-More information needed
+DiffGen-8B is a specialized difficult-problem generator developed as part of the ScaleDiff pipeline, an approach for scaling the creation of challenging mathematical problems for advanced mathematical reasoning. The model is trained on a filtered set of difficult problems, enabling it to efficiently produce a large number of new, complex mathematical problems. This removes the need for complex per-instance prompting and its associated API costs, addressing the scarcity of high-quality, difficult training data for Large Reasoning Models (LRMs).
 
 ## Intended uses & limitations
 
-More information needed
+**Intended uses**: DiffGen-8B is primarily intended for generating large-scale datasets of challenging mathematical problems. The generated problems are used to augment training data for Large Reasoning Models (LRMs), enhancing their mathematical reasoning capabilities. It serves as a core component in pipelines that improve LRM performance on difficult benchmarks by providing a continuous supply of intricate reasoning challenges.
+
+**Limitations**: While DiffGen-8B excels at generating difficult problems, its scope is mathematical problem generation. The quality and relevance of the generated problems are ensured through subsequent solution-distillation and filtering steps in the broader ScaleDiff pipeline. Its performance may not be optimized for other general text-generation tasks.
 
 ## Training and evaluation data
 
-More information needed
+DiffGen-8B is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base). It was trained on a subset of difficult problems selected from the [AM-Qwen3-Distilled](https://huggingface.co/datasets/a-m-team/AM-Qwen3-Distilled) dataset. The selection was performed efficiently with [AdaptThink](https://huggingface.co/THU-KEG/AdaptThink-7B-delta0.05), an adaptive thinking model that perceives problem difficulty in a single forward pass, so no solutions are needed during selection. The problems generated by DiffGen-8B contribute to the [ScaleDiff-Math](https://huggingface.co/datasets/QizhiPei/ScaleDiff-Math) dataset.
 
 ## Training procedure
 
@@ -61,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.52.0.dev0
 - Pytorch 2.6.0+cu124
 - Datasets 2.21.0
-- Tokenizers 0.21.1
+- Tokenizers 0.21.1
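The updated card still lacks a usage snippet. A minimal sketch of loading the model with the standard `transformers` causal-LM API, assuming the checkpoint is published under the hub id `QizhiPei/DiffGen-8B` (inferred from the repo owner, not stated in the diff) and that a plain instruction string is an acceptable prompt; the model's documented prompt format, if any, should be preferred:

```python
# Usage sketch for DiffGen-8B (hub id and prompt format are assumptions,
# not confirmed by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_problem(model, tokenizer, instruction: str, max_new_tokens: int = 512) -> str:
    """Sample one generation and return only the newly generated text."""
    inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )
    # Drop the prompt tokens before decoding, so only the model's
    # continuation (the generated problem) is returned.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    repo = "QizhiPei/DiffGen-8B"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.bfloat16, device_map="auto"
    )
    prompt = "Generate a new, difficult competition-style math problem."
    print(generate_problem(model, tokenizer, prompt))
```

Sampling (`do_sample=True`) is used because a problem generator is typically run many times to produce a diverse pool of candidates; greedy decoding would yield the same problem on every call.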