Commit 3cf807a (verified) · Parent: a22c98e
Committed by WaltonFuture and nielsr (HF Staff): Add pipeline tag, library name, and GitHub README content (#1)
Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+98, −4)
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- WaltonFuture/Multimodal-Cold-Start
- WaltonFuture/Multimodal-RL-Data
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

* 🐙 **GitHub Repo:** [waltonfuture/RL-with-Cold-Start](https://github.com/waltonfuture/RL-with-Cold-Start)
* 📜 **Paper (arXiv):** [Advancing Multimodal Reasoning via Reinforcement Learning with Cold Start (arXiv:2505.22334)](https://arxiv.org/abs/2505.22334)

<div align="center">
<img src="assets/model_comparison.png" width="80%" alt="model_comparison" align="center"/>
</div>

## Cold Start Stage

We conduct supervised fine-tuning on Qwen2.5-VL-3B and Qwen2.5-VL-7B using [ms-swift](https://github.com/modelscope/ms-swift). This stage uses a curated [dataset](https://huggingface.co/datasets/WaltonFuture/Multimodal-Cold-Start) distilled from Qwen2.5-VL-32B via rejection sampling.
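
Rejection sampling here means keeping only teacher responses whose final answer matches the reference. A minimal sketch of that filtering step (purely illustrative; `toy_teacher` and the field names are hypothetical stand-ins, not the repo's actual pipeline):

```python
import random

def rejection_sample(question, answer, generate, n_candidates=8):
    """Sample several candidate responses and keep only the correct ones.

    `generate` stands in for sampling from the teacher model
    (Qwen2.5-VL-32B in the paper); here it is any callable that
    returns (reasoning_text, final_answer).
    """
    kept = []
    for _ in range(n_candidates):
        reasoning, prediction = generate(question)
        if prediction == answer:  # reject candidates with a wrong final answer
            kept.append({"question": question, "response": reasoning})
    return kept

# Toy stand-in teacher that answers correctly about half the time.
rng = random.Random(0)
def toy_teacher(q):
    guess = rng.choice(["4", "5"])
    return f"Let me think... the answer is {guess}.", guess

data = rejection_sample("What is 2+2?", "4", toy_teacher)
```

Every record that survives the filter pairs the question with a verified chain of reasoning, which is what makes the distilled set usable for SFT.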
### Setup

```bash
git clone https://github.com/waltonfuture/RL-with-Cold-Start.git
cd RL-with-Cold-Start/SFT
pip install -e .
```

### Prepare Data

```bash
python convert_data.py
```
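
Conceptually, `convert_data.py` reshapes each sample into the messages-style conversation records that ms-swift consumes for SFT. A rough sketch of that kind of conversion (the field names here are illustrative assumptions, not the script's actual schema; check `convert_data.py` for the real one):

```python
import json

def to_swift_record(sample):
    """Convert one (image, question, response) sample into a
    messages-style JSONL record of the kind ms-swift expects.
    NOTE: field names are illustrative, not the script's actual schema.
    """
    return {
        "messages": [
            {"role": "user", "content": sample["question"]},
            {"role": "assistant", "content": sample["response"]},
        ],
        "images": [sample["image_path"]],
    }

sample = {"image_path": "demo.jpg", "question": "What is shown?", "response": "A cat."}
line = json.dumps(to_swift_record(sample))  # one JSONL line per training sample
```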

### SFT

```bash
bash qwen2.5vl_sft.sh
```

The resulting checkpoint can be found in `SFT/output`.

## RL Stage

We then apply GRPO (Group Relative Policy Optimization) using [EasyR1](https://github.com/hiyouga/EasyR1). Please use this [dataset](https://huggingface.co/datasets/WaltonFuture/Multimodal-RL-Data) for GRPO training.

### Setup

```bash
git clone https://github.com/waltonfuture/RL-with-Cold-Start.git
cd RL-with-Cold-Start/GRPO
pip install -e .
```

### GRPO Training

Replace the checkpoint path in the script with the model produced by the SFT stage, then run:

```bash
bash examples/qwen2_5_vl_7b_grpo.sh
```
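
GRPO scores each response relative to the other responses sampled for the same prompt: rewards within a group are normalized by their mean and standard deviation to form advantages, so no separate value network is needed. A minimal sketch of that normalization (EasyR1's actual implementation differs in details such as the epsilon and batching):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize per-response rewards within one prompt's sample group.

    Each advantage is (r - mean) / (std + eps): a response is judged
    against its own group rather than against an absolute baseline.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled responses for one prompt: two correct (reward 1), two wrong.
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct responses receive positive advantages and incorrect ones negative, and the advantages of each group sum to zero.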

### Merge Checkpoint in Hugging Face Format

```bash
python3 scripts/model_merger.py --local_dir checkpoints/easyr1/qwen2_5_vl_7b_grpo/global_step_80/actor
```

## Data Access

The datasets for both stages are available on Hugging Face.

| Stage | Data |
| ---------- | ---- |
| Cold Start | [Multimodal-Cold-Start](https://huggingface.co/datasets/WaltonFuture/Multimodal-Cold-Start) |
| RL | [Multimodal-RL-Data](https://huggingface.co/datasets/WaltonFuture/Multimodal-RL-Data) |

## Model Access

Our models are available on Hugging Face.

| Backbone | Our model |
| ------------- | --------- |
| Qwen2.5-VL-7B | [Qwen2.5VL-7b-RL-with-Cold-Start](https://huggingface.co/WaltonFuture/Qwen2.5VL-7b-RLCS) |
| Qwen2.5-VL-3B | [Qwen2.5VL-3b-RL-with-Cold-Start](https://huggingface.co/WaltonFuture/Qwen2.5VL-3b-RLCS) |

## Acknowledgment

Our models are built upon the amazing [Qwen2.5-VL](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5) family.
We thank [EasyR1](https://github.com/hiyouga/EasyR1) and [ms-swift](https://github.com/modelscope/ms-swift) for their training code.

## Contact

Please contact Lai Wei ([email protected]) if needed.

## Citation

```bibtex
@article{wei2025advancing,
  title={Advancing Multimodal Reasoning via Reinforcement Learning with Cold Start},
  author={Wei, Lai and Li, Yuting and Zheng, Kaipeng and Wang, Chen and Wang, Yue and Kong, Linghe and Sun, Lichao and Huang, Weiran},
  journal={arXiv preprint arXiv:2505.22334},
  year={2025}
}
```