This repository contains a Qwen2.5-VL-7B-Instruct model fine-tuned with the SFT approach, as detailed in our paper "The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs".
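
Usage

Below is a minimal inference sketch, not taken from the paper: it assumes a recent transformers release with Qwen2.5-VL support (loading via Qwen2_5_VLForConditionalGeneration), and the image URL and prompt are placeholders to replace with your own inputs.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_ID = "JierunChen/SFT-RL-SynergyDilemma-SFT_Eureka_Distill"

# Load the fine-tuned checkpoint; BF16 matches the released weights.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Placeholder image: substitute any local file or URL.
image = Image.open(requests.get("https://example.com/problem.png", stream=True).raw)

# Standard chat-style multimodal prompt for Qwen2.5-VL models.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Solve the problem shown in the image step by step."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

If you prefer the official Qwen preprocessing pipeline, the process_vision_info helper from the qwen_vl_utils package can replace the manual image loading above.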

Citation

If you find this project useful in your research, please consider citing it with the following BibTeX entry:

@misc{chen2025synergydilemmalongcotsft,
      title={The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs}, 
      author={Jierun Chen and Tiezheng Yu and Haoli Bai and Lewei Yao and Jiannan Wu and Kaican Li and Fei Mi and Chaofan Tao and Lei Zhu and Manyi Zhang and Xiaohui Li and Lu Hou and Lifeng Shang and Qun Liu},
      year={2025},
      eprint={2507.07562},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.07562}, 
}
Model details: 8.29B parameters, BF16 tensors, stored in Safetensors format.
