This model is part of the official implementation of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models".

Existing large language models (LLMs) struggle to follow complex instructions, especially when multiple constraints are present and organized in parallel, chaining, and branching structures. One intuitive solution, chain-of-thought (CoT) prompting, is expected to universally improve the capabilities of LLMs. However, we find that vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions: it fails to unpack the composition of constraints and identify their relationships across hierarchies of types and dimensions.

To this end, we propose a systematic method to boost LLMs in dealing with complex instructions by incentivizing reasoning for test-time compute scaling. First, starting from the decomposition of complex instructions under existing taxonomies, we propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable, rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement, and we exploit behavior cloning of experts to facilitate a steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method: a 1.5B LLM achieves 11.74% gains, with performance comparable to an 8B LLM.
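As a concrete illustration of what a verifiable, rule-centric reward signal can look like, here is a minimal sketch in Python. The constraint types (`keyword`, `max_words`, `json_format`, `regex`) are hypothetical examples chosen for illustration, not the paper's exact rule set:

```python
import json
import re

def rule_centric_reward(response: str, constraints: list[dict]) -> float:
    """Fraction of verifiable constraints satisfied by a response (reward in [0, 1]).

    Constraint types below are illustrative examples, not the paper's rule set.
    """
    passed = 0
    for c in constraints:
        if c["type"] == "keyword":          # a required keyword must appear
            passed += int(c["value"].lower() in response.lower())
        elif c["type"] == "max_words":      # length cap, measured in words
            passed += int(len(response.split()) <= c["value"])
        elif c["type"] == "json_format":    # output must parse as valid JSON
            try:
                json.loads(response)
                passed += 1
            except json.JSONDecodeError:
                pass
        elif c["type"] == "regex":          # output must match a pattern
            passed += int(bool(re.search(c["value"], response)))
    return passed / max(len(constraints), 1)


# Example: a prompt with three verifiable constraints.
constraints = [
    {"type": "keyword", "value": "DONE"},
    {"type": "max_words", "value": 50},
    {"type": "regex", "value": r"^- "},
]
print(rule_centric_reward("- Short answer. DONE", constraints))  # 1.0
```

In an RL loop, such a reward is computed for each sampled completion; the sample-wise contrast described above compares completions of the same prompt so that genuinely deeper CoT is preferred over superficial paraphrasing.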

This model (LLaMA3.1-8B) is our optimized checkpoint for advanced instruction-following under complex instructions; note that it collapsed during training, which accounts for its degraded scores. It corresponds to LLaMA3.1-8B-Instruct (Ours) in Table 1.

Table 1: Performance on seven instruction-following benchmarks. Parenthesized values in the Avg. column give the change from each model's I/O baseline.

| Model | Method | IFEval | CELLO | CFBench | ComplexBench | FBBench | FollowBench | InfoBench | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-1.5B-Instruct | I/O | 45.28 | 71.00 | 36.00 | 50.97 | 39.81 | 40.00 | 71.24 | 50.61 |
| Qwen2.5-1.5B-Instruct | CoT | 28.65 | 59.30 | 22.00 | 32.94 | 37.31 | 29.28 | 62.22 | 38.81 (-11.79%) |
| Qwen2.5-1.5B-Instruct | SDC | 41.95 | 66.10 | 30.00 | 41.70 | 36.52 | 37.39 | 67.55 | 45.89 (-4.71%) |
| Qwen2.5-1.5B-Instruct | SFT | 65.61 | 71.20 | 48.00 | 57.46 | 42.75 | 56.47 | 76.22 | 59.67 (+9.06%) |
| Qwen2.5-1.5B-Instruct | Ours | 44.91 | 73.50 | 53.66 | 63.92 | 58.67 | 59.82 | 81.95 | 62.35 (+11.74%) |
| DeepSeek-Qwen1.5B | I/O† | 36.04 | 62.50 | 27.99 | 39.89 | 34.51 | 20.29 | 52.00 | 39.03 |
| DeepSeek-Qwen1.5B | SFT | 45.29 | 63.20 | 25.33 | 35.53 | 37.59 | 22.18 | 51.96 | 40.15 (+1.12%) |
| DeepSeek-Qwen1.5B | Ours | 57.67 | 69.00 | 40.00 | 44.38 | 37.78 | 37.79 | 60.48 | 49.58 (+10.54%) |
| DeepScaleR-1.5B | I/O† | 41.77 | 65.00 | 30.00 | 40.70 | 40.24 | 26.01 | 60.31 | 43.43 |
| DeepScaleR-1.5B | SFT | 48.24 | 62.90 | 28.00 | 36.68 | 35.72 | 26.50 | 54.22 | 41.75 (-1.67%) |
| DeepScaleR-1.5B | Ours | 55.63 | 67.30 | 39.33 | 43.23 | 37.81 | 36.80 | 60.08 | 48.60 (+5.17%) |
| Qwen2.5-7B-Instruct | I/O | 72.82 | 76.50 | 64.33 | 74.47 | 59.29 | 75.03 | 85.60 | 72.58 |
| Qwen2.5-7B-Instruct | CoT | 69.50 | 75.20 | 61.66 | 72.00 | 42.65 | 74.86 | 82.13 | 68.28 (-4.29%) |
| Qwen2.5-7B-Instruct | SDC | 60.44 | 72.60 | 65.66 | 76.53 | 60.07 | 76.09 | 86.88 | 71.18 (-1.39%) |
| Qwen2.5-7B-Instruct | SFT | 72.45 | 77.50 | 63.33 | 74.23 | 58.76 | 75.92 | 84.31 | 72.36 (-0.21%) |
| Qwen2.5-7B-Instruct | Ours | 70.06 | 79.20 | 65.00 | 77.40 | 64.45 | 75.32 | 82.67 | 73.44 (+0.85%) |
| LLaMA3.1-8B-Instruct | I/O | 77.63 | 75.20 | 56.99 | 69.11 | 46.92 | 53.52 | 71.52 | 67.01 |
| LLaMA3.1-8B-Instruct | CoT | 60.44 | 65.50 | 47.66 | 56.54 | 32.34 | 37.36 | 58.48 | 54.53 (-12.48%) |
| LLaMA3.1-8B-Instruct | SDC | 80.22 | 71.00 | 58.33 | 68.73 | 38.36 | 48.92 | 72.89 | 65.24 (-1.77%) |
| LLaMA3.1-8B-Instruct | SFT | 77.26 | 75.80 | 54.00 | 65.24 | 40.16 | 59.56 | 65.30 | 64.92 (-2.09%) |
| LLaMA3.1-8B-Instruct | Ours | 13.49 | 4.60 | 1.33 | 2.71 | 7.14 | 1.08 | 0.51 | 4.06 (-62.95%) |
| Ministral-8B-Instruct | I/O | 59.51 | 76.20 | 62.33 | 70.03 | 54.54 | 73.49 | 84.00 | 68.58 |
| Ministral-8B-Instruct | CoT | 48.79 | 61.90 | 49.66 | 61.31 | 39.17 | 61.75 | 79.73 | 57.47 (-11.11%) |
| Ministral-8B-Instruct | SDC | 58.59 | 63.60 | 56.99 | 68.32 | 48.06 | 69.37 | 84.08 | 64.14 (-4.43%) |
| Ministral-8B-Instruct | SFT | 68.57 | 66.30 | 48.66 | 67.20 | 37.26 | 54.37 | 76.62 | 59.85 (-8.72%) |
| Ministral-8B-Instruct | Ours | 72.64 | 72.60 | 59.33 | 70.45 | 54.35 | 76.08 | 75.33 | 68.68 (+0.10%) |
| DeepSeek-Qwen7B | I/O† | 60.81 | 72.39 | 57.99 | 66.86 | 59.59 | 62.80 | 79.64 | 65.73 |
| DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
| DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

Code is available at https://github.com/yuleiqin/RAIF.
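For reference, a minimal inference sketch with the Hugging Face transformers library is shown below. The repository id yolay/RAIF-LLaMA3.1-8B is taken from this page; the prompt and generation settings are illustrative only, and, given the training collapse noted above, generations from this checkpoint may be degraded:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yolay/RAIF-LLaMA3.1-8B"  # repository id from this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A complex instruction with several verifiable constraints.
messages = [{
    "role": "user",
    "content": "List three benefits of unit testing as bullet points, "
               "each under 15 words, and end your answer with the keyword DONE.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```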

🎓 If you find this work useful, please cite:

```bibtex
@article{qin2025incentivizingreasoningadvancedinstructionfollowing,
      title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},
      author={Yulei Qin and Gang Li and Zongyi Li and Zihan Xu and Yuchen Shi and Zhekai Lin and Xiao Cui and Ke Li and Xing Sun},
      year={2025},
      eprint={2506.01413},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01413}
}
```