Thinking-Free Policy Initialization Makes Distilled Reasoning Models More Effective and Efficient Reasoners
Abstract
TFPI, a simple adaptation to RLVR, discards thinking content during training, which accelerates RL convergence, improves accuracy, and reduces token usage at lower computational cost.
Reinforcement Learning with Verifiable Reward (RLVR) effectively solves complex tasks but demands extremely long context lengths during training, leading to substantial computational costs. While multi-stage training can partially mitigate this, starting with overly short contexts often causes irreversible performance degradation, ultimately failing to reduce overall training compute significantly. In this paper, we introduce **T**hinking-**F**ree **P**olicy **I**nitialization (**TFPI**), a simple yet effective adaptation to RLVR that bridges long Chain-of-Thought (CoT) distillation and standard RLVR. TFPI employs a simple *ThinkFree* operation, explicitly discarding the thinking content via a direct *</think>* append, to reduce token usage during inference. Training with *ThinkFree*-adapted inputs improves performance and lowers token consumption, even in the original slow-thinking mode. Extensive experiments across various benchmarks have shown that TFPI accelerates RL convergence, achieves a higher performance ceiling, and yields more token-efficient reasoning models without specialized rewards or complex training designs. With TFPI only, we train a 4B model to reach 89.0% accuracy on AIME24 and 65.5% on LiveCodeBench using less than 4K H20 hours.
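To make the *ThinkFree* operation concrete, below is a minimal sketch of how the input adaptation could be implemented. It assumes a DeepSeek-R1-style distilled model whose generations normally open with a `<think> ... </think>` block before the final answer; the function name `think_free` and the tag defaults are illustrative and not taken from the paper's code.

```python
# Minimal sketch of the ThinkFree operation described in the abstract.
# Assumption (not from the paper's released code): the policy follows the
# common distilled-reasoning convention of emitting <think> ... </think>
# before the answer. Pre-closing the thinking block in the prompt makes
# the model answer directly, skipping the long CoT.

def think_free(prompt: str,
               think_open: str = "<think>",
               think_close: str = "</think>") -> str:
    """Return a prompt with an empty, pre-closed thinking block appended."""
    # Open and immediately close the thinking block so no reasoning
    # tokens are sampled before the final answer.
    return f"{prompt}\n{think_open}\n{think_close}\n"


if __name__ == "__main__":
    user_prompt = "Solve: what is 17 * 23?"
    print(think_free(user_prompt))
    # The adapted prompt would then be used for rollouts during the TFPI
    # stage, so training sees short, thinking-free responses.
```

In this reading, TFPI is purely an input-side change: the reward and RL algorithm stay as in standard RLVR, while the `</think>` append shortens rollouts during the initialization stage.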
Community
We propose Thinking-Free Policy Initialization (TFPI), a stage prior to RL that accelerates RL convergence toward a higher performance ceiling and naturally yields token-efficient reasoning models.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Your Models Have Thought Enough: Training Large Reasoning Models to Stop Overthinking (2025)
- HiPO: Hybrid Policy Optimization for Dynamic Reasoning in LLMs (2025)
- Sample More to Think Less: Group Filtered Policy Optimization for Concise Reasoning (2025)
- VCRL: Variance-based Curriculum Reinforcement Learning for Large Language Models (2025)
- BudgetThinker: Empowering Budget-aware LLM Reasoning with Control Tokens (2025)
- Critique-Coder: Enhancing Coder Models by Critique Reinforcement Learning (2025)
- Conditional Advantage Estimation for Reinforcement Learning in Large Reasoning Models (2025)