Cooper: Co-Optimizing Policy and Reward Models in Reinforcement Learning for Large Language Models
Abstract
A reinforcement learning framework jointly optimizes policy and reward models to enhance robustness and mitigate reward hacking in large language models.
Large language models (LLMs) have demonstrated remarkable performance in reasoning tasks, where reinforcement learning (RL) serves as a key algorithm for enhancing their reasoning capabilities. Currently, there are two mainstream reward paradigms: model-based rewards and rule-based rewards. However, both approaches suffer from limitations: rule-based rewards lack robustness, while model-based rewards are vulnerable to reward hacking. To address these issues, we propose Cooper (Co-optimizing Policy Model and Reward Model), an RL framework that jointly optimizes both the policy model and the reward model. Cooper leverages the high precision of rule-based rewards when identifying correct responses, and dynamically constructs and selects positive-negative sample pairs to continue training the reward model. This design enhances robustness and mitigates the risk of reward hacking. To further support Cooper, we introduce a hybrid annotation strategy that efficiently and accurately generates training data for the reward model. We also propose a reference-based reward modeling paradigm, where the reward model takes a reference answer as input. Based on this design, we train a reward model named VerifyRM, which achieves higher accuracy on VerifyBench compared to other models of the same size. We conduct reinforcement learning using both VerifyRM and Cooper. Our experiments show that Cooper not only alleviates reward hacking but also improves end-to-end RL performance, for instance, achieving a 0.54% gain in average accuracy on Qwen2.5-1.5B-Instruct. Our findings demonstrate that dynamically updating the reward model is an effective way to combat reward hacking, providing a reference for better integrating reward models into RL.
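To make the co-optimization loop concrete, below is a minimal, illustrative Python sketch of a single Cooper-style training step. The function names, the pair-selection heuristic, and the placeholder scorers are assumptions made for exposition, not the authors' implementation; a real system would back them with an LLM policy, the VerifyRM reference-based reward model, and a GRPO/PPO-style policy update.

```python
# Illustrative sketch of Cooper's co-optimization loop (assumptions, not the paper's code).
import random


def rule_based_check(response: str, reference: str) -> bool:
    # High-precision rule-based verifier; here a toy exact match on the final answer (assumption).
    return response.strip() == reference.strip()


def reward_model_score(question: str, reference: str, response: str) -> float:
    # Reference-based reward model (e.g., VerifyRM takes the reference answer as input);
    # placeholder random scorer for this sketch.
    return random.random()


def policy_generate(question: str, n: int = 4) -> list[str]:
    # Placeholder rollouts; a real system samples n candidate solutions from the LLM policy.
    return [f"candidate answer {i}" for i in range(n)]


def rl_update(question: str, responses: list[str], rewards: list[float]) -> None:
    # Placeholder policy update; in practice a GRPO/PPO-style step using the rewards.
    pass


def reward_model_update(pairs: list[tuple[str, str, str, str]]) -> None:
    # Placeholder pairwise update: train the reward model to score the positive
    # response above the negative one for the same question and reference answer.
    pass


def cooper_step(batch: list[tuple[str, str]]) -> None:
    preference_pairs = []
    for question, reference in batch:
        responses = policy_generate(question)

        # 1) Score rollouts with the reward model; these scores drive the policy's RL update.
        rewards = [reward_model_score(question, reference, r) for r in responses]
        rl_update(question, responses, rewards)

        # 2) Dynamically construct positive-negative pairs for the reward model:
        #    positives are responses the high-precision rule check confirms correct,
        #    negatives are rule-rejected responses (this selection heuristic is an assumption).
        positives = [r for r in responses if rule_based_check(r, reference)]
        negatives = [r for r in responses if not rule_based_check(r, reference)]
        if positives and negatives:
            preference_pairs.append((question, reference, positives[0], negatives[0]))

    # 3) Continue training the reward model on the freshly constructed pairs,
    #    so it adapts alongside the policy instead of staying a fixed target.
    reward_model_update(preference_pairs)
```

The point of the loop is that the reward model is refreshed at every step on pairs labeled by the high-precision rule check, so a policy that begins to exploit the reward model's blind spots also produces the data used to patch them.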
Community
We are happy to introduce Cooper, a new RL framework that jointly optimizes policy and reward models for LLMs, combining rule-based precision with dynamic sample selection to enhance robustness and reduce reward hacking.
Librarian Bot found the following similar papers, recommended via the Semantic Scholar API:
- Libra: Assessing and Improving Reward Model by Learning to Think (2025)
- Posterior-GRPO: Rewarding Reasoning Processes in Code Generation (2025)
- URPO: A Unified Reward&Policy Optimization Framework for Large Language Models (2025)
- Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement (2025)
- AutoRule: Reasoning Chain-of-thought Extracted Rule-based Rewards Improve Preference Learning (2025)
- RefCritic: Training Long Chain-of-Thought Critic Models with Refinement Feedback (2025)
- Multimodal Mathematical Reasoning with Diverse Solving Perspective (2025)