RLVR-World: Training World Models with Reinforcement Learning
Abstract
RLVR-World uses reinforcement learning with verifiable rewards to optimize world models for task-specific metrics, achieving improved performance across language and video domains.
World models predict state transitions in response to actions and are increasingly developed across diverse modalities. However, standard training objectives such as maximum likelihood estimation (MLE) often misalign with the task-specific goals of world models, i.e., transition prediction metrics like accuracy or perceptual quality. In this paper, we present RLVR-World, a unified framework that leverages reinforcement learning with verifiable rewards (RLVR) to directly optimize world models for such metrics. Although world modeling is formulated as autoregressive prediction of tokenized sequences, RLVR-World evaluates metrics computed on the decoded predictions as verifiable rewards. We demonstrate substantial performance gains on both language- and video-based world models across domains, including text games, web navigation, and robot manipulation. Our work indicates that, beyond recent advances in reasoning language models, RLVR offers a promising post-training paradigm for enhancing the utility of generative models more broadly.
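The abstract's core idea can be illustrated with a minimal sketch: the world model samples a tokenized prediction of the next state, but the reward is a task metric (here, exact-match accuracy) computed on the *decoded* prediction rather than on token likelihoods. The toy vocabulary and function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of the RLVR reward signal described in the abstract.
# The model predicts token ids autoregressively; the verifiable reward
# is a transition-prediction metric evaluated after decoding.
# VOCAB, decode, and verifiable_reward are hypothetical stand-ins.

VOCAB = {0: "door", 1: "opens", 2: "closes", 3: "<eos>"}

def decode(token_ids):
    """Map sampled token ids back to a text prediction, dropping <eos>."""
    return " ".join(VOCAB[t] for t in token_ids if VOCAB[t] != "<eos>")

def verifiable_reward(pred_ids, ground_truth):
    """Reward = 1 if the decoded prediction matches the observed next state."""
    return 1.0 if decode(pred_ids) == ground_truth else 0.0

# A sampled rollout from the world model vs. the observed transition:
sampled = [0, 1, 3]  # decodes to "door opens"
print(verifiable_reward(sampled, "door opens"))  # 1.0
print(verifiable_reward(sampled, "door closes"))  # 0.0
```

For video world models, the same scheme would swap the exact-match check for a perceptual-quality metric on the decoded frames; the reward stays verifiable because it is computed directly against the ground-truth transition.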
Community
We pioneer training world models through reinforcement learning with verifiable rewards (RLVR), demonstrating substantial performance gains on both language- and video-based world models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1 (2025)
- RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs (2025)
- Sailing AI by the Stars: A Survey of Learning from Rewards in Post-Training and Test-Time Scaling of Large Language Models (2025)
- Reward Reasoning Model (2025)
- A Unified Pairwise Framework for RLHF: Bridging Generative Reward Modeling and Policy Optimization (2025)
- RM-R1: Reward Modeling as Reasoning (2025)
- UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning (2025)
Models citing this paper: 6
Datasets citing this paper: 0
Spaces citing this paper: 0