Vid2World: Crafting Video Diffusion Models to Interactive World Models
Abstract
Vid2World repurposes pre-trained video diffusion models into interactive world models via causalization and action guidance, enhancing action controllability and scalability in complex environments.
World models, which predict transitions from histories of observations and actions, have shown great promise in improving data efficiency for sequential decision making. However, existing world models often require extensive domain-specific training and still produce low-fidelity, coarse predictions, limiting their applicability in complex environments. In contrast, video diffusion models trained on large, internet-scale datasets have demonstrated impressive capabilities in generating high-quality videos that capture diverse real-world dynamics. In this work, we present Vid2World, a general approach for leveraging and transferring pre-trained video diffusion models into interactive world models. To bridge the gap, Vid2World performs causalization of a pre-trained video diffusion model, crafting its architecture and training objective to enable autoregressive generation. Furthermore, it introduces a causal action guidance mechanism to enhance action controllability in the resulting interactive world model. Extensive experiments in robot manipulation and game simulation domains show that our method offers a scalable and effective approach for repurposing highly capable video diffusion models into interactive world models.
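The abstract does not detail how causalization and causal action guidance are implemented, so the sketch below is only an illustrative approximation, not the authors' code. It assumes a PyTorch latent video backbone, models causalization as a lower-triangular mask on temporal attention (frame t attends only to frames ≤ t, which permits autoregressive rollout), and approximates causal action guidance with classifier-free guidance over per-frame action embeddings that are randomly dropped during training. All names here (CausalTemporalAttention, ActionConditionedDenoiser, guided_noise_prediction, guidance_scale) are hypothetical.

```python
# Minimal sketch (not the authors' code) of the two ideas named in the abstract:
# (1) "causalization" via a causal temporal attention mask, enabling
#     autoregressive frame-by-frame generation;
# (2) "causal action guidance", approximated here as classifier-free guidance
#     over per-frame action embeddings that are dropped at random in training.
import torch
import torch.nn as nn


class CausalTemporalAttention(nn.Module):
    """Self-attention over the time axis with a causal (lower-triangular) mask."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        T = x.size(1)
        causal_mask = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1
        )  # True entries are masked out (future frames)
        out, _ = self.attn(x, x, x, attn_mask=causal_mask)
        return out


class ActionConditionedDenoiser(nn.Module):
    """Toy per-frame latent denoiser conditioned on per-frame actions."""

    def __init__(self, latent_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, hidden)
        self.latent_proj = nn.Linear(latent_dim, hidden)
        self.temporal = CausalTemporalAttention(hidden)
        self.out = nn.Linear(hidden, latent_dim)

    def forward(self, noisy_latents, actions, action_drop_mask=None):
        # noisy_latents: (B, T, latent_dim), actions: (B, T, action_dim)
        a = self.action_proj(actions)
        if action_drop_mask is not None:
            # Zero out dropped actions per frame so the model also learns an
            # action-free prediction (classifier-free training signal).
            a = a * (~action_drop_mask).unsqueeze(-1).float()
        h = self.latent_proj(noisy_latents) + a
        h = h + self.temporal(h)
        return self.out(h)  # predicted noise (or clean latent), per frame


@torch.no_grad()
def guided_noise_prediction(model, noisy_latents, actions, guidance_scale=2.0):
    """Hypothetical action guidance: blend action-conditional and action-free
    predictions, analogous to standard classifier-free guidance."""
    B, T, _ = actions.shape
    cond = model(noisy_latents, actions)
    uncond = model(
        noisy_latents, actions,
        action_drop_mask=torch.ones(B, T, dtype=torch.bool, device=actions.device),
    )
    return uncond + guidance_scale * (cond - uncond)


if __name__ == "__main__":
    model = ActionConditionedDenoiser(latent_dim=16, action_dim=4)
    latents = torch.randn(2, 8, 16)   # 2 clips, 8 frames of latent features
    actions = torch.randn(2, 8, 4)    # one action per frame
    eps = guided_noise_prediction(model, latents, actions)
    print(eps.shape)  # torch.Size([2, 8, 16])
```

In this kind of setup, the causal mask is what makes interactive rollout possible (past frames can be fixed while new ones are sampled), and the guidance scale trades off adherence to the conditioning actions against unconditional sample quality, as in classifier-free guidance; whether Vid2World realizes these components exactly this way is not specified in the abstract.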
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets (2025)
- WORLDMEM: Long-term Consistent World Simulation with Memory (2025)
- Generative Pre-trained Autoregressive Diffusion Transformer (2025)
- RLVR-World: Training World Models with Reinforcement Learning (2025)
- CamContextI2V: Context-aware Controllable Video Generation (2025)
- EnerVerse-AC: Envisioning Embodied Environments with Action Condition (2025)
- GAIA-2: A Controllable Multi-View Generative World Model for Autonomous Driving (2025)