EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models
Abstract
Recent advances in creative AI have enabled the synthesis of high-fidelity images and videos conditioned on language instructions. Building on these developments, text-to-video diffusion models have evolved into embodied world models (EWMs) capable of generating physically plausible scenes from language commands, effectively bridging vision and action in embodied AI applications. This work addresses the critical challenge of evaluating EWMs beyond general perceptual metrics to ensure the generation of physically grounded and action-consistent behaviors. We propose the Embodied World Model Benchmark (EWMBench), a dedicated framework designed to evaluate EWMs based on three key aspects: visual scene consistency, motion correctness, and semantic alignment. Our approach leverages a meticulously curated dataset encompassing diverse scenes and motion patterns, alongside a comprehensive multi-dimensional evaluation toolkit, to assess and compare candidate models. The proposed benchmark not only identifies the limitations of existing video generation models in meeting the unique requirements of embodied tasks but also provides valuable insights to guide future advancements in the field. The dataset and evaluation tools are publicly available at https://github.com/AgibotTech/EWMBench.
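To make the three evaluation axes concrete, the sketch below shows one way a per-sample score could be organized and aggregated. It is a minimal illustration only: the names (`EWMScores`, `evaluate_sample`, the placeholder scorers) and the weighted-mean aggregation are assumptions for exposition, not the actual API of the EWMBench toolkit linked above.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Stand-in for a decoded frame or per-frame feature record.
Frame = dict

@dataclass
class EWMScores:
    scene_consistency: float   # visual scene consistency
    motion_correctness: float  # physically plausible, action-consistent motion
    semantic_alignment: float  # agreement between the video and the instruction

    def overall(self, weights=(1.0, 1.0, 1.0)) -> float:
        # Simple weighted mean; the real benchmark may aggregate differently.
        parts = (self.scene_consistency, self.motion_correctness, self.semantic_alignment)
        return sum(w * p for w, p in zip(weights, parts)) / sum(weights)

def evaluate_sample(
    frames: Sequence[Frame],
    instruction: str,
    scene_fn: Callable[[Sequence[Frame]], float],
    motion_fn: Callable[[Sequence[Frame]], float],
    semantic_fn: Callable[[Sequence[Frame], str], float],
) -> EWMScores:
    """Score one generated rollout along the three EWMBench-style axes."""
    return EWMScores(
        scene_consistency=scene_fn(frames),
        motion_correctness=motion_fn(frames),
        semantic_alignment=semantic_fn(frames, instruction),
    )

if __name__ == "__main__":
    # Toy scorers for demonstration only; real metrics would inspect the frames.
    dummy_frames = [{"t": t} for t in range(8)]
    scores = evaluate_sample(
        dummy_frames,
        instruction="pick up the red cup and place it on the tray",
        scene_fn=lambda f: 0.91,
        motion_fn=lambda f: 0.78,
        semantic_fn=lambda f, s: 0.85,
    )
    print(scores, "overall:", round(scores.overall(), 3))
```

In practice, each scorer would be a learned or rule-based metric evaluated over the generated frames; the point of the sketch is only the three-way decomposition the benchmark proposes.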
Community
The dataset and evaluation tools are publicly available at: https://github.com/AgibotTech/EWMBench
Dataset Dashboard (figure)
Similar papers recommended by the Semantic Scholar API (via Librarian Bot):
- WorldScore: A Unified Evaluation Benchmark for World Generation (2025)
- VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness (2025)
- TesserAct: Learning 4D Embodied World Models (2025)
- Morpheus: Benchmarking Physical Reasoning of Video Generative Models with Real Physical Experiments (2025)
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? (2025)
- Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing (2025)
- VLIPP: Towards Physically Plausible Video Generation with Vision and Language Informed Physical Prior (2025)
Models citing this paper: 1
Datasets citing this paper: 1
Spaces citing this paper: 0