WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs

Jack Hong1, [Shilin Yan](https://scholar.google.com/citations?user=2VhjOykAAAAJ&hl=zh-CN&oi=ao)1†, Jiayin Cai1, [Xiaolong Jiang](https://scholar.google.com/citations?user=G0Ow8j8AAAAJ&hl=zh-CN&oi=ao)1, [Yao Hu](https://scholar.google.com/citations?user=LIu7k7wAAAAJ&hl=en)1, [Weidi Xie](https://scholar.google.com/citations?user=Vtrqj4gAAAAJ&hl=en)2‡

†Project Leader&emsp;‡Corresponding Author

1Xiaohongshu Inc. 2Shanghai Jiao Tong University
[[🏠 Project Page](https://jaaackhongggg.github.io/WorldSense/)] [[📖 arXiv Paper](https://arxiv.org/pdf/2502.04326)] [[🤗 Dataset](https://huggingface.co/datasets/honglyhly/WorldSense)] [[🏆 Leaderboard](https://jaaackhongggg.github.io/WorldSense/#leaderboard)]
---

## 🔥 News

* **`2025.02.07`** 🌟 We release WorldSense, the first benchmark for real-world omnimodal understanding of MLLMs.

## 👀 WorldSense Overview

We introduce **WorldSense**, the **first** benchmark for multi-modal video understanding that simultaneously encompasses _visual, audio, and text_ inputs. In contrast to existing benchmarks, **WorldSense** has several distinguishing features:

* **Collaboration of omni-modality.** The evaluation tasks feature a strong coupling of audio and video, requiring models to effectively exploit the **synergistic perception of omni-modality**.
* **Diversity of videos and tasks.** WorldSense encompasses a diverse collection of **1,662** audio-visually synchronised videos, systematically categorized into **8** primary domains and **67** fine-grained subcategories to cover broad scenarios, along with **3,172** multiple-choice QA pairs across **26** distinct tasks for comprehensive evaluation.
* **High-quality annotations.** All QA pairs are manually labeled by 80 expert annotators, with multiple rounds of correction to ensure quality.

Based on **WorldSense**, we extensively evaluate various state-of-the-art models. The experimental results indicate that existing models face significant challenges in understanding real-world scenarios (48% best accuracy). We hope **WorldSense** can provide a platform for evaluating the ability to construct and understand coherent contexts from omni-modality.
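For illustration only, the sketch below shows the kind of fields a single WorldSense multiple-choice QA item could carry; the field names are our assumptions for exposition, not the released schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WorldSenseItem:
    """Hypothetical layout of one multiple-choice QA item (illustrative only)."""
    video_id: str       # audio-visually synchronised source video
    domain: str         # one of the 8 primary domains
    subcategory: str    # one of the 67 fine-grained subcategories
    task: str           # one of the 26 task types
    question: str       # question requiring joint audio-visual understanding
    options: List[str]  # multiple-choice candidates
    answer: str         # ground-truth option label, e.g. "A"
```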

## 📐 Dataset Examples

## 🔍 Dataset

Please download our WorldSense dataset from [Hugging Face](https://huggingface.co/datasets/honglyhly/WorldSense). A minimal download sketch is shown below.
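The following is a minimal sketch for fetching the dataset files, assuming the standard `huggingface_hub` download interface (the repository's exact file layout may differ):

```python
# Sketch: download a local snapshot of the WorldSense dataset repository.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="honglyhly/WorldSense",  # dataset repo on the Hugging Face Hub
    repo_type="dataset",
)
print(f"WorldSense files downloaded to: {local_dir}")
```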
## 🔮 Evaluation Pipeline

📍 **Evaluation**: Thanks to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for reproducing our evaluation; please refer to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for details.

📍 **Leaderboard**: To add your model to our [leaderboard](https://jaaackhongggg.github.io/WorldSense/#leaderboard), please contact **jaaackhong@gmail.com**.

## 📈 Experimental Results

- **Evaluation results of state-of-the-art MLLMs.**

- **Fine-grained results on task category.**

- **Fine-grained results on audio type.**

- **In-depth analysis for real-world omnimodal understanding.**
  - Impact of vision information.
  - Impact of audio information.
  - Impact of audio information for Video MLLMs.
  - Impact of video frames.

## 📖 Citation

If you find WorldSense helpful for your research, please consider citing our work. Thanks!

```bibtex
@article{hong2025worldsenseevaluatingrealworldomnimodal,
      title={WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs},
      author={Jack Hong and Shilin Yan and Jiayin Cai and Xiaolong Jiang and Yao Hu and Weidi Xie},
      year={2025},
      eprint={2502.04326},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.04326},
}
```