VidComposition Benchmark
Project Page | Evaluation Space
The advancement of Multimodal Large Language Models (MLLMs) has enabled significant progress in multimodal understanding, expanding their capacity to analyze video content. However, existing evaluation benchmarks for MLLMs primarily focus on abstract video comprehension and lack a detailed assessment of their ability to understand video compositions, i.e., the nuanced interpretation of how visual elements combine and interact within highly compiled video contexts. We introduce VidComposition, a new benchmark specifically designed to evaluate the video composition understanding capabilities of MLLMs using carefully curated compiled videos and cinematic-level annotations. VidComposition includes 982 videos with 1706 multiple-choice questions, covering compositional aspects such as camera movement, camera angle, shot size, narrative structure, and character actions and emotions. Our comprehensive evaluation of 33 open-source and proprietary MLLMs reveals a significant performance gap between human and model capabilities. This highlights the limitations of current MLLMs in understanding complex, compiled video compositions and offers insights into areas for further improvement.
Dataset Format
Each item in the dataset (multi_choice.json) is a JSON object structured as follows:
{
  "video": "0SIK_5qpD70",
  "segment": "0SIK_5qpD70_183.3_225.5.mp4",
  "class": "background_perception",
  "question": "What is the main background in the video?",
  "options": {
    "A": "restaurant",
    "B": "hallway",
    "C": "grassland",
    "D": "wood"
  },
  "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f"
}
Evaluation
To evaluate your model on VidComposition, format your prediction file as follows:
[
  {
    "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f",
    "model_answer": "A"
  },
  ...
]
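A minimal sketch for producing such a prediction file is shown below. The `run_model` function is a hypothetical placeholder for your own inference code, and the `videos/` path follows the same assumed layout as above.

```python
import json

def run_model(video_path, question, options):
    """Hypothetical placeholder: return one of the option letters ("A"-"D")."""
    raise NotImplementedError("Plug in your own MLLM inference here.")

with open("multi_choice.json", "r", encoding="utf-8") as f:
    items = json.load(f)

predictions = []
for item in items:
    answer = run_model(f"videos/{item['segment']}", item["question"], item["options"])
    predictions.append({"id": item["id"], "model_answer": answer})

# Write the prediction file in the format shown above.
with open("predictions.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, indent=2)
```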
Citation
If you find this dataset useful, please cite the following paper:
@article{tang2024vidcompostion,
  title   = {VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?},
  author  = {Tang, Yunlong and Guo, Junjia and Hua, Hang and Liang, Susan and Feng, Mingqian and Li, Xinyang and Mao, Rui and Huang, Chao and Bi, Jing and Zhang, Zeliang and Fazli, Pooyan and Xu, Chenliang},
  journal = {arXiv preprint arXiv:2411.10979},
  year    = {2024}
}