---
license: cc-by-nc-sa-4.0
---

# 🔥 Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models

## 🔥 ToDo

- [ ] Release the dataset.
- [ ] Release the evaluation code.
- [ ] Release the metric code.

## 👀 Video-LevelGauge Overview

Video-LevelGauge is explicitly designed to investigate contextual positional bias in video understanding. We introduce a standardized probe and customized context design paradigm, in which carefully designed probe segments are inserted at varying positions within customized contextual content. By comparing model responses to identical probes at different insertion points, we assess positional bias in video comprehension. The benchmark supports flexible control over context length, probe position, and context composition, enabling evaluation of positional bias in diverse real-world scenarios such as multi-video understanding, long video comprehension, and multi-modal interleaved inputs. Video-LevelGauge encompasses six categories of structured video understanding tasks (e.g., action reasoning), along with an open-ended descriptive task. It includes 438 manually collected multi-type videos, 1,177 multiple-choice question answering (MCQA) items, and 120 open-ended instructed descriptive problems paired with annotations.
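The probe-insertion paradigm above can be sketched as follows. This is an illustrative assumption of how insertion points might be generated, not the released evaluation code; the function names and the string-based clip representation are hypothetical.

```python
# Hypothetical sketch of the standardized probe / customized context paradigm:
# insert an identical probe segment at evenly spaced positions in the context,
# then compare model responses across insertion points.

def insert_probe(context_clips, probe_clips, position):
    """Place the probe segment at the given index within the context clips."""
    return context_clips[:position] + probe_clips + context_clips[position:]

def probe_positions(context_clips, probe_clips, num_positions=5):
    """Yield (position, sequence) pairs with the probe at evenly spaced insertion points."""
    n = len(context_clips)
    step = max(1, n // (num_positions - 1)) if num_positions > 1 else n
    positions = sorted({min(i * step, n) for i in range(num_positions)})
    for pos in positions:
        yield pos, insert_probe(context_clips, probe_clips, pos)

# Placeholder clips standing in for video segments.
context = [f"ctx_{i}" for i in range(8)]
probe = ["probe_a", "probe_b"]
for pos, seq in probe_positions(context, probe, num_positions=3):
    print(pos, seq)
```

Because the probe is identical at every insertion point, any variation in model accuracy across positions can be attributed to positional bias rather than content difficulty.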

๐Ÿ” Dataset

Coming soon

## 🔮 Evaluation Example

Coming soon

## 📈 Experimental Results