---
dataset_info:
  features:
  - name: video
    dtype: string
  - name: question
    dtype: string
  - name: options
    list: string
  - name: answer
    dtype: string
  - name: answer_text
    dtype: string
  - name: meta
    dtype: string
  - name: source
    dtype: string
  - name: qa_subtype
    dtype: string
  - name: qa_type
    dtype: string
  splits:
  - name: test
    num_bytes: 515277
    num_examples: 1289
  download_size: 174366
  dataset_size: 515277
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- video-text-to-text
---
# VideoEval-Pro
VideoEval-Pro is a robust and realistic long video understanding benchmark consisting of open-ended, short-answer QA problems. The dataset is constructed by reformatting the multiple-choice questions from four existing long video understanding benchmarks (Video-MME, MLVU, LVBench, and LongVideoBench) into free-form questions. The paper can be found here.
The evaluation code and scripts are available at: [TIGER-AI-Lab/VideoEval-Pro](https://github.com/TIGER-AI-Lab/VideoEval-Pro)
## Dataset Structure
Each example in the dataset contains:
- `video`: Name (path) of the video file
- `question`: The question about the video content
- `options`: Original options from the source benchmark
- `answer`: The correct MCQ answer
- `answer_text`: The correct free-form answer
- `meta`: Additional metadata from the source benchmark
- `source`: Source benchmark
- `qa_subtype`: Question task subtype
- `qa_type`: Question task type
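For a quick look at the data, the test split can be loaded with the `datasets` library. This is a minimal sketch; the repository ID below is an assumption and should be replaced with the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Hypothetical repo ID -- replace with the actual Hub path of this dataset card
ds = load_dataset("TIGER-Lab/VideoEval-Pro", split="test")

example = ds[0]
print(example["video"])        # video file name (path)
print(example["question"])     # free-form question
print(example["answer_text"])  # reference free-form answer
print(example["source"])       # source benchmark (e.g. Video-MME, MLVU, ...)
```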
## Evaluation Steps
### 1. Download and Prepare Videos
```bash
# Navigate to videos directory
cd videos

# Merge all split tar.gz files into a single archive
cat videos_part_*.tar.gz > videos_merged.tar.gz

# Extract the merged archive
tar -xzf videos_merged.tar.gz

# [Optional] Clean up the split files and merged archive
rm videos_part_*.tar.gz videos_merged.tar.gz

# After extraction, you will get a directory containing all videos.
# The path to this directory will be used as --video_root in evaluation,
# for example: 'VideoEval-Pro/videos'
```
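If you prefer to fetch only the video archives programmatically instead of downloading them by hand, a sketch using `huggingface_hub` is shown below. The repo ID and the `videos_part_*.tar.gz` file pattern are assumptions inferred from the shell commands above; adjust them to the actual repository layout.

```python
from huggingface_hub import snapshot_download

# Hypothetical repo ID and file pattern -- adjust to the actual dataset repo layout.
snapshot_download(
    repo_id="TIGER-Lab/VideoEval-Pro",
    repo_type="dataset",
    allow_patterns=["*videos_part_*.tar.gz"],  # only pull the split video archives
    local_dir="videos",
)
```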
### 2. [Optional] Pre-extract Frames

To improve efficiency, you can pre-extract frames from the videos. The extracted frames should be organized as follows:
```
frames_root/
├── video_name_1/          # Directory name is the video name
│   ├── 000001.jpg         # Frame images
│   ├── 000002.jpg
│   └── ...
├── video_name_2/
│   ├── 000001.jpg
│   ├── 000002.jpg
│   └── ...
└── ...
```
After frame extraction, the path to the frames will be used as `--frames_root`. Set `--using_frames True` when running the evaluation script.
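A minimal frame-extraction sketch using OpenCV is shown below. It is not the official extraction script; dumping every frame sequentially and the six-digit file names are assumptions based on the layout above.

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path: str, frames_root: str) -> None:
    """Dump all frames of one video as 000001.jpg, 000002.jpg, ... (layout above)."""
    video_name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join(frames_root, video_name)
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.jpg"), frame)
    cap.release()

# Example: extract_frames("videos/video_name_1.mp4", "frames")
```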
### 3. Setup Evaluation Environment
```bash
# Clone the repository from GitHub
git clone https://github.com/TIGER-AI-Lab/VideoEval-Pro
cd VideoEval-Pro

# Create conda environment from requirements.txt
# (there are different requirements files for different models)
conda create -n videoevalpro --file requirements.txt
conda activate videoevalpro
```
### 4. Run Evaluation
```bash
cd VideoEval-Pro

# Set PYTHONPATH
export PYTHONPATH=.

# Run the evaluation script with the following parameters:
# --video_root: Path to video files folder
# --frames_root: Path to video frames folder [for using_frames]
# --output_path: Path to save output results
# --using_frames: Whether to use pre-extracted frames
# --model_path: Path to model
# --device: Device to run inference on
# --num_frames: Number of frames to sample from video
# --max_retries: Maximum number of retries for failed inference
# --num_threads: Number of threads for parallel processing

python tools/*_chat.py \
    --video_root <path_to_videos> \
    --frames_root <path_to_frames> \
    --output_path <path_to_save_results> \
    --using_frames <True/False> \
    --model_path <model_name_or_path> \
    --device <device> \
    --num_frames <number_of_frames> \
    --max_retries <max_retries> \
    --num_threads <num_threads>
```

E.g.:

```bash
python tools/qwen_chat.py \
    --video_root ./videos \
    --frames_root ./frames \
    --output_path ./results/qwen_results.jsonl \
    --using_frames False \
    --model_path Qwen/Qwen2-VL-7B-Instruct \
    --device cuda \
    --num_frames 32 \
    --max_retries 10 \
    --num_threads 1
```
### 5. Judge the Results
```bash
cd VideoEval-Pro

# Set PYTHONPATH
export PYTHONPATH=.

# Run the judge script gpt4o_judge.py with the following parameters:
# --input_path: Path to the saved output results
# --output_path: Path to save judged results
# --model_name: Version of the judge model
# --num_threads: Number of threads for parallel processing

python tools/gpt4o_judge.py \
    --input_path <path_to_saved_results> \
    --output_path <path_to_judged_results> \
    --model_name <model_version> \
    --num_threads <num_threads>
```

E.g.:

```bash
python tools/gpt4o_judge.py \
    --input_path ./results/qwen_results.jsonl \
    --output_path ./results/qwen_results_judged.jsonl \
    --model_name gpt-4o-2024-08-06 \
    --num_threads 1
```
Note: the released results are judged by `gpt-4o-2024-08-06`.
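To compute an overall score from the judged file, a minimal sketch is given below. The field name `judge` holding the GPT-4o verdict is an assumption; check the actual keys written by `tools/gpt4o_judge.py` and adjust accordingly.

```python
import json

# Hypothetical field name -- adjust to whatever key gpt4o_judge.py actually writes.
JUDGE_KEY = "judge"

correct = total = 0
with open("results/qwen_results_judged.jsonl", "r") as f:
    for line in f:
        record = json.loads(line)
        total += 1
        # Assumes the verdict is stored as a boolean or a "yes"/"no"-style string.
        verdict = record.get(JUDGE_KEY)
        if verdict is True or str(verdict).strip().lower().startswith("yes"):
            correct += 1

print(f"Accuracy: {correct / total:.2%} ({correct}/{total})")
```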