---
dataset_info:
  features:
    - name: video
      dtype: string
    - name: question
      dtype: string
    - name: options
      list: string
    - name: answer
      dtype: string
    - name: answer_text
      dtype: string
    - name: meta
      dtype: string
    - name: source
      dtype: string
    - name: qa_subtype
      dtype: string
    - name: qa_type
      dtype: string
  splits:
    - name: test
      num_bytes: 515277
      num_examples: 1289
  download_size: 174366
  dataset_size: 515277
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
task_categories:
  - video-text-to-text
---

VideoEval-Pro

VideoEval-Pro is a robust and realistic long video understanding benchmark consisting of open-ended, short-answer QA problems. It is constructed by reformatting the multiple-choice questions from four existing long video understanding benchmarks (Video-MME, MLVU, LVBench, and LongVideoBench) into free-form questions. The paper can be found here.

The evaluation code and scripts are available on GitHub: TIGER-AI-Lab/VideoEval-Pro

Dataset Structure

Each example in the dataset contains:

  • video: Name (path) of the video file
  • question: The question about the video content
  • options: Original options from the source benchmark
  • answer: The correct MCQ answer
  • answer_text: The correct free-form answer
  • meta: Additional metadata from the source benchmark
  • source: Source benchmark
  • qa_subtype: Question task subtype
  • qa_type: Question task type
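
For quick inspection, here is a minimal loading sketch using the datasets library; the Hub ID TIGER-Lab/VideoEval-Pro is an assumption and should be replaced with this dataset's actual repository ID if it differs:

    from datasets import load_dataset

    # Load the single "test" split (the repo ID below is an assumption)
    ds = load_dataset("TIGER-Lab/VideoEval-Pro", split="test")

    example = ds[0]
    print(example["video"], example["question"])
    print(example["options"], example["answer"], example["answer_text"])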

Evaluation Steps

  1. Download and Prepare Videos

    # Navigate to videos directory
    cd videos
    
    # Merge all split tar.gz files into a single archive
    cat videos_part_*.tar.gz > videos_merged.tar.gz
    
    # Extract the merged archive
    tar -xzf videos_merged.tar.gz
    
    # [Optional] Clean up the split files and merged archive
    rm videos_part_*.tar.gz videos_merged.tar.gz
    
    # After extraction, you will get a directory containing all videos
    # The path to this directory will be used as --video_root in evaluation
    # For example: 'VideoEval-Pro/videos'
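
    If the split archives are not already on disk, one way to fetch them is with huggingface_hub. This is a minimal sketch, assuming the dataset's Hub ID is TIGER-Lab/VideoEval-Pro and that the archives live under videos/ in the repository:

    from huggingface_hub import snapshot_download

    # Download only the video archives from the dataset repo
    # (the repo ID and the videos/* pattern are assumptions; adjust as needed)
    snapshot_download(
        repo_id="TIGER-Lab/VideoEval-Pro",
        repo_type="dataset",
        local_dir="VideoEval-Pro",
        allow_patterns=["videos/*"],
    )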
    
  2. [Optional] Pre-extract Frames

    To improve efficiency, you can pre-extract frames from videos. The extracted frames should be organized as follows:

    frames_root/
    ├── video_name_1/              # Directory name is the video name
    │   ├── 000001.jpg             # Frame images
    │   ├── 000002.jpg
    │   └── ...
    ├── video_name_2/
    │   ├── 000001.jpg
    │   ├── 000002.jpg
    │   └── ...
    └── ...
    

    After frame extraction, the path to the frames will be used as --frames_root. Set --using_frames True when running the evaluation script.
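
    The repository may ship its own extraction tooling; the following is only a sketch of one way to produce this layout with OpenCV (the 1-indexed, six-digit frame names follow the tree above; extracting every frame of a long video can be large, so you may prefer to subsample):

    import os
    import cv2  # pip install opencv-python

    def extract_frames(video_path: str, frames_root: str) -> None:
        """Write the frames of one video as 000001.jpg, 000002.jpg, ... (sketch)."""
        name = os.path.splitext(os.path.basename(video_path))[0]
        out_dir = os.path.join(frames_root, name)
        os.makedirs(out_dir, exist_ok=True)

        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            idx += 1
            cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.jpg"), frame)
        cap.release()

    # e.g. extract_frames("videos/video_name_1.mp4", "frames_root")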

  3. Setup Evaluation Environment

    # Clone the evaluation code repository from GitHub
    git clone https://github.com/TIGER-AI-Lab/VideoEval-Pro
    cd VideoEval-Pro
    
    # Create conda environment from requirements.txt (there are different requirements files for different models)
    conda create -n videoevalpro --file requirements.txt
    conda activate videoevalpro
    
  4. Run Evaluation

    cd VideoEval-Pro
    
    # Set PYTHONPATH
    export PYTHONPATH=.
    
    # Run the chat script for your model (tools/<model>_chat.py) with the following parameters:
    # --video_root: Path to video files folder
    # --frames_root: Path to video frames folder (used when --using_frames is True)
    # --output_path: Path to save output results
    # --using_frames: Whether to use pre-extracted frames
    # --model_path: Path to model
    # --device: Device to run inference on
    # --num_frames: Number of frames to sample from video
    # --max_retries: Maximum number of retries for failed inference
    # --num_threads: Number of threads for parallel processing
    
    python tools/<model>_chat.py \
        --video_root <path_to_videos> \
        --frames_root <path_to_frames> \
        --output_path <path_to_save_results> \
        --using_frames <True/False> \
        --model_path <model_name_or_path> \
        --device <device> \
        --num_frames <number_of_frames> \
        --max_retries <max_retries> \
        --num_threads <num_threads>
    
    E.g.:
    python tools/qwen_chat.py \
        --video_root ./videos \
        --frames_root ./frames \
        --output_path ./results/qwen_results.jsonl \
        --using_frames False \
        --model_path Qwen/Qwen2-VL-7B-Instruct \
        --device cuda \
        --num_frames 32 \
        --max_retries 10 \
        --num_threads 1
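
    The .jsonl extension suggests the output is JSON Lines (one JSON object per line); a quick, hedged way to sanity-check the outputs before judging is to print the keys rather than assuming any field names:

    import json

    # Peek at the first few records of the evaluation output
    # (field names depend on the chat script, so only the keys are shown)
    with open("./results/qwen_results.jsonl", "r", encoding="utf-8") as f:
        for i, line in enumerate(f):
            print(sorted(json.loads(line).keys()))
            if i >= 2:
                break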
    
  5. Judge the results

    cd VideoEval-Pro
    
    # Set PYTHONPATH
    export PYTHONPATH=.
    
    # Run the judge script gpt4o_judge.py with the following parameters:
    # --input_path: Path to the saved model outputs from step 4
    # --output_path: Path to save the judged results
    # --model_name: Version of the judge model
    # --num_threads: Number of threads for parallel processing
    
    python tools/gpt4o_judge.py \
        --input_path <path_to_saved_results> \
        --output_path <path_to_judged_results> \
        --model_name <model_version> \
        --num_threads <num_threads>
    
    E.g.:
    python tools/gpt4o_judge.py \
        --input_path ./results/qwen_results.jsonl \
        --output_path ./results/qwen_results_judged.jsonl \
        --model_name gpt-4o-2024-08-06 \
        --num_threads 1
    

    Note: the released results are judged with gpt-4o-2024-08-06.
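
    Once judging finishes, a simple tally over the judged .jsonl gives the final accuracy. The sketch below uses a hypothetical field name, judge_result, purely for illustration; check the actual keys written by gpt4o_judge.py before relying on it:

    import json

    # Tally judged records; "judge_result" is a HYPOTHETICAL key used only
    # to illustrate the idea -- replace it with the real field name
    correct = total = 0
    with open("./results/qwen_results_judged.jsonl", "r", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            total += 1
            if str(record.get("judge_result", "")).lower() in ("yes", "correct", "true", "1"):
                correct += 1

    print(f"Judged accuracy: {correct / max(total, 1):.2%} ({correct}/{total})")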