---
license: cc-by-sa-3.0
tags:
  - medical
language:
  - en
task_categories:
  - question-answering
configs:
  - config_name: EndoBench
    data_files:
      - split: test
        path: EndoBench.tsv
---

# EndoBench

🍎 Homepage | 💻 GitHub | 🤗 Dataset | 📖 Paper

This repository is the official implementation of the paper *EndoBench: A Comprehensive Evaluation of Multi-Modal Large Language Models for Endoscopy Analysis*.

## ☀️ Tutorial

EndoBench is a comprehensive MLLM evaluation framework spanning 4 endoscopy scenarios and 12 clinical tasks with 12 secondary subtasks that mirror the progression of endoscopic examination workflows. Featuring five levels of visual prompting granularity to assess region-specific understanding, EndoBench contains 6,832 clinically validated VQA pairs derived from 22 endoscopy datasets. This structure enables precise measurement of MLLMs' clinical perception, diagnostic accuracy, and spatial comprehension across diverse endoscopic scenarios.

Our dataset construction involves collecting 20 public and 1 private endoscopy datasets and standardizing QA pairs, yielding 446,535 VQA pairs comprising our EndoVQA-Instruct dataset, the current largest endoscopic instruction-tuning collection. From EndoVQA-Instruct, we extract representative pairs that undergo rigorous clinical review, resulting in our final EndoBench of 6,832 clinically validated VQA pairs.

We split EndoVQA-Instruct and provide two datasets:

  1. EndoVQA-Instruct-trainval, which includes 439,703 VQA pairs. We provide a .json file containing the original image paths, so you can download the source datasets according to your needs. The private WCE2025 dataset is available upon request.

  2. EndoBench, consisting of 6,832 rigorously validated VQA pairs. We provide two versions: EndoBench.json and EndoBench.tsv. Each entry in the EndoBench.json file corresponds to an image in the EndoBench-Images.zip file, while the EndoBench.tsv file embeds the images as base64 strings.
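As a quick sanity check of the TSV format described above, the sketch below reads a row and decodes its base64-encoded image back to raw bytes. The column names used here (`index`, `image`, `question`, `answer`) are assumptions for illustration; check the actual header row of EndoBench.tsv before relying on them.

```python
import base64
import csv

def iter_endobench_tsv(path):
    # Yield one dict per VQA entry, decoding the base64 'image' column
    # to raw bytes. Column names are assumed -- verify against the
    # real EndoBench.tsv header.
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            row["image"] = base64.b64decode(row["image"])
            yield row

# Round-trip demo with a synthetic one-row TSV (not real EndoBench data).
img_b64 = base64.b64encode(b"\x89PNG\r\n").decode()
with open("demo.tsv", "w", encoding="utf-8") as f:
    f.write("index\timage\tquestion\tanswer\n")
    f.write(f"0\t{img_b64}\tWhat organ is shown?\tA\n")

first = next(iter_endobench_tsv("demo.tsv"))
print(first["image"][:4])  # b'\x89PNG'
```

Decoded bytes can then be handed to any image library (e.g. `PIL.Image.open(io.BytesIO(...))`) without extracting EndoBench-Images.zip.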

## Evaluation

This project is built upon VLMEvalKit. To get started:

  1. Visit the VLMEvalKit Quickstart Guide for installation instructions, or run the following commands for a quick start:

```shell
git clone https://github.com/CUHK-AIM-Group/EndoBench.git
cd VLMEvalKit
pip install -e .
```
  2. Evaluate your model with the following command:

```shell
python run.py --data EndoBench --model Your_model_name
```

Demo: Qwen2.5-VL-7B-Instruct on EndoBench, inference only:

```shell
python run.py --data EndoBench --model Qwen2.5-VL-7B-Instruct --mode infer
```
  3. You can find more details in the ImageMCQDataset class.
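To make the multiple-choice setup concrete, here is an illustrative sketch (not the actual VLMEvalKit code) of how an ImageMCQDataset-style loader turns one TSV row into a prompt. The field names (`question`, `A`–`D`, `answer`) mirror the common VLMEvalKit MCQ schema but are assumptions; verify them against EndoBench.tsv.

```python
def build_mcq_prompt(row):
    # Keep only options that are present and non-empty.
    options = {k: row[k] for k in "ABCD" if row.get(k)}
    lines = [row["question"]] + [f"{k}. {v}" for k, v in options.items()]
    lines.append("Answer with the option's letter from the given choices directly.")
    return "\n".join(lines)

# Hypothetical sample row, not taken from the real dataset.
row = {
    "question": "Which imaging modality is shown?",
    "A": "Colonoscopy", "B": "Capsule endoscopy",
    "C": "Gastroscopy", "D": "",
    "answer": "B",
}
print(build_mcq_prompt(row))
```

Note that the empty option `D` is dropped, so models only see the choices that actually exist for that question.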

## Disclaimers

The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution. Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to contact us. Upon verification, such samples will be promptly removed.

We greatly appreciate all the authors of these datasets for their contributions to the field of endoscopy analysis.