---
license: cc-by-sa-3.0
tags:
- medical
language:
- en
task_categories:
- question-answering
configs:
- config_name: EndoBench
  data_files:
  - split: test
    path: EndoBench.tsv
---

# EndoBench
[🍎 **Homepage**](https://cuhk-aim-group.github.io/EndoBench.github.io/) | [💻 **GitHub**](https://github.com/CUHK-AIM-Group/EndoBench) | [🤗 **Dataset**](https://huggingface.co/datasets/Saint-lsy/EndoBench/) | [📖 **Paper**](https://arxiv.org/html/2505.23601)

This repository is the official implementation of the paper **EndoBench: A Comprehensive Evaluation of Multi-Modal Large Language Models for Endoscopy Analysis**.

## ☀️ Tutorial

EndoBench is a comprehensive MLLM evaluation framework spanning 4 endoscopy scenarios and 12 clinical tasks with 12 secondary subtasks that mirror the progression of endoscopic examination workflows. It features five levels of visual-prompting granularity to assess region-specific understanding, and contains 6,832 clinically validated VQA pairs derived from 21 endoscopy datasets. This structure enables precise measurement of MLLMs' clinical perception, diagnostic accuracy, and spatial comprehension across diverse endoscopic scenarios.

Our dataset construction involves collecting 20 public and 1 private endoscopy datasets and standardizing their QA pairs, yielding **446,535** VQA pairs that comprise our EndoVQA-Instruct dataset, currently the largest endoscopic instruction-tuning collection. From EndoVQA-Instruct, we extract representative pairs that undergo rigorous clinical review, resulting in our final EndoBench of 6,832 clinically validated VQA pairs.

We split **EndoVQA-Instruct** and provide two datasets:

1. **EndoVQA-Instruct-trainval**, which includes **439,703** VQA pairs. We provide the `.json` file containing the original image paths, so you can download the source datasets according to your needs. The private WCE2025 dataset is available upon request.
2. **EndoBench**, the **6,832** rigorously validated VQA pairs, provided in two versions: `EndoBench.json` and `EndoBench.tsv`. Each data entry in `EndoBench.json` corresponds to an image in `EndoBench-Images.zip`, while `EndoBench.tsv` embeds the images in base64 format (see the loading sketches at the end of this card).

## Evaluation

The evaluation is built upon **VLMEvalKit**. To get started:

1. Visit the [VLMEvalKit Quickstart Guide](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/get_started/Quickstart.md) for installation instructions, or run the following commands for a quick start:

   ```bash
   git clone https://github.com/CUHK-AIM-Group/EndoBench.git
   cd EndoBench/VLMEvalKit
   pip install -e .
   ```

2. Evaluate your model with the following command:

   ```bash
   python run.py --data EndoBench --model Your_model_name
   ```

   **Demo**: Qwen2.5-VL-7B-Instruct on EndoBench, inference only:

   ```bash
   python run.py --data EndoBench --model Qwen2.5-VL-7B-Instruct --mode infer
   ```

3. You can find more details in the [ImageMCQDataset class](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/dataset/image_mcq.py).

## Disclaimers

The guidelines for the annotators emphasized strict compliance with the copyright and licensing rules of the initial data sources, specifically avoiding materials from websites that forbid copying and redistribution. Should you encounter any data samples that potentially breach the copyright or licensing regulations of any site, we encourage you to contact us. Upon verification, such samples will be promptly removed.

We greatly appreciate all the authors of these datasets for their contributions to the field of endoscopy analysis.
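## Loading the Data

Below is a minimal sketch of fetching the TSV release from the Hub and decoding one base64-embedded image. The `huggingface_hub` call and the `EndoBench.tsv` filename come from this card; the column names (`image`, `question`) are assumptions based on the usual VLMEvalKit MCQ layout, so print `df.columns` to confirm the actual header.

```python
import base64
import io

import pandas as pd
from huggingface_hub import hf_hub_download
from PIL import Image

# Fetch the TSV release of EndoBench directly from the Hub.
tsv_path = hf_hub_download(
    repo_id="Saint-lsy/EndoBench",
    filename="EndoBench.tsv",
    repo_type="dataset",
)

df = pd.read_csv(tsv_path, sep="\t")
print(df.columns.tolist())  # confirm the actual schema

row = df.iloc[0]

# Assumption: the `image` column stores each picture as a base64 string,
# following the standard VLMEvalKit MCQ TSV convention.
img = Image.open(io.BytesIO(base64.b64decode(row["image"])))
img.save("sample.png")
print(row["question"])
```

Embedding images as base64 keeps the TSV self-contained, which is what allows VLMEvalKit to consume it directly via `--data EndoBench`.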
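If you prefer the JSON release, the sketch below unpacks `EndoBench-Images.zip` and loads `EndoBench.json`. Only the filenames are taken from this card; the exact field names inside each entry are not documented here, so inspect one entry to see the real schema.

```python
import json
import zipfile

# Unpack the image archive that accompanies EndoBench.json.
with zipfile.ZipFile("EndoBench-Images.zip") as zf:
    zf.extractall("EndoBench-Images")

with open("EndoBench.json", encoding="utf-8") as f:
    entries = json.load(f)

print(len(entries))  # expected: 6,832 VQA pairs
print(entries[0])    # inspect one entry's fields and its image path
```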