Saint-lsy committed
Commit f4824b1 · Parent: f0db2f5

Update dataset

Files changed (4)
  1. EndoBench.json +0 -0
  2. EndoBench.tsv +1 -1
  3. EndoVQA-Instruct-trainval.json +1 -1
  4. README.md +12 -7
EndoBench.json CHANGED
The diff for this file is too large to render. See raw diff
 
EndoBench.tsv CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:75ad721444b13fbed6b62b45a25fcbfe586a8b6c4b2064366f963f08f6eb5868
+ oid sha256:8d371db9b1421d0edf4e9caa0f1e310dffd86bf917fad75eba0328c00182d983
  size 588480658
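
Both pointer files in this commit follow the Git LFS pointer format: `oid` is the SHA-256 hash of the actual file content and `size` is its byte count, so the diff above swaps the TSV's content while its size stays identical. A minimal sketch for verifying a downloaded copy against the new pointer, using only the two values shown in the diff:

```python
# Verify a downloaded EndoBench.tsv against the LFS pointer in this commit.
# EXPECTED_SHA256 and EXPECTED_SIZE are copied from the pointer diff above.
import hashlib
import os

EXPECTED_SHA256 = "8d371db9b1421d0edf4e9caa0f1e310dffd86bf917fad75eba0328c00182d983"
EXPECTED_SIZE = 588480658  # bytes, unchanged by this commit

path = "EndoBench.tsv"
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("EndoBench.tsv matches the LFS pointer")
```

The same check applies to EndoVQA-Instruct-trainval.json below, with its own `oid` and `size`.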
EndoVQA-Instruct-trainval.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85988a52e757e57e906165a217aed11df764fa459295a14317f2def16d8fad70
+ oid sha256:91206c60377164f0a3c28af46a61d88ad7430ddd3d3f64d98023b07df8c344fe
  size 297633793
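
For a quick look at the updated instruction file, a small sketch; it assumes the file parses as a top-level JSON array of record dicts (the schema itself is not visible in this diff), so it only counts entries and lists the first record's keys rather than assuming any field names:

```python
# Inspect EndoVQA-Instruct-trainval.json without assuming its schema.
# Assumption (not shown in the diff): the file is a JSON array of dicts.
import json

with open("EndoVQA-Instruct-trainval.json", "r", encoding="utf-8") as f:
    data = json.load(f)

print(f"{len(data)} records")    # the README reports 439,703 VQA pairs
print(sorted(data[0].keys()))    # discover the record fields before use
```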
README.md CHANGED
@@ -1,11 +1,16 @@
  ---
  license: cc-by-sa-3.0
  tags:
- - medical
+ - medical
  language:
- - en
+ - en
  task_categories:
- - question-answering
+ - question-answering
+ configs:
+ - config_name: EndoBench
+   data_files:
+   - split: test
+     path: EndoBench.tsv
  ---
  # <div align="center"><b> EndoBench </b></div>
@@ -17,15 +22,15 @@ This repository is the official implementation of the paper **EndoBench: A Compr

  EndoBench is a comprehensive MLLM evaluation framework spanning 4 endoscopy scenarios and 12 clinical tasks with 12 secondary subtasks that mirror the progression of endoscopic examination workflows. Featuring five levels of visual prompting granularity to assess region-specific understanding, our EndoBench contains 6,832 clinically validated VQA pairs derived from 22 endoscopy datasets. This structure enables precise measurement of MLLMs' clinical perceptual, diagnostic accuracy, and spatial comprehension across diverse endoscopic scenarios.

- Our dataset construction involves collecting 21 public and 1 private endoscopy datasets and standardizing QA pairs, yielding 446,535 VQA pairs comprising our~\ourinstruct~dataset, the current largest endoscopic instruction-tuning collection. From~\ourinstruct, we extract representative pairs that undergo rigorous clinical review, resulting in our final~\ourmethod~dataset of 6,832 clinically validated VQA pairs.
+ Our dataset construction involves collecting 21 public and 1 private endoscopy datasets and standardizing QA pairs, yielding 446,535 VQA pairs comprising our EndoVQA-Instruct dataset, the current largest endoscopic instruction-tuning collection. From EndoVQA-Instruct, we extract representative pairs that undergo rigorous clinical review, resulting in our final EndoBench dataset of 6,832 clinically validated VQA pairs.

  We provide two datasets:

- 1. EndoVQA-Instruct-trainval, which included *439703* VQA pairs.
+ 1. EndoVQA-Instruct-trainval, which includes **439,703** VQA pairs.

  2. EndoBench, which encompasses 4 distinct endoscopic modalities, 12 specialized clinical tasks with 12 secondary subtasks, and 5 levels of visual prompting granularity, resulting in 6,832 rigorously validated VQA pairs from 22 diverse datasets. Our multi-dimensional evaluation framework mirrors the clinical workflow—spanning anatomical recognition, lesion analysis, spatial localization, and surgical operations—to holistically gauge the perceptual and diagnostic abilities of MLLMs in realistic scenarios.

- We provide 2 version of EndoBench: json file and tsv file.
+ We provide two versions of EndoBench: a .json file and a .tsv file.


  ## Evaluation
@@ -38,7 +43,7 @@ cd VLMEvalKit
  pip install -e .
  ```

- 2. Add our dataset
+ 2. Add our dataset to VLMEvalKit.

  3. You can find more details on the [ImageMCQDataset Class](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/dataset/image_mcq.py).
49