---
task_categories:
- question-answering
tags:
- science
pretty_name: Scientific Figure Interpretation Benchmark
size_categories:
- 1k<n<10k
language:
- en
configs:
- config_name: default
data_files:
- split: CS_Figure2Caption
path: data/CS_Figure2Caption-*
- split: CS_Caption2Figure
path: data/CS_Caption2Figure-*
- split: General_Figure2Caption
path: data/General_Figure2Caption-*
- split: General_Caption2Figure
path: data/General_Caption2Figure-*
dataset_info:
features:
- name: ID
dtype: int64
- name: Question
dtype: string
- name: Options
sequence: string
- name: Answer
dtype: string
- name: Category
dtype: string
- name: Images
sequence: image
splits:
- name: CS_Figure2Caption
num_bytes: 22992276.0
num_examples: 500
- name: CS_Caption2Figure
num_bytes: 122043099.0
num_examples: 500
- name: General_Figure2Caption
num_bytes: 290333873.0
num_examples: 500
- name: General_Caption2Figure
num_bytes: 1475930020.0
num_examples: 500
download_size: 926209658
dataset_size: 1911299268.0
---
# SciFIBench
## Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie
## NeurIPS 2024
[SciFIBench on the OpenCompass Leaderboard](https://hub.opencompass.org.cn/dataset-detail/SciFIBench)
**Note:** This repository has been updated to add two splits ('General_Figure2Caption' and 'General_Caption2Figure') with an additional 1000 questions. The original splits are preserved and have been renamed as follows: 'Figure2Caption' -> 'CS_Figure2Caption' and 'Caption2Figure' -> 'CS_Caption2Figure'.
## Dataset Description
- **Homepage:** [SciFIBench](https://scifibench.github.io/)
- **Paper:** [SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation](https://arxiv.org/pdf/2405.08807)
- **Repository:** [SciFIBench](https://github.com/jonathan-roberts1/SciFIBench)
### Dataset Summary
SciFIBench (Scientific Figure Interpretation Benchmark) contains 2000 multiple-choice scientific figure interpretation questions covering two tasks. Task 1 (Figure -> Caption) involves selecting the most appropriate caption for a given figure; Task 2 (Caption -> Figure) is the reverse: selecting the most appropriate figure for a given caption. The benchmark was curated from the SciCap and ArxivCap datasets, using adversarial filtering to obtain hard negatives. Each question has been human-verified to ensure it is high-quality and answerable.
### Example Usage
```python
from datasets import load_dataset
# load dataset
dataset = load_dataset("jonathan-roberts1/SciFIBench") # optional: set cache_dir="PATH/TO/MY/CACHE/DIR"
# there are 4 dataset splits, which can be indexed separately
# cs_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Figure2Caption")
# cs_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Caption2Figure")
# general_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Figure2Caption")
# general_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Caption2Figure")
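# the full dataset is ~1.9 GB; streaming iterates over examples lazily
# instead of downloading every split up front:
# streamed = load_dataset("jonathan-roberts1/SciFIBench", split="General_Caption2Figure", streaming=True)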
"""
DatasetDict({
CS_Caption2Figure: Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
CS_Figure2Caption: Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
General_Caption2Figure: Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
General_Figure2Caption: Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
})
"""
# select task and split
cs_figure2caption_dataset = dataset['CS_Figure2Caption']
"""
Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
"""
# query items
cs_figure2caption_dataset[40] # e.g., the 41st element
"""
{'ID': 40,
'Question': 'Which caption best matches the image?',
'Options': ['A) ber vs snr for fft size=2048 using ls , lmmse , lr-lmmse .',
'B) ber vs snr for fft size=1024 using ls , lmmse , lr-lmmse algorithms .',
'C) ber vs snr for fft size=512 using ls , lmmse , lr-lmmse algorithms .',
'D) ber vs snr for fft size=256 using ls , lmmse , lr-lmmse algorithms with a 16 qam modulation .',
'E) ber vs snr for a bpsk modulation .'],
'Answer': 'D',
'Category': 'other cs',
'Images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=501x431>]}
"""
```
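The `Answer` field stores a single letter that indexes into the `Options` list. As a minimal, illustrative evaluation sketch, the loop below scores a predictor against a split; `predict_answer` here is a hypothetical placeholder for a real multimodal model call:
```python
from datasets import load_dataset

cs_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Figure2Caption")

def predict_answer(question, options, images):
    # hypothetical placeholder: always answers 'A';
    # swap in a real LMM query here
    return "A"

correct = 0
for example in cs_figure2caption_dataset:
    prediction = predict_answer(example["Question"], example["Options"], example["Images"])
    correct += prediction == example["Answer"]  # 'Answer' is a single letter, e.g., 'D'
print(f"Accuracy: {correct / len(cs_figure2caption_dataset):.3f}")

# recover the ground-truth option text from the answer letter
example = cs_figure2caption_dataset[40]
answer_index = ord(example["Answer"]) - ord("A")
print(example["Options"][answer_index])  # 'D) ber vs snr for fft size=256 ...'
```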
### Source Data
More information regarding the source data can be found at: https://github.com/tingyaohsu/SciCap and https://mm-arxiv.github.io/.
### Dataset Curators
This dataset was curated by Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie.
### Citation Information
```bibtex
@article{roberts2024scifibench,
title={SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation},
author={Roberts, Jonathan and Han, Kai and Houlsby, Neil and Albanie, Samuel},
journal={arXiv preprint arXiv:2405.08807},
year={2024}
}
``` |