|
--- |
|
task_categories: |
|
- question-answering |
|
tags: |
|
- science |
|
pretty_name: Scientific Figure Interpretation Benchmark |
|
size_categories: |
|
- 1k<n<10k |
|
language: |
|
- en |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: CS_Figure2Caption |
|
path: data/CS_Figure2Caption-* |
|
- split: CS_Caption2Figure |
|
path: data/CS_Caption2Figure-* |
|
- split: General_Figure2Caption |
|
path: data/General_Figure2Caption-* |
|
- split: General_Caption2Figure |
|
path: data/General_Caption2Figure-* |
|
dataset_info: |
|
features: |
|
- name: ID |
|
dtype: int64 |
|
- name: Question |
|
dtype: string |
|
- name: Options |
|
sequence: string |
|
- name: Answer |
|
dtype: string |
|
- name: Category |
|
dtype: string |
|
- name: Images |
|
sequence: image |
|
splits: |
|
- name: CS_Figure2Caption |
|
num_bytes: 22992276.0 |
|
num_examples: 500 |
|
- name: CS_Caption2Figure |
|
num_bytes: 122043099.0 |
|
num_examples: 500 |
|
- name: General_Figure2Caption |
|
num_bytes: 290333873.0 |
|
num_examples: 500 |
|
- name: General_Caption2Figure |
|
num_bytes: 1475930020.0 |
|
num_examples: 500 |
|
download_size: 926209658 |
|
dataset_size: 1911299268.0 |
|
--- |
|
|
|
# SciFIBench |
|
## Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie |
|
## NeurIPS 2024 |
|
|
|
[](https://hub.opencompass.org.cn/dataset-detail/SciFIBench) |
|
|
|
Note: This repo has been updated to add two splits ('General_Figure2Caption' and 'General_Caption2Figure') with an additional 1000 questions. The original version splits are preserved and have been renamed as follows: 'Figure2Caption' -> 'CS_Figure2Caption' and 'Caption2Figure' -> 'CS_Caption2Figure'. |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [SciFIBench](https://scifibench.github.io/) |
|
- **Paper:** [SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation](https://arxiv.org/pdf/2405.08807) |
|
- **Repository:** [SciFIBench](https://github.com/jonathan-roberts1/SciFIBench)
|
### Dataset Summary |
|
SciFIBench (Scientific Figure Interpretation Benchmark) contains 2,000 multiple-choice scientific figure interpretation questions covering two tasks. Task 1 (Figure -> Caption) involves selecting the most appropriate caption for a given figure; Task 2 (Caption -> Figure) involves the opposite: selecting the most appropriate figure for a given caption. The benchmark was curated from the SciCap and ArxivCap datasets, using adversarial filtering to obtain hard negatives. Each question was human-verified to ensure it is high quality and answerable.
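
Each question pairs a text prompt with lettered options, so a model prompt can be assembled directly from the dataset fields. The `build_prompt` helper below is an illustrative sketch (not part of the dataset or the paper's evaluation code) showing one way to format a question:

```python
# Hypothetical prompt construction for one SciFIBench question.
# `build_prompt` is an illustrative helper, not an official API.
def build_prompt(example: dict) -> str:
    """Join the question and lettered options into a single text prompt."""
    options = "\n".join(example["Options"])
    return f"{example['Question']}\n{options}\nAnswer with the letter of the correct option."

# toy example mirroring the dataset's fields
example = {
    "Question": "Which caption best matches the image?",
    "Options": ["A) ber vs snr for fft size=2048 ...", "B) ber vs snr for a bpsk modulation ."],
}
print(build_prompt(example))
```

The associated image(s) in the `Images` field would be passed to the multimodal model alongside this text.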
|
|
|
### Example Usage |
|
```python |
|
from datasets import load_dataset |
|
|
|
# load dataset |
|
dataset = load_dataset("jonathan-roberts1/SciFIBench") # optional: set cache_dir="PATH/TO/MY/CACHE/DIR" |
|
# there are 4 dataset splits, which can be indexed separately |
|
# cs_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Figure2Caption") |
|
# cs_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Caption2Figure") |
|
# general_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Figure2Caption") |
|
# general_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Caption2Figure") |
|
""" |
|
DatasetDict({ |
|
CS_Caption2Figure: Dataset({ |
|
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'], |
|
num_rows: 500 |
|
}) |
|
CS_Figure2Caption: Dataset({ |
|
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'], |
|
num_rows: 500 |
|
}) |
|
General_Caption2Figure: Dataset({ |
|
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'], |
|
num_rows: 500 |
|
}) |
|
General_Figure2Caption: Dataset({ |
|
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'], |
|
num_rows: 500 |
|
}) |
|
}) |
|
""" |
|
|
|
# select task and split |
|
cs_figure2caption_dataset = dataset['CS_Figure2Caption'] |
|
""" |
|
Dataset({ |
|
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'], |
|
num_rows: 500 |
|
}) |
|
""" |
|
|
|
# query items |
|
cs_figure2caption_dataset[40] # e.g., the 41st element |
|
""" |
|
{'ID': 40, |
|
'Question': 'Which caption best matches the image?', |
|
'Options': ['A) ber vs snr for fft size=2048 using ls , lmmse , lr-lmmse .', |
|
'B) ber vs snr for fft size=1024 using ls , lmmse , lr-lmmse algorithms .', |
|
'C) ber vs snr for fft size=512 using ls , lmmse , lr-lmmse algorithms .', |
|
'D) ber vs snr for fft size=256 using ls , lmmse , lr-lmmse algorithms with a 16 qam modulation .', |
|
'E) ber vs snr for a bpsk modulation .'], |
|
'Answer': 'D', |
|
'Category': 'other cs', |
|
'Images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=501x431>]} |
|
""" |
|
``` |
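
Since the gold label is stored as a single letter in the `Answer` field, scoring reduces to comparing a model's predicted letter against it. The snippet below is only a sketch with mocked model outputs; the paper's actual evaluation harness is in the GitHub repository:

```python
# Hypothetical accuracy computation over letter predictions.
# Model inference is mocked here; in practice each prediction would come
# from querying a multimodal model with the question, options, and image(s).
def accuracy(predictions, answers):
    """Fraction of predictions whose leading letter matches the gold 'Answer'."""
    correct = sum(
        pred.strip().upper().startswith(gold) for pred, gold in zip(predictions, answers)
    )
    return correct / len(answers)

# toy run with mocked outputs
preds = ["D", "A) ber vs snr ...", "B"]
golds = ["D", "A", "C"]
print(f"accuracy: {accuracy(preds, golds):.2f}")  # 2 of 3 correct
```

Matching on the leading letter tolerates models that echo the full option text rather than answering with the letter alone.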
|
|
|
### Source Data |
|
|
|
More information regarding the source data can be found at: https://github.com/tingyaohsu/SciCap and https://mm-arxiv.github.io/. |
|
|
|
### Dataset Curators |
|
|
|
This dataset was curated by Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie.
|
|
|
|
|
### Citation Information |
|
``` |
|
@article{roberts2024scifibench, |
|
title={SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation}, |
|
author={Roberts, Jonathan and Han, Kai and Houlsby, Neil and Albanie, Samuel}, |
|
journal={arXiv preprint arXiv:2405.08807}, |
|
year={2024} |
|
} |
|
``` |