---
license: cc-by-4.0
task_categories:
- image-text-to-text
language:
- en
tags:
- medical
- multimodal
- in-context-learning
- vqa
- benchmark
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image
    dtype: image
  - name: image_url
    dtype: string
  - name: problem_id
    dtype: string
  - name: order
    dtype: int64
  - name: parquet_path
    dtype: string
  - name: speciality
    dtype: string
  - name: flag_answer_format
    dtype: string
  - name: flag_image_type
    dtype: string
  - name: flag_cognitive_process
    dtype: string
  - name: flag_rarity
    dtype: string
  - name: flag_difficulty_llms
    dtype: string
  splits:
  - name: train
    num_bytes: 94510405
    num_examples: 517
  download_size: 90895608
  dataset_size: 94510405
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning

[Paper](https://arxiv.org/abs/2506.21355) | Project page | Code

## Introduction
Multimodal in-context learning (ICL) remains underexplored despite its profound potential in complex application domains such as medicine. Clinicians routinely face a long tail of tasks that they must learn to solve from only a few examples, such as reasoning over a handful of relevant prior cases or differential diagnoses. While multimodal large language models (MLLMs) have shown impressive advances in medical visual question answering (VQA) and multi-turn chat, their ability to learn multimodal tasks from context remains largely unknown.

We introduce SMMILE (Stanford Multimodal Medical In-context Learning Evaluation), the first multimodal medical ICL benchmark. A team of clinical experts curated ICL problems that scrutinize MLLMs' ability to learn multimodal tasks from context at inference time.
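To make this concrete, below is a minimal sketch of how a multimodal ICL prompt could be assembled for a chat-style MLLM: each in-context example becomes an image-plus-question user turn followed by an answer assistant turn, and the query arrives last. The message schema (`role`/`content` parts) is an illustrative assumption, not the actual SMMILE evaluation harness; see the project code for the real setup.

```python
# Illustrative sketch only -- the message schema below is an assumption,
# not the SMMILE evaluation harness.
def build_icl_messages(examples, query):
    """examples: list of (image, question, answer) triples;
    query: an (image, question) pair to be answered."""
    messages = []
    for image, question, answer in examples:
        # Each in-context example: a multimodal user turn ...
        messages.append({
            "role": "user",
            "content": [{"type": "image", "image": image},
                        {"type": "text", "text": question}],
        })
        # ... followed by the expected assistant answer.
        messages.append({
            "role": "assistant",
            "content": [{"type": "text", "text": answer}],
        })
    # The query comes last; the model must infer the task from context.
    q_image, q_question = query
    messages.append({
        "role": "user",
        "content": [{"type": "image", "image": q_image},
                    {"type": "text", "text": q_question}],
    })
    return messages
```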
## Dataset Access

The SMMILE dataset is available on the Hugging Face Hub:

```python
from datasets import load_dataset

# SMMILE
dataset = load_dataset('smmile/SMMILE', token=YOUR_HF_TOKEN)

# SMMILE++
dataset_pp = load_dataset('smmile/SMMILE-plusplus', token=YOUR_HF_TOKEN)
```
Note: you need to set your Hugging Face token as an environment variable:

```bash
export HF_TOKEN=your_token_here
```
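Each row carries a `question`, an `answer`, an `image`, and metadata such as `speciality` and the `flag_*` fields; rows sharing a `problem_id` form one ICL problem, with `order` giving each row's position within it. Below is a minimal sketch for regrouping rows into problems, assuming `HF_TOKEN` is set as above and that the highest-`order` row in each problem is the query (check the paper and code for the exact convention):

```python
import os
from collections import defaultdict

from datasets import load_dataset

# Token is read from the HF_TOKEN environment variable set above.
dataset = load_dataset('smmile/SMMILE', token=os.environ.get('HF_TOKEN'))['train']

# Regroup rows into ICL problems via problem_id, ordered by `order`.
problems = defaultdict(list)
for row in dataset:
    problems[row['problem_id']].append(row)

for problem_id, rows in problems.items():
    rows.sort(key=lambda r: r['order'])
    # Assumption: the last row is the query, the rest are in-context examples.
    *examples, query = rows
    print(f"{problem_id}: {len(examples)} example(s); query: {query['question'][:60]}")
    break  # inspect only the first problem
```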
## License
This work is licensed under a Creative Commons Attribution 4.0 International License.
## Citation
If you find our dataset useful for your research, please cite the following paper:
```bibtex
@article{rieff2025smmile,
  title={SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning},
  author={Melanie Rieff and Maya Varma and Ossian Rabow and Subathra Adithan and Julie Kim and Ken Chang and Hannah Lee and Nidhi Rohatgi and Christian Bluethgen and Mohamed S. Muneer and Jean-Benoit Delbrouck and Michael Moor},
  year={2025},
  eprint={2506.21355},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.21355},
}
```
## Acknowledgments
We thank the clinical experts who contributed to curating the benchmark dataset.