Update README.md
README.md CHANGED

@@ -54,11 +54,7 @@ configs:
# SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning

[Paper](https://huggingface.co/papers/2506.21355) | [Project page](https://smmile-benchmark.github.io) | [Code](https://github.com/eth-medical-ai-lab/smmile)
-
-<div align="center">
-<img src="https://raw.githubusercontent.com/smmile-benchmark/smmile-benchmark.github.io/main/figures/logo_final.png" alt="SMMILE Logo" width="500"/>
-</div>
-
+
## Introduction

Multimodal in-context learning (ICL) remains underexplored despite its profound potential in complex application domains such as medicine. Clinicians routinely face a long tail of tasks that they must learn to solve from only a few examples, such as considering a few relevant previous cases or a few differential diagnoses. While MLLMs have shown impressive advances in medical visual question answering (VQA) and multi-turn chatting, their ability to learn multimodal tasks from context is largely unknown.
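To make the setup concrete, below is a minimal sketch of what a multimodal ICL episode looks like in practice: a few (image, question, answer) demonstrations are interleaved into a single prompt, followed by a query image and question the model must answer on its own. The field names, file names, and chat-message layout are illustrative assumptions for exposition only, not SMMILE's actual schema or evaluation harness.

```python
# Illustrative sketch of a multimodal in-context learning (ICL) episode.
# NOTE: the Example fields, file names, and message layout are assumptions
# for demonstration; they are not SMMILE's actual data schema.

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    image_path: str   # path or URL to a medical image
    question: str     # expert-written question about the image
    answer: str       # reference answer (known only for demonstrations)


def build_icl_messages(demos: List[Example], query: Example) -> List[dict]:
    """Interleave few-shot demonstrations and a final query into chat messages."""
    messages = [{"role": "system",
                 "content": "You are a careful medical assistant. Answer concisely."}]
    for demo in demos:
        # Each demonstration is a complete (image, question, answer) turn.
        messages.append({"role": "user",
                         "content": [{"type": "image", "path": demo.image_path},
                                     {"type": "text", "text": demo.question}]})
        messages.append({"role": "assistant", "content": demo.answer})
    # The query repeats the same pattern but leaves the answer to the model.
    messages.append({"role": "user",
                     "content": [{"type": "image", "path": query.image_path},
                                 {"type": "text", "text": query.question}]})
    return messages


if __name__ == "__main__":
    demos = [Example("case_001.png", "What imaging modality is shown?", "Chest X-ray"),
             Example("case_002.png", "What imaging modality is shown?", "Axial CT of the abdomen")]
    query = Example("case_003.png", "What imaging modality is shown?", answer="")
    for msg in build_icl_messages(demos, query):
        print(msg["role"], "->", msg["content"])
```

Keeping the demonstrations and the query in the same interleaved format is what distinguishes ICL from zero-shot VQA: the model has to pick up the task from the in-context examples rather than from fine-tuning.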