Update dataset card: task category and license
This PR updates the dataset card for the MARBLE benchmark to improve its discoverability and accuracy:
- The `task_categories` metadata is updated from `image-to-text` to `image-text-to-text`. This better reflects the dataset's nature: multimodal inputs (images and text questions) for reasoning and planning tasks, with text outputs.
- The `license` metadata is corrected from `apache-2.0` to `cc-by-nc-4.0`, aligning the card with the project's actual license.

The paper link remains the arXiv link, per the guidelines for cases where an arXiv link is already present.
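For reference, `image-text-to-text` describes exactly how the benchmark is consumed: each example pairs images with a text question and expects a text answer. Below is a minimal sketch of loading the data with the `datasets` library, using the repo id from the card's Dataset link and the `cube` config name visible in the `configs` frontmatter of the diff (other config names are not shown in this diff):

```python
from datasets import load_dataset

# "cube" is the only config name visible in this diff; M-Portal presumably
# has its own config, but its name is an assumption, so it is not used here.
ds = load_dataset("mrble/MARBLE", "cube")
print(ds)  # shows the splits and the image/text features behind image-text-to-text
```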
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
-license: apache-2.0
-task_categories:
-- image-to-text
 language:
 - en
+license: cc-by-nc-4.0
+size_categories:
+- 1K<n<10K
+task_categories:
+- image-text-to-text
 tags:
 - multimodality
 - reasoning
-size_categories:
-- 1K<n<10K
 configs:
 - config_name: cube
   data_files:
@@ -118,13 +118,12 @@ dataset_info:
 dataset_size: 2468355
 ---
 
-
 # MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning
 
 [**Homepage**](https://marble-benchmark.github.io) | [**Paper**](https://arxiv.org/abs/2506.22992) | [**🤗 Dataset**](https://huggingface.co/datasets/mrble/MARBLE) | [**Code**](https://github.com/eth-medical-ai-lab/multimodal-reasoning-bench)
 
 ## Introduction
-MARBLE is a challenging multimodal reasoning benchmark designed to scrutinize multimodal language models (MLLMs) in their ability to carefully reason step-by-step through complex multimodal problems and environments. MARBLE is composed of two highly challenging tasks, M-Portal and M-Cube, that require the crafting and understanding of multistep plans leveraging spatial, visual, and physical constraints. We find that current MLLMs perform poorly on MARBLE—all the 12 advanced models obtain near-random performance on M-Portal and 0…
+MARBLE is a challenging multimodal reasoning benchmark designed to scrutinize multimodal language models (MLLMs) in their ability to carefully reason step-by-step through complex multimodal problems and environments. MARBLE is composed of two highly challenging tasks, M-Portal and M-Cube, that require the crafting and understanding of multistep plans leveraging spatial, visual, and physical constraints. We find that current MLLMs perform poorly on MARBLE—all the 12 advanced models obtain near-random performance on M-Portal and 0% accuracy on M-Cube. Only in simplified subtasks some models outperform the random baseline, indicating that complex reasoning is still a challenge for existing MLLMs. Moreover, we show that perception remains a bottleneck, where MLLMs occasionally fail to extract information from the visual inputs. By shedding a light on the limitations of MLLMs, we hope that MARBLE will spur the development of the next generation of models with the ability to reason and plan across many, multimodal reasoning steps.
 
 
 
@@ -150,40 +149,38 @@ Please refer to [**Code**](https://github.com/eth-medical-ai-lab/multimodal
 
 ## Overall Results
 Performance on M-PORTAL:
-| Model …
-| …
-| GPT-o3 …
-| Gemini-2.5-pro …
-| DeepSeek-R1-0528\* | 0.0 …
-| Claude-3.7-Sonnet …
-| DeepSeek-R1\* …
-| Seed1.5-VL …
-| GPT-o4-mini …
-| GPT-4o …
-| Llama-4-Scout …
-| Qwen2.5-VL-72B …
-| InternVL3-78B …
-| Qwen3-235B-A22B\* …
-| *Random* …
+| Model | Plan-correctness (F1 %) | Fill-the-blanks (Acc %) |
+| --- | --- | --- |
+| GPT-o3 | 6.6 | 17.6 |
+| Gemini-2.5-pro | 4.7 | 16.1 |
+| DeepSeek-R1-0528\* | 0.0 | 8.4 |
+| Claude-3.7-Sonnet | 6.3 | 6.8 |
+| DeepSeek-R1\* | 6.1 | 5.5 |
+| Seed1.5-VL | 7.6 | 3.5 |
+| GPT-o4-mini | 0.0 | 3.1 |
+| GPT-4o | 6.5 | 0.4 |
+| Llama-4-Scout | 6.5 | 0.2 |
+| Qwen2.5-VL-72B | 6.6 | 0.2 |
+| InternVL3-78B | 6.4 | 0.0 |
+| Qwen3-235B-A22B\* | 0.0 | 0.0 |
+| *Random* | *6.1* | *3e-3* |
 
 Performance on M-CUBE:
-| Model …
-| …
-| GPT-o3 …
-| GPT-o4-mini …
-| DeepSeek-R1\* …
-| Gemini-2.5-pro …
-| DeepSeek-R1-0528\* | 0.0 …
-| Claude-3.7-Sonnet …
-| InternVL3-78B …
-| Seed1.5-VL …
-| GPT-4o …
-| Qwen2.5-VL-72B …
-| Llama-4-Scout …
-| Qwen3-235B-A22B\* …
-| *Random* …
-
-
+| Model | CUBE (Acc %) | CUBE-easy (Acc %) |
+| --- | --- | --- |
+| GPT-o3 | 0.0 | 72.0 |
+| GPT-o4-mini | 0.0 | 16.0 |
+| DeepSeek-R1\* | 0.0 | 14.0 |
+| Gemini-2.5-pro | 0.0 | 11.0 |
+| DeepSeek-R1-0528\* | 0.0 | 8.0 |
+| Claude-3.7-Sonnet | 0.0 | 7.4 |
+| InternVL3-78B | 0.0 | 2.8 |
+| Seed1.5-VL | 0.0 | 2.0 |
+| GPT-4o | 0.0 | 2.0 |
+| Qwen2.5-VL-72B | 0.0 | 2.0 |
+| Llama-4-Scout | 0.0 | 1.6 |
+| Qwen3-235B-A22B\* | 0.0 | 0.3 |
+| *Random* | *1e-5* | *3.1* |
 
 ## Contact
 - Yulun Jiang: [email protected]
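Once this PR is merged, the metadata fix can be spot-checked programmatically. A small sketch using `huggingface_hub`'s `DatasetCard` API (assumes the change has landed on the default revision):

```python
from huggingface_hub import DatasetCard

# Load the dataset card and check the two YAML fields this PR touches.
card = DatasetCard.load("mrble/MARBLE")
assert card.data.license == "cc-by-nc-4.0"
assert "image-text-to-text" in (card.data.task_categories or [])
print("card metadata:", card.data.license, card.data.task_categories)
```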