RyanWW committed
Commit bc5623d · 2 Parent(s): 2309075 132bdab

update readme

Files changed (1)
  1. README.md +86 -39
README.md CHANGED
@@ -1,22 +1,20 @@
  ---
  license: apache-2.0
  task_categories:
- - visual-question-answering
  language:
- - en
  tags:
- - spatial-reasoning
- - multimodal
  pretty_name: Spatial457
  size_categories:
- - 10K<n<100K
  ---
- <!-- # Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models -->

- <p align="center">
- <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/icon.png" width="48" alt="icon"/>
- <strong><span style="font-size: 28px;">&nbsp;Spatial457</span></strong>
- </p>

  <h1 align="center">
  <a href="https://arxiv.org/abs/2502.08636">
@@ -24,63 +22,111 @@ size_categories:
  </a>
  </h1>

-
  <p align="center">
- <a href=".">Xingrui Wang</a><sup>1</sup>,
- <a href=".">Wufei Ma</a><sup>1</sup>,
- <a href=".">Tiezheng Zhang</a><sup>1</sup>,
- <a href=".">Celso M de Melo</a><sup>2</sup>,
- <a href=".">Jieneng Chen</a><sup>1</sup>,
- <a href=".">Alan Yuille</a><sup>1</sup>
  </p>

  <p align="center">
- <sup>1</sup> Johns Hopkins University &nbsp;&nbsp;
  <sup>2</sup> DEVCOM Army Research Laboratory
  </p>

-
  <p align="center">
- <a href="https://xingruiwang.github.io/projects/Spatial457/">Project Page</a> /
- <a href="https://arxiv.org/abs/2502.08636">Paper</a> /
- <a href="https://huggingface.co/datasets/RyanWW/Spatial457">🤗 Huggingface</a> /
- <a href="https://github.com/XingruiWang/Spatial457">Code</a>
  </p>

  <p align="center">
- <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/teaser.png" alt="Spatial457 Teaser" width="80%">
  </p>

- <!-- <p align="center"><i>
- Official implementation of the CVPR 2025 (Highlight) paper:
- <strong>Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models</strong>
- </i></p> -->
-
  ---

  ## 🧠 Introduction

- Spatial457 is a diagnostic benchmark designed to evaluate the 6D spatial reasoning capabilities of large multimodal models (LMMs). It systematically introduces four key capabilities—multi-object understanding, 2D and 3D localization, and 3D orientation—across five difficulty levels and seven question types, progressing from basic recognition to complex physical interaction.

  ---

- ## 📦 Download

- You can access the full dataset and evaluation toolkit:

- - **Dataset**: [Hugging Face](https://huggingface.co/datasets/RyanWW/Spatial457)
- - **Code**: [GitHub Repository](https://github.com/XingruiWang/Spatial457)
- - **Paper**: [arXiv 2502.08636](https://arxiv.org/abs/2502.08636)

  ---

- ## 📊 Benchmark

- We benchmarked a wide range of state-of-the-art models—including GPT-4o, Gemini, Claude, and several open-source LMMs—on all subsets. Performance consistently drops as task difficulty increases. PO3D-VQA and humans remain most robust across all levels.

- The table below summarizes model performance across 7 subsets:

- <!-- Include table image or markdown table here if needed -->

  ---

@@ -94,5 +140,6 @@ The table below summarizes model performance across 7 subsets:
  year = {2025},
  url = {https://arxiv.org/abs/2502.08636}
  }

  ---
  license: apache-2.0
  task_categories:
+ - visual-question-answering
  language:
+ - en
  tags:
+ - spatial-reasoning
+ - multimodal
  pretty_name: Spatial457
  size_categories:
+ - 10K<n<100K
  ---

+ <div align="center">
+ <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/icon_name.png" alt="Spatial457 Logo" width="240"/>
+ </div>

  <h1 align="center">
  <a href="https://arxiv.org/abs/2502.08636">

  </a>
  </h1>

  <p align="center">
+ <a href="https://xingruiwang.github.io/">Xingrui Wang</a><sup>1</sup>,
+ <a href="#">Wufei Ma</a><sup>1</sup>,
+ <a href="#">Tiezheng Zhang</a><sup>1</sup>,
+ <a href="#">Celso M. de Melo</a><sup>2</sup>,
+ <a href="#">Jieneng Chen</a><sup>1</sup>,
+ <a href="#">Alan Yuille</a><sup>1</sup>
  </p>

  <p align="center">
+ <sup>1</sup> Johns Hopkins University &nbsp;&nbsp;&nbsp;&nbsp;
  <sup>2</sup> DEVCOM Army Research Laboratory
  </p>

  <p align="center">
+ <a href="https://xingruiwang.github.io/projects/Spatial457/">🌐 Project Page</a>
+ <a href="https://arxiv.org/abs/2502.08636">📄 Paper</a>
+ <a href="https://huggingface.co/datasets/RyanWW/Spatial457">🤗 Dataset</a>
+ <a href="https://github.com/XingruiWang/Spatial457">💻 Code</a>
  </p>

  <p align="center">
+ <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/teaser.png" alt="Spatial457 Teaser" width="80%"/>
  </p>

  ---

  ## 🧠 Introduction

+ **Spatial457** is a diagnostic benchmark designed to evaluate **6D spatial reasoning** in large multimodal models (LMMs). It systematically introduces four core spatial capabilities:
+
+ - 🧱 Multi-object understanding
+ - 🧭 2D spatial localization
+ - 📦 3D spatial localization
+ - 🔄 3D orientation estimation
+
+ These are assessed across **five difficulty levels** and **seven diverse question types**, ranging from simple object queries to complex reasoning about physical interactions.

  ---

+ ## 📂 Dataset Structure
+
+ The dataset is organized as follows:
+
+ ```
+ Spatial457/
+ ├── images/ # RGB images used in VQA tasks
+ ├── questions/ # JSONs for each subtask
+ │ ├── L1_single.json
+ │ ├── L2_objects.json
+ │ ├── L3_2d_spatial.json
+ │ ├── L4_occ.json
+ │ └── ...
+ ├── Spatial457.py # Hugging Face dataset loader script
+ ├── README.md # Documentation
+ ```

+ Each JSON file contains a list of VQA examples, where each item includes:

+ - "image_filename": image file name used in the question
+ - "question": natural language question
+ - "answer": boolean, string, or number
+ - "program": symbolic program (optional)
+ - "question_index": unique identifier
+
+ This modular structure supports scalable multi-task evaluation across levels and reasoning types.
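+
+ For illustration only, here is a minimal sketch of reading one of these question files directly with Python's standard `json` module. The local path `Spatial457/questions/L1_single.json` and the flat-list layout are assumptions based on the structure described above, not part of the loader script:
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Assumed local copy of the dataset, laid out as in the tree above.
+ questions_path = Path("Spatial457") / "questions" / "L1_single.json"
+
+ with open(questions_path) as f:
+     data = json.load(f)
+
+ # The README describes each file as a list of VQA examples; if a release
+ # nests the list under a key instead, adjust accordingly.
+ examples = data if isinstance(data, list) else data.get("questions", [])
+
+ for ex in examples[:3]:
+     print(ex["question_index"], ex["image_filename"])
+     print("Q:", ex["question"], "| A:", ex["answer"])
+ ```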
 

  ---

+ ## 🛠️ Dataset Usage
+
+ You can load the dataset directly using the Hugging Face 🤗 `datasets` library:
+
+ ### 🔹 Load a specific subtask (e.g., L1_single)
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("RyanWW/Spatial457", name="L1_single", split="train")
+ ```
+
+ Each example is a dictionary like:
+
+ ```python
+ {
+ 'image': <PIL.Image.Image>,
+ 'image_filename': 'superCLEVR_new_000001.png',
+ 'question': 'Is the large red object in front of the yellow car?',
+ 'answer': 'True',
+ 'program': [...],
+ 'question_index': 100001
+ }
+ ```
+
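+ The official evaluation toolkit is provided in the GitHub repository. As a rough, unofficial sketch, a simple exact-match check against the `answer` field could look like the following, where `my_model_answer` is a placeholder for your own model's inference call:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("RyanWW/Spatial457", name="L1_single", split="train")
+
+ def my_model_answer(image, question):
+     # Placeholder: run your LMM here and return its answer as a string.
+     return "True"
+
+ correct = 0
+ for ex in dataset:
+     pred = my_model_answer(ex["image"], ex["question"])
+     correct += int(str(pred).strip().lower() == str(ex["answer"]).strip().lower())
+
+ print(f"Exact-match accuracy: {correct / len(dataset):.3f}")
+ ```
+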
+ ### 🔹 Other available configurations

+ ```python
+ [
+ "L1_single", "L2_objects", "L3_2d_spatial",
+ "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision"
+ ]
+ ```

+ You can swap `name="..."` in `load_dataset(...)` to evaluate different spatial reasoning capabilities.
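+
+ For instance, a small loop over all seven configurations (an illustrative sketch, not part of the dataset scripts) might look like:
+
+ ```python
+ from datasets import load_dataset
+
+ configs = [
+     "L1_single", "L2_objects", "L3_2d_spatial",
+     "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision",
+ ]
+
+ for name in configs:
+     ds = load_dataset("RyanWW/Spatial457", name=name, split="train")
+     print(f"{name}: {len(ds)} questions")
+ ```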
 
 

  ---

  year = {2025},
  url = {https://arxiv.org/abs/2502.08636}
  }
+ ```