---
language:
- en
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- object-detection
- image-text-to-text
pretty_name: GRAID Waymo Perception Dataset Question-Answer Dataset
tags:
- visual-reasoning
- spatial-reasoning
- object-detection
- computer-vision
- autonomous-driving
- waymo
dataset_info:
  features:
  - name: image
    dtype: image
  - name: annotations
    list:
    - name: area
      dtype: float64
    - name: bbox
      list: float64
    - name: category
      dtype: string
    - name: category_id
      dtype: int64
    - name: iscrowd
      dtype: int64
    - name: score
      dtype: float64
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: question_type
    dtype: string
  - name: source_id
    dtype: string
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 81528663495.875
    num_examples: 11049
  - name: val
    num_bytes: 20704777688.25
    num_examples: 2806
  download_size: 4995835521
  dataset_size: 102233441184.125
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
---

# GRAID Waymo Perception Dataset Question-Answer Dataset

[Paper](https://huggingface.co/papers/2510.22118) | [Project Page](https://ke7.github.io/graid/) | [Code](https://github.com/kd7-ml/graid)

## Overview

This dataset was generated using **GRAID** (**G**enerating **R**easoning questions from **A**nalysis of **I**mages via **D**iscriminative artificial intelligence), a framework for creating spatial reasoning datasets from object detection annotations.

**GRAID** transforms raw object detection data into structured question-answer pairs that test various aspects of object localization, visual reasoning, spatial reasoning, and object relationship comprehension.
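Because every question is derived from 2D bounding boxes rather than from 3D reconstruction or captions, the generation logic reduces to simple geometric predicates over box coordinates. The snippet below is a minimal illustrative sketch of this idea, not the actual GRAID implementation; the category names and box values are made up:

```python
# Illustrative sketch only -- not the actual GRAID implementation.
# Shows how a qualitative spatial question can be answered from
# 2D bounding boxes alone (COCO-style [x, y, width, height]).

def left_of(box_a, box_b):
    """True if box_a lies entirely to the left of box_b."""
    ax, _, aw, _ = box_a
    bx = box_b[0]
    return ax + aw < bx

# Made-up annotations for two object categories.
annotations = [
    {"category": "pedestrian", "bbox": [40, 120, 30, 80]},
    {"category": "vehicle", "bbox": [400, 150, 180, 90]},
]

peds = [a["bbox"] for a in annotations if a["category"] == "pedestrian"]
cars = [a["bbox"] for a in annotations if a["category"] == "vehicle"]

question = "Is there at least one pedestrian to the left of any vehicle?"
answer = "Yes" if any(left_of(p, c) for p in peds for c in cars) else "No"
print(question, answer)  # -> Yes
```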

## Abstract
Vision Language Models (VLMs) achieve strong performance on many vision-language tasks but often struggle with spatial reasoning, a prerequisite for many applications. Empirically, we find that a dataset produced by a current training data generation pipeline has a 57.6% human validation rate. Such low rates stem from current limitations: single-image 3D reconstruction introduces cascading modeling errors and requires wide answer tolerances, while caption-based methods require hyper-detailed annotations and suffer from generative hallucinations. We present GRAID, built on the key insight that qualitative spatial relationships can be reliably determined from 2D geometric primitives alone. By operating exclusively on 2D bounding boxes from standard object detectors, GRAID avoids both 3D reconstruction errors and generative hallucinations, resulting in datasets of higher quality than those produced by existing tools, as validated by human evaluations. We apply our framework to the BDD100k, NuImages, and Waymo datasets, generating over 8.5 million high-quality VQA pairs spanning spatial relations, counting, ranking, and size comparisons. We evaluate one of the datasets and find it achieves 91.16% human-validated accuracy, compared to 57.6% for a dataset generated by recent work. Critically, we demonstrate that when trained on GRAID data, models learn spatial reasoning concepts that generalize: models fine-tuned on 6 question types improve on over 10 held-out types, with accuracy gains of 47.5% on BDD and 37.9% on NuImages for Llama 3.2 11B, and when trained on all question types, achieve improvements on several existing benchmarks such as BLINK. The GRAID framework, datasets, and additional information can be found on our [project page](https://ke7.github.io/graid/).

## Dataset Details

- **Total QA Pairs**: 13,855
- **Source Dataset**: Waymo Perception Dataset
- **Generation Date**: 2025-09-23
- **Image Format**: Embedded in parquet files (no separate image files)
- **Question Types**: 17 different reasoning patterns

## Dataset Splits

- **train**: 11,049 (79.75%)
- **val**: 2,806 (20.25%)

## Question Type Distribution

- **Are there {target} or more {object_1}(s) in this image? Respond Yes/No.**: 3,080 (22.23%)
- **Are there less than {target} {object_1}(s) in this image? Respond Yes/No.**: 3,080 (22.23%)
- **How many {object_1}(s) are there in this image?**: 1,540 (11.12%)
- **How many {object_1}(s) are in the image? Choose one: A) {range_a}, B) {range_b}, C) {range_c}, D) Unsure / Not Visible. Respond with the letter only.**: 1,186 (8.56%)
- **Is there at least one {object_1} to the left of any {object_2}?**: 1,074 (7.75%)
- **Is there at least one {object_1} to the right of any {object_2}?**: 1,074 (7.75%)
- **Rank the {k} kinds of objects that appear the largest (by pixel area) in the image from largest to smallest. Provide your answer as a comma-separated list of object names only.**: 531 (3.83%)
- **What kind of object appears the most frequently in the image?**: 497 (3.59%)
- **What kind of object appears the least frequently in the image?**: 497 (3.59%)
- **Are there more {object_1}(s) than {object_2}(s) in this image?**: 497 (3.59%)
- **If you were to draw a tight box around each object in the image, which type of object would have the biggest box?**: 423 (3.05%)
- **Divide the image into a grid of {N} rows x {M} columns. Number the cells from left to right, then top to bottom, starting with 1. In what cell does the {object_1} appear?**: 181 (1.31%)
- **Divide the image into thirds. In which third does the {object_1} primarily appear? Respond with the letter only: A) left third, B) middle third, C) right third.**: 104 (0.75%)
- **What is the leftmost object in the image?**: 38 (0.27%)
- **What is the rightmost object in the image?**: 37 (0.27%)
- **Does the rightmost object in the image appear to be wider than it is tall?**: 9 (0.06%)
- **Does the leftmost object in the image appear to be wider than it is tall?**: 7 (0.05%)
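The `question_type` field stored with each example makes it easy to reproduce a tally like the one above or to restrict training to a subset of question types. A minimal sketch follows; the exact `question_type` string values follow the schema further below (short names such as "HowMany"), and the filter condition is illustrative:

```python
from collections import Counter
from datasets import load_dataset

# Sketch: tally and filter examples by question type.
# Field names follow the dataset schema; the filter below is illustrative.
train = load_dataset("kd7/graid-waymo-unique", split="train")

for qtype, count in Counter(train["question_type"]).most_common():
    print(f"{count:6d}  {qtype}")

# Keep only counting-style questions, matching on the question text itself.
counting_only = train.filter(lambda ex: ex["question"].startswith("How many"))
```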

## Performance Analysis

### Question Processing Efficiency

| Question Type | is_applicable Avg (ms) | apply Avg (ms) | Predicate -> QA Hit Rate | Empty cases |
|---------------|------------------------|----------------|--------------------------|-------------|
| Divide the image into thirds. In which third does the {object_1} primarily appear? Respond with the letter only: A) left third, B) middle third, C) right third. | 0.03 | 0.32 | 66.2% | 53 |
| Divide the image into a grid of {N} rows x {M} columns. Number the cells from left to right, then top to bottom, starting with 1. In what cell does the {object_1} appear? | 0.01 | 1.82 | 28.8% | 447 |
| If you were to draw a tight box around each object in the image, which type of object would have the biggest box? | 0.02 | 18.40 | 78.3% | 117 |
| Rank the {k} kinds of objects that appear the largest (by pixel area) in the image from largest to smallest. Provide your answer as a comma-separated list of object names only. | 0.02 | 15.74 | 98.3% | 9 |
| What kind of object appears the most frequently in the image? | 0.01 | 0.02 | 92.0% | 43 |
| What kind of object appears the least frequently in the image? | 0.01 | 0.02 | 92.0% | 43 |
| Is there at least one {object_1} to the left of any {object_2}? | 1.60 | 9.09 | 100.0% | 0 |
| Is there at least one {object_1} to the right of any {object_2}? | 1.43 | 8.01 | 100.0% | 0 |
| What is the leftmost object in the image? | 0.02 | 1.47 | 24.2% | 119 |
| What is the rightmost object in the image? | 0.01 | 1.36 | 23.6% | 120 |
| How many {object_1}(s) are there in this image? | 0.01 | 0.02 | 100.0% | 0 |
| Are there more {object_1}(s) than {object_2}(s) in this image? | 0.01 | 0.02 | 92.0% | 43 |
| What appears the most in this image: {object_1}s, {object_2}s, or {object_3}s? | 0.01 | 0.02 | 0.0% | 540 |
| Does the leftmost object in the image appear to be wider than it is tall? | 0.01 | 0.50 | 4.5% | 150 |
| Does the rightmost object in the image appear to be wider than it is tall? | 0.01 | 0.55 | 5.7% | 148 |
| Are there {target} or more {object_1}(s) in this image? Respond Yes/No. | 0.01 | 0.02 | 100.0% | 0 |
| Are there less than {target} {object_1}(s) in this image? Respond Yes/No. | 0.01 | 0.02 | 100.0% | 0 |
| How many {object_1}(s) are in the image? Choose one: A) {range_a}, B) {range_b}, C) {range_c}, D) Unsure / Not Visible. Respond with the letter only. | 0.01 | 0.13 | 88.8% | 112 |

**Notes:**
- `is_applicable` checks if a question type can be applied to an image
- `apply` generates the actual question-answer pairs
- Predicate -> QA Hit Rate = Percentage of applicable cases that generated at least one QA pair
- Empty cases = Number of times is_applicable=True but apply returned no QA pairs
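
The two-stage pattern measured above can be pictured as follows. This is an assumed interface written for illustration only, not the actual GRAID API: each question type first checks whether it applies to an image's annotations (`is_applicable`), then generates zero or more QA pairs (`apply`):

```python
# Assumed interface for illustration only -- not the actual GRAID API.
# A question type first checks applicability, then emits zero or more QA pairs.

class CountingQuestion:
    def is_applicable(self, annotations):
        # Applicable whenever at least one object is annotated.
        return len(annotations) > 0

    def apply(self, annotations):
        # One QA pair per object category present in the image.
        pairs = []
        for category in sorted({a["category"] for a in annotations}):
            count = sum(1 for a in annotations if a["category"] == category)
            pairs.append((f"How many {category}(s) are there in this image?", str(count)))
        # An empty list here is what the table counts as an "empty case".
        return pairs

annotations = [{"category": "vehicle"}, {"category": "vehicle"}, {"category": "cyclist"}]
q = CountingQuestion()
if q.is_applicable(annotations):
    for question, answer in q.apply(annotations):
        print(question, "->", answer)
```
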
## Usage

```python
from datasets import load_dataset

# Load the complete dataset
dataset = load_dataset("kd7/graid-waymo-unique")

# Access individual splits
train_data = dataset["train"]
val_data = dataset["val"]

# Example of accessing a sample
sample = dataset["train"][0] # or "val"
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")  
print(f"Question Type: {sample['question_type']}")

# The image is embedded as a PIL Image object
image = sample["image"]
image.show() # Display the image
```

## Dataset Schema

- **image**: PIL Image object (embedded in the parquet files; no separate image files)
- **annotations**: COCO-style bounding box annotations (bbox, category, category_id, area, iscrowd, score)
- **question**: Generated question text
- **answer**: Corresponding answer text
- **question_type**: Type of question (e.g., "HowMany", "LeftOf", "Quadrants")
- **source_id**: Original image identifier from the Waymo Perception Dataset
- **id**: Integer identifier for the example
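
Since the boxes are stored in COCO style, they can be overlaid on a sample image directly. A minimal sketch, assuming `bbox` is `[x, y, width, height]` (the standard COCO convention):

```python
from datasets import load_dataset
from PIL import ImageDraw

# Sketch: overlay the COCO-style boxes on one validation image.
# Assumes bbox is [x, y, width, height], the standard COCO convention.
dataset = load_dataset("kd7/graid-waymo-unique", split="val")
sample = dataset[0]

image = sample["image"].copy()
draw = ImageDraw.Draw(image)
for ann in sample["annotations"]:
    x, y, w, h = ann["bbox"]
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    draw.text((x, max(0, y - 12)), ann["category"], fill="red")

image.save("sample_with_boxes.png")
```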

## License

This generated dataset is licensed under **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)**, which permits free use for non-commercial purposes including academic research and education.

**Commercial Use Policy**: Commercial entities (including startups and companies) that wish to use this dataset for commercial purposes must obtain a paid license from **MESH**. The CC BY-NC license prohibits commercial use without explicit permission.

To request a commercial license, please contact **Karim Elmaaroufi**.

**Original Source Compliance**: The original source datasets and their licenses still apply to the underlying images and annotations. You must comply with both the CC BY-NC terms and the source dataset terms:

This dataset is derived from the Waymo Perception Dataset. Please refer to the [Waymo Perception Dataset license terms](https://waymo.com/open/terms/) for usage restrictions.

## Citation

If you use this dataset in your research, please cite both the original dataset and the GRAID framework:

```bibtex
@article{elmaaroufi2025graid,
  title={GRAID: Enhancing Spatial Reasoning of VLMs Through High-Fidelity Data Generation},
  author={Elmaaroufi, Karim and Zheng, Jonathan and Xu, Haohan and Pan, Yang and Kim, Younghyun and Choi, Joshua and Amsalem, Yoav and Ma, Jianxin and Xu, Minjun and Liu, Fang and Liang, Ting and Singh, Kavit and Hwu, Wen-mei and Chen, Yida},
  journal={arXiv preprint arXiv:2510.22118},
  year={2025}
}

@dataset{graid_waymo,
  title={GRAID Waymo Perception Dataset Question-Answer Dataset},
  author={GRAID Framework},
  year={2025},
  note={Generated using GRAID: Generating Reasoning questions from Analysis of Images via Discriminative artificial intelligence}
}

@inproceedings{waymo,
  title={Scalability in Perception for Autonomous Driving: Waymo Open Dataset},
  author={Sun, Pei and Kretzschmar, Henrik and Dotiwalla, Xerxes and Chouard, Aurelien and Patnaik, Vijaysai and Tsui, Paul and Guo, James and Zhou, Yin and Chai, Yuning and Caine, Benjamin and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2446--2454},
  year={2020}
}
```

## Contact

For questions about this dataset or the GRAID framework, please open an issue in the repository.