---
annotations_creators:
- expert-generated
language:
- en
size_categories:
- 1K<n<10K
---
## Dataset Usage
To use this dataset, run the following:
```python
from datasets import load_dataset
dataset = load_dataset("wei2912/SPHERE-VLM", "counting_only-paired-distance_and_counting")
```
where the second argument to `load_dataset` is the subset of your choice (see [Dataset Structure](#dataset-structure)).
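If you prefer to discover the subsets programmatically, the configuration names can be queried from the Hub (a minimal sketch, assuming the `datasets` library can resolve this repository's configurations):
```python
from datasets import get_dataset_config_names

# List all subset (configuration) names exposed by the SPHERE-VLM repository
print(get_dataset_config_names("wei2912/SPHERE-VLM"))
```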
## Dataset Structure
The dataset is split into the following subsets:
### Single-skill
1. **Position** (`position_only`) - 357 samples
   - Egocentric: 172, Allocentric: 185
2. **Counting** (`counting_only-paired-distance_and_counting` + `counting_only-paired-position_and_counting`) - 201 samples
   - The `counting_only-paired-distance_and_counting` subset comprises questions paired with those in `distance_and_counting`, and likewise `counting_only-paired-position_and_counting` is paired with `position_and_counting` (see the sketch after this list).
   - For instance, every question in `distance_and_counting` (e.g. "How many crows are on the railing farther from the viewer?") has a corresponding question in `counting_only-paired-distance_and_counting` that counts all such instances (e.g. "How many crows are in the photo?").
3. **Distance** (`distance_only`) - 202 samples
4. **Size** (`size_only`) - 198 samples
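As a quick illustration of the pairing mentioned above, the sketch below loads a multi-skill subset together with its paired counting-only subset. It only demonstrates access to both; whether paired questions are aligned by index or by an identifier is not assumed here and should be checked against the data.
```python
from datasets import load_dataset

# Load a multi-skill subset and its paired counting-only subset
multi = load_dataset("wei2912/SPHERE-VLM", "distance_and_counting")
paired = load_dataset("wei2912/SPHERE-VLM", "counting_only-paired-distance_and_counting")

# Split names are not assumed; take the first available split from each DatasetDict
multi_split = next(iter(multi.values()))
paired_split = next(iter(paired.values()))

# Print one question from each subset; how paired questions are matched
# (by index or by an ID) should be verified against the data itself.
print(multi_split[0]["question"])
print(paired_split[0]["question"])
```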
### Multi-skill
1. **Position + Counting** (`position_and_counting`) - 169 samples
   - Egocentric: 64, Allocentric: 105
2. **Distance + Counting** (`distance_and_counting`) - 158 samples
3. **Distance + Size** (`distance_and_size`) - 199 samples
### Reasoning
1. **Object occlusion** (`object_occlusion`) - 402 samples
   - Intermediate: 202, Final: 200
   - The `object_occlusion_w_intermediate` subset contains the final questions, each prefixed with its intermediate question and answer in the following format (a minimal sketch of this construction is given after this list):
     > "Given that for the question: \<intermediate question\> The answer is: \<intermediate answer\>. \<final question\> Answer the question directly."
   - For instance, given the two questions "Which object is thicker?" (intermediate) and "Where can a child be hiding?" (final) in `object_occlusion`, the corresponding question in `object_occlusion_w_intermediate` is:
     > "Given that for the question: Which object is thicker? Fire hydrant or tree trunk? The answer is: Tree trunk. Where can a child be hiding? Behind the fire hydrant or behind the tree? Answer the question directly."
2. **Object manipulation** (`object_manipulation`) - 399 samples
   - Intermediate: 199, Final: 200
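The prompt construction used for the `*_w_intermediate` subsets can be reproduced with a short helper. This is a minimal sketch based on the format shown above; `build_intermediate_prompt` is a hypothetical name and not part of the dataset tooling.
```python
def build_intermediate_prompt(intermediate_q: str, intermediate_a: str, final_q: str) -> str:
    # Hypothetical helper mirroring the *_w_intermediate question format described above
    return (
        f"Given that for the question: {intermediate_q} "
        f"The answer is: {intermediate_a}. "
        f"{final_q} Answer the question directly."
    )

print(build_intermediate_prompt(
    "Which object is thicker? Fire hydrant or tree trunk?",
    "Tree trunk",
    "Where can a child be hiding? Behind the fire hydrant or behind the tree?",
))
```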
## Data Fields
The data fields are as follows:
- `question_id`: A unique ID for the question.
- `question`: Question to be passed to the VLM.
- `option`: A list of options that the VLM can select from. For counting tasks, this field is left as null.
- `answer`: The expected answer, which must be either one of the strings in `option` (for non-counting tasks) or a number (for counting tasks).
- `metadata`:
- `viewpoint`: Either "allo" (allocentric) or "ego" (egocentric).
- `format`: Expected format of the answer, e.g. "bool" (boolean), "name", "num" (numeric), "pos" (position).
- `source_dataset`: Currently, this is "coco_test2017" ([MS COCO-2017](https://cocodataset.org)) for our entire set of annotations.
- `source_img_id`: Source image ID in [MS COCO-2017](https://cocodataset.org).
- `skill`: For reasoning tasks, a list of skills tested by the question, e.g. "count", "dist" (distance), "pos" (position), "shape", "size", "vis" (visual).
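The fields can be inspected directly after loading a subset. A minimal sketch (the subset chosen here is arbitrary; it assumes `metadata` is stored as a nested mapping, as described above):
```python
from datasets import load_dataset

dataset = load_dataset("wei2912/SPHERE-VLM", "distance_only")
split = next(iter(dataset.values()))  # take the first available split
sample = split[0]

print(sample["question_id"])
print(sample["question"])
print(sample["option"])                 # list of options; null for counting subsets
print(sample["answer"])
print(sample["metadata"]["viewpoint"])  # "allo" or "ego"
print(sample["metadata"]["format"])     # e.g. "bool", "name", "num", "pos"
```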
## Dataset Preparation
This version of the dataset was prepared by combining the [JSON annotations](https://github.com/zwenyu/SPHERE-VLM/tree/main/eval_datasets/coco_test2017_annotations) with the corresponding images from [MS COCO-2017](https://cocodataset.org).
The script used, `prepare_parquet.py`, can be found in [our GitHub repository](https://github.com/zwenyu/SPHERE-VLM) and should be executed from the repository root.
## Licensing Information
Please note that the images are subject to the [Terms of Use of MS COCO-2017](https://cocodataset.org/#termsofuse):
> **Images**
>
> The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
## BibTeX
```
@article{zhang2025sphere,
  title={SPHERE: Unveiling Spatial Blind Spots in Vision-Language Models Through Hierarchical Evaluation},
  author={Zhang, Wenyu and Ng, Wei En and Ma, Lixin and Wang, Yuwen and Zhao, Jungqi and Koenecke, Allison and Li, Boyang and Wang, Lu},
  journal={arXiv},
  year={2025}
}
```