---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---

# Perceptual Constancy

**Perceptual Constancy** is a multimodal benchmark designed to evaluate high-level perceptual invariance in large vision-language models (VLMs). It probes a model’s understanding of physical and geometric stability under varying sensory appearances. This dataset is part of the [Grow AI Like a Child](https://huggingface.co/grow-ai-like-a-child) benchmark initiative.

---

## 🧠 Dataset Overview

The Perceptual Constancy dataset focuses on **appearance-invariant reasoning** using both static images and short video clips. Each question tests whether the model can generalize consistent properties across transformations such as viewpoint, color, orientation, size, or occlusion.

The dataset contains:

- **253 samples**
- **Two modalities**: `image` or `video`
- **Two question formats**: `multiple-choice` (MC) or `true/false` (TF)
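
A minimal loading sketch is shown below. The repo id `grow-ai-like-a-child/perceptual-constancy` is a placeholder (replace it with this dataset's actual path on the Hub); the `data/data.csv` path follows the folder structure described later in this card.

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Placeholder repo id -- substitute the actual path of this dataset on the Hub.
local_dir = snapshot_download(
    repo_id="grow-ai-like-a-child/perceptual-constancy",
    repo_type="dataset",
)

# data.csv holds one row per question (fields are documented below).
ds = load_dataset("csv", data_files=f"{local_dir}/data/data.csv", split="train")
print(ds.num_rows)  # expected: 253
print(ds[0])        # first question record
```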


---

## πŸ“ Dataset Format

Each sample includes:

| Field            | Description |
|------------------|-------------|
| `Index`          | Unique ID (e.g., `a0031`) |
| `Data.Type`     | Either `image` or `video` |
| `Qustion.Type`  | Either `MC` or `TF` |
| `Sec..Label`      | Integer from 1 to 3 (see section mapping below) |
| `Question`       | Natural language question with embedded options (for MC) |
| `Correct.Answer` | The correct response (e.g., `A`, `B`, `Yes`, `No`) |

### 🔒 `Sec..Label` Categories

| Label | Category         |
|-------|------------------|
| 1     | Color Constancy  |
| 2     | Size Constancy   |
| 3     | Shape Constancy  |
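
For analysis, the numeric labels can be mapped back to these category names. The sketch below assumes the CSV headers match the field names listed above (including the `Qustion.Type` spelling) and is run from the repository root:

```python
import pandas as pd

# Map Sec..Label integers to category names, per the table above.
SEC_LABELS = {1: "Color Constancy", 2: "Size Constancy", 3: "Shape Constancy"}

df = pd.read_csv("data/data.csv")
df["Category"] = df["Sec..Label"].astype(int).map(SEC_LABELS)

# Sample counts per constancy category and question format.
print(df.groupby(["Category", "Qustion.Type"]).size())
```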

---

## 📂 Folder Structure

```
data/
├── data.csv
├── images/
│   ├── *.png / *.jpg / *.avif
│   └── metadata.jsonl
├── videos/
│   ├── *.mp4 / *.gif / *.mov
│   └── metadata.jsonl
```

- The `metadata.jsonl` files store structured entries for each modality.
- `.gif` files are stored in `videos/` and marked as `media_type = video`.
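
A small sketch for reading both metadata files is shown below (run from the repository root; field names follow the example record in the next section):

```python
import json
from pathlib import Path

# Collect all media records from the per-modality metadata files.
records = []
for subdir in ("images", "videos"):
    meta_path = Path("data") / subdir / "metadata.jsonl"
    with meta_path.open(encoding="utf-8") as f:
        records.extend(json.loads(line) for line in f if line.strip())

# .gif files live under videos/ and are tagged media_type == "video".
video_records = [r for r in records if r["media_type"] == "video"]
print(f"{len(records)} records total, {len(video_records)} videos")
```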

---

## 💡 Example

```json
{
  "file_name": "a0033.JPG",
  "media_type": "image",
  "question_type": "TF",
  "sec_label": 1.0,
  "question": "In the picture, has the actual color of the bridge itself changed?",
  "correct_answer": "No"
}
```
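
A hypothetical scoring sketch against `correct_answer` follows; the prefix-matching rule is an illustrative assumption, not the benchmark's official evaluation protocol.

```python
# Hypothetical evaluation helper: the prefix-matching normalisation below is
# an illustrative assumption, not an official scoring rule for this benchmark.
def is_correct(prediction: str, answer: str) -> bool:
    # MC answers are option letters ("A", "B", ...); TF answers are "Yes"/"No".
    return prediction.strip().lower().startswith(answer.strip().lower())

predictions = {"a0033": "No, the bridge's actual color has not changed."}
gold = {"a0033": "No"}

accuracy = sum(is_correct(predictions[i], gold[i]) for i in gold) / len(gold)
print(f"accuracy = {accuracy:.2%}")
```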

---

## 📚 Citation

If you use this dataset, please cite:

```bibtex
@misc{sun2025probingperceptualconstancylarge,
      title={Probing Perceptual Constancy in Large Vision Language Models}, 
      author={Haoran Sun and Suyang Yu and Yijiang Li and Qingying Gao and Haiyun Lyu and Hokin Deng and Dezhi Luo},
      year={2025},
      eprint={2502.10273},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.10273}, 
}
```

---

## 🤝 Acknowledgments

This dataset is developed by the Grow AI Like a Child community to support structured evaluation of core cognitive abilities in vision-language models.