Update README.md (#2)
- Update README.md (d9cd3f126d3592891c81121d4630dfec0d9debcb)
Co-authored-by: Karmesh Yadav <[email protected]>
README.md
CHANGED
@@ -33,4 +33,74 @@ configs:
    path: data/train-*
  - split: validation
    path: data/validation-*
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- robotics
- embodied-ai
pretty_name: findingdory
size_categories:
- 10K<n<100K
---
<center>
<a href="https://arxiv.org/abs/2506.15635" target="_blank">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-FindingDory-red?logo=arxiv" height="20" />
</a>
<a href="https://findingdory-benchmark.github.io/" target="_blank">
<img alt="Website" src="https://img.shields.io/badge/🌎_Website-FindingDory-blue.svg" height="20" />
</a>
<a href="https://github.com/findingdory-benchmark/findingdory-trl" target="_blank">
<img alt="GitHub Code" src="https://img.shields.io/badge/Code-FindingDory--TRL-white?&logo=github&logoColor=white" />
</a>
<a href="https://huggingface.co/yali30/findingdory-qwen2.5-VL-3B-finetuned" target="_blank">
<img alt="Huggingface Model" src="https://img.shields.io/badge/Model-FindingDory-yellow?logo=huggingface" />
</a>
</center>

<center><h1>FindingDory: A Benchmark to Evaluate Memory in Embodied Agents</h1>
<a href="https://www.karmeshyadav.com/">Karmesh Yadav*</a>,
<a href="https://yusufali98.github.io/">Yusuf Ali*</a>,
<a href="https://gunshigupta.netlify.app/">Gunshi Gupta</a>,
<a href="https://www.cs.ox.ac.uk/people/yarin.gal/website/">Yarin Gal</a>,
<a href="https://faculty.cc.gatech.edu/~zk15/">Zsolt Kira</a>
</center>

Current vision-language models (VLMs) struggle with long-term memory in embodied tasks. To address this, we introduce **FindingDory**, a benchmark in Habitat that evaluates memory-based reasoning across 60 long-horizon tasks.

In this repo, we release the FindingDory Subsampled Video Dataset. Each video contains 96 images collected from a robot's egocentric view as it navigates realistic indoor environments and interacts with objects. This dataset was used to train and evaluate the high-level SFT agent in the FindingDory benchmark.

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory-subsampled-96")
```
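
Once loaded, each split behaves like a regular `datasets` table. Below is a minimal sketch for peeking at one example; the column names follow the Dataset Structure table in the next section, and the printed values are only illustrative.

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory-subsampled-96")

# Splits defined in the config above: "train" and "validation".
print(dataset)

example = dataset["validation"][0]
print(example["question"])             # question posed to the agent
print(example["answer"])               # ground-truth list of image indices
print(example["high_level_category"])  # e.g. "Single-Goal Spatial Tasks"
print(example["video"])                # relative path of the subsampled clip
```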

# Dataset Structure

| Field name | Description |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| **ep\_id** | Episode ID. |
| **video** | Relative path of the video clip. |
| **question** | Question posed to the agent based on the episode. |
| **answer** | Ground-truth answer, stored as a list of image indices. |
| **task\_id** | Identifier indicating which task template the episode belongs to (string). |
| **high\_level\_category** | High-level task category label (options: Single-Goal Spatial Tasks, Single-Goal Temporal Tasks, Multi-Goal Tasks). |
| **low\_level\_category** | Fine-grained task category label (examples: Interaction-Order, Room Visitation, etc.). |
| **num\_interactions** | Number of objects the robot interacts with during experience collection. |
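
The category columns make it easy to slice the benchmark. Here is a small sketch, assuming the `high_level_category` labels listed above and the standard `datasets` filter API:

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory-subsampled-96")

# Keep only Multi-Goal episodes from the validation split.
multi_goal = dataset["validation"].filter(
    lambda ex: ex["high_level_category"] == "Multi-Goal Tasks"
)
print(len(multi_goal), "Multi-Goal validation episodes")
```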

Notes:
* The validation split contains 60 tasks. The training split contains only 55 tasks because the 5 “Object Attributes” tasks are withheld from the training set; a quick way to check these counts is sketched below.
* The full video version of the dataset is available [here](https://huggingface.co/datasets/yali30/findingdory).
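
A sketch for verifying the per-split task counts, assuming the `task_id` column described in the table above:

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory-subsampled-96")

for split_name, split in dataset.items():
    num_tasks = len(set(split["task_id"]))
    print(f"{split_name}: {len(split)} episodes across {num_tasks} task templates")
```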

# 📄 Citation

```bibtex
@article{yadav2025findingdory,
  title   = {FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
  author  = {Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
  journal = {arXiv preprint arXiv:2506.15635},
  year    = {2025}
}
```