Files changed (1)
  1. README.md +60 -1
README.md CHANGED
@@ -44,4 +44,63 @@ tags:
  pretty_name: findingdory
  size_categories:
  - 10K<n<100K
- ---
+ ---
+ <center>
+ <a href="https://arxiv.org/abs/2506.15635" target="_blank">
+ <img alt="arXiv" src="https://img.shields.io/badge/arXiv-FindingDory-red?logo=arxiv" height="20" />
+ </a>
+ <a href="https://findingdory-benchmark.github.io/" target="_blank">
+ <img alt="Website" src="https://img.shields.io/badge/🌎_Website-FindingDory-blue.svg" height="20" />
+ </a>
+ <a href="https://github.com/findingdory-benchmark/findingdory-trl" target="_blank">
+ <img alt="GitHub Code" src="https://img.shields.io/badge/Code-FindingDory--TRL-white?&logo=github&logoColor=white" />
+ </a>
+ <a href="https://huggingface.co/yali30/findingdory-qwen2.5-VL-3B-finetuned" target="_blank">
+ <img alt="Huggingface Model" src="https://img.shields.io/badge/Model-FindingDory-yellow?logo=huggingface" />
+ </a>
+ </center>
+
+ <center><h1>FindingDory: A Benchmark to Evaluate Memory in Embodied Agents</h1>
+ <a href="https://www.karmeshyadav.com/">Karmesh Yadav*</a>,
+ <a href="https://yusufali98.github.io/">Yusuf Ali*</a>,
+ <a href="https://gunshigupta.netlify.app/">Gunshi Gupta</a>,
+ <a href="https://www.cs.ox.ac.uk/people/yarin.gal/website/">Yarin Gal</a>,
+ <a href="https://faculty.cc.gatech.edu/~zk15/">Zsolt Kira</a>
+ </center>
+
+ Current vision-language models (VLMs) struggle with long-term memory in embodied tasks. To address this, we introduce **FindingDory**, a benchmark in Habitat that evaluates memory-based reasoning across 60 long-horizon tasks.
+
+ In this repo, we release the FindingDory Video Dataset. Each video contains images collected from a robot’s egocentric view as it navigates realistic indoor environments and interacts with objects. This dataset was used to train and evaluate the high-level SFT agent in the FindingDory benchmark.
+
+ # Usage
+ ```
+ from datasets import load_dataset
+ dataset = load_dataset("yali30/findingdory")
+ ```
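+
+ Each row can then be inspected directly. A minimal sketch, continuing from the snippet above and assuming the splits are named `train` and `validation`:
+ ```
+ example = dataset["train"][0]   # first training episode
+ print(example["ep_id"])         # episode id
+ print(example["question"])      # question posed to the agent
+ print(example["answer"])        # ground-truth list of image indices
+ print(example["video"])         # relative path of the video clip
+ ```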
+
+ # Dataset Structure
+
+ | Field name | Description |
+ | ------------------------- | ------------------------------------------------------------------------------------------------------------- |
+ | **ep\_id** | Episode id. |
+ | **video** | Relative path of the video clip. |
+ | **question** | Question posed to the agent based on the episode. |
+ | **answer** | Ground-truth answer stored as a list of image indices. |
+ | **task\_id** | Identifier indicating which task template the episode belongs to (string). |
+ | **high\_level\_category** | High-level task category label (options: Single-Goal Spatial Tasks, Single-Goal Temporal Tasks, Multi-Goal Tasks). |
+ | **low\_level\_category** | Fine-grained task category label (e.g., Interaction-Order, Room Visitation). |
+ | **num\_interactions** | Number of objects the robot interacts with during experience collection. |
+
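+ The category fields can be used to slice the data. A minimal sketch, continuing from the Usage snippet; the split name is an assumption, and the label string is taken from the table above:
+ ```
+ # Keep only single-goal spatial episodes
+ spatial = dataset["validation"].filter(
+     lambda ex: ex["high_level_category"] == "Single-Goal Spatial Tasks"
+ )
+ print(len(spatial), "episodes")
+ ```
+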
+ Notes:
+ * The validation split contains 60 tasks. The training split contains only 55 tasks because the 5 “Object Attributes” tasks are withheld from training.
+ * A subsampled version of the dataset (96 frames per episode) is available [here](https://huggingface.co/datasets/yali30/findingdory-subsampled-96); it can be loaded the same way, as sketched below.
+
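+ For the subsampled variant, the same call applies:
+ ```
+ from datasets import load_dataset
+
+ # Subsampled variant: 96 frames per episode
+ subsampled = load_dataset("yali30/findingdory-subsampled-96")
+ ```
+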
+ # 📄 Citation
+ ```
+ @article{yadav2025findingdory,
+   title = {FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
+   author = {Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
+   journal = {arXiv preprint arXiv:2506.15635},
+   year = {2025}
+ }
+ ```