---
license: cc-by-4.0
task_categories:
- visual-question-answering
- summarization
- video-classification
- any-to-any
language:
- en
- de
pretty_name: IndEgo
tags:
- industrial
- egocentric
- procedural
- collaborative work
- mistake detection
- VQA
- video understanding
size_categories:
- 10K<n<100K
---
<div align="center">

# IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants

**[Vivek Chavan](https://vivekchavan.com/)¹²\*, [Yasmina Imgrund](https://www.linkedin.com/in/yasmina-imgrund/)²†, [Tung Dao](https://www.linkedin.com/in/lam-dao-tung/)²†, [Sanwantri Bai](https://www.linkedin.com/in/sanwantri-bai-0a808a1b3/)³†, [Bosong Wang](https://www.linkedin.com/in/bosong0106/)⁴†, Ze Lu⁵†, [Oliver Heimann](https://www.linkedin.com/in/oliver-heimann/)¹, [Jörg Krüger](https://www.tu.berlin/iat/ueber-uns/leitung)¹²**

<p>
¹Fraunhofer IPK, Berlin ²Technical University of Berlin ³University of Tübingen<br>
⁴RWTH Aachen University ⁵Leibniz University Hannover
</p>

*<sup>\*Project Lead †Work done during student theses/projects at Fraunhofer IPK, Berlin.</sup>*

<div align="center">
<h3 style="display: flex; align-items: center; justify-content: center; gap: 10px; margin-top: 1em; margin-bottom: 1em;">
<img src="https://IndEgo-Dataset.github.io/assets/NeurIPS-logo.svg" alt="NeurIPS Logo" height="200">
<span>Published at NeurIPS 2025</span>
</h3>
</div>

<p>
<a href="https://IndEgo-Dataset.github.io/" target="_blank"><img src="https://img.shields.io/badge/Project-Website-blue?style=flat-square" alt="Project Website"></a>
<a href="https://openreview.net/forum?id=jKw3Qhc8m1" target="_blank"><img src="https://img.shields.io/badge/Paper-OpenReview-red?style=flat-square" alt="Paper PDF"></a>
<a href="https://github.com/Vivek9Chavan/IndEgo/" target="_blank"><img src="https://img.shields.io/badge/Code-GitHub-black?style=flat-square&logo=github" alt="Code"></a>
<a href="https://neurips.cc/virtual/2025/poster/121501" target="_blank"><img src="https://img.shields.io/badge/NeurIPS-Page-orange?style=flat-square" alt="NeurIPS Page"></a>
</p>

<p>
<a href="https://colab.research.google.com/drive/1qCZnFQNRjBuy3vBlkMy7sMTcYkTNOzgg?usp=sharing" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
</p>

</div>
---

## 📖 Abstract

We introduce **IndEgo**, a multimodal **egocentric and exocentric** video dataset capturing common industrial tasks such as assembly/disassembly, logistics and organisation, inspection and repair, and woodworking. The dataset includes **3,460 egocentric recordings (~197 hours)** and **1,092 exocentric recordings (~97 hours)**.
|  | |

A central focus of IndEgo is **collaborative work**, where two workers coordinate on cognitively and physically demanding tasks. The egocentric recordings include rich multimodal data — eye gaze, narration, sound, motion, and semi-dense point clouds.

We provide:
- Detailed annotations: actions, summaries, mistake labels, and narrations
- Processed outputs: eye gaze, hand poses, SLAM-based semi-dense point clouds
- Benchmarks: procedural/non-procedural task understanding, **collaborative tasks**, **Mistake Detection**, and **reasoning-based Video QA**

Baseline evaluations show that IndEgo presents a challenge for state-of-the-art multimodal models.
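A Colab notebook is linked above for a hands-on walkthrough. As a minimal sketch (not the authors' official loader), the snippet below shows one way to pull files from this dataset repository with `huggingface_hub`; the `repo_id` and file patterns are placeholder assumptions and should be replaced with the actual values.

```python
# Minimal sketch (assumption, not an official IndEgo loader):
# download a subset of the dataset files from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<namespace>/IndEgo",        # placeholder: replace with this repo's actual id
    repo_type="dataset",
    allow_patterns=["*.json", "*.csv"],  # assumption: fetch lightweight annotation files only
)
print(f"Downloaded to: {local_dir}")
```

Videos and processed outputs (gaze, hand poses, point clouds) can be fetched selectively by adjusting `allow_patterns`.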
---

## 🧩 Citation

If you use **IndEgo** in your research, please cite our NeurIPS 2025 paper:
```bibtex
@inproceedings{Chavan2025IndEgo,
  author    = {Vivek Chavan and Yasmina Imgrund and Tung Dao and Sanwantri Bai and Bosong Wang and Ze Lu and Oliver Heimann and J{\"o}rg Kr{\"u}ger},
  title     = {IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://neurips.cc/virtual/2025/poster/121501}
}
```
## Acknowledgments & Funding

This work is funded by the German Federal Ministry of Education and Research (BMBF) and the German Aerospace Center (DLR) under the KIKERP project (Grant No. 01IS22058C) within the KI-Familie program. We thank the Meta AI team and Reality Labs for the Project Aria initiative, including the research kit, the open-source tools, and related services. Data collection for this study was carried out at the IWF research labs and the test field at TU Berlin. Lastly, we sincerely thank the student volunteers and workers who participated in the data collection process.