vivek9chavan committed
Commit 8961e71 · verified · 1 Parent(s): 728d711

Update README.md

Files changed (1): README.md (+25 -3)
README.md CHANGED
@@ -26,11 +26,33 @@ size_categories:
 [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qCZnFQNRjBuy3vBlkMy7sMTcYkTNOzgg?usp=sharing)

 ## Abstract:
- We introduce the IndEgo dataset, a multimodal egocentric and exocentric video dataset addressing common industrial tasks, including assembly/disassembly, logistics and organisation, inspection and repair, woodworking, and others. The dataset contains 3,460 egocentric recordings (approximately 197 hours), along with 1,092 exocentric recordings (approximately 97 hours). A key focus of the dataset is collaborative work, where two workers work together on cognitively and physically intensive tasks. The egocentric recordings include rich multimodal data and added context via eye gaze, narration, sound, motion, and others. We provide detailed annotations (actions, summaries, mistake annotations, narrations), metadata, processed outputs (eye gaze, hand pose, semi-dense point cloud), and benchmarks on procedural and non-procedural task understanding, Mistake Detection, and reasoning-based Question Answering. Baseline evaluations for Mistake Detection, Question Answering and collaborative task understanding show that the dataset presents a challenge for the state-of-the-art multimodal models. Our dataset and code are available.
-
+ We introduce **IndEgo**, a multimodal **egocentric and exocentric** video dataset capturing common industrial tasks such as assembly/disassembly, logistics and organisation, inspection and repair, and woodworking.
+ The dataset includes **3,460 egocentric recordings (~197 hours)** and **1,092 exocentric recordings (~97 hours)**.

 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e26c1429fa53b8979fa344/oPQUrEmFLO48rDUiL5Lfs.png)

+ A central focus of IndEgo is **collaborative work**, where two workers coordinate on cognitively and physically demanding tasks.
+ The egocentric recordings include rich multimodal data: eye gaze, narration, sound, motion, and semi-dense point clouds.
+
+ We provide:
+ - Detailed annotations: actions, summaries, mistake labels, and narrations
+ - Processed outputs: eye gaze, hand poses, SLAM-based semi-dense point clouds
+ - Benchmarks: procedural/non-procedural task understanding, **Mistake Detection**, and **reasoning-based Video QA**
+
+ Baseline evaluations show that IndEgo presents a challenge for state-of-the-art multimodal models.
+
+ If you use **IndEgo**, please cite our NeurIPS 2025 paper:
+
+ ```bibtex
+ @inproceedings{Chavan2025IndEgo,
+   author = {Vivek Chavan and Yasmina Imgrund and Tung Dao and Sanwantri Bai and Bosong Wang and Ze Lu and Oliver Heimann and J{\"o}rg Kr{\"u}ger},
+   title = {IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants},
+   booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
+   year = {2025},
+   url = {https://neurips.cc/virtual/2025/poster/121501}
+ }
+ ```

- ### Acknowledgements: Meta Reality Labs for their support and open-science initiative with Project Aria.
+ ### Acknowledgements: Meta Reality Labs for their support and open-science initiative with Project Aria.
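
For reviewers who want to inspect the updated dataset alongside the linked Colab notebook, a minimal local-download sketch follows. The repo id `vivek9chavan/IndEgo` is an assumption inferred from the committer's namespace, and the `allow_patterns` filter is illustrative; check the dataset card for the actual id and file layout.

```python
# Minimal sketch: fetch IndEgo dataset files from the Hugging Face Hub.
# ASSUMPTION: the repo id below is hypothetical (committer namespace +
# dataset name); verify it on the dataset card before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="vivek9chavan/IndEgo",           # hypothetical repo id
    repo_type="dataset",
    allow_patterns=["README.md", "*.json"],  # start small; remove to mirror everything
)
print(f"Files downloaded to: {local_dir}")
```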