---
annotations_creators: [manual]
language: [en]
license: apache-2.0
multilinguality: [monolingual]
pretty_name: ARM4R Dataset
size_categories: [100K<n<1M]
---

# ARM4R: Pre-training Auto-regressive Robotic Models with 4D Representations

*Equal contribution, †Equal advising

**Berkeley AI Research, UC Berkeley**

**ICML 2025**

[**Paper**](https://arxiv.org/pdf/2502.13142) • [**Code**](https://github.com/Dantong88/arm4r) • [**Models**](https://huggingface.co/datasets/yuvansharma/arm4r-ckpts) • [**Dataset**](https://huggingface.co/datasets/yuvansharma/arm4r-data)

## Dataset Structure

The data is organized as follows:

```
.
├── .gitattributes
├── README.md
├── epic_clips.json              # maps episode id --> language instruction (76,014 episodes)
├── epic_tasks_final.zip         # extracted 3D point data for Epic-Kitchens (76,014 episodes)
└── real_kinova_release_data.zip # collected data for the real-world Kinova Gen3 setup (2,550 episodes)
```

## Citation

If you find our work helpful, please consider citing:

```bibtex
@article{niu2025pre,
  title={Pre-training auto-regressive robotic models with 4d representations},
  author={Niu, Dantong and Sharma, Yuvan and Xue, Haoru and Biamby, Giscard and Zhang, Junyi and Ji, Ziteng and Darrell, Trevor and Herzig, Roei},
  journal={arXiv preprint arXiv:2502.13142},
  year={2025}
}
```
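
## Loading the Episode Mapping

As a minimal sketch of working with `epic_clips.json` (which maps episode ids to language instructions), the snippet below parses such a mapping and looks up one episode. The episode-id format and the `instruction_for` helper are illustrative assumptions, not part of the released dataset.

```python
import json

# Hypothetical sample mimicking the episode id --> instruction mapping
# in epic_clips.json; the ids and instructions here are invented.
sample = '{"P01_101_0": "open the fridge", "P01_101_1": "pick up the cup"}'
clips = json.loads(sample)


def instruction_for(clips, episode_id):
    """Return the language instruction for a given episode id."""
    return clips[episode_id]


# In the real dataset, len(clips) would be 76,014.
print(len(clips))
print(instruction_for(clips, "P01_101_0"))
```

The same pattern applies after unzipping `epic_tasks_final.zip`: read `epic_clips.json` once, then use the mapping to pair each episode's 3D point data with its instruction.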