---
license: apache-2.0
task_categories:
  - robotics
tags:
  - LeRobot
  - gaze
  - foveated-vision
  - robot-learning
  - simulation
library_name: lerobot
configs:
  - config_name: default
    data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

This dataset, presented in the paper [Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers](https://arxiv.org/abs/2507.15833), provides a simulation benchmark for training robot policies that incorporate human gaze. It contains bimanual robot demonstrations with synchronized human eye-tracking data, collected using the AV-ALOHA simulation platform for the peg insertion task. The dataset is part of a larger effort to explore how human-like active gaze can improve the efficiency and robustness of robot learning.

## Dataset Structure

`meta/info.json`:

```json
{
    "codebase_version": "v2.1",
    "robot_type": null,
    "total_episodes": 100,
    "total_frames": 17741,
    "total_tasks": 1,
    "total_videos": 600,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 25,
    "splits": {
        "train": "0:100"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "observation.images.zed_cam_left": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channel"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 25,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.zed_cam_right": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channel"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 25,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.wrist_cam_left": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channel"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 25,
                "video.channels": 3,
                "has_audio": false
            }
        },
        \"observation.images.wrist_cam_right\": {
            \"dtype\": \"video\",
            \"shape\": [
                480,
                640,
                3
            ],
            \"names\": [
                \"height\",
                \"width\",
                \"channel\"
            ],
            \"info\": {
                \"video.height\": 480,
                \"video.width\": 640,
                \"video.codec\": \"av1\",
                \"video.pix_fmt\": \"yuv420p\",
                \"video.is_depth_map\": false,
                \"video.fps\": 25,
                \"video.channels\": 3,
                \"has_audio\": false
            }
        },
        \"observation.images.overhead_cam\": {
            \"dtype\": \"video\",
            \"shape\": [
                480,
                640,
                3
            ],
            \"names\": [
                \"height\",
                \"width\",
                \"channel\"
            ],
            \"info\": {
                \"video.height\": 480,
                \"video.width\": 640,
                \"video.codec\": \"av1\",
                \"video.pix_fmt\": \"yuv420p\",
                \"video.is_depth_map\": false,
                \"video.fps\": 25,
                \"video.channels\": 3,
                \"has_audio\": false
            }
        },
        \"observation.images.worms_eye_cam\": {
            \"dtype\": \"video\",
            \"shape\": [
                480,
                640,
                3
            ],
            \"names\": [
                \"height\",
                \"width\",
                \"channel\"
            ],
            \"info\": {
                \"video.height\": 480,
                \"video.width\": 640,
                \"video.codec\": \"av1\",
                \"video.pix_fmt\": \"yuv420p\",
                \"video.is_depth_map\": false,
                \"video.fps\": 25,
                \"video.channels\": 3,
                \"has_audio\": false
            }
        },
        \"observation.state\": {
            "dtype": "float32",
            "shape": [
                21
            ],
            "names": null
        },
        "observation.environment_state": {
            "dtype": "float32",
            "shape": [
                14
            ],
            "names": null
        },
        "action": {
            "dtype": "float32",
            "shape": [
                21
            ],
            "names": null
        },
        "left_eye": {
            "dtype": "float32",
            "shape": [
                2
            ],
            "names": null
        },
        "right_eye": {
            "dtype": "float32",
            "shape": [
                2
            ],
            "names": null
        },
        "left_arm_pose": {
            "dtype": "float32",
            "shape": [
                16
            ],
            "names": null
        },
        "right_arm_pose": {
            "dtype": "float32",
            "shape": [
                16
            ],
            "names": null
        },
        "middle_arm_pose": {
            "dtype": "float32",
            "shape": [
                16
            ],
            "names": null
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [
                1
            ],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        }
    }
}
```
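
As a quick sanity check, the per-episode Parquet files can be inspected directly with `pandas`. The sketch below is illustrative: the filename follows the `data_path` pattern above, and the column names are the non-video features listed in `meta/info.json`.

```python
# Minimal sketch: download one episode's Parquet file and inspect its columns.
# The filename follows the data_path pattern from meta/info.json above.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="iantc104/av_aloha_sim_peg_insertion",
    repo_type="dataset",
    filename="data/chunk-000/episode_000000.parquet",
)
df = pd.read_parquet(path)

print(df.columns.tolist())     # observation.state, action, left_eye, right_eye, ...
print(len(df))                 # number of frames in this episode
print(df["left_eye"].iloc[0])  # 2-D gaze coordinate for the first frame
```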

## Sample Usage

This dataset is provided in LeRobot format for ease of sharing and visualization. For faster access during training, it is recommended to convert it to the custom `AVAlohaDataset` format, which is based on Zarr.
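
For quick exploration without conversion, the dataset can also be loaded directly through the `lerobot` API. A minimal sketch, assuming a recent `lerobot` release (the import path and attribute names have shifted across versions):

```python
# Minimal sketch: load the dataset in LeRobot format.
# Note: the import path may differ in older/newer lerobot versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("iantc104/av_aloha_sim_peg_insertion")
print(dataset.num_episodes)  # 100
print(dataset.num_frames)    # 17741

frame = dataset[0]  # dict with image tensors, state, action, and gaze signals
print(frame["observation.images.zed_cam_left"].shape)
print(frame["left_eye"], frame["right_eye"])
```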

1. **Install dependencies:** First, ensure you have the `lerobot` library and the other necessary dependencies installed, as described in the official GitHub repository.

   ```bash
   # Install LeRobot (if not already installed)
   pip install git+https://github.com/huggingface/lerobot.git

   # Clone the gaze-av-aloha repository for scripts and set up the environment
   git clone https://github.com/ian-chuang/gaze-av-aloha.git
   cd gaze-av-aloha

   # Follow additional installation steps from the repo's README, e.g., conda env setup
   conda create -n gaze python=3.10
   conda activate gaze
   pip install -e ./gym_av_aloha
   pip install -e ./gaze_av_aloha
   ```
2. **Convert the dataset to Zarr format:** Use the conversion script provided in the GitHub repository:

   ```bash
   python gym_av_aloha/scripts/convert_lerobot_to_avaloha.py --repo_id iantc104/av_aloha_sim_peg_insertion
   ```

   Converted datasets will be saved under `gym_av_aloha/outputs/`.
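
To peek at a converted dataset, you can open the resulting store with the `zarr` library. This is only a sketch: the exact output directory name and the internal array layout are assumptions, not a documented API; check the conversion script for the authoritative structure.

```python
# Minimal sketch: open a converted AVAlohaDataset store and list its contents.
# The path below is an assumption based on the note above; the actual directory
# name is determined by convert_lerobot_to_avaloha.py.
import zarr

store = zarr.open("gym_av_aloha/outputs/av_aloha_sim_peg_insertion", mode="r")
print(list(store.keys()))  # top-level groups/arrays written by the converter
```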

For more detailed usage, including training and evaluating policies, please refer to the project's [GitHub repository](https://github.com/ian-chuang/gaze-av-aloha).

## Citation

**BibTeX:**

```bibtex
@misc{chuang2025lookfocusactefficient,
      title={Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers}, 
      author={Ian Chuang and Andrew Lee and Dechen Gao and Jinyu Zou and Iman Soltani},
      year={2025},
      eprint={2507.15833},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2507.15833}, 
}
```