---
tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
  - robotics
license: mit
pipeline_tag: robotics
library_name: lerobot
---

# Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers

This repository hosts a pretrained policy from the paper **Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers**.

🚀 Project Website: https://ian-chuang.github.io/gaze-av-aloha/

💻 Code: https://github.com/ian-chuang/gaze-av-aloha.git

## Abstract

Human vision is a highly active process driven by gaze, which directs attention and fixation to task-relevant regions and dramatically reduces visual processing. In contrast, robot learning systems typically rely on passive, uniform processing of raw camera images. In this work, we explore how incorporating human-like active gaze into robotic policies can enhance both efficiency and performance. We build on recent advances in foveated image processing and apply them to an Active Vision robot system that emulates both human head movement and eye tracking. Extending prior work on the AV-ALOHA robot simulation platform, we introduce a framework for simultaneously collecting eye-tracking data and robot demonstrations from a human operator, as well as a simulation benchmark and dataset for training robot policies that incorporate human gaze. Given the widespread use of Vision Transformers (ViTs) in robot learning, we integrate gaze information into ViTs using a foveated patch tokenization scheme inspired by recent work in image segmentation. Compared to uniform patch tokenization, this significantly reduces the number of tokens, and thus computation, without sacrificing visual fidelity near regions of interest. We also explore two approaches to gaze imitation and prediction from human data. The first is a two-stage model that predicts gaze to guide foveation and action; the second integrates gaze into the action space, allowing the policy to jointly predict gaze and actions end-to-end. Our results show that our method for foveated robot vision not only drastically reduces computational overhead, but also improves performance for high precision tasks and robustness to unseen distractors. Together, these findings suggest that human-inspired visual processing offers a useful inductive bias for robotic vision systems.
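
As a rough intuition for the foveated tokenization described above, the sketch below builds ViT-style patch tokens from two views of a frame: a full-resolution crop centered on the gaze point (the fovea) and a low-resolution copy of the whole image (the periphery). This is only a conceptual illustration under assumed patch and crop sizes; the tokenizer used in the paper is adapted from recent image-segmentation work and differs in its details, and the `foveated_tokens` helper below is hypothetical.

```python
import torch
import torch.nn.functional as F

def foveated_tokens(image, gaze_xy, patch=16, fovea=224, periph=224):
    """Conceptual sketch: high-res patches around the gaze point plus
    low-res patches of the whole frame, instead of uniform tokenization."""
    _, H, W = image.shape
    # Center a fovea-sized crop on the gaze point, clamped to stay inside the image.
    cx = int(gaze_xy[0].clamp(fovea // 2, W - fovea // 2))
    cy = int(gaze_xy[1].clamp(fovea // 2, H - fovea // 2))
    fovea_crop = image[:, cy - fovea // 2: cy + fovea // 2, cx - fovea // 2: cx + fovea // 2]
    # Downsample the full frame for coarse peripheral context.
    periphery = F.interpolate(image[None], size=(periph, periph),
                              mode="bilinear", align_corners=False)[0]

    def patchify(img):
        c, _, _ = img.shape
        return (img.unfold(1, patch, patch)
                   .unfold(2, patch, patch)
                   .reshape(c, -1, patch, patch)
                   .permute(1, 0, 2, 3))

    # Concatenate foveal and peripheral patches into one token set.
    return torch.cat([patchify(fovea_crop), patchify(periphery)], dim=0)

image = torch.randn(3, 480, 640)          # C, H, W camera frame
gaze = torch.tensor([320.0, 240.0])       # x, y gaze point in pixels
tokens = foveated_tokens(image, gaze)
print(tokens.shape)  # (392, 3, 16, 16): 196 foveal + 196 peripheral patches,
                     # versus 1200 uniform 16x16 patches for the full 480x640 frame
```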

## Usage

You can load a pretrained LeRobot policy for a specific task and run inference with it. First, make sure the `lerobot` library and the project's dependencies are installed by following the instructions in the GitHub repository.

```python
import torch

# The weights are published with PyTorchModelHubMixin and are loaded through the
# policy classes defined in the gaze-av-aloha codebase. The import below is
# illustrative: consult https://github.com/ian-chuang/gaze-av-aloha for the exact
# class name, installation steps, and expected observation keys.
from lerobot.policy import load_policy  # adjust to the project's actual API

# Load a pre-trained policy (here, the peg insertion task).
policy = load_policy("iantc104/gaze_model_av_aloha_sim_peg_insertion")
policy.eval()

# Build a dummy observation for demonstration. The keys, shapes, and values below
# are placeholders; replace them with real camera images and gaze data in the
# format documented in the GitHub repository.
dummy_obs = {
    "observation": {
        "image": torch.randn(3, 480, 640),  # C, H, W camera image
        "gaze_pose": torch.randn(2),        # x, y gaze coordinates
        # Add other modalities (e.g. robot state) if the policy requires them.
    }
}

# Move the policy and observations to the GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
policy.to(device)
dummy_obs["observation"] = {
    k: v.to(device) if isinstance(v, torch.Tensor) else v
    for k, v in dummy_obs["observation"].items()
}

# Predict an action.
with torch.no_grad():
    action = policy(dummy_obs)

print(f"Predicted action: {action}")
```
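
See the GitHub repository linked above for the exact observation format expected by each policy, as well as the training and evaluation pipelines.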