nielsr (HF Staff) committed
Commit 199e108 (verified) · 1 Parent(s): 56a6a22

Improve model card with metadata, links, abstract, and usage example


This PR enhances the model card by:

- Adding `pipeline_tag: robotics` for correct categorization on the Hugging Face Hub.
- Specifying `library_name: lerobot` for better discoverability and integration.
- Including direct links to the paper, project page, and GitHub repository.
- Adding the paper's abstract for a quick overview.
- Adding a Python code snippet for usage demonstration.

Files changed (1)
  1. README.md (+55 -4)
README.md CHANGED
@@ -2,9 +2,60 @@
  tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
+ - robotics
+ license: mit
+ pipeline_tag: robotics
+ library_name: lerobot
  ---

- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Code: [More Information Needed]
- - Paper: [More Information Needed]
- - Docs: [More Information Needed]
+ # Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers
+
+ This repository contains the official code for the paper:
+ **[Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers](https://huggingface.co/papers/2507.15833)**
+
+ 🚀 **Project Website:** [https://ian-chuang.github.io/gaze-av-aloha/](https://ian-chuang.github.io/gaze-av-aloha/)
+
+ 💻 **Code:** [https://github.com/ian-chuang/gaze-av-aloha.git](https://github.com/ian-chuang/gaze-av-aloha.git)
+
+ ## Abstract
+
+ Human vision is a highly active process driven by gaze, which directs attention and fixation to task-relevant regions and dramatically reduces visual processing. In contrast, robot learning systems typically rely on passive, uniform processing of raw camera images. In this work, we explore how incorporating human-like active gaze into robotic policies can enhance both efficiency and performance. We build on recent advances in foveated image processing and apply them to an Active Vision robot system that emulates both human head movement and eye tracking. Extending prior work on the AV-ALOHA robot simulation platform, we introduce a framework for simultaneously collecting eye-tracking data and robot demonstrations from a human operator as well as a simulation benchmark and dataset for training robot policies that incorporate human gaze. Given the widespread use of Vision Transformers (ViTs) in robot learning, we integrate gaze information into ViTs using a foveated patch tokenization scheme inspired by recent work in image segmentation. Compared to uniform patch tokenization, this significantly reduces the number of tokens, and thus computation, without sacrificing visual fidelity near regions of interest. We also explore two approaches to gaze imitation and prediction from human data. The first is a two-stage model that predicts gaze to guide foveation and action; the second integrates gaze into the action space, allowing the policy to jointly predict gaze and actions end-to-end. Our results show that our method for foveated robot vision not only drastically reduces computational overhead, but also improves performance for high precision tasks and robustness to unseen distractors. Together, these findings suggest that human-inspired visual processing offers a useful inductive bias for robotic vision systems.
+
+ ## Usage
+
+ You can load and use this policy for a specific task (here, simulated peg insertion on the AV-ALOHA platform). First, make sure the `lerobot` library and this repository's dependencies are installed, following the instructions in the GitHub repository linked above.
+
+ ```python
+ import torch
+ # NOTE: the loading helper below is illustrative; the exact import path and API
+ # depend on the `lerobot` version used by the gaze-av-aloha repository, so check
+ # its README for the canonical loading code.
+ from lerobot.policy import load_policy
+
+ # Load the pre-trained policy for the simulated peg-insertion task.
+ policy = load_policy("iantc104/gaze_model_av_aloha_sim_peg_insertion")
+
+ # Build a dummy observation matching the expected input structure.
+ # The key names and shapes below are placeholders; replace them with the actual
+ # image/gaze observations produced by the AV-ALOHA environment.
+ dummy_image = torch.randn(3, 480, 640)  # C, H, W image tensor
+ dummy_gaze_pose = torch.randn(2)        # (x, y) gaze coordinates
+ dummy_obs = {
+     "observation": {
+         "image": dummy_image,
+         "gaze_pose": dummy_gaze_pose,
+         # Add other modalities if required by the specific policy.
+     }
+ }
+
+ # Move the policy and the observations to the GPU if one is available.
+ if torch.cuda.is_available():
+     policy.to("cuda")
+     for key in dummy_obs["observation"]:
+         if isinstance(dummy_obs["observation"][key], torch.Tensor):
+             dummy_obs["observation"][key] = dummy_obs["observation"][key].to("cuda")
+
+ # Predict an action without tracking gradients.
+ with torch.no_grad():
+     action = policy(dummy_obs)
+
+ print(f"Predicted action: {action}")
+ ```
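The new card keeps the `model_hub_mixin` and `pytorch_model_hub_mixin` tags, and the boilerplate removed above confirms the checkpoint was pushed with the `PyTorchModelHubMixin` integration. A minimal sketch of an alternative loading path follows, assuming only that `huggingface_hub` is installed; the policy class name is a placeholder to be taken from the GitHub repository:

```python
from huggingface_hub import snapshot_download

# Download the config and weights that were pushed via PyTorchModelHubMixin.
local_dir = snapshot_download("iantc104/gaze_model_av_aloha_sim_peg_insertion")
print(f"Checkpoint files downloaded to: {local_dir}")

# Any class that inherits PyTorchModelHubMixin can reload itself with:
#
#   policy = SomePolicyClass.from_pretrained("iantc104/gaze_model_av_aloha_sim_peg_insertion")
#
# `SomePolicyClass` is a placeholder; use the policy class named in the
# gaze-av-aloha repository's instructions.
```

The abstract's foveated patch tokenization can also be illustrated with a minimal, generic sketch; this is not the paper's actual scheme, and the `foveated_tokens` function, fovea window size, and 4x peripheral downsampling below are illustrative assumptions. Full-resolution patches are kept only around the gaze point while the periphery is downsampled before patchification, so far fewer tokens reach the ViT than with uniform patchification:

```python
import torch
import torch.nn.functional as F

def foveated_tokens(image: torch.Tensor, gaze_xy, fovea: int = 128, patch: int = 16) -> torch.Tensor:
    """Tokenize a (3, H, W) image: fine patches near gaze_xy, coarse patches elsewhere."""
    _, H, W = image.shape
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    # Crop a fovea-sized window centered on the gaze point, clamped to the image bounds.
    x0 = max(0, min(W - fovea, x - fovea // 2))
    y0 = max(0, min(H - fovea, y - fovea // 2))
    crop = image[:, y0:y0 + fovea, x0:x0 + fovea]
    # Fine tokens: full-resolution patches inside the fovea.
    fine = crop.unfold(1, patch, patch).unfold(2, patch, patch)
    fine = fine.reshape(3, -1, patch * patch).permute(1, 0, 2).reshape(-1, 3 * patch * patch)
    # Coarse tokens: the whole image downsampled 4x, then patchified.
    small = F.interpolate(image[None], scale_factor=0.25, mode="bilinear", align_corners=False)[0]
    coarse = small.unfold(1, patch, patch).unfold(2, patch, patch)
    coarse = coarse.reshape(3, -1, patch * patch).permute(1, 0, 2).reshape(-1, 3 * patch * patch)
    return torch.cat([fine, coarse], dim=0)

image = torch.rand(3, 480, 640)
tokens = foveated_tokens(image, gaze_xy=(320, 240))
print(tokens.shape[0])            # 134 tokens (64 foveal + 70 peripheral)
print((480 // 16) * (640 // 16))  # vs. 1200 tokens for uniform 16x16 patchification
```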