<div align="center">

[![Project Page](https://img.shields.io/badge/Website-Being--H0-green)](https://beingbeyond.github.io/Being-H0)
[![arXiv](https://img.shields.io/badge/arXiv-2507.15597-b31b1b.svg)](https://arxiv.org/abs/2507.15597)
[![Model](https://img.shields.io/badge/Hugging%20Face-Being--H0-yellow)](https://huggingface.co/BeingBeyond/Being-H0)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](./LICENSE)

</div>

<p align="center">
<img src="https://raw.githubusercontent.com/BeingBeyond/Being-H0/refs/heads/main/docs/assets/image/overview.png"/>
</p>

We introduce **Being-H0**, the first dexterous Vision-Language-Action model pretrained from large-scale human videos via explicit hand motion modeling.

## News

- **[2025-08-02]**: We release the **Being-H0** codebase and pretrained models! Check our [Hugging Face Model Hub](https://huggingface.co/BeingBeyond/Being-H0) for more details. πŸ”₯πŸ”₯πŸ”₯
- **[2025-07-21]**: We publish **Being-H0**! Check our paper [here](https://arxiv.org/abs/2507.15597). 🌟🌟🌟

## Model Checkpoints

Download pre-trained models from Hugging Face:

| Model Type | Model Name | Parameters | Description |
|------------|------------|------------|-------------|
| **Motion Model** | [Being-H0-GRVQ-8K](https://huggingface.co/BeingBeyond/Being-H0-GRVQ-8K) | - | Motion tokenizer |
| **VLA Pre-trained** | [Being-H0-1B-2508](https://huggingface.co/BeingBeyond/Being-H0-1B-2508) | 1B | Base vision-language-action model |
| **VLA Pre-trained** | [Being-H0-8B-2508](https://huggingface.co/BeingBeyond/Being-H0-8B-2508) | 8B | Base vision-language-action model |
| **VLA Pre-trained** | [Being-H0-14B-2508](https://huggingface.co/BeingBeyond/Being-H0-14B-2508) | 14B | Base vision-language-action model |
| **VLA Post-trained** | [Being-H0-8B-Align-2508](https://huggingface.co/BeingBeyond/Being-H0-8B-Align-2508) | 8B | Fine-tuned for robot alignment |
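
For example, one way to fetch a checkpoint is with the `huggingface-cli` tool that ships with the `huggingface_hub` package (a minimal sketch; the `--local-dir` targets are arbitrary choices, not paths the codebase requires):

```bash
# Fetch the motion tokenizer and a pretrained VLA checkpoint.
# The --local-dir paths below are arbitrary; point them wherever you like.
huggingface-cli download BeingBeyond/Being-H0-GRVQ-8K --local-dir ./checkpoints/Being-H0-GRVQ-8K
huggingface-cli download BeingBeyond/Being-H0-1B-2508 --local-dir ./checkpoints/Being-H0-1B-2508
```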

## Dataset

We also provide a dataset for post-training the VLA model, available on Hugging Face:

| Dataset Type | Dataset Name | Description |
|--------------|--------------|-------------|
| **VLA Post-training** | [h0_post_train_db_2508](https://huggingface.co/datasets/BeingBeyond/h0_post_train_db_2508) | Post-training dataset for the pretrained Being-H0 VLA models |
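
The dataset can be fetched the same way (a sketch; note the `--repo-type dataset` flag, since datasets live in a separate namespace on the Hub, and again an arbitrary local path):

```bash
# Datasets require --repo-type dataset; the --local-dir path is arbitrary.
huggingface-cli download BeingBeyond/h0_post_train_db_2508 --repo-type dataset --local-dir ./data/h0_post_train_db_2508
```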

## Setup

### Clone repository

```bash
git clone https://github.com/BeingBeyond/Being-H0.git
cd Being-H0
```

### Create environment

```bash
conda env create -f environment.yml
conda activate beingvla
```

### Install package

```bash
pip install flash-attn --no-build-isolation
pip install git+https://github.com/lixiny/manotorch.git
pip install git+https://github.com/mattloper/chumpy.git
```
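
To sanity-check the installation, a quick optional import test (the module names `flash_attn`, `manotorch`, and `chumpy` are assumptions based on the package names above):

```bash
# Optional: verify the three extra dependencies import cleanly.
python -c "import flash_attn, manotorch, chumpy; print('dependencies OK')"
```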

### Download MANO package

- Visit the [MANO website](http://mano.is.tue.mpg.de/)
- Create an account by clicking _Sign Up_ and providing your information
- Download Models and Code (the downloaded file should have the format `mano_v*_*.zip`). Note that all code and data from this download fall under the [MANO license](http://mano.is.tue.mpg.de/license).
- Unzip the archive and copy the contents of the `mano_v*_*/` folder into `beingvla/models/motion/mano/`, as sketched below
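
The last step looks roughly like this (a sketch, assuming a single `mano_v*_*.zip` archive sits in the repository root; adjust the paths to wherever your download landed):

```bash
# Unpack the MANO release and copy its contents into the repo.
# The version string in the filename may differ for you.
unzip mano_v*_*.zip
cp -r mano_v*_*/* beingvla/models/motion/mano/
```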

## Inference

### Motion Generation

- To generate hand motion tokens and render the motion, use the Motion Model (`Being-H0-GRVQ-8K`) together with a pretrained VLA model (`Being-H0-{1B,8B,14B}-2508`).
- Run inference with the command below. For `--motion_code_path`, use a `+` symbol to jointly specify the wrist and finger motion code paths, e.g., `--motion_code_path "/path/to/Being-H0-GRVQ-8K/wrist/+/path/to/Being-H0-GRVQ-8K/finger/"`.
- `--hand_mode` can be set to `left`, `right`, or `both` to specify which hand(s) to use for the task.

```bash
python -m beingvla.inference.vla_internvl_inference \
    --model_path /path/to/Being-H0-XXX \
    --motion_code_path "/path/to/Being-H0-GRVQ-8K/wrist/+/path/to/Being-H0-GRVQ-8K/finger/" \
    --input_image ./playground/unplug_airpods.jpg \
    --task_description "unplug the charging cable from the AirPods" \
    --hand_mode both \
    --num_samples 3 \
    --num_seconds 4 \
    --enable_render true \
    --gpu_device 0 \
    --output_dir ./work_dirs/
```

- **To run inference on your own photos**: see the [Camera Intrinsics Guide](https://github.com/BeingBeyond/Being-H0/blob/main/docs/camera_intrinsics.md) for how to estimate camera intrinsics and supply them for custom inference.

### Evaluation

- You can post-train our pretrained VLA model on real robot data. Once you have a post-trained model (e.g., `Being-H0-8B-Align-2508`), use the following commands to communicate with the real robot or to evaluate the model on a robot task.

- Setup robot communication:

```bash
python -m beingvla.models.motion.m2m.aligner.run_server \
    --model-path /path/to/Being-H0-XXX-Align \
    --port 12305 \
    --action-chunk-length 16
```

- Run evaluation on a robot task:

```bash
python -m beingvla.models.motion.m2m.aligner.eval_policy \
    --model-path /path/to/Being-H0-XXX-Align \
    --zarr-path /path/to/real-robot/data \
    --task_description "Put the little white duck into the cup." \
    --action-chunk-length 16
```

## Contributing and Building on Being-H0

We encourage researchers and practitioners to build on Being-H0 in their own experiments and applications, whether that means adapting it to new robotic platforms, exploring novel hand-manipulation tasks, or extending the model to new domains; the modular codebase is designed to support such work. We welcome contributions of all kinds, from bug fixes and documentation improvements to new features and model architectures. By building on Being-H0 together, we can advance dexterous vision-language-action modeling and enable robots to understand and replicate the rich complexity of human hand movements. Join us in making robotic manipulation more intuitive, capable, and accessible to all.

## Citation

If you find our work useful, please consider citing us and starring our repository! 🌟🌟🌟

**Being-H0**

```bibtex
@article{beingbeyond2025beingh0,
  title={Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos},
  author={Luo, Hao and Feng, Yicheng and Zhang, Wanpeng and Zheng, Sipeng and Wang, Ye and Yuan, Haoqi and Liu, Jiazheng and Xu, Chaoyi and Jin, Qin and Lu, Zongqing},
  journal={arXiv preprint arXiv:2507.15597},
  year={2025}
}
```

# Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos

<p align="center">
<img src="https://raw.githubusercontent.com/BeingBeyond/Being-H0/refs/heads/main/docs/assets/image/being-h0-black.png" width="300"/>
</p>

<div align="center">

[![Project Page](https://img.shields.io/badge/Website-Being--H0-green)](https://beingbeyond.github.io/Being-H0)
[![arXiv](https://img.shields.io/badge/arXiv-2507.15597-b31b1b.svg)](https://arxiv.org/abs/2507.15597)
[![Code](https://img.shields.io/badge/GitHub-Being--H0-white)](https://github.com/BeingBeyond/Being-H0)