---
license: cc-by-nc-sa-4.0
language:
- en
- zh
size_categories:
- n>1T
tags:
- real-world
- dual-arm
- whole body control
- manipulation
---

# πŸš€ Galaxea Open-World Dataset
[![Project Page](https://img.shields.io/badge/Project%20Page-000000?style=for-the-badge&logo=github)](https://opengalaxea.github.io/G0/)
[![Paper](https://img.shields.io/badge/Paper-8A2BE2?style=for-the-badge&logo=arxiv)](https://github.com/OpenGalaxea/G0/blob/main/Galaxea_G0_report.pdf)
[![Videos](https://img.shields.io/badge/Videos-FF0000?style=for-the-badge&logo=youtube)](https://opengalaxea.github.io/G0/)
[![Visualizer](https://img.shields.io/badge/Visualizer-FF8C00?style=for-the-badge&logo=airplayvideo)](https://opengalaxea.github.io/G0/visualizer/index.html)
[![Modelscope](https://img.shields.io/badge/Modelscope-1890FF?style=for-the-badge&logo=alibabacloud)](https://www.modelscope.cn/organization/Galaxea)

## Key Features
- **500+ hours** of real-world mobile manipulation data.
- All data collected with **one uniform robotic embodiment** for consistency.
- Fine-grained **subtask language annotations**.
- Covers **residential**, **kitchen**, **retail**, and **office** settings.
- Dataset provided in **RLDS** format.

## Dataset Structure
**For convenience, the 500 hours of data are divided into four equal parts by time. We also provide a small sample dataset for a quick start.**
```
rlds
β”œβ”€β”€ part1_r1_lite
β”‚   β”œβ”€β”€ 1.0.0
β”‚   β”‚   β”œβ”€β”€ dataset_info.json
β”‚   β”‚   β”œβ”€β”€ features.json
β”‚   β”‚   β”œβ”€β”€ merge_dataset_large_r1_lite-train.tfrecord-00000-of-02048
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   β”‚   β”œβ”€β”€ merge_dataset_large_r1_lite-train.tfrecord-02047-of-02048
β”œβ”€β”€ part2_r1_lite
β”œβ”€β”€ part3_r1_lite
β”œβ”€β”€ part4_r1_lite
β”œβ”€β”€ sample
β”‚   β”œβ”€β”€ 1.0.0
β”‚   β”‚   β”œβ”€β”€ merge_dataset_large_r1_lite-train.tfrecord-00000-of-01024
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   β”‚   β”œβ”€β”€ merge_dataset_large_r1_lite-train.tfrecord-01023-of-01024
```
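
If you only need one part (or the `sample` split), you can point `tensorflow_datasets` directly at that part's versioned directory. The snippet below is a minimal sketch; the local path `./rlds` is an assumption for wherever you downloaded the data.

```python
import tensorflow_datasets as tfds

# Assumes the dataset has been downloaded locally under ./rlds, so each part
# contains a versioned folder with dataset_info.json and features.json.
builder = tfds.builder_from_directory("./rlds/part1_r1_lite/1.0.0")
print(builder.info)  # feature spec and split sizes

# Build a tf.data.Dataset over the training split of this part.
ds = builder.as_dataset(split="train")
```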

## Dataset Schema

```
OpenGalaxeaDataset = {
    "episode_metadata": {
        "file_path": tf.Text,  # path to the original data file
    },
    "steps": {
        "is_first": tf.Scalar(dtype=bool),  # true on the first step of the episode
        "is_last": tf.Scalar(dtype=bool),   # true on the last step of the episode

        "language_instruction": tf.Text,  # language instruction, format: "high level"@"low level chinese"@"low level english"
        "observation": {
            "base_velocity": tf.Tensor(3, dtype=float32),        # robot base velocity
            "gripper_state_left": tf.Tensor(1, dtype=float32),   # left gripper state, 0 = closed, 100 = open
            "gripper_state_right": tf.Tensor(1, dtype=float32),  # right gripper state, 0 = closed, 100 = open
            "depth_camera_wrist_left": tf.Tensor(224, 224, 1, dtype=uint16),   # left wrist camera depth, unit: mm
            "depth_camera_wrist_right": tf.Tensor(224, 224, 1, dtype=uint16),  # right wrist camera depth, unit: mm
            "image_camera_head": tf.Tensor(224, 224, 3, dtype=uint8),          # head camera RGB
            "image_camera_wrist_left": tf.Tensor(224, 224, 3, dtype=uint8),    # left wrist camera RGB
            "image_camera_wrist_right": tf.Tensor(224, 224, 3, dtype=uint8),   # right wrist camera RGB
            "joint_position_arm_left": tf.Tensor(6, dtype=float32),   # joint positions of the left arm
            "joint_position_arm_right": tf.Tensor(6, dtype=float32),  # joint positions of the right arm
            "joint_position_torso": tf.Tensor(4, dtype=float32),      # joint positions of the torso
            "joint_velocity_arm_left": tf.Tensor(6, dtype=float32),   # joint velocities of the left arm
            "joint_velocity_arm_right": tf.Tensor(6, dtype=float32),  # joint velocities of the right arm
            "last_action": tf.Tensor(26, dtype=float32),  # action executed at the previous step
        },
        # action dimensions:
        # 26 = 6 (left arm) + 1 (left gripper) + 6 (right arm) + 1 (right gripper) + 6 (torso) + 6 (base)
        "action": tf.Tensor(26, dtype=float32),  # robot action; each arm chunk is [6x joint velocities, 1x gripper position]
        "segment_idx": tf.Scalar(dtype=int32),   # index of the segment within the episode
        "variant_idx": tf.Scalar(dtype=int32),
    },
}
```
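
The 26-dimensional `action` vector concatenates the components in the order given in the comment above, and `language_instruction` packs the three annotation levels into one `@`-separated string. The helper below is a small sketch of how a step could be unpacked; the slice indices follow the dimension breakdown above, and the chunk names on the left-hand side are our own.

```python
import numpy as np

def unpack_step(step):
    """Split one RLDS step into named action chunks and instruction levels.

    Assumes the 26-D action layout from the schema comment:
    6 (left arm) + 1 (left gripper) + 6 (right arm) + 1 (right gripper)
    + 6 (torso) + 6 (base).
    """
    action = np.asarray(step["action"], dtype=np.float32)
    chunks = {
        "left_arm": action[0:6],
        "left_gripper": action[6:7],
        "right_arm": action[7:13],
        "right_gripper": action[13:14],
        "torso": action[14:20],
        "base": action[20:26],
    }

    # Instruction format: "high level"@"low level chinese"@"low level english"
    raw = step["language_instruction"]
    text = raw.numpy().decode("utf-8") if hasattr(raw, "numpy") else str(raw)
    high_level, low_level_zh, low_level_en = (text.split("@") + ["", ""])[:3]

    return chunks, high_level, low_level_zh, low_level_en
```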

## Example

We provide an example script that loads our RLDS dataset and exports the head-camera frames of a few episodes as mp4 videos.

```python
import os

import imageio
import tensorflow_datasets as tfds
import tyro
from tqdm import tqdm


def main(
    dataset_name: str,
    data_dir: str,
    output_dir: str = "extracted_videos",
    num_trajs: int = 10,
):
    # Load the prepared RLDS dataset from the local data directory.
    ds = tfds.load(dataset_name, split='train', data_dir=data_dir)
    print(f"Successfully loaded dataset: {dataset_name}")

    os.makedirs(output_dir, exist_ok=True)
    print(f"Videos will be saved to: {output_dir}")

    for i, episode in enumerate(tqdm(ds.take(num_trajs), total=num_trajs, desc="Exporting videos")):
        head_frames = []
        instruction = ""

        for step in episode['steps']:
            # Collect the head-camera RGB frame of every step.
            head_rgb_image = step['observation']['image_camera_head'].numpy()
            head_frames.append(head_rgb_image)
            # Keep the episode's language instruction for logging.
            instruction = step['language_instruction'].numpy().decode('utf-8')

        video_path = os.path.join(output_dir, f"traj_{i}_head_rgb.mp4")
        try:
            imageio.mimsave(video_path, head_frames, fps=15)
            print(f"Saved video for episode {i} to {video_path} with instruction: '{instruction}'")
        except Exception as e:
            print(f"Error saving video for episode {i}: {e}")


if __name__ == '__main__':
    tyro.cli(main)
```
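
A possible invocation, assuming the script is saved as `export_head_videos.py` and the data lives under `./rlds` (both names are illustrative, and the flag spelling follows tyro's default conversion of underscores to dashes): `python export_head_videos.py --dataset-name part1_r1_lite --data-dir ./rlds --num-trajs 5`.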

## πŸ“œ Citation

All the data and code within this repo are licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). If you use our dataset or models, please cite:

```bibtex
@article{galaxea2025,
  title={Galaxea G0: Open-World Dataset and Dual-System VLA Model},
  author={Galaxea Team},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```