---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B
- google/siglip2-so400m-patch14-384
library_name: transformers
tags:
- molmoact
- molmo
- olmo
- reasoning
- vla
- robotics
- manipulation
---

# MolmoAct 7B-D Pretrain RT-1

MolmoAct is a fully open-source action reasoning model for robotic manipulation developed by the Allen Institute for AI. MolmoAct is trained on a subset of OXE and the MolmoAct Dataset, which contains 10k high-quality trajectories of a single-arm Franka robot performing 93 unique manipulation tasks in both home and tabletop environments. It achieves state-of-the-art performance among vision-language-action models on multiple benchmarks while being fully open-source. You can find all models in the MolmoAct family [here](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7).

**Learn more about MolmoAct** in our announcement [blog post](https://allenai.org/blog/molmoact) or the [paper](https://huggingface.co/allenai/MolmoAct-7B-D-0812/blob/main/MolmoAct_Technical_Report.pdf).

**MolmoAct 7B-D Pretrain RT-1** is based on [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) and uses [SigLIP 2](https://huggingface.co/google/siglip2-so400m-patch14-384) as the vision backbone, which is initialized using Molmo's pre-training approach. It is first pre-trained on MolmoAct's [Pre-training Mixture](https://huggingface.co/datasets/allenai/MolmoAct-Pretraining-Mixture) and then fine-tuned on RT-1 data using the same configuration as mid-training.

This checkpoint is a **preview** of the MolmoAct release. All artifacts used in creating MolmoAct (data, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.

Quick links:
- 📂 [All Models](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7)
- 📂 [All Data](https://huggingface.co/collections/allenai/molmoact-data-mixture-6897e583e13b6c2cf3ea2b80)
- 📃 [Paper](https://huggingface.co/allenai/MolmoAct-7B-D-0812/blob/main/MolmoAct_Technical_Report.pdf)
- 🎥 [Blog Post](https://allenai.org/blog/molmoact)


## Quick Start

To run MolmoAct, first install dependencies:

```bash
pip install einops torchvision accelerate
pip install transformers==4.52
```
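
Since the example below was written against a pinned `transformers` release, it can help to confirm the installed version before running it (a quick, optional check):

```python
import transformers

# the Quick Start below assumes the pinned release installed above
print(transformers.__version__)  # expected: 4.52.x
```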

Then, follow these steps:

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
from PIL import Image
import requests
from io import BytesIO

ckpt = "allenai/MolmoAct-7B-D-Pretrain-0812"

# load the processor
processor = AutoProcessor.from_pretrained(
    ckpt,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
    padding_side="left",
)

# load the model
model = AutoModelForImageTextToText.from_pretrained(
    ckpt,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# task instruction
instruction = "pick orange can"

# strictly follow this reasoning prompt
prompt = (
    f"The task is {instruction}. "
    "What is the action that the robot should take. "
    f"To figure out the action that the robot should take to {instruction}, "
    "let's think through it step by step. "
    "First, what is the depth map for this image? "
    "Second, what is the trajectory of the end effector? "
    "Based on the depth map of the image and the trajectory of the end effector, "
    "what is the action that the robot should take?"
)

# apply chat template
text = processor.apply_chat_template(
    [
        {
            "role": "user",
            "content": [dict(type="text", text=prompt)]
        }
    ],
    tokenize=False,
    add_generation_prompt=True,
)

# image observation
url = "https://huggingface.co/allenai/MolmoAct-7B-D-Pretrain-0812/resolve/main/example.png"
r = requests.get(url, headers={"User-Agent": "python-requests"}, timeout=30)
r.raise_for_status()
img = Image.open(BytesIO(r.content)).convert("RGB")
imgs = [img]

# process the image and text
inputs = processor(
    images=[imgs],
    text=text,
    padding=True,
    return_tensors="pt",
)

# move inputs to the correct device
inputs = {k: v.to(model.device) for k, v in inputs.items()}

# generate output
with torch.inference_mode():
    with torch.autocast("cuda", enabled=True, dtype=torch.bfloat16):
        generated_ids = model.generate(**inputs, max_new_tokens=256)

# only get generated tokens; decode them to text
generated_tokens = generated_ids[:, inputs['input_ids'].size(1):]
generated_text = processor.batch_decode(generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

# print the generated text
print(f"generated text: {generated_text}")

# >>> The depth map of the image is ... The trajectory of the end effector is ...
#     Based on these information, the action that the robot should take is ...

# parse out all depth perception tokens
depth = model.parse_depth(generated_text)
print(f"generated depth perception tokens: {depth}")

# >>> [ "<DEPTH_START><DEPTH_1><DEPTH_2>...<DEPTH_END>" ]

# parse out all visual reasoning traces
trace = model.parse_trace(generated_text)
print(f"generated visual reasoning trace: {trace}")

# >>> [ [[242, 115], [140, 77], [94, 58], [140, 44], [153, 26]] ]

# parse out all actions, unnormalizing with key of fractal20220817_data
action = model.parse_action(generated_text, unnorm_key="fractal20220817_data")
print(f"generated action: {action}")

# >>> [ [0.0732076061122558, 0.08228153779226191, -0.027760173818644346,
#        0.15932856272248652, -0.09686601126895233, 0.043916773912953344,
#        0.996078431372549] ]
```

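For downstream control, each parsed action is a flat list of values. The snippet below is a minimal sketch of unpacking one such vector, assuming the 7-dimensional end-effector delta convention commonly used for RT-1-style (`fractal20220817_data`) actions: translation deltas, rotation deltas, and a gripper command. The helper `unpack_action` is illustrative only; verify the layout against the dataset's action specification before driving hardware.

```python
# illustrative helper, not part of the MolmoAct API
# assumed layout: [dx, dy, dz, droll, dpitch, dyaw, gripper]
def unpack_action(a):
    assert len(a) == 7, f"expected a 7-D action, got {len(a)} values"
    delta_xyz = a[0:3]   # end-effector translation deltas
    delta_rpy = a[3:6]   # end-effector rotation deltas
    gripper = a[6]       # gripper command
    return delta_xyz, delta_rpy, gripper

# `action` comes from model.parse_action(...) in the quick start above
for step in action:
    delta_xyz, delta_rpy, gripper = unpack_action(step)
    print(f"translation={delta_xyz}, rotation={delta_rpy}, gripper={gripper}")
```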
## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).

## Model and Hardware Safety

MolmoAct offers the ability to inspect a visual trace of its intended actions in space before they occur, allowing users to ensure safe behavior by proactively auditing and adjusting the actions of any hardware acting under the model’s instructions. MolmoAct’s action space is bounded within the data provided, and compliance is built into the model to prevent excessive force when resistance is detected. Please follow the hardware manufacturer’s guidelines when using this model with a robot and perform all operations in a safely configured environment.
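
As a concrete way to do this auditing, the sketch below overlays the visual reasoning trace parsed in the Quick Start onto the observation image so the intended motion can be reviewed before any command is sent to a robot. It assumes the parsed trace points are (x, y) pixel coordinates in the input image; the drawing helper and output filename are illustrative and not part of the MolmoAct API.

```python
from PIL import ImageDraw

# illustrative helper (not part of the MolmoAct API): draw the predicted
# end-effector trace on the observation image for review before execution
def draw_trace(image, trace_points, color="red", radius=4):
    vis = image.copy()
    draw = ImageDraw.Draw(vis)
    # connect consecutive waypoints with line segments
    draw.line([tuple(p) for p in trace_points], fill=color, width=3)
    # mark each waypoint
    for x, y in trace_points:
        draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill=color)
    return vis

# `img` and `trace` come from the Quick Start above; assumes trace points
# are (x, y) pixel coordinates in the input image
overlay = draw_trace(img, trace[0])
overlay.save("trace_preview.png")  # inspect before sending actions to the robot
```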