---
datasets: zaringleb/so101_binary_cube_7
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---
# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
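
To make the chunking behaviour concrete, here is a minimal Python sketch of loading this policy and querying it for actions. The import path, observation keys, and tensor shapes are assumptions (they depend on your LeRobot version and on the camera/robot configuration stored in the checkpoint), so treat it as an illustration rather than a drop-in script:

```python
# Illustrative sketch only. The import path is an assumption (older LeRobot releases
# use lerobot.common.policies.act.modeling_act), and the observation keys/shapes must
# match the input features recorded in this checkpoint's config.
import torch
from lerobot.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("<hf_user>/<desired_policy_repo_id>")  # placeholder repo id
policy.eval()
policy.reset()  # clear the internal action queue at the start of each episode

# Dummy observation batch; replace with real robot readings.
observation = {
    "observation.state": torch.zeros(1, 6),                   # assumed 6 joint values
    "observation.images.front": torch.zeros(1, 3, 480, 640),  # assumed single camera, [0, 1] floats
}

# ACT predicts a chunk of future actions in one forward pass and returns them
# one per call from an internal queue, so select_action is called every control step.
with torch.no_grad():
    action = policy.select_action(observation)
print(action.shape)  # (1, action_dim)
```

On a real robot, the `lerobot.record` command shown below runs this observation/action loop for you.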

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train the policy and run inference/evaluation:

### Train from scratch

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
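
Before recording evaluation episodes, you can sanity-check a trained checkpoint by loading it directly from the output directory. The `checkpoints/last/pretrained_model` sub-path below is an assumption based on LeRobot's usual checkpoint layout; adjust it to whatever your run actually wrote:

```python
# Sketch only: the checkpoint sub-path and import path are assumptions; replace the
# placeholder with your actual output directory before running.
from lerobot.policies.act.modeling_act import ACTPolicy

checkpoint_dir = "outputs/train/<desired_policy_repo_id>/checkpoints/last/pretrained_model"
policy = ACTPolicy.from_pretrained(checkpoint_dir)

# chunk_size = how many future actions ACT predicts per forward pass;
# n_action_steps = how many of them are executed before re-planning.
print(policy.config.chunk_size, policy.config.n_action_steps)
```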

### Evaluate the policy / run inference

```bash
python -m lerobot.record \
  --robot.type=so101_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the evaluation dataset repo id with **eval\_**, and point `--policy.path` at either a local checkpoint directory or a Hub repo id.

---

## Model Details

* **License:** apache-2.0