---
task_categories:
- robotics
size_categories:
- 100K<n<1M
---
# Datasets for OneTwoVLA

[[Project Page]](https://one-two-vla.github.io/) | [[Paper]](https://huggingface.co/papers/2505.11917) | [[Code]](https://github.com/Fanqi-Lin/OneTwoVLA)

This repository provides datasets collected with the [UMI](https://umi-gripper.github.io/), converted into the [LeRobot](https://github.com/huggingface/lerobot.git) data format, along with synthetic vision-language data used in the paper *OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning*.

The robot data covers two main tasks:
- **Cocktail**
- **Open-World Visual Grounding**
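
Because the robot data follows the LeRobot layout, it can be browsed with the standard LeRobot loader. Below is a minimal sketch, not taken from the OneTwoVLA codebase: the import path varies between lerobot versions and the dataset identifier is a placeholder for a local copy or Hub path of one of the folders listed in the next section.

```python
# Minimal sketch: browse a converted dataset with the LeRobot loader.
# The import path matches recent lerobot releases and may differ in other
# versions; the identifier below is a placeholder, not a real repo id.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("path/or/repo-id/of/cocktail")  # placeholder

print(f"{len(dataset)} frames")
frame = dataset[0]  # a dict of tensors (camera images, robot state, actions, ...)
print(sorted(frame.keys()))
```
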
## Dataset Folders

- **cocktail**  
  Contains 299 real-world demonstrations collected in the lab, each with [reasoning annotations](cocktail/cot.json).
- **wild_move_to**  
  Includes both real robot data and synthetic vision-language data for the Open-World Visual Grounding task. The robot data consists of 932 demonstrations across 16 environments, featuring about 180 unique objects. The vision-language portion contains 6,000 synthetic images, augmented with fisheye distortion and/or a composited robot gripper. Reasoning annotations are available [here](wild_move_to/cot.json).
- **wild_move_to_no_vl**  
  The same as **wild_move_to**, but without the synthetic vision-language data.
- **open_world_visual_grounding_vl_data.tar.gz**  
  Contains the 6,000 synthetic images (`{0...5999}.png`) and the corresponding reasoning annotation files (`{0...5999}_gemini.txt`).
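
The naming scheme above makes it easy to pair each synthetic image with its reasoning annotation after unpacking the archive. A minimal sketch using only the standard library; the extraction directory `vl_data` is an arbitrary placeholder:

```python
# Minimal sketch: unpack the archive and pair each synthetic image with its
# reasoning annotation, relying only on the documented naming scheme
# ({i}.png and {i}_gemini.txt). The output directory name is arbitrary.
import tarfile
from pathlib import Path

archive = "open_world_visual_grounding_vl_data.tar.gz"
out_dir = Path("vl_data")  # placeholder extraction directory

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)

pairs = []
for img_path in sorted(out_dir.rglob("*.png")):
    txt_path = img_path.with_name(f"{img_path.stem}_gemini.txt")
    if txt_path.exists():
        pairs.append((img_path, txt_path.read_text()))

print(f"{len(pairs)} image/annotation pairs")
```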