---
task_categories:
  - robotics
size_categories:
  - 100K<n<1M
---

# Datasets for OneTwoVLA

[Project Page] | [Paper] | [Code]

This repository provides datasets collected with UMI (Universal Manipulation Interface), converted into the LeRobot data format, along with the synthetic vision-language data used in the paper *OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning*.

The robot data covers two main tasks:

- **Cocktail**
- **Open-World Visual Grounding**
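
To work with the data locally, you can pull the whole repository (or individual folders) with `huggingface_hub`. The sketch below is a minimal, non-authoritative example; the `repo_id` is inferred from this page and may need adjusting.

```python
# Minimal sketch: download this dataset repository with huggingface_hub.
# The repo_id is an assumption inferred from the page; adjust if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Richard-Nai/onetwovla-dataset",  # assumed repository id
    repo_type="dataset",
    # allow_patterns=["cocktail/*"],  # optionally restrict to one folder
)
print("Downloaded to:", local_dir)
```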

## Dataset Folders

- `cocktail`
  Contains 299 real-world demonstrations collected in the lab, each with reasoning annotations.

- `wild_move_to`
  Includes both real robot data and synthetic vision-language data for the Open-World Visual Grounding task. The robot data consists of 932 demonstrations across 16 environments, featuring about 180 unique objects. The vision-language portion contains 6,000 synthetic images, augmented with fisheye distortion and/or a composited robot gripper. Reasoning annotations are available here.

- `wild_move_to_no_vl`
  The same as `wild_move_to`, but without the synthetic vision-language data.

- `open_world_visual_grounding_vl_data.tar.gz`
  Contains the 6,000 synthetic images (`{0...5999}.png`) and their corresponding reasoning annotation files (`{0...5999}_gemini.txt`); a loading sketch follows this list.
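
The archive's internal layout is not documented here beyond the file-naming pattern above, so the following is only a sketch: it extracts the archive and pairs each image with its annotation by file name. The output directory and the flat matching by name anywhere in the extracted tree are assumptions.

```python
# Minimal sketch: extract the vision-language archive and pair each synthetic
# image ({i}.png) with its reasoning annotation ({i}_gemini.txt).
import tarfile
from pathlib import Path

archive = Path("open_world_visual_grounding_vl_data.tar.gz")
out_dir = Path("vl_data")  # hypothetical extraction target

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)

# Match images to annotations purely by file name, wherever they land in the tree.
pairs = []
for img_path in sorted(out_dir.rglob("*.png")):
    ann_path = img_path.with_name(f"{img_path.stem}_gemini.txt")
    if ann_path.exists():
        pairs.append((img_path, ann_path.read_text()))

print(f"Loaded {len(pairs)} image/annotation pairs")
```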