Colosseum Dataset Card
This dataset contains demonstrations for training and testing imitation-learning-based policies, taken from our simulation benchmark Colosseum, which is built on RLBench. The benchmark consists of 20 tasks from the RLBench suite. For each task we implement variations, such as camera pose, that test the generalization capabilities of learned policies.
Dataset details
The training set consists of 100 demonstrations for each of the 20 tasks without any variation factor (the vanilla versions of the RLBench tasks). Each demonstration consists of frame data from the following 4 camera views:
- Front camera
- Left shoulder camera
- Right shoulder camera
- Wrist camera
For each camera view we collect the following data:
- RGB
- Depth
Note: each frame is recorded at 128 x 128 resolution.
The test set consists of 25 demonstrations per task for each variation factor applicable to that task. Each step records data from the same 4 camera views, at the same resolution.
Dataset structure
The data is distributed as tar.gz files. After downloading each tar and extracting it into a local folder, you'll get a folder structure like the following (e.g. for the task stack_cups):
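The download-and-extract step can be sketched with Python's standard tarfile module. To keep the example self-contained it first builds a tiny stand-in archive mimicking the layout described above (the real archives, e.g. for stack_cups, must be downloaded from the repo first):

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
archive_path = os.path.join(workdir, "stack_cups.tar.gz")

# Build a tiny stand-in archive (placeholder content, for illustration only).
src = os.path.join(workdir, "src", "stack_cups")
os.makedirs(src)
with open(os.path.join(src, "placeholder.txt"), "w") as f:
    f.write("demo data")
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add(src, arcname="stack_cups")

# Extract the archive into a local folder, as described above.
out_dir = os.path.join(workdir, "data")
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall(out_dir)

print(sorted(os.listdir(out_dir)))  # → ['stack_cups']
```

The same extraction call works on the real archives once downloaded; only the archive path changes.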
Each folder name contains a suffix (idx), which indicates which variation factor was applied to the simulation; e.g. idx=0 means no variations, whereas idx=2 means the Object Color variation was applied to the Manipulated Object. You can find a spreadsheet here with the idx values for each of the 20 tasks. It also lists which variations are applicable to each task, since some variations are not active for some task combinations.
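A minimal sketch of parsing the idx suffix from variation folder names. The folder names and the idx-to-variation mapping below are illustrative assumptions, not the dataset's actual naming; consult the spreadsheet for the real per-task mapping:

```python
import re

# Hypothetical folder names carrying the idx suffix described above.
folders = ["stack_cups_idx0", "stack_cups_idx2"]

# Illustrative mapping only: idx=0 is no variation; other values are
# task-specific (see the per-task spreadsheet for the real assignments).
variation_names = {0: "no variation", 2: "object color (manipulated object)"}

parsed = {}
for name in folders:
    m = re.search(r"idx(\d+)$", name)
    if m:
        parsed[name] = variation_names.get(int(m.group(1)), "see spreadsheet")

print(parsed["stack_cups_idx0"])  # → no variation
```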
The pickle file variation_description.pkl contains the language instructions for that task. Below we go deeper into the folder structure for one of the variations. Notice there is a set of folders per episode/demonstration, and in each folder there are subfolders for each camera view and type of image. There's also a pickle file low_dim_obs.pkl with the low-dimensional observations saved by RLBench. The info stored in this pickle comes from this config file in RLBench.
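Reading these pickle files uses Python's standard pickle module. The file name below comes from the card, but the contents written here are placeholders so the sketch is self-contained; with the real dataset you would simply open the file inside a variation folder:

```python
import os
import pickle
import tempfile

variation_dir = tempfile.mkdtemp()

# Placeholder instructions, for illustration only; the real file ships
# with each variation folder in the dataset.
sample_instructions = ["stack the cups", "place the cups on top of each other"]
with open(os.path.join(variation_dir, "variation_description.pkl"), "wb") as f:
    pickle.dump(sample_instructions, f)

# Loading mirrors what you would do with the real dataset files:
with open(os.path.join(variation_dir, "variation_description.pkl"), "rb") as f:
    instructions = pickle.load(f)

print(instructions[0])  # → stack the cups
```

low_dim_obs.pkl can be loaded the same way; its schema is defined by the RLBench observation config referenced above.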
Downloading the dataset using wget and a download link
- Go to the HuggingFace repo and select the files option.
- Select the task you want to get.
- Get the download link.
- Use curl or wget to get the tar file:
wget YOUR_DOWNLOAD_LINK
Resources for more information
- Paper: https://arxiv.org/abs/2402.08191
- Benchmark Code: https://github.com/robot-colosseum/robot-colosseum
- Website: https://robot-colosseum.github.io
Citation
If you find our work helpful, please consider citing our paper.
@article{pumacay2024colosseum,
title = {THE COLOSSEUM: A Benchmark for Evaluating Generalization for Robotic Manipulation},
author = {Pumacay, Wilbert and Singh, Ishika and Duan, Jiafei and Krishna, Ranjay and Thomason, Jesse and Fox, Dieter},
  journal = {arXiv preprint arXiv:2402.08191},
year = {2024},
}