Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

Error code: `DatasetGenerationCastError`

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 13 new columns ({'features', 'chunks_size', 'fps', 'video_path', 'codebase_version', 'robot_type', 'total_episodes', 'total_videos', 'splits', 'total_chunks', 'total_tasks', 'total_frames', 'data_path'}) and 3 missing columns ({'tasks', 'length', 'episode_index'}). This happened while the json dataset builder was generating data using hf://datasets/StanleyChueh/franka_lerobot_one/meta/info.json (at revision 390f193f31affa2c53ff09fb2527b55d6247b413). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback (the nested `features` schema is truncated in the original log):

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
  codebase_version: string
  robot_type: string
  total_episodes: int64
  total_frames: int64
  total_tasks: int64
  total_videos: int64
  total_chunks: int64
  chunks_size: int64
  fps: int64
  splits: struct<train: string>
  data_path: string
  video_path: string
  features: struct<observation.state: ..., observation.images.image: ...,
                   observation.images.image_additional_view: ..., action: ...,
                   timestamp: ..., frame_index: ..., episode_index: ...,
                   index: ..., task_index: ...>
to
  {'episode_index': Value(dtype='int64', id=None),
   'tasks': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
   'length': Value(dtype='int64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1436, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1053, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 925, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset (same cast-error message as above)
```
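The column mismatch the builder reports can be reproduced with a minimal sketch: the json builder infers its schema from the per-episode records (`episode_index`, `tasks`, `length`) and then refuses to cast `meta/info.json`, whose top-level keys are entirely different. The two key sets below are copied from the error message itself.

```python
# Minimal sketch of the schema check that fails: compare the columns the
# builder expects (per-episode records) with the keys found in meta/info.json.
expected = {"episode_index", "tasks", "length"}

info_json_columns = {
    "codebase_version", "robot_type", "total_episodes", "total_frames",
    "total_tasks", "total_videos", "total_chunks", "chunks_size", "fps",
    "splits", "data_path", "video_path", "features",
}

new_columns = info_json_columns - expected      # keys the schema has no column for
missing_columns = expected - info_json_columns  # schema columns absent from info.json

print(len(new_columns), len(missing_columns))   # 13 and 3, matching the error
```

This is why the error talks about "13 new columns" and "3 missing columns": `info.json` is dataset-level metadata, not a table of episodes, so it can never be cast to the episode schema.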
Need help making the dataset viewer work? Review how to configure the dataset viewer, and open a discussion for direct support.
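Following the error's own suggestion, one way to restore the viewer is a manual configuration in the dataset card's YAML header that points each config at files sharing a single schema. A sketch, assuming the LeRobot layout where per-episode records live in `meta/episodes.jsonl` and task labels in `meta/tasks.jsonl` (both file names are assumptions inferred from the expected `episode_index`/`tasks`/`length` columns, not confirmed from the repo):

```yaml
# Sketch of a README.md YAML header; paths assume the LeRobot meta/ layout.
configs:
  - config_name: episodes
    data_files: meta/episodes.jsonl
  - config_name: tasks
    data_files: meta/tasks.jsonl
```

Keeping `meta/info.json` out of every config avoids the cast error, since it is the only file with a different set of columns.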
| episode_index (int64) | tasks (sequence) | length (int64) |
|---|---|---|
| 0 | ["Play with the kitchen."] | 42 |
| 1 | ["Play with the kitchen."] | 28 |
| 2 | ["Play with the kitchen."] | 121 |
| 3 | ["Play with the kitchen."] | 95 |
| 4 | ["Play with the kitchen."] | 49 |
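The preview rows above are ordinary JSON-lines episode records, so they can be parsed with the standard library; summing `length` gives the frame count covered by the preview. The records below are transcribed from the table (a stand-in for reading the repo's episode file directly):

```python
import json

# Episode records transcribed from the preview table above.
lines = [
    '{"episode_index": 0, "tasks": ["Play with the kitchen."], "length": 42}',
    '{"episode_index": 1, "tasks": ["Play with the kitchen."], "length": 28}',
    '{"episode_index": 2, "tasks": ["Play with the kitchen."], "length": 121}',
    '{"episode_index": 3, "tasks": ["Play with the kitchen."], "length": 95}',
    '{"episode_index": 4, "tasks": ["Play with the kitchen."], "length": 49}',
]
episodes = [json.loads(line) for line in lines]

# Total frames across the previewed episodes.
total_frames = sum(e["length"] for e in episodes)
print(total_frames)  # 335
```

Note that every record shares the same schema, which is exactly the property `meta/info.json` breaks for the viewer.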
No dataset card yet. Downloads last month: 2.