Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'stats'}) and 2 missing columns ({'length', 'tasks'}).

This happened while the json dataset builder was generating data using

hf://datasets/csuvla/piperdata/meta/episodes_stats.jsonl (at revision 04e410f89c2c7c7010b85675a061e8aec9a42d6f)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              episode_index: int64
              stats: struct<observation.state: struct<min: list<item: double>, max: list<item: double>, mean: list<item: double>, std: list<item: double>, count: list<item: int64>>, action: struct<min: list<item: double>, max: list<item: double>, mean: list<item: double>, std: list<item: double>, count: list<item: int64>>, observation.images.cam_high: struct<min: list<item: list<item: list<item: double>>>, max: list<item: list<item: list<item: double>>>, mean: list<item: list<item: list<item: double>>>, std: list<item: list<item: list<item: double>>>, count: list<item: int64>>, observation.images.cam_low: struct<min: list<item: list<item: list<item: double>>>, max: list<item: list<item: list<item: double>>>, mean: list<item: list<item: list<item: double>>>, std: list<item: list<item: list<item: double>>>, count: list<item: int64>>, observation.images.cam_left_wrist: struct<min: list<item: list<item: list<item: double>>>, max: list<item: list<item: list<item: double>>>, mean: list<item: list<item: list<item: double>>>, std: list<item: list<item: list<item: double>>>, count: list<item: int64>>, observation.images.cam_right_wrist: struct<min: list<item: list<item: list<item: double>>>, max: list<item: list<item: list<item: double>>>, mean: list<item: list<item: list<item: double>>>, std: list<item: list<item: list<item: double>>>, count: list<item: int64>>, timestamp: struct<min: list<item: double>, max: list<item: double>, mean: list<item: double>, std: list<item: double
              ...
              index: struct<min: list<item: int64>, max: list<item: int64>, mean: list<item: double>, std: list<item: double>, count: list<item: int64>>
                    child 0, min: list<item: int64>
                        child 0, item: int64
                    child 1, max: list<item: int64>
                        child 0, item: int64
                    child 2, mean: list<item: double>
                        child 0, item: double
                    child 3, std: list<item: double>
                        child 0, item: double
                    child 4, count: list<item: int64>
                        child 0, item: int64
                child 9, index: struct<min: list<item: int64>, max: list<item: int64>, mean: list<item: double>, std: list<item: double>, count: list<item: int64>>
                    child 0, min: list<item: int64>
                        child 0, item: int64
                    child 1, max: list<item: int64>
                        child 0, item: int64
                    child 2, mean: list<item: double>
                        child 0, item: double
                    child 3, std: list<item: double>
                        child 0, item: double
                    child 4, count: list<item: int64>
                        child 0, item: int64
                child 10, task_index: struct<min: list<item: int64>, max: list<item: int64>, mean: list<item: double>, std: list<item: double>, count: list<item: int64>>
                    child 0, min: list<item: int64>
                        child 0, item: int64
                    child 1, max: list<item: int64>
                        child 0, item: int64
                    child 2, mean: list<item: double>
                        child 0, item: double
                    child 3, std: list<item: double>
                        child 0, item: double
                    child 4, count: list<item: int64>
                        child 0, item: int64
              to
              {'episode_index': Value(dtype='int64', id=None), 'tasks': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'length': Value(dtype='int64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1433, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 925, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 1 new columns ({'stats'}) and 2 missing columns ({'length', 'tasks'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/csuvla/piperdata/meta/episodes_stats.jsonl (at revision 04e410f89c2c7c7010b85675a061e8aec9a42d6f)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
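
Until the repository separates the mismatched metadata files into distinct configurations, a practical workaround is to load each JSONL file on its own so the two schemas ({episode_index, tasks, length} vs. {episode_index, stats}) are never cast against each other. The sketch below is a minimal, unofficial example: it assumes a companion file `meta/episodes.jsonl` exists alongside `meta/episodes_stats.jsonl` and holds the records shown in the preview (only the stats file is actually named in the error above).

```python
# Minimal sketch, not the dataset's official loading code.
# Assumption: meta/episodes.jsonl holds the episode_index / tasks / length
# records shown in the preview; only meta/episodes_stats.jsonl is named in
# the error above.
from datasets import load_dataset

# Loading each file through the generic "json" builder keeps the two schemas apart,
# so no cross-file cast (and thus no DatasetGenerationCastError) is attempted.
episodes = load_dataset(
    "json",
    data_files="hf://datasets/csuvla/piperdata/meta/episodes.jsonl",  # assumed path
    split="train",
)
episodes_stats = load_dataset(
    "json",
    data_files="hf://datasets/csuvla/piperdata/meta/episodes_stats.jsonl",
    split="train",
)

print(episodes[0])                  # e.g. {'episode_index': 0, 'tasks': [...], 'length': 752}
print(episodes_stats.column_names)  # expected: ['episode_index', 'stats']
```

The permanent fix suggested in the message is the same idea applied on the Hub side: per the linked manual-configuration docs, declare one configuration per metadata file in the dataset card's YAML (a `configs:` list with a `config_name` and `data_files` entry for each), so the viewer never merges `episodes_stats.jsonl` into the `tasks`/`length` schema.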

The task labels are Chinese; "五合一数字5-517mm" reads roughly as "five-in-one digit 5, 517 mm". The trailing all-null rows come from meta/episodes_stats.jsonl, which has no tasks or length columns.

| episode_index (int64) | tasks (sequence) | length (int64) |
|---|---|---|
| 0 | ["五合一数字5-517mm"] | 752 |
| 1 | ["五合一数字5-517mm"] | 1,009 |
| 2 | ["五合一数字5-517mm"] | 849 |
| 3 | ["五合一数字5-517mm"] | 1,103 |
| 4 | ["五合一数字5-517mm"] | 745 |
| 5 | ["五合一数字5-517mm"] | 623 |
| 6 | ["五合一数字5-517mm"] | 562 |
| 7 | ["五合一数字5-517mm"] | 741 |
| 8 | ["五合一数字5-517mm"] | 609 |
| 9 | ["五合一数字5-517mm"] | 757 |
| 10 | ["五合一数字5-517mm"] | 725 |
| 11 | ["五合一数字5-517mm"] | 880 |
| 12 | ["五合一数字5-517mm"] | 782 |
| 13 | ["五合一数字5-517mm"] | 753 |
| 14 | ["五合一数字5-517mm"] | 653 |
| 15 | ["五合一数字5-517mm"] | 630 |
| 16 | ["五合一数字5-517mm"] | 705 |
| 17 | ["五合一数字5-517mm"] | 851 |
| 18 | ["五合一数字5-517mm"] | 522 |
| 19 | ["五合一数字5-517mm"] | 770 |
| 20 | ["五合一数字5-517mm"] | 584 |
| 21 | ["五合一数字5-517mm"] | 646 |
| 22 | ["五合一数字5-517mm"] | 761 |
| 23 | ["五合一数字5-517mm"] | 510 |
| 24 | ["五合一数字5-517mm"] | 868 |
| 25 | ["五合一数字5-517mm"] | 588 |
| 26 | ["五合一数字5-517mm"] | 661 |
| 27 | ["五合一数字5-517mm"] | 562 |
| 28 | ["五合一数字5-517mm"] | 613 |
| 29 | ["五合一数字5-517mm"] | 581 |
| 30 | ["五合一数字5-517mm"] | 516 |
| 31 | ["五合一数字5-517mm"] | 668 |
| 32 | ["五合一数字5-517mm"] | 696 |
| 33 | ["五合一数字5-517mm"] | 938 |
| 34 | ["五合一数字5-517mm"] | 547 |
| 35 | ["五合一数字5-517mm"] | 877 |
| 36 | ["五合一数字5-517mm"] | 592 |
| 37 | ["五合一数字5-517mm"] | 590 |
| 38 | ["五合一数字5-517mm"] | 674 |
| 39 | ["五合一数字5-517mm"] | 588 |
| 40 | ["五合一数字5-517mm"] | 658 |
| 41 | ["五合一数字5-517mm"] | 667 |
| 42 | ["五合一数字5-517mm"] | 768 |
| 43 | ["五合一数字5-517mm"] | 803 |
| 44 | ["五合一数字5-517mm"] | 933 |
| 45 | ["五合一数字5-517mm"] | 843 |
| 46 | ["五合一数字5-517mm"] | 498 |
| 47 | ["五合一数字5-517mm"] | 765 |
| 48 | ["五合一数字5-517mm"] | 616 |
| 49 | ["五合一数字5-517mm"] | 545 |
| 50 | ["五合一数字6-517mm"] | 637 |
| 51 | ["五合一数字6-517mm"] | 1,322 |
| 52 | ["五合一数字6-517mm"] | 864 |
| 53 | ["五合一数字6-517mm"] | 492 |
| 54 | ["五合一数字6-517mm"] | 854 |
| 55 | ["五合一数字6-517mm"] | 845 |
| 56 | ["五合一数字6-517mm"] | 771 |
| 57 | ["五合一数字6-517mm"] | 1,003 |
| 58 | ["五合一数字6-517mm"] | 751 |
| 59 | ["五合一数字6-517mm"] | 671 |
| 60 | ["五合一数字6-517mm"] | 602 |
| 61 | ["五合一数字6-517mm"] | 735 |
| 62 | ["五合一数字6-517mm"] | 614 |
| 63 | ["五合一数字6-517mm"] | 673 |
| 64 | ["五合一数字6-517mm"] | 706 |
| 65 | ["五合一数字6-517mm"] | 591 |
| 66 | ["五合一数字6-517mm"] | 725 |
| 67 | ["五合一数字6-517mm"] | 591 |
| 68 | ["五合一数字6-517mm"] | 688 |
| 69 | ["五合一数字6-517mm"] | 572 |
| 70 | ["五合一数字6-517mm"] | 841 |
| 0 | null | null |
| 1 | null | null |
| 2 | null | null |
| 3 | null | null |
| 4 | null | null |
| 5 | null | null |
| 6 | null | null |
| 7 | null | null |
| 8 | null | null |
| 9 | null | null |
| 10 | null | null |
| 11 | null | null |
| 12 | null | null |
| 13 | null | null |
| 14 | null | null |
| 15 | null | null |
| 16 | null | null |
| 17 | null | null |
| 18 | null | null |
| 19 | null | null |
| 20 | null | null |
| 21 | null | null |
| 22 | null | null |
| 23 | null | null |
| 24 | null | null |
| 25 | null | null |
| 26 | null | null |
| 27 | null | null |
| 28 | null | null |
End of preview.

No dataset card yet

Downloads last month: 4