url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-2.13B) | node_id (stringlengths 18-32) | number (int64 1-6.66k) | title (stringlengths 1-290) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (list) | comments (sequence) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) | body (stringlengths 0-228k, nullable) | reactions (dict) | timeline_url (stringlengths 67-70) | state_reason (stringclasses 3 values) | is_pull_request (bool 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6559/comments | https://api.github.com/repos/huggingface/datasets/issues/6559/events | https://github.com/huggingface/datasets/issues/6559 | 2,065,118,332 | I_kwDODunzps57FzR8 | 6,559 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | {
"login": "zhulinJulia24",
"id": 145004780,
"node_id": "U_kgDOCKSY7A",
"avatar_url": "https://avatars.githubusercontent.com/u/145004780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhulinJulia24",
"html_url": "https://github.com/zhulinJulia24",
"followers_url": "https://api.github.com/users/zhulinJulia24/followers",
"following_url": "https://api.github.com/users/zhulinJulia24/following{/other_user}",
"gists_url": "https://api.github.com/users/zhulinJulia24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhulinJulia24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhulinJulia24/subscriptions",
"organizations_url": "https://api.github.com/users/zhulinJulia24/orgs",
"repos_url": "https://api.github.com/users/zhulinJulia24/repos",
"events_url": "https://api.github.com/users/zhulinJulia24/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhulinJulia24/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n\r\nYou can load it this way instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ncache_dir = 'path/to/your/cache/directory'\r\ndataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n```",
"> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\nthanks, the command run successfully in the latest version\r\n"
] | 2024-01-04T07:04:48 | 2024-01-05T01:26:26 | 2024-01-05T01:26:25 | NONE | null | null | ### Describe the bug
The Python script is:
```python
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
The script succeeds when the `datasets` version is 2.14.7.
When using 2.16.1, this error occurs:
```
ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
```
### Steps to reproduce the bug
1. `pip install datasets==2.16.1`
2. Run the Python script:
```python
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
### Expected behavior
The dataset should load successfully in the latest version.
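For reference, the call that works on 2.16.1, as given in the first comment above (the legacy `'allenai--c4'` config name is simply dropped):

```python
from datasets import load_dataset

cache_dir = 'path/to/your/cache/directory'
# Same repro as above, minus the legacy 'allenai--c4' config name.
dataset = load_dataset(
    'allenai/c4',
    data_files={'train': 'en/c4-train.00000-of-01024.json.gz'},
    split='train',
    cache_dir=cache_dir,
)
```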
### Environment info
datasets 2.16.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6559/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6558/comments | https://api.github.com/repos/huggingface/datasets/issues/6558/events | https://github.com/huggingface/datasets/issues/6558 | 2,064,885,984 | I_kwDODunzps57E6jg | 6,558 | OSError: image file is truncated (1 bytes not processed) #28323 | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"You can add \r\n\r\n```python\r\nfrom PIL import ImageFile\r\nImageFile.LOAD_TRUNCATED_IMAGES = True\r\n```\r\n\r\nafter the imports to be able to read truncated images."
] | 2024-01-04T02:15:13 | 2024-01-15T16:01:35 | null | NONE | null | null | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number)
27 # Add the 'label' field in the dataset
---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label)
29 # View the structure of the updated dataset
30 print(labeled_dataset)
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
--> 975 {
976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
975 {
--> 976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
477 validate_fingerprint(kwargs[fingerprint_name])
479 # Call actual function
--> 481 out = func(dataset, *args, **kwargs)
483 # Update fingerprint of in-place transforms + update in-place history of transforms
485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3620 if len(self) == 0:
3621 return self
-> 3623 indices = self.map(
3624 function=partial(
3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices
3626 ),
3627 with_indices=True,
3628 features=Features({"indices": Value("uint64")}),
3629 batched=True,
3630 batch_size=batch_size,
3631 remove_columns=self.column_names,
3632 keep_in_memory=keep_in_memory,
3633 load_from_cache_file=load_from_cache_file,
3634 cache_file_name=cache_file_name,
3635 writer_batch_size=writer_batch_size,
3636 fn_kwargs=fn_kwargs,
3637 num_proc=num_proc,
3638 suffix_template=suffix_template,
3639 new_fingerprint=new_fingerprint,
3640 input_columns=input_columns,
3641 desc=desc or "Filter",
3642 )
3643 new_dataset = copy.deepcopy(self)
3644 new_dataset._indices = indices.data
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
590 self: "Dataset" = kwargs.pop("self")
591 # apply actual function
--> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
594 for dataset in datasets:
595 # Remove task templates if a column mapping of the template is no longer valid
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3087 if transformed_dataset is None:
3088 with hf_tqdm(
3089 unit=" examples",
3090 total=pbar_total,
3091 desc=desc or "Map",
3092 ) as pbar:
-> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs):
3094 if done:
3095 shards_done += 1
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
3466 indices = list(
3467 range(*(slice(i, i + batch_size).indices(shard.num_rows)))
3468 ) # Something simpler?
3469 try:
-> 3470 batch = apply_function_on_filtered_inputs(
3471 batch,
3472 indices,
3473 check_same_num_examples=len(shard.list_indexes()) > 0,
3474 offset=offset,
3475 )
3476 except NumExamplesMismatchError:
3477 raise DatasetTransformationNotAllowedError(
3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
3479 ) from None
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
3347 if with_rank:
3348 additional_args += (rank,)
-> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
3350 if isinstance(processed_inputs, LazyDict):
3351 processed_inputs = {
3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
3353 }
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs)
6209 if input_columns is None:
6210 # inputs only contains a batch of examples
6211 batch: dict = inputs[0]
-> 6212 num_examples = len(batch[next(iter(batch.keys()))])
6213 for i in range(num_examples):
6214 example = {key: batch[key][i] for key in batch}
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key)
270 value = self.data[key]
271 if key in self.keys_to_format:
--> 272 value = self.format(key)
273 self.data[key] = value
274 self.keys_to_format.remove(key)
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key)
374 def format(self, key):
--> 375 return self.formatter.format_column(self.pa_table.select([key]))
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table)
440 def format_column(self, pa_table: pa.Table) -> list:
441 column = self.python_arrow_extractor().extract_column(pa_table)
--> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
443 return column
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name)
217 def decode_column(self, column: list, column_name: str) -> list:
--> 218 return self.features.decode_column(column, column_name) if self.features else column
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id)
183 else:
184 image = PIL.Image.open(BytesIO(bytes_))
--> 185 image.load() # to avoid "Too many open files" errors
186 return image
File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self)
252 break
253 else:
--> 254 raise OSError(
255 "image file is truncated "
256 f"({len(b)} bytes not processed)"
257 )
259 b = b + s
260 n, err_code = decoder.decode(b)
OSError: image file is truncated (1 bytes not processed)
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mehul7/captioned_military_aircraft")
from transformers import AutoImageProcessor
checkpoint = "microsoft/resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
import re
from PIL import Image
import io
def contains_number(example):
    try:
        image = Image.open(io.BytesIO(example["image"]['bytes']))
        t = image_processor(images=image, return_tensors="pt")['pixel_values']
    except Exception as e:
        print(f"Error processing image: {example['text']}")
        return False
    return bool(re.search(r'\d', example['text']))

# Define a function to add the 'label' field
def add_label(example):
    lab = example['text'].split()
    temp = 'NOT'
    for item in lab:
        if str(item[-1]).isdigit():
            temp = item
            break
    example['label'] = temp
    return example
# Filter the dataset
# filtered_dataset = dataset.filter(contains_number)
# Add the 'label' field in the dataset
labeled_dataset = dataset.filter(contains_number).map(add_label)
# View the structure of the updated dataset
print(labeled_dataset)
```
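A minimal workaround based on the comment above, using Pillow's documented `LOAD_TRUNCATED_IMAGES` flag (this trades the error for silently loading partial image data):

```python
from PIL import ImageFile

# Let Pillow load whatever it can from truncated files instead of raising
# OSError during Image.load(); place this right after the imports above.
ImageFile.LOAD_TRUNCATED_IMAGES = True
```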
### Expected behavior
The `filter` and `map` calls should complete and add a `label` field to the dataset, same as: https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook
### Environment info
Kaggle notebook P100 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6558/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6557/comments | https://api.github.com/repos/huggingface/datasets/issues/6557/events | https://github.com/huggingface/datasets/pull/6557 | 2,064,341,965 | PR_kwDODunzps5jJ63z | 6,557 | Support standalone yaml | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6557). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq \r\nhello\r\nI think it should be defined in config.py\r\nDATASET_ README_ FILENAME=\"README. md\"\r\nThis can replace all \"README. md\"\r\n",
"Thanks for the feedback :) merging now",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004890 / 0.011353 (-0.006463) | 0.003535 / 0.011008 (-0.007473) | 0.062894 / 0.038508 (0.024386) | 0.029133 / 0.023109 (0.006024) | 0.242387 / 0.275898 (-0.033511) | 0.262720 / 0.323480 (-0.060760) | 0.002880 / 0.007986 (-0.005106) | 0.002674 / 0.004328 (-0.001655) | 0.048932 / 0.004250 (0.044682) | 0.041669 / 0.037052 (0.004617) | 0.255922 / 0.258489 (-0.002567) | 0.282106 / 0.293841 (-0.011734) | 0.028137 / 0.128546 (-0.100409) | 0.010620 / 0.075646 (-0.065026) | 0.207799 / 0.419271 (-0.211473) | 0.035499 / 0.043533 (-0.008034) | 0.246158 / 0.255139 (-0.008981) | 0.262671 / 0.283200 (-0.020528) | 0.017297 / 0.141683 (-0.124386) | 1.118681 / 1.452155 (-0.333474) | 1.156732 / 1.492716 (-0.335985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091670 / 0.018006 (0.073664) | 0.300327 / 0.000490 (0.299837) | 0.000212 / 0.000200 (0.000012) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018080 / 0.037411 (-0.019332) | 0.060357 / 0.014526 (0.045831) | 0.072221 / 0.176557 (-0.104336) | 0.119281 / 0.737135 (-0.617855) | 0.073861 / 0.296338 (-0.222477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289848 / 0.215209 (0.074639) | 2.845203 / 2.077655 (0.767549) | 1.531271 / 1.504120 (0.027152) | 1.366110 / 1.541195 (-0.175085) | 1.395041 / 
1.468490 (-0.073449) | 0.563353 / 4.584777 (-4.021424) | 2.389074 / 3.745712 (-1.356638) | 2.752960 / 5.269862 (-2.516901) | 1.715508 / 4.565676 (-2.850168) | 0.063063 / 0.424275 (-0.361212) | 0.004967 / 0.007607 (-0.002640) | 0.340757 / 0.226044 (0.114713) | 3.387667 / 2.268929 (1.118739) | 1.845182 / 55.444624 (-53.599442) | 1.569616 / 6.876477 (-5.306861) | 1.571393 / 2.142072 (-0.570679) | 0.643455 / 4.805227 (-4.161772) | 0.116919 / 6.500664 (-6.383745) | 0.042551 / 0.075469 (-0.032918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943761 / 1.841788 (-0.898027) | 11.481068 / 8.074308 (3.406760) | 10.422180 / 10.191392 (0.230788) | 0.132015 / 0.680424 (-0.548408) | 0.013932 / 0.534201 (-0.520268) | 0.288340 / 0.579283 (-0.290943) | 0.263695 / 0.434364 (-0.170669) | 0.324459 / 0.540337 (-0.215878) | 0.415204 / 1.386936 (-0.971732) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005042 / 0.011353 (-0.006310) | 0.003465 / 0.011008 (-0.007543) | 0.050107 / 0.038508 (0.011599) | 0.029542 / 0.023109 (0.006433) | 0.273645 / 0.275898 (-0.002253) | 0.293661 / 0.323480 (-0.029818) | 0.004099 / 0.007986 (-0.003887) | 0.002667 / 0.004328 (-0.001661) | 0.048281 / 0.004250 (0.044030) | 0.044406 / 0.037052 (0.007353) | 0.284245 / 0.258489 (0.025756) | 0.312303 / 0.293841 (0.018462) | 0.030057 / 0.128546 (-0.098489) | 0.010675 / 0.075646 (-0.064971) | 0.058404 / 0.419271 (-0.360868) | 0.051874 / 0.043533 (0.008342) | 0.273308 / 0.255139 (0.018169) | 0.289356 / 0.283200 (0.006157) | 0.018628 / 0.141683 (-0.123055) | 1.148764 / 1.452155 (-0.303391) | 1.194181 / 1.492716 (-0.298535) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091383 / 0.018006 (0.073376) | 0.300221 / 0.000490 (0.299731) | 0.000232 / 0.000200 (0.000032) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021814 / 0.037411 (-0.015597) | 0.076420 / 0.014526 (0.061894) | 0.087404 / 0.176557 (-0.089152) | 0.126184 / 0.737135 (-0.610951) | 0.089738 / 0.296338 (-0.206600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299839 / 0.215209 (0.084630) | 2.929260 / 2.077655 (0.851605) | 1.608327 / 1.504120 (0.104207) | 1.479757 / 1.541195 (-0.061437) | 1.494768 / 1.468490 (0.026278) | 0.563873 / 4.584777 (-4.020904) | 2.434442 / 3.745712 (-1.311270) | 2.641384 / 5.269862 (-2.628478) | 1.724222 / 4.565676 (-2.841454) | 0.062125 / 0.424275 (-0.362150) | 0.004994 / 0.007607 (-0.002613) | 0.350895 / 0.226044 (0.124851) | 3.448550 / 2.268929 (1.179621) | 1.928910 / 55.444624 (-53.515714) | 1.669887 / 6.876477 (-5.206590) | 1.781304 / 2.142072 (-0.360768) | 0.649301 / 4.805227 (-4.155926) | 0.116255 / 6.500664 (-6.384409) | 0.040947 / 0.075469 (-0.034522) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977537 / 1.841788 (-0.864251) | 12.119913 / 8.074308 (4.045605) | 10.874078 / 10.191392 (0.682686) | 0.130174 / 0.680424 (-0.550250) | 0.016176 / 0.534201 (-0.518025) | 0.287967 / 0.579283 (-0.291316) | 0.280591 / 0.434364 (-0.153773) | 0.324332 / 0.540337 (-0.216005) | 0.419479 / 1.386936 (-0.967457) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d6d16117a30ba345b0236407975f701c5b288d4 \"CML watermark\")\n"
] | 2024-01-03T16:47:35 | 2024-01-11T17:59:51 | 2024-01-11T17:53:42 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6557",
"html_url": "https://github.com/huggingface/datasets/pull/6557",
"diff_url": "https://github.com/huggingface/datasets/pull/6557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6557.patch",
"merged_at": "2024-01-11T17:53:42"
} | see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6557/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6556/comments | https://api.github.com/repos/huggingface/datasets/issues/6556/events | https://github.com/huggingface/datasets/pull/6556 | 2,064,018,208 | PR_kwDODunzps5jI0nN | 6,556 | Fix imagefolder with one image | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6556). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Fixed in dataset viewer: https://huggingface.co/datasets/multimodalart/repro_1_image\r\n\r\n<img width=\"682\" alt=\"Capture dβeΜcran 2024-02-12 aΜ 22 57 08\" src=\"https://github.com/huggingface/datasets/assets/1676121/be9a8dbc-2d78-4ffc-aed4-293a7c57bc0d\">\r\n"
] | 2024-01-03T13:13:02 | 2024-02-12T21:57:34 | 2024-01-09T13:06:30 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6556",
"html_url": "https://github.com/huggingface/datasets/pull/6556",
"diff_url": "https://github.com/huggingface/datasets/pull/6556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6556.patch",
"merged_at": "2024-01-09T13:06:30"
} | A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case.
e.g. for https://huggingface.co/datasets/multimodalart/repro_1_image
I fixed this by deprioritizing metadata files in the count.
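A rough sketch of that counting heuristic; the function name, the assumed metadata filenames, and the 0.5 weight are illustrative, not the actual patch:

```python
from collections import Counter

METADATA_FILENAMES = {"metadata.csv", "metadata.jsonl"}  # assumed set

def infer_module(data_files, extension_to_module):
    # Pick the dataset type whose compatible extensions occur most often,
    # counting metadata files with a lower weight so that one image plus
    # one metadata file resolves to "imagefolder" instead of a tie.
    counts = Counter()
    for path in data_files:
        name = path.rsplit("/", 1)[-1].lower()
        ext = name.rsplit(".", 1)[-1] if "." in name else ""
        if ext in extension_to_module:
            weight = 0.5 if name in METADATA_FILENAMES else 1.0
            counts[extension_to_module[ext]] += weight
    return counts.most_common(1)[0][0] if counts else None

# infer_module(["0001.jpg", "metadata.csv"], {"jpg": "imagefolder", "csv": "csv"})
# -> "imagefolder"
```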
fix https://github.com/huggingface/datasets/issues/6545 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6556/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6555/comments | https://api.github.com/repos/huggingface/datasets/issues/6555/events | https://github.com/huggingface/datasets/pull/6555 | 2,063,841,286 | PR_kwDODunzps5jIM79 | 6,555 | Do not use Parquet exports if revision is passed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6555). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"As shared on slack, `HubDatasetModuleFactoryWithParquetExport` raises a `DatasetsServerError` already if the user tries to load another revision that the one from the parquet export. And therefore it fall backs on using `HubDatasetModuleFactoryWithScript`",
"@lhoestq I would say that although current implementation finally returns `HubDatasetModuleFactoryWithScript` as expected, with this PR we avoid the useless call to `HubDatasetModuleFactoryWithParquetExport.get_module`, so this is more optimal.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005596 / 0.011353 (-0.005757) | 0.004022 / 0.011008 (-0.006986) | 0.064041 / 0.038508 (0.025533) | 0.030683 / 0.023109 (0.007574) | 0.245236 / 0.275898 (-0.030662) | 0.269657 / 0.323480 (-0.053823) | 0.003142 / 0.007986 (-0.004844) | 0.002821 / 0.004328 (-0.001507) | 0.048774 / 0.004250 (0.044523) | 0.043771 / 0.037052 (0.006719) | 0.258202 / 0.258489 (-0.000287) | 0.288381 / 0.293841 (-0.005460) | 0.028154 / 0.128546 (-0.100392) | 0.011071 / 0.075646 (-0.064576) | 0.209836 / 0.419271 (-0.209436) | 0.035923 / 0.043533 (-0.007609) | 0.248361 / 0.255139 (-0.006777) | 0.268728 / 0.283200 (-0.014472) | 0.019982 / 0.141683 (-0.121701) | 1.172330 / 1.452155 (-0.279824) | 1.192262 / 1.492716 (-0.300455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089231 / 0.018006 (0.071225) | 0.299192 / 0.000490 (0.298702) | 0.000214 / 0.000200 (0.000014) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018358 / 0.037411 (-0.019053) | 0.062633 / 0.014526 (0.048107) | 0.076276 / 0.176557 (-0.100280) | 0.120862 / 0.737135 (-0.616274) | 0.075958 / 0.296338 (-0.220380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291575 / 0.215209 (0.076366) | 2.855908 / 2.077655 (0.778253) | 1.459891 / 1.504120 (-0.044229) | 1.374945 / 1.541195 (-0.166250) | 1.333759 / 
1.468490 (-0.134731) | 0.575428 / 4.584777 (-4.009348) | 2.414253 / 3.745712 (-1.331459) | 2.768222 / 5.269862 (-2.501639) | 1.705005 / 4.565676 (-2.860672) | 0.063406 / 0.424275 (-0.360869) | 0.004981 / 0.007607 (-0.002626) | 0.343826 / 0.226044 (0.117781) | 3.418143 / 2.268929 (1.149215) | 1.856571 / 55.444624 (-53.588053) | 1.571318 / 6.876477 (-5.305159) | 1.609897 / 2.142072 (-0.532175) | 0.646779 / 4.805227 (-4.158448) | 0.118143 / 6.500664 (-6.382521) | 0.042408 / 0.075469 (-0.033061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965091 / 1.841788 (-0.876697) | 11.569655 / 8.074308 (3.495347) | 10.587818 / 10.191392 (0.396426) | 0.128518 / 0.680424 (-0.551905) | 0.013954 / 0.534201 (-0.520247) | 0.287244 / 0.579283 (-0.292039) | 0.263755 / 0.434364 (-0.170609) | 0.321661 / 0.540337 (-0.218676) | 0.428753 / 1.386936 (-0.958183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005568 / 0.011353 (-0.005785) | 0.003755 / 0.011008 (-0.007253) | 0.049134 / 0.038508 (0.010626) | 0.032113 / 0.023109 (0.009004) | 0.276645 / 0.275898 (0.000747) | 0.299240 / 0.323480 (-0.024240) | 0.004297 / 0.007986 (-0.003689) | 0.002727 / 0.004328 (-0.001602) | 0.048420 / 0.004250 (0.044170) | 0.045070 / 0.037052 (0.008017) | 0.288597 / 0.258489 (0.030108) | 0.320824 / 0.293841 (0.026983) | 0.053293 / 0.128546 (-0.075253) | 0.011002 / 0.075646 (-0.064644) | 0.057747 / 0.419271 (-0.361524) | 0.034389 / 0.043533 (-0.009143) | 0.277914 / 0.255139 (0.022775) | 0.292919 / 0.283200 (0.009719) | 0.018252 / 0.141683 (-0.123431) | 1.187245 / 1.452155 (-0.264910) | 1.199823 / 1.492716 (-0.292893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088338 / 0.018006 (0.070332) | 0.297498 / 0.000490 (0.297008) | 0.000206 / 0.000200 (0.000006) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021445 / 0.037411 (-0.015966) | 0.075522 / 0.014526 (0.060996) | 0.086010 / 0.176557 (-0.090546) | 0.124938 / 0.737135 (-0.612197) | 0.087542 / 0.296338 (-0.208796) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292460 / 0.215209 (0.077251) | 2.841290 / 2.077655 (0.763635) | 1.537941 / 1.504120 (0.033821) | 1.409903 / 1.541195 (-0.131291) | 1.435339 / 1.468490 (-0.033151) | 0.578967 / 4.584777 (-4.005810) | 2.398588 / 3.745712 (-1.347125) | 2.662342 / 5.269862 (-2.607520) | 1.743055 / 4.565676 (-2.822622) | 0.064043 / 0.424275 (-0.360232) | 0.005030 / 0.007607 (-0.002577) | 0.348542 / 0.226044 (0.122498) | 3.395854 / 2.268929 (1.126926) | 1.918935 / 55.444624 (-53.525689) | 1.639320 / 6.876477 (-5.237157) | 1.740406 / 2.142072 (-0.401666) | 0.653346 / 4.805227 (-4.151881) | 0.117298 / 6.500664 (-6.383366) | 0.040635 / 0.075469 (-0.034834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008277 / 1.841788 (-0.833510) | 12.069369 / 8.074308 (3.995061) | 10.967322 / 10.191392 (0.775930) | 0.131938 / 0.680424 (-0.548486) | 0.015418 / 0.534201 (-0.518783) | 0.297257 / 0.579283 (-0.282026) | 0.270742 / 0.434364 (-0.163622) | 0.332296 / 0.540337 (-0.208042) | 0.421606 / 1.386936 (-0.965330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f22ec79a1ce4fbf0a1728d53f0338d5fdf664d8 \"CML watermark\")\n"
] | 2024-01-03T11:33:10 | 2024-02-02T10:41:33 | 2024-02-02T10:35:28 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6555",
"html_url": "https://github.com/huggingface/datasets/pull/6555",
"diff_url": "https://github.com/huggingface/datasets/pull/6555.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6555.patch",
"merged_at": "2024-02-02T10:35:28"
} | Fix #6554. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6555/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6554/comments | https://api.github.com/repos/huggingface/datasets/issues/6554/events | https://github.com/huggingface/datasets/issues/6554 | 2,063,839,916 | I_kwDODunzps57A7Ks | 6,554 | Parquet exports are used even if revision is passed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"I don't think this bug is a thing ? Do you have some code that leads to this issue ?"
] | 2024-01-03T11:32:26 | 2024-02-02T10:35:29 | 2024-02-02T10:35:29 | MEMBER | null | null | We should not used Parquet exports if `revision` is passed.
I think this is a regression. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6554/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6553/comments | https://api.github.com/repos/huggingface/datasets/issues/6553/events | https://github.com/huggingface/datasets/issues/6553 | 2,063,474,183 | I_kwDODunzps56_h4H | 6,553 | Cannot import name 'load_dataset' from .... module 'datasets' | {
"login": "ciaoyizhen",
"id": 83450192,
"node_id": "MDQ6VXNlcjgzNDUwMTky",
"avatar_url": "https://avatars.githubusercontent.com/u/83450192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciaoyizhen",
"html_url": "https://github.com/ciaoyizhen",
"followers_url": "https://api.github.com/users/ciaoyizhen/followers",
"following_url": "https://api.github.com/users/ciaoyizhen/following{/other_user}",
"gists_url": "https://api.github.com/users/ciaoyizhen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciaoyizhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciaoyizhen/subscriptions",
"organizations_url": "https://api.github.com/users/ciaoyizhen/orgs",
"repos_url": "https://api.github.com/users/ciaoyizhen/repos",
"events_url": "https://api.github.com/users/ciaoyizhen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciaoyizhen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I don't know My conpany conputer cannot work. but in my computer, it work?",
"Do you have a folder in your working directory called datasets?"
] | 2024-01-03T08:18:21 | 2024-01-25T01:08:04 | null | NONE | null | null | ### Describe the bug
Use `python -m pip install datasets` to install.
### Steps to reproduce the bug
```python
from datasets import load_dataset
```
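A quick check for the folder-shadowing cause suggested in the comments above:

```python
import datasets

# If this prints a path inside your working directory (or None), a local
# "datasets" folder is probably shadowing the installed library.
print(datasets.__file__)
```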
### Expected behavior
`from datasets import load_dataset` should work, but the import fails.
### Environment info
- `datasets` version: 2.15.0
- Python version: 3.10.12
- Linux version: unknown | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6553/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6552/comments | https://api.github.com/repos/huggingface/datasets/issues/6552/events | https://github.com/huggingface/datasets/issues/6552 | 2,063,157,187 | I_kwDODunzps56-UfD | 6,552 | Loading a dataset from Google Colab hangs at "Resolving data files". | {
"login": "KelSolaar",
"id": 99779,
"node_id": "MDQ6VXNlcjk5Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/99779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KelSolaar",
"html_url": "https://github.com/KelSolaar",
"followers_url": "https://api.github.com/users/KelSolaar/followers",
"following_url": "https://api.github.com/users/KelSolaar/following{/other_user}",
"gists_url": "https://api.github.com/users/KelSolaar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KelSolaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KelSolaar/subscriptions",
"organizations_url": "https://api.github.com/users/KelSolaar/orgs",
"repos_url": "https://api.github.com/users/KelSolaar/repos",
"events_url": "https://api.github.com/users/KelSolaar/events{/privacy}",
"received_events_url": "https://api.github.com/users/KelSolaar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This bug comes from the `huggingface_hub` library, see: https://github.com/huggingface/huggingface_hub/issues/1952\r\n\r\nA fix is provided at https://github.com/huggingface/huggingface_hub/pull/1953. Feel free to install `huggingface_hub` from this PR, or wait for it to be merged and the new version of `huggingface_hub` to be released",
"Thanks!"
] | 2024-01-03T02:18:17 | 2024-01-08T10:09:04 | 2024-01-08T10:09:04 | NONE | null | null | ### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:
![image](https://github.com/huggingface/datasets/assets/99779/7175ad85-e571-46ed-9f87-92653985777d)
It is happening when the `_get_origin_metadata` definition is invoked:
```python
def _get_origin_metadata(
    data_files: List[str],
    max_workers=64,
    download_config: Optional[DownloadConfig] = None,
) -> Tuple[str]:
    return thread_map(
        partial(_get_single_origin_metadata, download_config=download_config),
        data_files,
        max_workers=max_workers,
        tqdm_class=hf_tqdm,
        desc="Resolving data files",
        disable=len(data_files) <= 16,
    )
```
The thread then gets stuck at `waiter.acquire()` in the built-in `threading.py` module.
I can load the dataset just fine on my machine.
Cheers,
Thomas
### Steps to reproduce the bug
In Google Colab:
```python
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("colour-science/color-checker-detection-dataset")
```
### Expected behavior
The dataset should be loaded.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6552/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6551/comments | https://api.github.com/repos/huggingface/datasets/issues/6551/events | https://github.com/huggingface/datasets/pull/6551 | 2,062,768,400 | PR_kwDODunzps5jEi1C | 6,551 | Fix parallel downloads for datasets without scripts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6551). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005002 / 0.011353 (-0.006350) | 0.003300 / 0.011008 (-0.007708) | 0.062509 / 0.038508 (0.024001) | 0.029807 / 0.023109 (0.006698) | 0.249935 / 0.275898 (-0.025963) | 0.264320 / 0.323480 (-0.059160) | 0.003790 / 0.007986 (-0.004195) | 0.002554 / 0.004328 (-0.001774) | 0.048207 / 0.004250 (0.043956) | 0.042033 / 0.037052 (0.004981) | 0.245725 / 0.258489 (-0.012764) | 0.276695 / 0.293841 (-0.017146) | 0.026502 / 0.128546 (-0.102044) | 0.010379 / 0.075646 (-0.065268) | 0.207002 / 0.419271 (-0.212269) | 0.034648 / 0.043533 (-0.008885) | 0.247957 / 0.255139 (-0.007182) | 0.263921 / 0.283200 (-0.019278) | 0.017710 / 0.141683 (-0.123973) | 1.105851 / 1.452155 (-0.346304) | 1.163315 / 1.492716 (-0.329401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089842 / 0.018006 (0.071836) | 0.352499 / 0.000490 (0.352009) | 0.000201 / 0.000200 (0.000001) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018094 / 0.037411 (-0.019317) | 0.060463 / 0.014526 (0.045937) | 0.073257 / 0.176557 (-0.103300) | 0.119771 / 0.737135 (-0.617364) | 0.075210 / 0.296338 (-0.221128) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288365 / 0.215209 (0.073156) | 2.825377 / 2.077655 (0.747722) | 1.532436 / 1.504120 (0.028316) | 1.393475 / 1.541195 (-0.147719) | 1.381859 / 
1.468490 (-0.086632) | 0.564155 / 4.584777 (-4.020622) | 2.398177 / 3.745712 (-1.347535) | 2.730271 / 5.269862 (-2.539590) | 1.713779 / 4.565676 (-2.851898) | 0.062789 / 0.424275 (-0.361486) | 0.004991 / 0.007607 (-0.002616) | 0.340789 / 0.226044 (0.114744) | 3.323543 / 2.268929 (1.054615) | 1.861925 / 55.444624 (-53.582700) | 1.555181 / 6.876477 (-5.321296) | 1.559512 / 2.142072 (-0.582560) | 0.634565 / 4.805227 (-4.170663) | 0.116529 / 6.500664 (-6.384135) | 0.041312 / 0.075469 (-0.034157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945739 / 1.841788 (-0.896049) | 11.376130 / 8.074308 (3.301822) | 10.007752 / 10.191392 (-0.183640) | 0.126815 / 0.680424 (-0.553609) | 0.013898 / 0.534201 (-0.520303) | 0.287438 / 0.579283 (-0.291845) | 0.261532 / 0.434364 (-0.172832) | 0.320197 / 0.540337 (-0.220140) | 0.414444 / 1.386936 (-0.972492) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004994 / 0.011353 (-0.006359) | 0.003407 / 0.011008 (-0.007601) | 0.049281 / 0.038508 (0.010773) | 0.042815 / 0.023109 (0.019706) | 0.268291 / 0.275898 (-0.007607) | 0.285877 / 0.323480 (-0.037603) | 0.004006 / 0.007986 (-0.003980) | 0.002607 / 0.004328 (-0.001721) | 0.047682 / 0.004250 (0.043431) | 0.044281 / 0.037052 (0.007228) | 0.268287 / 0.258489 (0.009798) | 0.298649 / 0.293841 (0.004808) | 0.028607 / 0.128546 (-0.099939) | 0.010367 / 0.075646 (-0.065279) | 0.057114 / 0.419271 (-0.362158) | 0.053753 / 0.043533 (0.010220) | 0.269010 / 0.255139 (0.013871) | 0.285057 / 0.283200 (0.001858) | 0.017693 / 0.141683 (-0.123990) | 1.134718 / 1.452155 (-0.317436) | 1.186609 / 1.492716 (-0.306107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091109 / 0.018006 (0.073103) | 0.298603 / 0.000490 (0.298113) | 0.000216 / 0.000200 (0.000016) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022125 / 0.037411 (-0.015286) | 0.076570 / 0.014526 (0.062044) | 0.088903 / 0.176557 (-0.087654) | 0.126427 / 0.737135 (-0.610708) | 0.091001 / 0.296338 (-0.205338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300332 / 0.215209 (0.085123) | 2.971106 / 2.077655 (0.893452) | 1.617886 / 1.504120 (0.113766) | 1.476679 / 1.541195 (-0.064516) | 1.483750 / 1.468490 (0.015260) | 0.582569 / 4.584777 (-4.002208) | 2.441804 / 3.745712 (-1.303908) | 2.753927 / 5.269862 (-2.515935) | 1.733546 / 4.565676 (-2.832130) | 0.062653 / 0.424275 (-0.361622) | 0.005019 / 0.007607 (-0.002588) | 0.355556 / 0.226044 (0.129512) | 3.497431 / 2.268929 (1.228503) | 1.951711 / 55.444624 (-53.492913) | 1.663874 / 6.876477 (-5.212602) | 1.657363 / 2.142072 (-0.484709) | 0.653488 / 4.805227 (-4.151739) | 0.117055 / 6.500664 (-6.383609) | 0.040687 / 0.075469 (-0.034782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969485 / 1.841788 (-0.872303) | 12.064793 / 8.074308 (3.990485) | 10.851531 / 10.191392 (0.660139) | 0.129060 / 0.680424 (-0.551364) | 0.015339 / 0.534201 (-0.518862) | 0.287215 / 0.579283 (-0.292069) | 0.276545 / 0.434364 (-0.157819) | 0.322748 / 0.540337 (-0.217589) | 0.421363 / 1.386936 (-0.965573) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d26abadce0b884db32382b92422d8a6aa997d40a \"CML watermark\")\n",
"@lhoestq \r\n<img width=\"1015\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/b19b9d92-c6f7-4e3a-8c9d-1178e56c67ea\">\r\nit's still not fixed =(",
"@lhoestq i was thinking uninstalling `datasets` and then `pip install git+https://github.com/huggingface/datasets.git` has to fix it. Buuuuut. I'm not sure what's going on actually...\r\n\r\nNow instead of showing progress bars one after another it seems to be downloading the dataset way way way faster (like 4 mins instead of 58, thank you very much) but does not show any progress bars related to downloading at all.\r\n\r\n<img width=\"1170\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/21a84908-c44d-41b4-bb0d-8061cab3bc64\">\r\n\r\n<img width=\"1159\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/26684a8a-c10a-4fa2-bd84-cab4f938ffcc\">\r\n"
] | 2024-01-02T18:06:18 | 2024-01-06T20:14:57 | 2024-01-03T13:19:48 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6551",
"html_url": "https://github.com/huggingface/datasets/pull/6551",
"diff_url": "https://github.com/huggingface/datasets/pull/6551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6551.patch",
"merged_at": "2024-01-03T13:19:47"
} | Enable parallel downloads using multiprocessing when `num_proc` is passed to `load_dataset`.
It was already enabled for datasets with scripts (if they passed lists to `dl_manager.download`), but not for no-script datasets (for those, we pass dicts `{split: [list of files]}` to `dl_manager.download`).
I fixed this by parallelising on the lists contained in the data files dicts when possible.
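For illustration (editor's addition, not part of the PR; the repo id is a placeholder), the user-facing API is simply the existing `num_proc` argument:
```python
from datasets import load_dataset

# With this change, num_proc also parallelizes downloads for
# no-script datasets, not only for script-based ones.
ds = load_dataset("username/some_no_script_dataset", num_proc=8)  # placeholder repo id
```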
I also added a context manager `stack_multiprocessing_download_progress_bars` in `DownloadManager` to stack the progress bars of the downloads (from `cached_path(...)` calls). Otherwise the progress bars overlap each other with an annoying flickering effect. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6551/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6551/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6550/comments | https://api.github.com/repos/huggingface/datasets/issues/6550/events | https://github.com/huggingface/datasets/pull/6550 | 2,062,556,493 | PR_kwDODunzps5jD1OL | 6,550 | Multi gpu docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6550). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @lhoestq . This is a very important fix for code to run on multiple GPUs. Otherwise, only one GPU is working. I wish it can be merged soon. \r\nI also wrote a [blog post](https://forrestbao.github.io/2024/01/30/datasets_map_with_rank_multiple_GPUs.html) with a complete example in case it can be helpful to someone. Please feel free to use complete example in any documentation. \r\n",
"Thanks a lot @forrestbao ! I reused parts of your code for the documentation, I'm sure it will be useful to many people !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005662 / 0.011353 (-0.005691) | 0.003930 / 0.011008 (-0.007078) | 0.063807 / 0.038508 (0.025299) | 0.030227 / 0.023109 (0.007118) | 0.235338 / 0.275898 (-0.040560) | 0.264433 / 0.323480 (-0.059047) | 0.004226 / 0.007986 (-0.003759) | 0.002847 / 0.004328 (-0.001481) | 0.048998 / 0.004250 (0.044747) | 0.042713 / 0.037052 (0.005660) | 0.250504 / 0.258489 (-0.007985) | 0.281101 / 0.293841 (-0.012740) | 0.029123 / 0.128546 (-0.099423) | 0.011388 / 0.075646 (-0.064258) | 0.211342 / 0.419271 (-0.207930) | 0.036437 / 0.043533 (-0.007096) | 0.238909 / 0.255139 (-0.016230) | 0.255853 / 0.283200 (-0.027347) | 0.018852 / 0.141683 (-0.122831) | 1.131870 / 1.452155 (-0.320284) | 1.209007 / 1.492716 (-0.283710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092433 / 0.018006 (0.074427) | 0.303045 / 0.000490 (0.302556) | 0.000291 / 0.000200 (0.000091) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018349 / 0.037411 (-0.019062) | 0.062527 / 0.014526 (0.048002) | 0.075347 / 0.176557 (-0.101210) | 0.120587 / 0.737135 (-0.616549) | 0.075171 / 0.296338 (-0.221167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288364 / 0.215209 (0.073155) | 2.775779 / 2.077655 (0.698124) | 1.490875 / 1.504120 (-0.013245) | 1.375451 / 1.541195 (-0.165744) | 1.398923 / 
1.468490 (-0.069567) | 0.588659 / 4.584777 (-3.996117) | 2.458114 / 3.745712 (-1.287598) | 2.928910 / 5.269862 (-2.340951) | 1.834221 / 4.565676 (-2.731456) | 0.064503 / 0.424275 (-0.359772) | 0.005028 / 0.007607 (-0.002580) | 0.340386 / 0.226044 (0.114341) | 3.408697 / 2.268929 (1.139769) | 1.843613 / 55.444624 (-53.601012) | 1.569300 / 6.876477 (-5.307177) | 1.636761 / 2.142072 (-0.505312) | 0.687854 / 4.805227 (-4.117374) | 0.123462 / 6.500664 (-6.377202) | 0.042877 / 0.075469 (-0.032593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984054 / 1.841788 (-0.857734) | 12.243934 / 8.074308 (4.169626) | 10.835244 / 10.191392 (0.643852) | 0.131609 / 0.680424 (-0.548815) | 0.014000 / 0.534201 (-0.520201) | 0.292070 / 0.579283 (-0.287213) | 0.271958 / 0.434364 (-0.162406) | 0.326866 / 0.540337 (-0.213471) | 0.440880 / 1.386936 (-0.946056) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005954 / 0.011353 (-0.005399) | 0.004123 / 0.011008 (-0.006885) | 0.050371 / 0.038508 (0.011863) | 0.034387 / 0.023109 (0.011277) | 0.273254 / 0.275898 (-0.002644) | 0.297785 / 0.323480 (-0.025695) | 0.004619 / 0.007986 (-0.003367) | 0.002884 / 0.004328 (-0.001444) | 0.050236 / 0.004250 (0.045986) | 0.048586 / 0.037052 (0.011533) | 0.283878 / 0.258489 (0.025389) | 0.315218 / 0.293841 (0.021377) | 0.060688 / 0.128546 (-0.067859) | 0.011991 / 0.075646 (-0.063655) | 0.059518 / 0.419271 (-0.359753) | 0.036113 / 0.043533 (-0.007420) | 0.274767 / 0.255139 (0.019628) | 0.290620 / 0.283200 (0.007420) | 0.020070 / 0.141683 (-0.121613) | 1.164635 / 1.452155 (-0.287519) | 1.189482 / 1.492716 (-0.303234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095171 / 0.018006 (0.077165) | 0.307129 / 0.000490 (0.306639) | 0.000227 / 0.000200 (0.000027) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022777 / 0.037411 (-0.014634) | 0.076761 / 0.014526 (0.062235) | 0.087654 / 0.176557 (-0.088902) | 0.126729 / 0.737135 (-0.610406) | 0.089491 / 0.296338 (-0.206847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292208 / 0.215209 (0.076999) | 2.890491 / 2.077655 (0.812836) | 1.625696 / 1.504120 (0.121576) | 1.463484 / 1.541195 (-0.077710) | 1.490889 / 1.468490 (0.022399) | 0.582155 / 4.584777 (-4.002622) | 2.492209 / 3.745712 (-1.253503) | 2.817020 / 5.269862 (-2.452842) | 1.806812 / 4.565676 (-2.758864) | 0.065830 / 0.424275 (-0.358445) | 0.005089 / 0.007607 (-0.002518) | 0.356067 / 0.226044 (0.130022) | 3.489652 / 2.268929 (1.220723) | 1.959276 / 55.444624 (-53.485348) | 1.678819 / 6.876477 (-5.197657) | 1.853581 / 2.142072 (-0.288491) | 0.660515 / 4.805227 (-4.144712) | 0.119884 / 6.500664 (-6.380780) | 0.041713 / 0.075469 (-0.033757) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021701 / 1.841788 (-0.820087) | 12.918290 / 8.074308 (4.843982) | 11.469371 / 10.191392 (1.277979) | 0.144830 / 0.680424 (-0.535594) | 0.015858 / 0.534201 (-0.518343) | 0.290136 / 0.579283 (-0.289148) | 0.277894 / 0.434364 (-0.156470) | 0.330091 / 0.540337 (-0.210247) | 0.422697 / 1.386936 (-0.964240) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#13b36ee5c6d77f7eacbb4dd545a21e785db7fd3e \"CML watermark\")\n"
] | 2024-01-02T15:11:58 | 2024-01-31T13:45:15 | 2024-01-31T13:38:59 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6550",
"html_url": "https://github.com/huggingface/datasets/pull/6550",
"diff_url": "https://github.com/huggingface/datasets/pull/6550.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6550.patch",
"merged_at": "2024-01-31T13:38:59"
} | after discussions in https://github.com/huggingface/datasets/pull/6415 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6550/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6549/comments | https://api.github.com/repos/huggingface/datasets/issues/6549/events | https://github.com/huggingface/datasets/issues/6549 | 2,062,420,259 | I_kwDODunzps567gkj | 6,549 | Loading from hf hub with clearer error message | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"Maybe we can add a helper message like `Maybe try again using \"hf://path/without/resolve\"` if the path contains `/resolve/` ?\r\n\r\ne.g.\r\n\r\n```\r\nFileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'\r\nIt looks like you used parts of the URL of the file from the Hugging Face website, but you should remove the \"/resolve/<revision>\" part to have a valid `hf://` path.\r\nPlease try again using this path instead:\r\n hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json\r\n```\r\n\r\nand suggest `f\"hf://datasets/HuggingFaceTB/eval_data@{revision}/eval_data_context_and_answers.json\"` if revision != \"main\"\r\n\r\nEDIT: I think this message should also be raised from the `huggingface_hub`'s `HfFileSystem` implementation"
] | 2024-01-02T13:26:34 | 2024-01-02T14:06:49 | null | MEMBER | null | null | ### Feature request
Shouldn't this kinda work?
```
Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json")
```
I got an error
```
File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, allowed_extensions, download_config)
378 if allowed_extensions is not None:
379 error_msg += f" with any supported extension {list(allowed_extensions)}"
--> 380 raise FileNotFoundError(error_msg)
381 return out
FileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'
```
(I'm logged in, so this isn't an authentication issue.)
Fix: the correct path is
```
hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json
```
Proposal: raise a clearer error
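As an illustration of the proposal (editor's sketch only; it simplifies `resolve_pattern` and the helper name is made up):
```python
def _raise_clearer_not_found(pattern: str) -> None:
    # Hypothetical helper: if an hf:// path still contains the website's
    # "/resolve/<revision>/" segment, suggest the valid hf:// form instead.
    msg = f"Unable to find '{pattern}'"
    if pattern.startswith("hf://") and "/resolve/" in pattern:
        prefix, _, rest = pattern.partition("/resolve/")
        revision, _, filename = rest.partition("/")
        suggestion = f"{prefix}/{filename}" if revision == "main" else f"{prefix}@{revision}/{filename}"
        msg += (
            "\nIt looks like you used parts of the file's URL from the Hugging Face website; "
            "remove the '/resolve/<revision>' part to get a valid `hf://` path, e.g.:\n"
            f"  {suggestion}"
        )
    raise FileNotFoundError(msg)
```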
### Motivation
Clearer error message
### Your contribution
Can open a PR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6549/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6548/comments | https://api.github.com/repos/huggingface/datasets/issues/6548/events | https://github.com/huggingface/datasets/issues/6548 | 2,061,047,984 | I_kwDODunzps562Riw | 6,548 | Skip if a dataset has issues | {
"login": "hadianasliwa",
"id": 143214684,
"node_id": "U_kgDOCIlIXA",
"avatar_url": "https://avatars.githubusercontent.com/u/143214684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadianasliwa",
"html_url": "https://github.com/hadianasliwa",
"followers_url": "https://api.github.com/users/hadianasliwa/followers",
"following_url": "https://api.github.com/users/hadianasliwa/following{/other_user}",
"gists_url": "https://api.github.com/users/hadianasliwa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadianasliwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadianasliwa/subscriptions",
"organizations_url": "https://api.github.com/users/hadianasliwa/orgs",
"repos_url": "https://api.github.com/users/hadianasliwa/repos",
"events_url": "https://api.github.com/users/hadianasliwa/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadianasliwa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"It looks like a transient DNS issue. It should work fine now if you try again.\r\n\r\nThere is no parameter in load_dataset to skip failed downloads. In your case it would have skipped every single subsequent download until the DNS issue was resolved anyway."
] | 2023-12-31T12:41:26 | 2024-01-02T10:33:17 | null | NONE | null | null | ### Describe the bug
Hello everyone,
I'm using **load_dataset** from **Hugging Face `datasets`** to download datasets, and I'm facing an issue: the download starts, but at some point it fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet
Failed to resolve \'huggingface.co\' ([Errno -3] Temporary failure in name resolution)"))')))
![image](https://github.com/huggingface/datasets/assets/143214684/8847d9cb-529e-4eda-9c76-282713dfa3af)
So I was wondering: is there a parameter that can be passed to load_dataset() to skip files that can't be downloaded? (A retry-based workaround is sketched below.)
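A hedged sketch (editor's addition; `DownloadConfig.max_retries` is a real knob, but the retry value and config name below are arbitrary/assumed):
```python
from datasets import load_dataset, DownloadConfig

# No "skip failed files" parameter exists; retrying instead helps with
# transient "Temporary failure in name resolution" errors.
dl_config = DownloadConfig(max_retries=5)  # default is 1; 5 is an arbitrary choice
ds = load_dataset(
    "wikimedia/wikipedia",
    "20231101.en",  # assumed config; pick the language/date you need
    download_config=dl_config,
)
```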
### Steps to reproduce the bug
Is there a parameter that can be passed to load_dataset() to skip files that can't be downloaded?
### Expected behavior
load_dataset() finishes without error
### Environment info
None | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6548/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6547/comments | https://api.github.com/repos/huggingface/datasets/issues/6547/events | https://github.com/huggingface/datasets/pull/6547 | 2,060,796,927 | PR_kwDODunzps5i-Jni | 6,547 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004855 / 0.011353 (-0.006498) | 0.003552 / 0.011008 (-0.007456) | 0.062328 / 0.038508 (0.023820) | 0.031142 / 0.023109 (0.008032) | 0.247726 / 0.275898 (-0.028172) | 0.270951 / 0.323480 (-0.052528) | 0.002887 / 0.007986 (-0.005099) | 0.002663 / 0.004328 (-0.001665) | 0.047888 / 0.004250 (0.043638) | 0.042932 / 0.037052 (0.005880) | 0.253660 / 0.258489 (-0.004829) | 0.274997 / 0.293841 (-0.018844) | 0.027200 / 0.128546 (-0.101347) | 0.010851 / 0.075646 (-0.064796) | 0.206566 / 0.419271 (-0.212706) | 0.035311 / 0.043533 (-0.008222) | 0.254146 / 0.255139 (-0.000993) | 0.269074 / 0.283200 (-0.014126) | 0.019221 / 0.141683 (-0.122462) | 1.101986 / 1.452155 (-0.350169) | 1.155541 / 1.492716 (-0.337175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004749 / 0.018006 (-0.013257) | 0.301627 / 0.000490 (0.301138) | 0.000208 / 0.000200 (0.000008) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018205 / 0.037411 (-0.019206) | 0.060420 / 0.014526 (0.045894) | 0.072533 / 0.176557 (-0.104023) | 0.119807 / 0.737135 (-0.617328) | 0.073249 / 0.296338 (-0.223089) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284947 / 0.215209 (0.069738) | 2.796939 / 2.077655 (0.719285) | 1.486076 / 1.504120 (-0.018043) | 1.358247 / 1.541195 (-0.182948) | 1.383680 / 
1.468490 (-0.084811) | 0.550253 / 4.584777 (-4.034524) | 2.364783 / 3.745712 (-1.380929) | 2.765631 / 5.269862 (-2.504230) | 1.695694 / 4.565676 (-2.869983) | 0.061519 / 0.424275 (-0.362756) | 0.004914 / 0.007607 (-0.002693) | 0.340370 / 0.226044 (0.114325) | 3.313175 / 2.268929 (1.044247) | 1.805421 / 55.444624 (-53.639203) | 1.532151 / 6.876477 (-5.344325) | 1.541195 / 2.142072 (-0.600878) | 0.625266 / 4.805227 (-4.179961) | 0.119980 / 6.500664 (-6.380684) | 0.042334 / 0.075469 (-0.033135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952893 / 1.841788 (-0.888895) | 11.322232 / 8.074308 (3.247924) | 9.982108 / 10.191392 (-0.209284) | 0.130034 / 0.680424 (-0.550389) | 0.013192 / 0.534201 (-0.521009) | 0.286041 / 0.579283 (-0.293243) | 0.269802 / 0.434364 (-0.164562) | 0.323582 / 0.540337 (-0.216755) | 0.428641 / 1.386936 (-0.958295) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005071 / 0.011353 (-0.006282) | 0.003368 / 0.011008 (-0.007640) | 0.049003 / 0.038508 (0.010495) | 0.029507 / 0.023109 (0.006398) | 0.271859 / 0.275898 (-0.004039) | 0.294660 / 0.323480 (-0.028820) | 0.004218 / 0.007986 (-0.003767) | 0.002686 / 0.004328 (-0.001642) | 0.047947 / 0.004250 (0.043696) | 0.044499 / 0.037052 (0.007447) | 0.273982 / 0.258489 (0.015493) | 0.303393 / 0.293841 (0.009552) | 0.029649 / 0.128546 (-0.098898) | 0.010555 / 0.075646 (-0.065091) | 0.057553 / 0.419271 (-0.361718) | 0.051686 / 0.043533 (0.008153) | 0.274079 / 0.255139 (0.018940) | 0.292535 / 0.283200 (0.009335) | 0.019211 / 0.141683 (-0.122472) | 1.130629 / 1.452155 (-0.321526) | 1.196791 / 1.492716 (-0.295925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093617 / 0.018006 (0.075611) | 0.302698 / 0.000490 (0.302209) | 0.000222 / 0.000200 (0.000022) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022830 / 0.037411 (-0.014581) | 0.077061 / 0.014526 (0.062535) | 0.089464 / 0.176557 (-0.087092) | 0.127487 / 0.737135 (-0.609649) | 0.092133 / 0.296338 (-0.204205) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295362 / 0.215209 (0.080153) | 2.902251 / 2.077655 (0.824596) | 1.600508 / 1.504120 (0.096388) | 1.477763 / 1.541195 (-0.063431) | 1.492242 / 1.468490 (0.023752) | 0.569347 / 4.584777 (-4.015430) | 2.449873 / 3.745712 (-1.295839) | 2.787207 / 5.269862 (-2.482655) | 1.723852 / 4.565676 (-2.841825) | 0.063076 / 0.424275 (-0.361199) | 0.005060 / 0.007607 (-0.002547) | 0.349614 / 0.226044 (0.123569) | 3.429735 / 2.268929 (1.160806) | 1.953883 / 55.444624 (-53.490741) | 1.664232 / 6.876477 (-5.212245) | 1.648864 / 2.142072 (-0.493209) | 0.640295 / 4.805227 (-4.164932) | 0.117053 / 6.500664 (-6.383611) | 0.041314 / 0.075469 (-0.034156) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970663 / 1.841788 (-0.871125) | 12.144810 / 8.074308 (4.070502) | 10.938985 / 10.191392 (0.747593) | 0.140502 / 0.680424 (-0.539922) | 0.015522 / 0.534201 (-0.518679) | 0.286629 / 0.579283 (-0.292654) | 0.283695 / 0.434364 (-0.150669) | 0.327298 / 0.540337 (-0.213039) | 0.424635 / 1.386936 (-0.962301) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e23a59ef7ba2b50d4e5588825c41212a3cfd1331 \"CML watermark\")\n"
] | 2023-12-30T16:47:17 | 2023-12-30T16:53:38 | 2023-12-30T16:47:27 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6547",
"html_url": "https://github.com/huggingface/datasets/pull/6547",
"diff_url": "https://github.com/huggingface/datasets/pull/6547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6547.patch",
"merged_at": "2023-12-30T16:47:27"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6547/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6546/comments | https://api.github.com/repos/huggingface/datasets/issues/6546/events | https://github.com/huggingface/datasets/pull/6546 | 2,060,796,369 | PR_kwDODunzps5i-Jgv | 6,546 | Release: 2.16.1 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6546). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005415 / 0.011353 (-0.005938) | 0.003733 / 0.011008 (-0.007275) | 0.064178 / 0.038508 (0.025670) | 0.033162 / 0.023109 (0.010053) | 0.249799 / 0.275898 (-0.026099) | 0.274875 / 0.323480 (-0.048605) | 0.002977 / 0.007986 (-0.005009) | 0.002696 / 0.004328 (-0.001633) | 0.050042 / 0.004250 (0.045792) | 0.047127 / 0.037052 (0.010074) | 0.250865 / 0.258489 (-0.007624) | 0.289758 / 0.293841 (-0.004083) | 0.028007 / 0.128546 (-0.100539) | 0.010671 / 0.075646 (-0.064975) | 0.207123 / 0.419271 (-0.212148) | 0.036403 / 0.043533 (-0.007130) | 0.261527 / 0.255139 (0.006388) | 0.277277 / 0.283200 (-0.005922) | 0.019418 / 0.141683 (-0.122264) | 1.118019 / 1.452155 (-0.334136) | 1.180254 / 1.492716 (-0.312462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004604 / 0.018006 (-0.013402) | 0.308129 / 0.000490 (0.307639) | 0.000202 / 0.000200 (0.000002) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018400 / 0.037411 (-0.019011) | 0.060777 / 0.014526 (0.046251) | 0.073059 / 0.176557 (-0.103498) | 0.119677 / 0.737135 (-0.617458) | 0.074076 / 0.296338 (-0.222263) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275353 / 0.215209 (0.060144) | 2.694079 / 2.077655 (0.616424) | 1.419670 / 1.504120 (-0.084450) | 1.302079 / 1.541195 (-0.239116) | 1.342077 / 
1.468490 (-0.126413) | 0.549794 / 4.584777 (-4.034983) | 2.377149 / 3.745712 (-1.368563) | 2.800362 / 5.269862 (-2.469500) | 1.728152 / 4.565676 (-2.837524) | 0.061774 / 0.424275 (-0.362501) | 0.004898 / 0.007607 (-0.002709) | 0.330996 / 0.226044 (0.104952) | 3.262010 / 2.268929 (0.993082) | 1.761106 / 55.444624 (-53.683518) | 1.489783 / 6.876477 (-5.386694) | 1.532470 / 2.142072 (-0.609602) | 0.648814 / 4.805227 (-4.156414) | 0.116893 / 6.500664 (-6.383771) | 0.042167 / 0.075469 (-0.033303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937679 / 1.841788 (-0.904109) | 11.621632 / 8.074308 (3.547324) | 10.226177 / 10.191392 (0.034785) | 0.129242 / 0.680424 (-0.551182) | 0.014884 / 0.534201 (-0.519317) | 0.287619 / 0.579283 (-0.291664) | 0.261677 / 0.434364 (-0.172687) | 0.336361 / 0.540337 (-0.203976) | 0.426461 / 1.386936 (-0.960475) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005246 / 0.011353 (-0.006106) | 0.003533 / 0.011008 (-0.007475) | 0.051691 / 0.038508 (0.013182) | 0.031551 / 0.023109 (0.008442) | 0.297884 / 0.275898 (0.021986) | 0.323100 / 0.323480 (-0.000380) | 0.004101 / 0.007986 (-0.003884) | 0.002668 / 0.004328 (-0.001661) | 0.048764 / 0.004250 (0.044513) | 0.045429 / 0.037052 (0.008377) | 0.300107 / 0.258489 (0.041618) | 0.335650 / 0.293841 (0.041809) | 0.030061 / 0.128546 (-0.098485) | 0.010878 / 0.075646 (-0.064768) | 0.058561 / 0.419271 (-0.360710) | 0.052829 / 0.043533 (0.009296) | 0.302704 / 0.255139 (0.047565) | 0.320527 / 0.283200 (0.037327) | 0.018995 / 0.141683 (-0.122688) | 1.144050 / 1.452155 (-0.308105) | 1.255275 / 1.492716 (-0.237441) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092708 / 0.018006 (0.074701) | 0.305204 / 0.000490 (0.304714) | 0.000224 / 0.000200 (0.000024) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021607 / 0.037411 (-0.015805) | 0.075938 / 0.014526 (0.061412) | 0.090864 / 0.176557 (-0.085693) | 0.128248 / 0.737135 (-0.608887) | 0.090322 / 0.296338 (-0.206017) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302095 / 0.215209 (0.086886) | 2.925686 / 2.077655 (0.848032) | 1.617767 / 1.504120 (0.113648) | 1.477975 / 1.541195 (-0.063220) | 1.508576 / 1.468490 (0.040086) | 0.574376 / 4.584777 (-4.010401) | 2.467483 / 3.745712 (-1.278229) | 2.832500 / 5.269862 (-2.437362) | 1.765233 / 4.565676 (-2.800443) | 0.064105 / 0.424275 (-0.360170) | 0.005090 / 0.007607 (-0.002517) | 0.349819 / 0.226044 (0.123774) | 3.468916 / 2.268929 (1.199987) | 1.946499 / 55.444624 (-53.498126) | 1.684369 / 6.876477 (-5.192107) | 1.711036 / 2.142072 (-0.431036) | 0.650153 / 4.805227 (-4.155075) | 0.116598 / 6.500664 (-6.384066) | 0.041213 / 0.075469 (-0.034256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990842 / 1.841788 (-0.850946) | 12.348468 / 8.074308 (4.274160) | 11.174441 / 10.191392 (0.983049) | 0.140950 / 0.680424 (-0.539473) | 0.016100 / 0.534201 (-0.518101) | 0.286486 / 0.579283 (-0.292797) | 0.282054 / 0.434364 (-0.152310) | 0.324261 / 0.540337 (-0.216076) | 0.420717 / 1.386936 (-0.966219) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7b2bcd76457de720454c3ac304f2ed5c6f40acaa \"CML watermark\")\n"
] | 2023-12-30T16:44:51 | 2023-12-30T16:52:07 | 2023-12-30T16:45:52 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6546",
"html_url": "https://github.com/huggingface/datasets/pull/6546",
"diff_url": "https://github.com/huggingface/datasets/pull/6546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6546.patch",
"merged_at": "2023-12-30T16:45:52"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6546/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6545/comments | https://api.github.com/repos/huggingface/datasets/issues/6545/events | https://github.com/huggingface/datasets/issues/6545 | 2,060,789,507 | I_kwDODunzps561ScD | 6,545 | `image` column not automatically inferred if image dataset only contains 1 image | {
"login": "apolinario",
"id": 788417,
"node_id": "MDQ6VXNlcjc4ODQxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/788417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apolinario",
"html_url": "https://github.com/apolinario",
"followers_url": "https://api.github.com/users/apolinario/followers",
"following_url": "https://api.github.com/users/apolinario/following{/other_user}",
"gists_url": "https://api.github.com/users/apolinario/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apolinario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apolinario/subscriptions",
"organizations_url": "https://api.github.com/users/apolinario/orgs",
"repos_url": "https://api.github.com/users/apolinario/repos",
"events_url": "https://api.github.com/users/apolinario/events{/privacy}",
"received_events_url": "https://api.github.com/users/apolinario/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-12-30T16:17:29 | 2024-01-09T13:06:31 | 2024-01-09T13:06:31 | NONE | null | null | ### Describe the bug
By default, a standard image dataset maps the `file_name` column to an `image` column when loaded.
However, if the dataset contains only 1 image, this mapping does not take place.
### Steps to reproduce the bug
Input
(dataset with one image `multimodalart/repro_1_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_1_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['file_name', 'prompt'],
num_rows: 1
})
})
```
Input
(dataset with 2+ images `multimodalart/repro_2_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_2_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['image', 'prompt'],
num_rows: 2
})
})
```
### Expected behavior
Expected to map `file_name` → `image` for all dataset sizes, including 1.
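In the meantime, a possible workaround (a minimal sketch, assuming the `file_name` values resolve to image files on disk, which may not hold for hub-hosted repos) is to build the `image` column manually:
```python
from datasets import load_dataset, Image

dataset = load_dataset("multimodalart/repro_1_image")
# Rename the raw path column, then decode it as an image feature
dataset = dataset.rename_column("file_name", "image")
dataset = dataset.cast_column("image", Image())
```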
### Environment info
Both latest main and 2.16.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6545/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6545/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6544/comments | https://api.github.com/repos/huggingface/datasets/issues/6544/events | https://github.com/huggingface/datasets/pull/6544 | 2,060,782,594 | PR_kwDODunzps5i-G4_ | 6,544 | Fix custom configs from script | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005462 / 0.011353 (-0.005891) | 0.003918 / 0.011008 (-0.007090) | 0.065021 / 0.038508 (0.026513) | 0.032620 / 0.023109 (0.009511) | 0.249794 / 0.275898 (-0.026104) | 0.277330 / 0.323480 (-0.046150) | 0.002962 / 0.007986 (-0.005023) | 0.003435 / 0.004328 (-0.000894) | 0.048992 / 0.004250 (0.044742) | 0.046841 / 0.037052 (0.009788) | 0.252459 / 0.258489 (-0.006030) | 0.287889 / 0.293841 (-0.005952) | 0.028322 / 0.128546 (-0.100224) | 0.011214 / 0.075646 (-0.064432) | 0.208555 / 0.419271 (-0.210717) | 0.037004 / 0.043533 (-0.006529) | 0.262537 / 0.255139 (0.007398) | 0.307418 / 0.283200 (0.024218) | 0.021552 / 0.141683 (-0.120131) | 1.144252 / 1.452155 (-0.307903) | 1.195687 / 1.492716 (-0.297029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004766 / 0.018006 (-0.013240) | 0.301926 / 0.000490 (0.301436) | 0.000218 / 0.000200 (0.000018) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017891 / 0.037411 (-0.019521) | 0.066848 / 0.014526 (0.052322) | 0.075522 / 0.176557 (-0.101035) | 0.120762 / 0.737135 (-0.616374) | 0.075980 / 0.296338 (-0.220359) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284843 / 0.215209 (0.069634) | 2.816260 / 2.077655 (0.738605) | 1.484370 / 1.504120 (-0.019750) | 1.362090 / 1.541195 (-0.179104) | 1.421729 / 
1.468490 (-0.046762) | 0.561673 / 4.584777 (-4.023104) | 2.370793 / 3.745712 (-1.374919) | 2.982639 / 5.269862 (-2.287223) | 1.834614 / 4.565676 (-2.731063) | 0.063158 / 0.424275 (-0.361117) | 0.005044 / 0.007607 (-0.002563) | 0.339834 / 0.226044 (0.113790) | 3.369051 / 2.268929 (1.100122) | 1.821040 / 55.444624 (-53.623584) | 1.544009 / 6.876477 (-5.332468) | 1.603902 / 2.142072 (-0.538171) | 0.638151 / 4.805227 (-4.167076) | 0.117012 / 6.500664 (-6.383652) | 0.042999 / 0.075469 (-0.032470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941809 / 1.841788 (-0.899978) | 12.279635 / 8.074308 (4.205326) | 10.212876 / 10.191392 (0.021484) | 0.129904 / 0.680424 (-0.550519) | 0.014210 / 0.534201 (-0.519991) | 0.286140 / 0.579283 (-0.293143) | 0.267453 / 0.434364 (-0.166911) | 0.324417 / 0.540337 (-0.215921) | 0.428262 / 1.386936 (-0.958674) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005351 / 0.011353 (-0.006002) | 0.003591 / 0.011008 (-0.007417) | 0.048755 / 0.038508 (0.010247) | 0.030857 / 0.023109 (0.007748) | 0.270301 / 0.275898 (-0.005597) | 0.294459 / 0.323480 (-0.029021) | 0.004265 / 0.007986 (-0.003720) | 0.002712 / 0.004328 (-0.001616) | 0.047725 / 0.004250 (0.043475) | 0.048392 / 0.037052 (0.011339) | 0.274226 / 0.258489 (0.015737) | 0.304010 / 0.293841 (0.010169) | 0.029283 / 0.128546 (-0.099263) | 0.011196 / 0.075646 (-0.064450) | 0.057213 / 0.419271 (-0.362058) | 0.057504 / 0.043533 (0.013971) | 0.266091 / 0.255139 (0.010952) | 0.285991 / 0.283200 (0.002791) | 0.020030 / 0.141683 (-0.121653) | 1.121514 / 1.452155 (-0.330641) | 1.192608 / 1.492716 (-0.300108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095041 / 0.018006 (0.077035) | 0.301255 / 0.000490 (0.300765) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022265 / 0.037411 (-0.015146) | 0.078416 / 0.014526 (0.063890) | 0.091097 / 0.176557 (-0.085460) | 0.129864 / 0.737135 (-0.607272) | 0.091683 / 0.296338 (-0.204655) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294104 / 0.215209 (0.078895) | 2.886809 / 2.077655 (0.809154) | 1.601931 / 1.504120 (0.097811) | 1.469353 / 1.541195 (-0.071842) | 1.525132 / 1.468490 (0.056642) | 0.565164 / 4.584777 (-4.019613) | 2.432873 / 3.745712 (-1.312839) | 2.885849 / 5.269862 (-2.384013) | 1.780474 / 4.565676 (-2.785203) | 0.064358 / 0.424275 (-0.359917) | 0.005186 / 0.007607 (-0.002421) | 0.349374 / 0.226044 (0.123329) | 3.424751 / 2.268929 (1.155823) | 1.956874 / 55.444624 (-53.487750) | 1.679002 / 6.876477 (-5.197475) | 1.718821 / 2.142072 (-0.423252) | 0.656974 / 4.805227 (-4.148254) | 0.120645 / 6.500664 (-6.380019) | 0.042355 / 0.075469 (-0.033114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001923 / 1.841788 (-0.839864) | 13.208127 / 8.074308 (5.133819) | 11.164863 / 10.191392 (0.973471) | 0.131964 / 0.680424 (-0.548460) | 0.015344 / 0.534201 (-0.518857) | 0.287961 / 0.579283 (-0.291322) | 0.273986 / 0.434364 (-0.160378) | 0.327280 / 0.540337 (-0.213058) | 0.426761 / 1.386936 (-0.960175) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ebb913eca80807521239efece1ff305625cb89b4 \"CML watermark\")\n",
"Thanks for the fix and the patch release. This confirms that, as I suggested in the Summer, maybe we should avoid making a release right before leaving on holidays."
] | 2023-12-30T15:51:25 | 2024-01-02T11:02:39 | 2023-12-30T16:09:49 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6544",
"html_url": "https://github.com/huggingface/datasets/pull/6544",
"diff_url": "https://github.com/huggingface/datasets/pull/6544.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6544.patch",
"merged_at": "2023-12-30T16:09:49"
} | We should not use the Parquet export when the user is passing `config_kwargs`.
I also fixed a regression that would disallow creating a custom config when a dataset has multiple predefined configs.
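For reference, a minimal sketch of the intended behavior (the repo id and the `language` kwarg are hypothetical, standing in for any script-defined builder parameter):
```python
from datasets import load_dataset

# Passing a custom config kwarg should use the dataset script, not the Parquet export
ds = load_dataset("user/dataset_with_script", language="en")
```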
fix https://github.com/huggingface/datasets/issues/6533 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6544/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6543/comments | https://api.github.com/repos/huggingface/datasets/issues/6543/events | https://github.com/huggingface/datasets/pull/6543 | 2,060,776,174 | PR_kwDODunzps5i-Frx | 6,543 | Fix dl_manager.extract returning FileNotFoundError | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6543). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004950 / 0.011353 (-0.006403) | 0.003502 / 0.011008 (-0.007506) | 0.062517 / 0.038508 (0.024009) | 0.030965 / 0.023109 (0.007856) | 0.250661 / 0.275898 (-0.025237) | 0.279165 / 0.323480 (-0.044314) | 0.002960 / 0.007986 (-0.005026) | 0.003382 / 0.004328 (-0.000946) | 0.048174 / 0.004250 (0.043923) | 0.042975 / 0.037052 (0.005922) | 0.248079 / 0.258489 (-0.010410) | 0.283770 / 0.293841 (-0.010070) | 0.027935 / 0.128546 (-0.100611) | 0.010634 / 0.075646 (-0.065012) | 0.207039 / 0.419271 (-0.212233) | 0.035863 / 0.043533 (-0.007670) | 0.257426 / 0.255139 (0.002287) | 0.274222 / 0.283200 (-0.008978) | 0.017590 / 0.141683 (-0.124093) | 1.126889 / 1.452155 (-0.325266) | 1.160795 / 1.492716 (-0.331921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004298 / 0.018006 (-0.013708) | 0.301366 / 0.000490 (0.300876) | 0.000202 / 0.000200 (0.000002) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018159 / 0.037411 (-0.019252) | 0.060566 / 0.014526 (0.046041) | 0.072500 / 0.176557 (-0.104057) | 0.119612 / 0.737135 (-0.617523) | 0.074467 / 0.296338 (-0.221871) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281859 / 0.215209 (0.066650) | 2.760157 / 2.077655 (0.682502) | 1.450632 / 1.504120 (-0.053487) | 1.326636 / 1.541195 (-0.214559) | 1.363381 / 
1.468490 (-0.105109) | 0.576199 / 4.584777 (-4.008578) | 2.355776 / 3.745712 (-1.389936) | 2.807308 / 5.269862 (-2.462553) | 1.745449 / 4.565676 (-2.820228) | 0.063413 / 0.424275 (-0.360862) | 0.004978 / 0.007607 (-0.002630) | 0.332738 / 0.226044 (0.106693) | 3.267677 / 2.268929 (0.998748) | 1.766074 / 55.444624 (-53.678551) | 1.500853 / 6.876477 (-5.375624) | 1.532434 / 2.142072 (-0.609639) | 0.648238 / 4.805227 (-4.156989) | 0.116030 / 6.500664 (-6.384634) | 0.042018 / 0.075469 (-0.033451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.934325 / 1.841788 (-0.907463) | 11.439765 / 8.074308 (3.365457) | 9.958624 / 10.191392 (-0.232768) | 0.130295 / 0.680424 (-0.550129) | 0.014437 / 0.534201 (-0.519764) | 0.286073 / 0.579283 (-0.293210) | 0.262430 / 0.434364 (-0.171934) | 0.323905 / 0.540337 (-0.216432) | 0.416615 / 1.386936 (-0.970321) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005149 / 0.011353 (-0.006204) | 0.003517 / 0.011008 (-0.007491) | 0.048658 / 0.038508 (0.010150) | 0.029638 / 0.023109 (0.006529) | 0.271002 / 0.275898 (-0.004896) | 0.324910 / 0.323480 (0.001430) | 0.004086 / 0.007986 (-0.003900) | 0.002609 / 0.004328 (-0.001719) | 0.047806 / 0.004250 (0.043556) | 0.045422 / 0.037052 (0.008369) | 0.274317 / 0.258489 (0.015828) | 0.304544 / 0.293841 (0.010703) | 0.029318 / 0.128546 (-0.099229) | 0.010626 / 0.075646 (-0.065020) | 0.057838 / 0.419271 (-0.361434) | 0.052408 / 0.043533 (0.008875) | 0.267736 / 0.255139 (0.012597) | 0.292024 / 0.283200 (0.008824) | 0.019244 / 0.141683 (-0.122439) | 1.167728 / 1.452155 (-0.284427) | 1.226364 / 1.492716 (-0.266352) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092441 / 0.018006 (0.074435) | 0.310316 / 0.000490 (0.309827) | 0.000218 / 0.000200 (0.000018) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021818 / 0.037411 (-0.015594) | 0.076515 / 0.014526 (0.061989) | 0.089179 / 0.176557 (-0.087377) | 0.127034 / 0.737135 (-0.610102) | 0.089646 / 0.296338 (-0.206692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292056 / 0.215209 (0.076847) | 2.841410 / 2.077655 (0.763756) | 1.550626 / 1.504120 (0.046506) | 1.426204 / 1.541195 (-0.114990) | 1.445838 / 1.468490 (-0.022652) | 0.555777 / 4.584777 (-4.029000) | 2.441077 / 3.745712 (-1.304635) | 2.773445 / 5.269862 (-2.496416) | 1.728951 / 4.565676 (-2.836726) | 0.062579 / 0.424275 (-0.361697) | 0.005063 / 0.007607 (-0.002544) | 0.350749 / 0.226044 (0.124705) | 3.461702 / 2.268929 (1.192773) | 1.892506 / 55.444624 (-53.552118) | 1.625958 / 6.876477 (-5.250519) | 1.649175 / 2.142072 (-0.492898) | 0.636123 / 4.805227 (-4.169105) | 0.116548 / 6.500664 (-6.384116) | 0.041174 / 0.075469 (-0.034295) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973468 / 1.841788 (-0.868320) | 12.104761 / 8.074308 (4.030453) | 11.131691 / 10.191392 (0.940299) | 0.132309 / 0.680424 (-0.548115) | 0.016191 / 0.534201 (-0.518010) | 0.284748 / 0.579283 (-0.294535) | 0.282661 / 0.434364 (-0.151703) | 0.323797 / 0.540337 (-0.216540) | 0.417767 / 1.386936 (-0.969169) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#72b440325f6a84d341ea57539d8c368a001e2e75 \"CML watermark\")\n"
] | 2023-12-30T15:24:50 | 2023-12-30T16:00:06 | 2023-12-30T15:53:59 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6543",
"html_url": "https://github.com/huggingface/datasets/pull/6543",
"diff_url": "https://github.com/huggingface/datasets/pull/6543.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6543.patch",
"merged_at": "2023-12-30T15:53:59"
} | The `dl_manager` base path is remote (e.g. an `hf://` path), so local cached paths should be passed as absolute paths.
The `FileNotFoundError` could happen if users provide a relative path as `cache_dir`.
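A minimal repro sketch (the repo id is hypothetical; any script-based dataset that calls `dl_manager.extract` should do):
```python
from datasets import load_dataset

# A relative cache_dir used to surface the FileNotFoundError fixed here
ds = load_dataset("user/dataset_with_script", cache_dir="relative/cache")
```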
fix https://github.com/huggingface/datasets/issues/6536 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6543/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6542/comments | https://api.github.com/repos/huggingface/datasets/issues/6542/events | https://github.com/huggingface/datasets/issues/6542 | 2,059,198,575 | I_kwDODunzps56vOBv | 6,542 | Datasets : wikipedia 20220301.en error | {
"login": "ppx666",
"id": 53203620,
"node_id": "MDQ6VXNlcjUzMjAzNjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/53203620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppx666",
"html_url": "https://github.com/ppx666",
"followers_url": "https://api.github.com/users/ppx666/followers",
"following_url": "https://api.github.com/users/ppx666/following{/other_user}",
"gists_url": "https://api.github.com/users/ppx666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppx666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppx666/subscriptions",
"organizations_url": "https://api.github.com/users/ppx666/orgs",
"repos_url": "https://api.github.com/users/ppx666/repos",
"events_url": "https://api.github.com/users/ppx666/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppx666/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! We now recommend using the `wikimedia/wikipedia` dataset, can you try loading this one instead ?\r\n\r\n```python\r\nwiki_dataset = load_dataset(\"wikimedia/wikipedia\", \"20231101.en\")\r\n```",
"This bug has been fixed in `2.16.1` thanks to https://github.com/huggingface/datasets/pull/6544, feel free to update `datasets` and re-run your code :)\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 2023-12-29T08:34:51 | 2024-01-02T13:21:06 | 2024-01-02T13:20:30 | NONE | null | null | ### Describe the bug
When I used `load_dataset` to download this dataset, the following error occurred. The main problem is that the target data no longer exists at the source URL.
### Steps to reproduce the bug
1.I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurred
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
```
2.I modified the code as prompted.
```python
wiki_dataset = load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```
An exception occurred:
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```
### Expected behavior
I searched in the parent directory of the corresponding URL, but there was no corresponding "20220301" directory.
I really need this dataset and hope you can provide a working download method.
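As a quick check (sketch only) that the dump really is gone upstream:
```python
import requests

# A 404 here means the 20220301 snapshot was pruned from the Wikimedia dumps mirror
r = requests.get("https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json")
print(r.status_code)
```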
### Environment info
python 3.8
datasets 2.16.0
apache-beam 2.52.0
dill 0.3.7
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6542/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6541/comments | https://api.github.com/repos/huggingface/datasets/issues/6541/events | https://github.com/huggingface/datasets/issues/6541 | 2,058,983,826 | I_kwDODunzps56uZmS | 6,541 | Dataset not loading successfully. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a problem with your environment. You should be able to fix it by upgrading `numpy` based on [this](https://github.com/numpy/numpy/issues/23570) issue.",
"Bro I already update numpy package.",
"Then, this shouldn't throw an error on your machine:\r\n```python\r\nimport numpy\r\nnumpy._no_nep50_warning\r\n```\r\n\r\nIf it does, run `python -m pip install numpy` to ensure the correct `pip` is used for the package installation.",
"Your suggestion to run `python -m pip install numpy` proved to be successful, and my issue has been resolved. I am grateful for your assistance, @mariosasko"
] | 2023-12-29T01:35:47 | 2024-01-17T00:40:46 | 2024-01-17T00:40:45 | NONE | null | null | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also filed this issue in the transformers library; please check it out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
Hi, please check this code; when I run it, it shows an attribute error.
```
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration
# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
# Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Use the model and processor to transcribe the audio:
input_features = processor(
waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features
# Generate token ids
predicted_ids = model.generate(input_features)
# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
```
**Attribute Error**
```
AttributeError Traceback (most recent call last)
Cell In[9], line 6
4 # Select an audio file and read it:
5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
----> 6 audio_sample = ds[0]["audio"]
7 waveform = audio_sample["array"]
8 sampling_rate = audio_sample["sampling_rate"]
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key)
2793 def __getitem__(self, key): # noqa: F811
2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2795 return self._getitem(key)
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs)
2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2780 formatted_output = format_table(
2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2782 )
2783 return formatted_output
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns)
627 python_formatter = PythonFormatter(features=formatter.features)
628 if format_columns is None:
--> 629 return formatter(pa_table, query_type=query_type)
630 elif query_type == "column":
631 if key in format_columns:
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type)
394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
395 if query_type == "row":
--> 396 return self.format_row(pa_table)
397 elif query_type == "column":
398 return self.format_column(pa_table)
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table)
435 return LazyRow(pa_table, self)
436 row = self.python_arrow_extractor().extract_row(pa_table)
--> 437 row = self.python_features_decoder.decode_row(row)
438 return row
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row)
214 def decode_row(self, row: dict) -> dict:
--> 215 return self.features.decode_example(row) if self.features else row
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
-> 1917 return {
1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
1917 return {
-> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id)
189 array = array.T
190 if self.mono:
--> 191 array = librosa.to_mono(array)
192 if self.sampling_rate and self.sampling_rate != sampling_rate:
193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name)
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
77 submod = importlib.import_module(submod_path)
---> 78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
83 if name == attr_to_modules[name]:
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name)
75 elif name in attr_to_modules:
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
---> 77 submod = importlib.import_module(submod_path)
78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:671, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:848, in exec_module(self, module)
File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)
File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13
11 import audioread
12 import numpy as np
---> 13 import scipy.signal
14 import soxr
15 import lazy_loader as lazy
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323
314 from ._spline import ( # noqa: F401
315 cspline2d,
316 qspline2d,
(...)
319 symiirorder2,
320 )
322 from ._bsplines import *
--> 323 from ._filter_design import *
324 from ._fir_filter_design import *
325 from ._ltisys import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16
13 from numpy.polynomial.polynomial import polyval as npp_polyval
14 from numpy.polynomial.polynomial import polyvalfromroots
---> 16 from scipy import special, optimize, fft as sp_fft
17 from scipy.special import comb
18 from scipy._lib._util import float_factorial
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405
1 """
2 =====================================================
3 Optimization and root finding (:mod:`scipy.optimize`)
(...)
401
402 """
404 from ._optimize import *
--> 405 from ._minimize import *
406 from ._root import *
407 from ._root_scalar import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26
24 from ._trustregion_krylov import _minimize_trust_krylov
25 from ._trustregion_exact import _minimize_trustregion_exact
---> 26 from ._trustregion_constr import _minimize_trustregion_constr
28 # constrained minimization
29 from ._lbfgsb_py import _minimize_lbfgsb
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4
1 """This module contains the equality constrained SQP solver."""
----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr
6 __all__ = ['_minimize_trustregion_constr']
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5
3 from scipy.sparse.linalg import LinearOperator
4 from .._differentiable_functions import VectorFunction
----> 5 from .._constraints import (
6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds)
7 from .._hessian_update_strategy import BFGS
8 from .._optimize import OptimizeResult
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8
6 from ._optimize import OptimizeWarning
7 from warnings import warn, catch_warnings, simplefilter
----> 8 from numpy.testing import suppress_warnings
9 from scipy.sparse import issparse
12 def _arr_to_scalar(x):
13 # If x is a numpy array, return x.item(). This will
14 # fail if the array has more than one element.
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11
8 from unittest import TestCase
10 from . import _private
---> 11 from ._private.utils import *
12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data)
13 from ._private import extbuild, decorators as dec
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480
476 pprint.pprint(desired, msg)
477 raise AssertionError(msg.getvalue())
--> 480 @np._no_nep50_warning()
481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True):
482 """
483 Raises an AssertionError if two items are not equal up to desired
484 precision.
(...)
548
549 """
550 __tracebackhide__ = True # Hide traceback for py.test
File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr)
305 raise AttributeError(__former_attrs__[attr])
307 # Importing Tester requires importing all of UnitTest which is not a
308 # cheap import Since it is mainly used in test suits, we lazy import it
309 # here to save on the order of 10 ms of import time for most users
310 #
311 # The previous way Tester was imported also had a side effect of adding
312 # the full `numpy.testing` namespace
--> 313 if attr == 'testing':
314 import numpy.testing as testing
315 return testing
AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
```
### Expected behavior
``` ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ```
Also, this script comes from your official documentation, so please update it there as well:
[script](https://huggingface.co/docs/transformers/model_doc/whisper)
### Environment info
**System Info**
* transformers -> 4.36.1
* datasets -> 2.15.0
* huggingface_hub -> 0.19.4
* python -> 3.8.10
* accelerate -> 0.25.0
* pytorch -> 2.0.1+cpu
* Using GPU in Script -> No
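A quick sanity check (sketch) for the numpy mismatch behind the traceback; the private attribute exists in recent numpy releases (1.24+, if I'm not mistaken), so an older copy raises the same AttributeError:
```python
import numpy as np

print(np.__version__)
np._no_nep50_warning  # raises AttributeError on incompatible numpy installs
```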
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6541/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6540/comments | https://api.github.com/repos/huggingface/datasets/issues/6540/events | https://github.com/huggingface/datasets/issues/6540 | 2,058,965,157 | I_kwDODunzps56uVCl | 6,540 | Extreme inefficiency for `save_to_disk` when merging datasets | {
"login": "KatarinaYuan",
"id": 43512683,
"node_id": "MDQ6VXNlcjQzNTEyNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/43512683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KatarinaYuan",
"html_url": "https://github.com/KatarinaYuan",
"followers_url": "https://api.github.com/users/KatarinaYuan/followers",
"following_url": "https://api.github.com/users/KatarinaYuan/following{/other_user}",
"gists_url": "https://api.github.com/users/KatarinaYuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KatarinaYuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KatarinaYuan/subscriptions",
"organizations_url": "https://api.github.com/users/KatarinaYuan/orgs",
"repos_url": "https://api.github.com/users/KatarinaYuan/repos",
"events_url": "https://api.github.com/users/KatarinaYuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/KatarinaYuan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Concatenating datasets doesn't create any indices mapping - so flattening indices is not needed (unless you shuffle the dataset).\r\nCan you share the snippet of code you are using to merge your datasets and save them to disk ?"
] | 2023-12-29T00:44:35 | 2023-12-30T15:05:48 | null | NONE | null | null | ### Describe the bug
Hi, I tried to merge 22M sequences of data in total, where each sequence has a maximum length of 2000. I found that merging these datasets and then calling `save_to_disk` is extremely slow because of flattening the indices. I'm wondering if you have any suggestions or guidance on this. Thank you very much! (A sketch of the pattern I'm using follows.)
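The shard paths below are placeholders; the real data is ~22M rows across many shards:
```python
from datasets import load_from_disk, concatenate_datasets

shards = [load_from_disk(f"shards/part_{i}") for i in range(10)]
merged = concatenate_datasets(shards)
merged.save_to_disk("merged")  # this step is where everything stalls for me
```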
### Steps to reproduce the bug
The source data is too large to share a minimal reproduction
### Expected behavior
The merge and save should complete in a reasonable time (the source data is too large to share a minimal reproduction)
### Environment info
python 3.9.0
datasets 2.7.0
pytorch 2.0.0
tokenizers 0.13.1
transformers 4.31.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6540/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6539/comments | https://api.github.com/repos/huggingface/datasets/issues/6539/events | https://github.com/huggingface/datasets/issues/6539 | 2,058,493,960 | I_kwDODunzps56siAI | 6,539 | 'Repo card metadata block was not found' when loading a pragmeval dataset | {
"login": "lambdaofgod",
"id": 3647577,
"node_id": "MDQ6VXNlcjM2NDc1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lambdaofgod",
"html_url": "https://github.com/lambdaofgod",
"followers_url": "https://api.github.com/users/lambdaofgod/followers",
"following_url": "https://api.github.com/users/lambdaofgod/following{/other_user}",
"gists_url": "https://api.github.com/users/lambdaofgod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambdaofgod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambdaofgod/subscriptions",
"organizations_url": "https://api.github.com/users/lambdaofgod/orgs",
"repos_url": "https://api.github.com/users/lambdaofgod/repos",
"events_url": "https://api.github.com/users/lambdaofgod/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambdaofgod/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-28T14:18:25 | 2023-12-28T14:18:37 | null | NONE | null | null | ### Describe the bug
I can't load dataset subsets of 'pragmeval'.
The funny thing is that I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I installed exactly the same packages that Colab uses via poetry, so my environment only differs from Colab's in the Linux version, yet I still get the same bug outside Colab.
### Steps to reproduce the bug
Install dependencies with poetry
pyproject.toml
```
[tool.poetry]
name = "project"
version = "0.1.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "^3.10"
datasets = "2.16.0"
pandas = "1.5.3"
pyarrow = "10.0.1"
huggingface-hub = "0.19.4"
fsspec = "2023.6.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
`poetry run python -c "import datasets; print(datasets.get_dataset_config_names('pragmeval'))"`
prints ['default']
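Forcing a fresh download gives the same result (assuming `get_dataset_config_names` forwards `download_mode`, which I believe it does):
```python
import datasets

# Sanity check that a stale cached copy without repo card metadata is not being reused
print(datasets.get_dataset_config_names("pragmeval", download_mode="force_redownload"))
```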
### Expected behavior
The command should print
```
['emergent',
'emobank-arousal',
'emobank-dominance',
'emobank-valence',
'gum',
'mrda',
'pdtb',
'persuasiveness-claimtype',
'persuasiveness-eloquence',
'persuasiveness-premisetype',
'persuasiveness-relevance',
'persuasiveness-specificity',
'persuasiveness-strength',
'sarcasm',
'squinky-formality',
'squinky-implicature',
'squinky-informativeness',
'stac',
'switchboard',
'verifiability']
```
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6539/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6538/comments | https://api.github.com/repos/huggingface/datasets/issues/6538/events | https://github.com/huggingface/datasets/issues/6538 | 2,057,377,630 | I_kwDODunzps56oRde | 6,538 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | {
"login": "Sonali-Behera-TRT",
"id": 131662185,
"node_id": "U_kgDOB9kBaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sonali-Behera-TRT",
"html_url": "https://github.com/Sonali-Behera-TRT",
"followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers",
"following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}",
"gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions",
"organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs",
"repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos",
"events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error",
"I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?",
"I have the same issue now and didn't have this problem around 2 weeks ago.",
"> Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error\r\n\r\nYes, I am sure\r\n\r\n```\r\n!pip show datasets\r\nName: datasets\r\nVersion: 2.16.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /opt/conda/lib/python3.10/site-packages\r\nRequires: aiohttp, dill, filelock, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, pyarrow-hotfix, pyyaml, requests, tqdm, xxhash\r\nRequired-by: trl\r\n```",
"> I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?\r\n\r\nDon't know about other people. But I am having this issue whose solution I can't find anywhere. And this issue still persists. ",
"> I have the same issue now and didn't have this problem around 2 weeks ago.\r\n\r\nSame here",
"I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n",
"> I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n\r\nI also have datasets version 2.16, but the error is still there.",
"Can you try re-installing `datasets` ?",
"> Can you try re-installing `datasets` ?\r\n\r\nI tried re-installing. Still getting the same error. \r\n",
"> > Can you try re-installing `datasets` ?\r\n> \r\n> I tried re-installing. Still getting the same error.\r\n\r\nIn kaggle I used:\r\n- `%pip install -U datasets`\r\nand then restarted runtime and then everything works fine.",
"> > > Can you try re-installing `datasets` ?\r\n> > \r\n> > \r\n> > I tried re-installing. Still getting the same error.\r\n> \r\n> In kaggle I used:\r\n> \r\n> * `%pip install -U datasets`\r\n> and then restarted runtime and then everything works fine.\r\n\r\nYes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?",
"> > > > Can you try re-installing `datasets` ?\r\n> > > \r\n> > > \r\n> > > I tried re-installing. Still getting the same error.\r\n> > \r\n> > \r\n> > In kaggle I used:\r\n> > \r\n> > * `%pip install -U datasets`\r\n> > and then restarted runtime and then everything works fine.\r\n> \r\n> Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\nFor some packages it is required.\r\nhttps://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n",
"> > > > > Can you try re-installing `datasets` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried re-installing. Still getting the same error.\r\n> > > \r\n> > > \r\n> > > In kaggle I used:\r\n> > > \r\n> > > * `%pip install -U datasets`\r\n> > > and then restarted runtime and then everything works fine.\r\n> > \r\n> > \r\n> > Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\n> > For some packages it is required.\r\n> > https://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n\r\nThank you for your assistance. I dedicated the past 2-3 weeks to resolving this issue. Interestingly, it runs flawlessly in Colab without requiring a runtime restart. However, the problem persisted exclusively in Kaggle. I appreciate your help once again. Thank you.",
"Closing this issue as it is not related to the datasets library; rather, it's linked to platform-related issues."
] | 2023-12-27T13:31:16 | 2024-01-03T10:06:47 | 2024-01-03T10:04:58 | NONE | null | null | ### Describe the bug
While importing the packages below, I get the following error.
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```
Error:
````
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[5], line 14
4 from transformers import (
5 AutoModelForCausalLM,
6 AutoTokenizer,
(...)
11 logging
12 )
13 from peft import LoraConfig, PeftModel
---> 14 from trl import SFTTrainer
15 from huggingface_hub import login
16 import pandas as pd
File /opt/conda/lib/python3.10/site-packages/trl/__init__.py:21
8 from .import_utils import (
9 is_diffusers_available,
10 is_npu_available,
(...)
13 is_xpu_available,
14 )
15 from .models import (
16 AutoModelForCausalLMWithValueHead,
17 AutoModelForSeq2SeqLMWithValueHead,
18 PreTrainedModelWrapper,
19 create_reference_model,
20 )
---> 21 from .trainer import (
22 DataCollatorForCompletionOnlyLM,
23 DPOTrainer,
24 IterativeSFTTrainer,
25 PPOConfig,
26 PPOTrainer,
27 RewardConfig,
28 RewardTrainer,
29 SFTTrainer,
30 )
33 if is_diffusers_available():
34 from .models import (
35 DDPOPipelineOutput,
36 DDPOSchedulerOutput,
37 DDPOStableDiffusionPipeline,
38 DefaultDDPOStableDiffusionPipeline,
39 )
File /opt/conda/lib/python3.10/site-packages/trl/trainer/__init__.py:44
42 from .ppo_trainer import PPOTrainer
43 from .reward_trainer import RewardTrainer, compute_accuracy
---> 44 from .sft_trainer import SFTTrainer
45 from .training_configs import RewardConfig
File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23
21 import torch.nn as nn
22 from datasets import Dataset
---> 23 from datasets.arrow_writer import SchemaInferenceError
24 from datasets.builder import DatasetGenerationError
25 from transformers import (
26 AutoModelForCausalLM,
27 AutoTokenizer,
(...)
33 TrainingArguments,
34 )
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
````
transformers version: 4.36.2
python version: 3.10.12
datasets version: 2.16.1
### Steps to reproduce the bug
1. Install packages
```
!pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub
```
2. import packages
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```
### Expected behavior
No error while importing
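Per the comment thread, upgrading `datasets` and then restarting the notebook runtime is what resolved this on Kaggle. A minimal check one might run after the restart (sketch; the version shown is an example):
```python
# Run after `%pip install -U datasets` and a runtime restart, so the
# interpreter picks up the upgraded package rather than the stale import.
import datasets
from datasets.arrow_writer import SchemaInferenceError  # should now import cleanly

print(datasets.__version__)  # e.g. 2.16.x
```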
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 11.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6538/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6537/comments | https://api.github.com/repos/huggingface/datasets/issues/6537/events | https://github.com/huggingface/datasets/issues/6537 | 2,057,132,173 | I_kwDODunzps56nViN | 6,537 | Adding support for netCDF (*.nc) files | {
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"Related to #3113 ",
"Conceptually, we can use xarray to load the netCDF file, then xarray -> pandas -> pyarrow.",
"I'd still need to verify that such a conversion would be lossless, especially for multi-dimensional data."
] | 2023-12-27T09:27:29 | 2023-12-27T20:46:53 | null | NONE | null | null | ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When uploading *.nc files onto Huggingface Hub through the `datasets` API, I would like to be able to preview the dataset without converting it to another format.
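A conceptual sketch of the conversion path suggested in the comments above (xarray -> pandas -> `datasets`); this is an illustration only, `example.nc` is a placeholder file, and whether the round trip is lossless for multi-dimensional data is still an open question:
```python
import xarray as xr
from datasets import Dataset

ds_xr = xr.open_dataset("example.nc")    # hypothetical local netCDF file
df = ds_xr.to_dataframe().reset_index()  # flatten dimensions into columns
hf_ds = Dataset.from_pandas(df)          # pandas -> Arrow-backed Dataset
```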
### Your contribution
I can submit a PR, provided I have the time. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6537/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6536/comments | https://api.github.com/repos/huggingface/datasets/issues/6536/events | https://github.com/huggingface/datasets/issues/6536 | 2,056,863,239 | I_kwDODunzps56mT4H | 6,536 | datasets.load_dataset raises FileNotFoundError for datasets==2.16.0 | {
"login": "ArvinZhuang",
"id": 46237844,
"node_id": "MDQ6VXNlcjQ2MjM3ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArvinZhuang",
"html_url": "https://github.com/ArvinZhuang",
"followers_url": "https://api.github.com/users/ArvinZhuang/followers",
"following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions",
"organizations_url": "https://api.github.com/users/ArvinZhuang/orgs",
"repos_url": "https://api.github.com/users/ArvinZhuang/repos",
"events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArvinZhuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi ! Thanks for reporting\r\n\r\nThis is a bug in 2.16.0 for some datasets when `cache_dir` is a relative path. I opened https://github.com/huggingface/datasets/pull/6543 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 2023-12-27T03:15:48 | 2023-12-30T18:58:04 | 2023-12-30T15:54:00 | NONE | null | null | ### Describe the bug
It seems `datasets.load_dataset` raises `FileNotFoundError` for some Hub datasets with the latest `datasets==2.16.0`.
### Steps to reproduce the bug
For example, `pip install datasets==2.16.0`, then:
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache1')["train"]
```
This will raise:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/load.py", line 2545, in load_dataset
builder_instance.download_and_prepare(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1003, in download_and_prepare
self._download_and_prepare(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1076, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 43, in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 566, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 539, in extract
extracted_paths = map_nested(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 466, in map_nested
mapped = [
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 467, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 370, in _single_map_nested
return function(data_struct)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 451, in _download
out = cached_path(url_or_filename, download_config=download_config)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 188, in cached_path
output_path = get_from_cache(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 570, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wentingzhao/anthropic-hh-first-prompt/resolve/11b393a5545f706a357ebcd4a5285d93db176715/cache1/downloads/87d66c365626feca116cba323c4856c9aae056e4503f09f23e34aa085eb9de15
```
However, it seems to work fine for some datasets; for example, `datasets.load_dataset("ag_news", cache_dir='cache2')["test"]` works.
The dataset does work with `datasets==2.15.0`, though; for example, `pip install datasets==2.15.0`, then:
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache3')["train"]
Dataset({
features: ['user', 'system', 'source'],
num_rows: 8552
})
```
### Expected behavior
2.16.0 should work the same as 2.15.0 for all datasets.
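Per the maintainers' comments, the bug affects relative `cache_dir` paths in 2.16.0 and is fixed in 2.16.1. Until upgrading, passing an absolute path is a plausible workaround (untested sketch):
```python
import os
from datasets import load_dataset

cache_dir = os.path.abspath("cache1")  # absolute path sidesteps the relative-path bug
dataset = load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir=cache_dir)["train"]
```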
### Environment info
python3.9
conda env
tested on MacOS and Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6536/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6536/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6535/comments | https://api.github.com/repos/huggingface/datasets/issues/6535/events | https://github.com/huggingface/datasets/issues/6535 | 2,056,264,339 | I_kwDODunzps56kBqT | 6,535 | IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT | {
"login": "MahavirDabas18",
"id": 57484266,
"node_id": "MDQ6VXNlcjU3NDg0MjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/57484266?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MahavirDabas18",
"html_url": "https://github.com/MahavirDabas18",
"followers_url": "https://api.github.com/users/MahavirDabas18/followers",
"following_url": "https://api.github.com/users/MahavirDabas18/following{/other_user}",
"gists_url": "https://api.github.com/users/MahavirDabas18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MahavirDabas18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MahavirDabas18/subscriptions",
"organizations_url": "https://api.github.com/users/MahavirDabas18/orgs",
"repos_url": "https://api.github.com/users/MahavirDabas18/repos",
"events_url": "https://api.github.com/users/MahavirDabas18/events{/privacy}",
"received_events_url": "https://api.github.com/users/MahavirDabas18/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@sabman @pvl @kashif @vigsterkr ",
"This is surely the same issue as https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/25 that comes from the `transformers` `Trainer`. You should add `remove_unused_columns=False` to `TrainingArguments`\r\n\r\nAlso check your logs: the `Trainer` should log the length of your dataset before training starts and it surely showed length=0.",
"the same error \r\nIndexError: Invalid key: 22330 is out of bounds for size 0"
] | 2023-12-26T10:14:33 | 2024-02-05T08:42:31 | null | NONE | null | null | ### Describe the bug
I am trying to fine-tune the t5 model on the paraphrasing task. Without the line `model = get_peft_model(model, config)`, the same code trains without any issues. However, using the model returned from `get_peft_model` raises the following error from `datasets`:
`IndexError: Invalid key: 47682 is out of bounds for size 0`
I had raised this in https://github.com/huggingface/peft/issues/1299#issue-2056173386 and they suggested that I raise it here.
Here is the complete error:
```
IndexError Traceback (most recent call last)
in <cell line: 1>()
----> 1 trainer.train()
11 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1553 hf_hub_utils.enable_progress_bars()
1554 else:
-> 1555 return inner_training_loop(
1556 args=args,
1557 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1836
1837 step = -1
-> 1838 for step, inputs in enumerate(epoch_iterator):
1839 total_batched_samples += 1
1840 if rng_to_sync:
[/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py](https://localhost:8080/#) in iter(self)
446 # We iterate one batch ahead to check when we are at the end
447 try:
--> 448 current_batch = next(dataloader_iter)
449 except StopIteration:
450 yield
[/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in __next__(self)
628 # TODO(https://github.com/pytorch/pytorch/issues/76750)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
[/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
[/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index)
47 if self.auto_collation:
48 if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__:
---> 49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
51 data = [self.dataset[idx] for idx in possibly_batched_index]
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in getitems(self, keys)
2802 def getitems(self, keys: List) -> List:
2803 """Can be used to get a batch using a list of integers indices."""
-> 2804 batch = self.getitem(keys)
2805 n_examples = len(batch[next(iter(batch))])
2806 return [{col: array[i] for col, array in batch.items()} for i in range(n_examples)]
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in getitem(self, key)
2798 def getitem(self, key): # noqa: F811
2799 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2800 return self._getitem(key)
2801
2802 def getitems(self, keys: List) -> List:
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _getitem(self, key, **kwargs)
2782 format_kwargs = format_kwargs if format_kwargs is not None else {}
2783 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
-> 2784 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
2785 formatted_output = format_table(
2786 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in query_table(table, key, indices)
581 else:
582 size = indices.num_rows if indices is not None else table.num_rows
--> 583 _check_valid_index_key(key, size)
584 # Query the main table
585 if indices is None:
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
534 elif isinstance(key, Iterable):
535 if len(key) > 0:
--> 536 _check_valid_index_key(int(max(key)), size=size)
537 _check_valid_index_key(int(min(key)), size=size)
538 else:
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
524 if isinstance(key, int):
525 if (key < 0 and key + size < 0) or (key >= size):
--> 526 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
527 return
528 elif isinstance(key, slice):
IndexError: Invalid key: 47682 is out of bounds for size 0
```
### Steps to reproduce the bug
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#defining model name for tokenizer and model loading
model_name= "t5-small"
#loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
def preprocess_function(data, tokenizer):
inputs = [f"Paraphrase this sentence: {doc}" for doc in data["text"]]
model_inputs = tokenizer(inputs, max_length=150, truncation=True)
labels = [ast.literal_eval(i)[0] for i in data['paraphrases']]
labels = tokenizer(labels, max_length=150, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
train_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000))
val_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000,55000))
tokenized_train = train_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True)
tokenized_val = val_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True)
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
config = LoraConfig(
r=16, #attention heads
lora_alpha=32, #alpha scaling
lora_dropout=0.05,
bias="none",
task_type="Seq2Seq"
)
#loading the model
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
model = get_peft_model(model, config)
print_trainable_parameters(model)
#loading the data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer=tokenizer,
model=model,
label_pad_token_id=-100,
padding="longest"
)
#defining the training arguments
training_args = Seq2SeqTrainingArguments(
output_dir=os.getcwd(),
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=1e-3,
save_total_limit=3,
load_best_model_at_end=True,
num_train_epochs=1,
predict_with_generate=True
)
def compute_metric_with_extra(tokenizer):
def compute_metrics(eval_preds):
metric = evaluate.load('rouge')
preds, labels = eval_preds
# decode preds and labels
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# rougeLSum expects newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
return result
return compute_metrics
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_val,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics= compute_metric_with_extra(tokenizer)
)
trainer.train()
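Per the suggestion in the comments, the likely fix is to stop the `Trainer` from dropping the dataset columns (which left it with size 0). A minimal sketch of the change, reusing the arguments from the script above:
```python
training_args = Seq2SeqTrainingArguments(
    output_dir=os.getcwd(),
    remove_unused_columns=False,  # key change suggested in the comments
    # ... keep the remaining arguments from the script above ...
)
```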
### Expected behavior
I would want the trainer to train normally, as it did before I used `model = get_peft_model(model, config)`.
### Environment info
datasets version- 2.16.0
peft version- 0.7.1
transformers version- 4.35.2
accelerate version- 0.25.0
python- 3.10.12
environment- google colab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6535/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6534/comments | https://api.github.com/repos/huggingface/datasets/issues/6534/events | https://github.com/huggingface/datasets/issues/6534 | 2,056,002,548 | I_kwDODunzps56jBv0 | 6,534 | How to configure multiple folders in the same zip package | {
"login": "d710055071",
"id": 12895488,
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d710055071",
"html_url": "https://github.com/d710055071",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"repos_url": "https://api.github.com/users/d710055071/repos",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova"
] | 2023-12-26T03:56:20 | 2023-12-26T06:31:16 | null | CONTRIBUTOR | null | null | How should I write the "configs" section in the README when all the data, such as the train and test splits, is inside a single zip file? The train folder and the test folder are both inside data.zip; a sketch follows below.
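A hedged sketch of one possible layout (hypothetical and untested): if the README `configs` globs cannot address paths inside an archive, packaging one zip per split is the safer option:
```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: train.zip
  - split: test
    path: test.zip
```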
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6534/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6533/comments | https://api.github.com/repos/huggingface/datasets/issues/6533/events | https://github.com/huggingface/datasets/issues/6533 | 2,055,929,101 | I_kwDODunzps56iv0N | 6,533 | ted_talks_iwslt | Error: Config name is missing | {
"login": "rayliuca",
"id": 35850903,
"node_id": "MDQ6VXNlcjM1ODUwOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/35850903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayliuca",
"html_url": "https://github.com/rayliuca",
"followers_url": "https://api.github.com/users/rayliuca/followers",
"following_url": "https://api.github.com/users/rayliuca/following{/other_user}",
"gists_url": "https://api.github.com/users/rayliuca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rayliuca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayliuca/subscriptions",
"organizations_url": "https://api.github.com/users/rayliuca/orgs",
"repos_url": "https://api.github.com/users/rayliuca/repos",
"events_url": "https://api.github.com/users/rayliuca/events{/privacy}",
"received_events_url": "https://api.github.com/users/rayliuca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi ! Thanks for reporting. I opened https://github.com/huggingface/datasets/pull/6544 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 2023-12-26T00:38:18 | 2023-12-30T18:58:21 | 2023-12-30T16:09:50 | NONE | null | null | ### Describe the bug
Running `load_dataset` with the newest `datasets` library, as shown below, on `ted_talks_iwslt` with language-pair and year data throws the error "Config name is missing".
see also:
https://huggingface.co/datasets/ted_talks_iwslt/discussions/3
likely caused by #6493, where the `and not config_kwargs` part in the if logic was removed
https://github.com/huggingface/datasets/blob/ef3b5dd3633995c95d77f35fb17f89ff44990bc4/src/datasets/builder.py#L512
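For context, a hypothetical paraphrase of the guard being discussed (reconstructed for illustration, not verbatim library code; see the linked `builder.py` line for the real implementation):
```python
# Before #6493 (paraphrased): only raise when neither a config name nor any
# config kwargs were supplied.
if config_name is None and len(self.BUILDER_CONFIGS) > 1 and not config_kwargs:
    raise ValueError("Config name is missing.")
# With `and not config_kwargs` removed, a call that passes only kwargs such
# as language_pair/year (as below) can now hit this error path.
```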
### Steps to reproduce the bug
run:
```python
load_dataset("ted_talks_iwslt", language_pair=("ja", "en"), year="2015")
```
### Expected behavior
Load the data without error
### Environment info
datasets 2.16.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6533/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6532/comments | https://api.github.com/repos/huggingface/datasets/issues/6532/events | https://github.com/huggingface/datasets/issues/6532 | 2,055,631,201 | I_kwDODunzps56hnFh | 6,532 | [Feature request] Indexing datasets by a customly-defined id field to enable random access dataset items via the id | {
"login": "Yu-Shi",
"id": 3377221,
"node_id": "MDQ6VXNlcjMzNzcyMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3377221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yu-Shi",
"html_url": "https://github.com/Yu-Shi",
"followers_url": "https://api.github.com/users/Yu-Shi/followers",
"following_url": "https://api.github.com/users/Yu-Shi/following{/other_user}",
"gists_url": "https://api.github.com/users/Yu-Shi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yu-Shi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yu-Shi/subscriptions",
"organizations_url": "https://api.github.com/users/Yu-Shi/orgs",
"repos_url": "https://api.github.com/users/Yu-Shi/repos",
"events_url": "https://api.github.com/users/Yu-Shi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yu-Shi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"You can simply use a python dict as index:\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"BeIR/dbpedia-entity\", \"corpus\", split=\"corpus\")\r\n>>> index = {key: idx for idx, key in enumerate(ds[\"_id\"])}\r\n>>> ds[index[\"<dbpedia:Pikachu>\"]]\r\n{'_id': '<dbpedia:Pikachu>',\r\n 'title': 'Pikachu',\r\n 'text': 'Pikachu (Japanese: γγ«γγ₯γ¦) are a fictional species of PokΓ©mon. PokΓ©mon are fictional creatures that appear in an assortment of comic books, animated movies and television shows, video games, and trading card games licensed by The PokΓ©mon Company, a Japanese corporation. The Pikachu design was conceived by Ken Sugimori.'}\r\n```",
"Thanks for your reply. Yes, I can do that, but it is time-consuming to do that every time I launch the program (some datasets are extremely big). HF Datasets has a nice feature to support instant data loading and efficient random access via row ids. I'm curious if this beneficial feature could be further extended to custom data columns.\r\n"
] | 2023-12-25T11:37:10 | 2024-01-02T13:52:05 | null | NONE | null | null | ### Feature request
Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access via row, but not via these kinds of id fields. I wonder if it is possible to add support for indexing by a custom "id-like" field to enable random access via such ids. The ids may be numbers or strings.
### Motivation
In some cases, especially during inference/evaluation, I may want to find out the item that has a specified id, defined by the dataset itself.
For example, in a typical re-ranking setting in information retrieval, the user may want to re-rank the set of candidate documents of each query. The input is usually presented in a TREC-style run file, with the following format:
```
<qid> Q0 <docno> <rank> <score> <tag>
```
The re-ranking program should be able to fetch the queries and documents according to the `<qid>` and `<docno>`, which are the original ids defined in the query/document datasets. To accomplish this, I have to iterate over the whole HF dataset to get the mapping from real ids to row ids every time I start the program, which is time-consuming. Thus I want HF datasets to provide an option for users to index by a custom id column, not by row.
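Building on the maintainers' suggestion above, one stopgap is to build the id-to-row mapping once and cache it to disk, so later runs skip the full scan (sketch; the index path and the `_id` column are examples):
```python
import json
import os

from datasets import load_dataset

ds = load_dataset("BeIR/dbpedia-entity", "corpus", split="corpus")
index_path = "dbpedia_id_index.json"
if os.path.exists(index_path):
    with open(index_path) as f:
        index = json.load(f)
else:
    index = {key: idx for idx, key in enumerate(ds["_id"])}  # one-time scan
    with open(index_path, "w") as f:
        json.dump(index, f)

doc = ds[index["<dbpedia:Pikachu>"]]  # random access via the dataset's own id
```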
### Your contribution
I'm not an expert in this project and I'm afraid that I'm not able to make contributions on the code. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6532/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6531/comments | https://api.github.com/repos/huggingface/datasets/issues/6531/events | https://github.com/huggingface/datasets/pull/6531 | 2,055,201,605 | PR_kwDODunzps5it5Sm | 6,531 | Add polars compatibility | {
"login": "psmyth94",
"id": 11325244,
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psmyth94",
"html_url": "https://github.com/psmyth94",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-24T20:03:23 | 2023-12-24T20:03:23 | null | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6531",
"html_url": "https://github.com/huggingface/datasets/pull/6531",
"diff_url": "https://github.com/huggingface/datasets/pull/6531.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6531.patch",
"merged_at": null
} | Hey there,
I've just finished adding support to convert and format to `polars.DataFrame`. This was in response to the open issue about integrating Polars [#3334](https://github.com/huggingface/datasets/issues/3334). Datasets can be switched to Polars format via `Dataset.set_format("polars")`. I've also included `to_polars` and `from_polars`. All polars functions are checked via config.POLARS_AVAILABLE.
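A quick usage sketch of the API described above (method names are taken from this PR description; exact signatures may differ from what is finally merged):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.set_format("polars")        # rows/batches now come back as polars objects
df = ds.to_polars()            # whole dataset as a polars.DataFrame
ds2 = Dataset.from_polars(df)  # round-trip back into a Dataset
```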
A few notes:
This only supports `DataFrames` and not `LazyFrames`. This could probably be integrated fairly easily via an `is_lazy` arg in `set_format` and `to_polars`.
Let me know your feedback. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6531/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6531/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6530/comments | https://api.github.com/repos/huggingface/datasets/issues/6530/events | https://github.com/huggingface/datasets/issues/6530 | 2,054,817,609 | I_kwDODunzps56egdJ | 6,530 | Impossible to save a mapped dataset to disk | {
"login": "kopyl",
"id": 17604849,
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kopyl",
"html_url": "https://github.com/kopyl",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"repos_url": "https://api.github.com/users/kopyl/repos",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I solved it with `train_dataset.with_format(None)`\r\nBut then faced some more issues (which i later solved too).\r\n\r\nHuggingface does not seem to care, so I do. Here is an updated training script which saves a pre-processed (mapped) dataset to your local directory if you specify `--save_precomputed_data_dir=DIR_NAME`. Then use `--train_precomputed_data_dir` with the same dir to load it instead of `--dataset_name`.\r\n\r\n[Proper SDXL trainer code](https://github.com/kopyl/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)\r\n[Notebook for pre-computing a dataset and saving locally](https://colab.research.google.com/drive/17Yo08hePx-NlHs99RecdeiNc8CQg4O7l?usp=sharing)\r\n\r\nExample:\r\n\r\n1st run (nothing is pre-computed yet):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --dataset_name=lambdalabs/pokemon-blip-captions \\\r\n --save_precomputed_data_dir=\"test-5\"\r\n```\r\n\r\n2nd run (the pre-computed dataset is saved to `test-5` directory):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --train_precomputed_data_dir test-5\r\n```\r\n\r\nThis way when you're gonna be using your pre-computed dataset you don't need to worry about re-mapping your dataset when you change an argument for your trainer script"
] | 2023-12-23T15:18:27 | 2023-12-24T09:40:30 | null | NONE | null | null | ### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After I do the mapping like this:
```
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
train_dataset = train_dataset.map(
compute_vae_encodings_fn,
batched=True,
batch_size=16,
)
```
and try to save it like this:
`train_dataset.save_to_disk("test")`
I get this error ([full traceback](https://pastebin.com/kq3vt739)):
```
TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
```
What is interesting, though, is that pushing to the Hub works:
`train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)`
Here is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset
### Steps to reproduce the bug
Here is the self-contained notebook:
https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing
### Expected behavior
It should be easily saved to disk
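Per the author's follow-up comment, clearing the format transform before saving works around the serialization error (sketch, untested here):
```python
train_dataset = train_dataset.with_format(None)  # drop the non-serializable transform
train_dataset.save_to_disk("test")
```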
### Environment info
NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2.
[pip freeze](https://pastebin.com/QTNb6iru) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6530/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6529/comments | https://api.github.com/repos/huggingface/datasets/issues/6529/events | https://github.com/huggingface/datasets/issues/6529 | 2,054,209,449 | I_kwDODunzps56cL-p | 6,529 | Impossible to only download a test split | {
"login": "ysig",
"id": 28439529,
"node_id": "MDQ6VXNlcjI4NDM5NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysig",
"html_url": "https://github.com/ysig",
"followers_url": "https://api.github.com/users/ysig/followers",
"following_url": "https://api.github.com/users/ysig/following{/other_user}",
"gists_url": "https://api.github.com/users/ysig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysig/subscriptions",
"organizations_url": "https://api.github.com/users/ysig/orgs",
"repos_url": "https://api.github.com/users/ysig/repos",
"events_url": "https://api.github.com/users/ysig/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysig/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The only way right now is to load with streaming=True",
"This feature has been proposed for a long time. I'm looking forward to the implementation. On clusters `streaming=True` is not an option since we do not have Internet on compute nodes. See: https://github.com/huggingface/datasets/discussions/1896#discussioncomment-2359593"
] | 2023-12-22T16:56:32 | 2024-02-02T00:05:04 | null | NONE | null | null | I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.
Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed before! split is passed to the dataset builder in `as_dataset`.
If I'm not missing something, this seems like bad design, for the following use case:
> Imagine there is a huge dataset that has an evaluation test set and you want to just download and run just to compare your method.
Is there a current workaround that can help me achieve the same result?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6529/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6529/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6528/comments | https://api.github.com/repos/huggingface/datasets/issues/6528/events | https://github.com/huggingface/datasets/pull/6528 | 2,053,996,494 | PR_kwDODunzps5ip9JH | 6,528 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6528). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004875 / 0.011353 (-0.006478) | 0.003501 / 0.011008 (-0.007507) | 0.062604 / 0.038508 (0.024096) | 0.031916 / 0.023109 (0.008806) | 0.256138 / 0.275898 (-0.019760) | 0.278514 / 0.323480 (-0.044966) | 0.002917 / 0.007986 (-0.005069) | 0.002636 / 0.004328 (-0.001693) | 0.049154 / 0.004250 (0.044904) | 0.041985 / 0.037052 (0.004933) | 0.256857 / 0.258489 (-0.001632) | 0.282628 / 0.293841 (-0.011213) | 0.027506 / 0.128546 (-0.101041) | 0.010736 / 0.075646 (-0.064910) | 0.207268 / 0.419271 (-0.212003) | 0.035312 / 0.043533 (-0.008221) | 0.259274 / 0.255139 (0.004135) | 0.281463 / 0.283200 (-0.001737) | 0.019905 / 0.141683 (-0.121778) | 1.108719 / 1.452155 (-0.343435) | 1.177871 / 1.492716 (-0.314845) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004435 / 0.018006 (-0.013571) | 0.310643 / 0.000490 (0.310153) | 0.000243 / 0.000200 (0.000043) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018013 / 0.037411 (-0.019398) | 0.060702 / 0.014526 (0.046176) | 0.073243 / 0.176557 (-0.103314) | 0.119523 / 0.737135 (-0.617613) | 0.074204 / 0.296338 (-0.222134) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281075 / 0.215209 (0.065866) | 2.722154 / 2.077655 (0.644499) | 1.441052 / 1.504120 (-0.063068) | 1.305940 / 1.541195 (-0.235255) | 1.356752 / 
1.468490 (-0.111738) | 0.570399 / 4.584777 (-4.014378) | 2.329158 / 3.745712 (-1.416554) | 2.749093 / 5.269862 (-2.520768) | 1.717752 / 4.565676 (-2.847925) | 0.063228 / 0.424275 (-0.361047) | 0.004981 / 0.007607 (-0.002626) | 0.330601 / 0.226044 (0.104557) | 3.300987 / 2.268929 (1.032059) | 1.778673 / 55.444624 (-53.665951) | 1.507841 / 6.876477 (-5.368636) | 1.520454 / 2.142072 (-0.621619) | 0.650816 / 4.805227 (-4.154412) | 0.118606 / 6.500664 (-6.382058) | 0.042199 / 0.075469 (-0.033271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.919668 / 1.841788 (-0.922119) | 11.293437 / 8.074308 (3.219129) | 9.928525 / 10.191392 (-0.262867) | 0.127142 / 0.680424 (-0.553282) | 0.013470 / 0.534201 (-0.520731) | 0.284648 / 0.579283 (-0.294636) | 0.264942 / 0.434364 (-0.169422) | 0.321866 / 0.540337 (-0.218471) | 0.414513 / 1.386936 (-0.972423) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005052 / 0.011353 (-0.006301) | 0.003204 / 0.011008 (-0.007804) | 0.051102 / 0.038508 (0.012594) | 0.032105 / 0.023109 (0.008996) | 0.273923 / 0.275898 (-0.001976) | 0.297031 / 0.323480 (-0.026449) | 0.004002 / 0.007986 (-0.003984) | 0.002636 / 0.004328 (-0.001693) | 0.047696 / 0.004250 (0.043445) | 0.044086 / 0.037052 (0.007034) | 0.277779 / 0.258489 (0.019289) | 0.306678 / 0.293841 (0.012837) | 0.028557 / 0.128546 (-0.099989) | 0.010631 / 0.075646 (-0.065015) | 0.056419 / 0.419271 (-0.362852) | 0.054285 / 0.043533 (0.010752) | 0.276506 / 0.255139 (0.021367) | 0.296315 / 0.283200 (0.013116) | 0.018642 / 0.141683 (-0.123040) | 1.146926 / 1.452155 (-0.305229) | 1.257625 / 1.492716 (-0.235092) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094231 / 0.018006 (0.076225) | 0.302805 / 0.000490 (0.302315) | 0.000229 / 0.000200 (0.000029) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022510 / 0.037411 (-0.014901) | 0.076092 / 0.014526 (0.061566) | 0.090642 / 0.176557 (-0.085915) | 0.127016 / 0.737135 (-0.610120) | 0.089169 / 0.296338 (-0.207169) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290812 / 0.215209 (0.075603) | 2.858528 / 2.077655 (0.780873) | 1.577555 / 1.504120 (0.073436) | 1.447810 / 1.541195 (-0.093384) | 1.447546 / 1.468490 (-0.020944) | 0.559118 / 4.584777 (-4.025659) | 2.408930 / 3.745712 (-1.336782) | 2.733761 / 5.269862 (-2.536101) | 1.700107 / 4.565676 (-2.865570) | 0.062447 / 0.424275 (-0.361828) | 0.004999 / 0.007607 (-0.002608) | 0.340207 / 0.226044 (0.114162) | 3.344131 / 2.268929 (1.075203) | 1.902289 / 55.444624 (-53.542335) | 1.628226 / 6.876477 (-5.248251) | 1.629435 / 2.142072 (-0.512637) | 0.625011 / 4.805227 (-4.180216) | 0.119929 / 6.500664 (-6.380735) | 0.041097 / 0.075469 (-0.034372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977461 / 1.841788 (-0.864327) | 12.303189 / 8.074308 (4.228881) | 11.008743 / 10.191392 (0.817351) | 0.128578 / 0.680424 (-0.551845) | 0.015305 / 0.534201 (-0.518896) | 0.286468 / 0.579283 (-0.292816) | 0.275824 / 0.434364 (-0.158540) | 0.321487 / 0.540337 (-0.218851) | 0.420591 / 1.386936 (-0.966345) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54 \"CML watermark\")\n"
] | 2023-12-22T14:23:18 | 2023-12-22T14:31:42 | 2023-12-22T14:25:34 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6528",
"html_url": "https://github.com/huggingface/datasets/pull/6528",
"diff_url": "https://github.com/huggingface/datasets/pull/6528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6528.patch",
"merged_at": "2023-12-22T14:25:34"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6528/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6527/comments | https://api.github.com/repos/huggingface/datasets/issues/6527/events | https://github.com/huggingface/datasets/pull/6527 | 2,053,966,748 | PR_kwDODunzps5ip2vd | 6,527 | Release: 2.16.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6527). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004870 / 0.011353 (-0.006483) | 0.003606 / 0.011008 (-0.007402) | 0.062719 / 0.038508 (0.024211) | 0.031785 / 0.023109 (0.008676) | 0.238809 / 0.275898 (-0.037089) | 0.263000 / 0.323480 (-0.060480) | 0.002844 / 0.007986 (-0.005142) | 0.002698 / 0.004328 (-0.001631) | 0.048070 / 0.004250 (0.043819) | 0.042333 / 0.037052 (0.005280) | 0.243032 / 0.258489 (-0.015457) | 0.273197 / 0.293841 (-0.020644) | 0.027498 / 0.128546 (-0.101048) | 0.010592 / 0.075646 (-0.065055) | 0.204770 / 0.419271 (-0.214502) | 0.034837 / 0.043533 (-0.008696) | 0.242518 / 0.255139 (-0.012621) | 0.267461 / 0.283200 (-0.015739) | 0.018479 / 0.141683 (-0.123204) | 1.105444 / 1.452155 (-0.346710) | 1.163659 / 1.492716 (-0.329057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004717 / 0.018006 (-0.013289) | 0.303338 / 0.000490 (0.302849) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018298 / 0.037411 (-0.019113) | 0.061225 / 0.014526 (0.046699) | 0.073514 / 0.176557 (-0.103043) | 0.120230 / 0.737135 (-0.616905) | 0.076195 / 0.296338 (-0.220144) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284731 / 0.215209 (0.069522) | 2.773463 / 2.077655 (0.695809) | 1.498239 / 1.504120 (-0.005881) | 1.372143 / 1.541195 (-0.169052) | 1.448949 / 
1.468490 (-0.019542) | 0.572516 / 4.584777 (-4.012261) | 2.404041 / 3.745712 (-1.341671) | 2.763047 / 5.269862 (-2.506814) | 1.722419 / 4.565676 (-2.843257) | 0.063104 / 0.424275 (-0.361172) | 0.004989 / 0.007607 (-0.002618) | 0.341864 / 0.226044 (0.115820) | 3.391635 / 2.268929 (1.122707) | 1.872694 / 55.444624 (-53.571931) | 1.594490 / 6.876477 (-5.281987) | 1.596940 / 2.142072 (-0.545132) | 0.645265 / 4.805227 (-4.159962) | 0.117408 / 6.500664 (-6.383256) | 0.042405 / 0.075469 (-0.033064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963207 / 1.841788 (-0.878580) | 11.676551 / 8.074308 (3.602243) | 10.194287 / 10.191392 (0.002895) | 0.130329 / 0.680424 (-0.550094) | 0.015381 / 0.534201 (-0.518820) | 0.288848 / 0.579283 (-0.290435) | 0.264781 / 0.434364 (-0.169583) | 0.321212 / 0.540337 (-0.219126) | 0.418308 / 1.386936 (-0.968628) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005533 / 0.011353 (-0.005819) | 0.003733 / 0.011008 (-0.007276) | 0.048877 / 0.038508 (0.010369) | 0.030263 / 0.023109 (0.007154) | 0.281161 / 0.275898 (0.005263) | 0.302971 / 0.323480 (-0.020509) | 0.003943 / 0.007986 (-0.004043) | 0.002717 / 0.004328 (-0.001612) | 0.047845 / 0.004250 (0.043594) | 0.045809 / 0.037052 (0.008757) | 0.283337 / 0.258489 (0.024848) | 0.312914 / 0.293841 (0.019073) | 0.029074 / 0.128546 (-0.099472) | 0.010775 / 0.075646 (-0.064871) | 0.057461 / 0.419271 (-0.361810) | 0.053756 / 0.043533 (0.010223) | 0.281809 / 0.255139 (0.026670) | 0.298339 / 0.283200 (0.015139) | 0.019270 / 0.141683 (-0.122413) | 1.117575 / 1.452155 (-0.334580) | 1.191703 / 1.492716 (-0.301013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093513 / 0.018006 (0.075507) | 0.301267 / 0.000490 (0.300777) | 0.000211 / 0.000200 (0.000012) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022278 / 0.037411 (-0.015133) | 0.076805 / 0.014526 (0.062279) | 0.088820 / 0.176557 (-0.087736) | 0.127903 / 0.737135 (-0.609233) | 0.092988 / 0.296338 (-0.203350) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297787 / 0.215209 (0.082578) | 2.899652 / 2.077655 (0.821997) | 1.598830 / 1.504120 (0.094710) | 1.469398 / 1.541195 (-0.071797) | 1.511099 / 1.468490 (0.042609) | 0.559785 / 4.584777 (-4.024992) | 2.426448 / 3.745712 (-1.319264) | 2.798811 / 5.269862 (-2.471051) | 1.737790 / 4.565676 (-2.827887) | 0.062219 / 0.424275 (-0.362056) | 0.005120 / 0.007607 (-0.002487) | 0.351051 / 0.226044 (0.125007) | 3.492063 / 2.268929 (1.223134) | 1.965674 / 55.444624 (-53.478950) | 1.672874 / 6.876477 (-5.203603) | 1.709700 / 2.142072 (-0.432373) | 0.639347 / 4.805227 (-4.165880) | 0.126383 / 6.500664 (-6.374281) | 0.042731 / 0.075469 (-0.032738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968619 / 1.841788 (-0.873168) | 12.671030 / 8.074308 (4.596722) | 11.125347 / 10.191392 (0.933955) | 0.142983 / 0.680424 (-0.537441) | 0.015726 / 0.534201 (-0.518475) | 0.288610 / 0.579283 (-0.290673) | 0.276473 / 0.434364 (-0.157891) | 0.326590 / 0.540337 (-0.213748) | 0.423832 / 1.386936 (-0.963104) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a85fb52fc8ddb41307e61cbf6a5189f3bba27829 \"CML watermark\")\n"
] | 2023-12-22T13:59:56 | 2023-12-22T14:24:12 | 2023-12-22T14:17:55 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6527",
"html_url": "https://github.com/huggingface/datasets/pull/6527",
"diff_url": "https://github.com/huggingface/datasets/pull/6527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6527.patch",
"merged_at": "2023-12-22T14:17:55"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6527/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6526/comments | https://api.github.com/repos/huggingface/datasets/issues/6526/events | https://github.com/huggingface/datasets/pull/6526 | 2,053,726,451 | PR_kwDODunzps5ipB5v | 6,526 | Preserve order of configs and splits when using Parquet exports | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6526). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005101 / 0.011353 (-0.006252) | 0.003471 / 0.011008 (-0.007537) | 0.062293 / 0.038508 (0.023785) | 0.032650 / 0.023109 (0.009541) | 0.249241 / 0.275898 (-0.026657) | 0.277079 / 0.323480 (-0.046400) | 0.002971 / 0.007986 (-0.005015) | 0.002637 / 0.004328 (-0.001691) | 0.048415 / 0.004250 (0.044165) | 0.042832 / 0.037052 (0.005779) | 0.247840 / 0.258489 (-0.010649) | 0.283994 / 0.293841 (-0.009847) | 0.027764 / 0.128546 (-0.100782) | 0.010544 / 0.075646 (-0.065102) | 0.208810 / 0.419271 (-0.210462) | 0.035744 / 0.043533 (-0.007789) | 0.252811 / 0.255139 (-0.002328) | 0.276163 / 0.283200 (-0.007036) | 0.018581 / 0.141683 (-0.123102) | 1.130043 / 1.452155 (-0.322112) | 1.194298 / 1.492716 (-0.298418) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004488 / 0.018006 (-0.013518) | 0.302072 / 0.000490 (0.301582) | 0.000211 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017799 / 0.037411 (-0.019613) | 0.061146 / 0.014526 (0.046620) | 0.081796 / 0.176557 (-0.094761) | 0.120407 / 0.737135 (-0.616729) | 0.075211 / 0.296338 (-0.221127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295349 / 0.215209 (0.080140) | 2.953511 / 2.077655 (0.875857) | 1.495332 / 1.504120 (-0.008788) | 1.364144 / 1.541195 (-0.177051) | 1.429562 / 
1.468490 (-0.038928) | 0.574325 / 4.584777 (-4.010452) | 2.384352 / 3.745712 (-1.361360) | 2.843625 / 5.269862 (-2.426236) | 1.806802 / 4.565676 (-2.758875) | 0.065076 / 0.424275 (-0.359199) | 0.004970 / 0.007607 (-0.002638) | 0.339935 / 0.226044 (0.113891) | 3.375103 / 2.268929 (1.106175) | 1.822921 / 55.444624 (-53.621703) | 1.546126 / 6.876477 (-5.330350) | 1.573630 / 2.142072 (-0.568442) | 0.655081 / 4.805227 (-4.150146) | 0.122446 / 6.500664 (-6.378218) | 0.042220 / 0.075469 (-0.033249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942127 / 1.841788 (-0.899661) | 11.470401 / 8.074308 (3.396093) | 10.025961 / 10.191392 (-0.165431) | 0.129087 / 0.680424 (-0.551337) | 0.014141 / 0.534201 (-0.520060) | 0.285470 / 0.579283 (-0.293813) | 0.266755 / 0.434364 (-0.167608) | 0.323391 / 0.540337 (-0.216947) | 0.427645 / 1.386936 (-0.959291) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005578 / 0.011353 (-0.005775) | 0.003734 / 0.011008 (-0.007274) | 0.049200 / 0.038508 (0.010692) | 0.030981 / 0.023109 (0.007872) | 0.281195 / 0.275898 (0.005297) | 0.309950 / 0.323480 (-0.013530) | 0.004046 / 0.007986 (-0.003939) | 0.002709 / 0.004328 (-0.001620) | 0.048505 / 0.004250 (0.044254) | 0.046245 / 0.037052 (0.009193) | 0.280130 / 0.258489 (0.021641) | 0.313739 / 0.293841 (0.019898) | 0.029828 / 0.128546 (-0.098718) | 0.011152 / 0.075646 (-0.064495) | 0.057753 / 0.419271 (-0.361518) | 0.055112 / 0.043533 (0.011580) | 0.281861 / 0.255139 (0.026722) | 0.304402 / 0.283200 (0.021203) | 0.019931 / 0.141683 (-0.121752) | 1.150585 / 1.452155 (-0.301570) | 1.217850 / 1.492716 (-0.274866) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091552 / 0.018006 (0.073546) | 0.301772 / 0.000490 (0.301282) | 0.000225 / 0.000200 (0.000025) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023189 / 0.037411 (-0.014223) | 0.078741 / 0.014526 (0.064216) | 0.092320 / 0.176557 (-0.084236) | 0.129636 / 0.737135 (-0.607500) | 0.091673 / 0.296338 (-0.204665) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298542 / 0.215209 (0.083333) | 2.899358 / 2.077655 (0.821703) | 1.673896 / 1.504120 (0.169776) | 1.489518 / 1.541195 (-0.051677) | 1.542853 / 1.468490 (0.074363) | 0.559843 / 4.584777 (-4.024934) | 2.422101 / 3.745712 (-1.323611) | 2.844592 / 5.269862 (-2.425270) | 1.794527 / 4.565676 (-2.771150) | 0.064615 / 0.424275 (-0.359660) | 0.005078 / 0.007607 (-0.002530) | 0.355112 / 0.226044 (0.129068) | 3.462129 / 2.268929 (1.193200) | 1.975393 / 55.444624 (-53.469231) | 1.706513 / 6.876477 (-5.169963) | 1.716954 / 2.142072 (-0.425118) | 0.642094 / 4.805227 (-4.163133) | 0.119215 / 6.500664 (-6.381449) | 0.041941 / 0.075469 (-0.033528) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986774 / 1.841788 (-0.855014) | 12.702049 / 8.074308 (4.627741) | 11.727663 / 10.191392 (1.536271) | 0.135008 / 0.680424 (-0.545416) | 0.016055 / 0.534201 (-0.518146) | 0.293564 / 0.579283 (-0.285719) | 0.284884 / 0.434364 (-0.149480) | 0.332524 / 0.540337 (-0.207814) | 0.425392 / 1.386936 (-0.961544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7b5fc585fcaf77b92839e82d0ce2c2fbf0d9ea95 \"CML watermark\")\n"
] | 2023-12-22T10:35:56 | 2023-12-22T11:42:22 | 2023-12-22T11:36:14 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6526",
"html_url": "https://github.com/huggingface/datasets/pull/6526",
"diff_url": "https://github.com/huggingface/datasets/pull/6526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6526.patch",
"merged_at": "2023-12-22T11:36:14"
} | Preserve order of configs and splits, as defined in dataset infos.
Fix #6521. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6526/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6525/comments | https://api.github.com/repos/huggingface/datasets/issues/6525/events | https://github.com/huggingface/datasets/pull/6525 | 2,053,119,357 | PR_kwDODunzps5im-lL | 6,525 | BBox type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6525). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"closing in favor of other ideas that would not involve any typing"
] | 2023-12-21T22:13:27 | 2024-01-11T06:34:51 | 2023-12-21T22:39:27 | MEMBER | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6525",
"html_url": "https://github.com/huggingface/datasets/pull/6525",
"diff_url": "https://github.com/huggingface/datasets/pull/6525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6525.patch",
"merged_at": null
} | See [internal discussion](https://huggingface.slack.com/archives/C02EK7C3SHW/p1703097195609209)
Draft to get some feedback on a possible `BBox` feature type that can be used to get object-detection bounding-box data in one format or another.
```python
>>> from datasets import load_dataset, BBox
>>> ds = load_dataset("svhn", "full_numbers", split="train")
>>> ds[0]
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=107x46 at 0x126409BE0>,
'digits': {'bbox': [[38, 1, 21, 40], [57, 3, 16, 40]], 'label': [4, 6]}
}
>>> ds = ds.rename_column("digits", "annotations").cast_column("annotations", BBox(format="coco"))
>>> ds[0]
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=107x46 at 0x147730070>,
'annotations': [{'bbox': [38, 1, 21, 40], 'category_id': 4}, {'bbox': [57, 3, 16, 40], 'category_id': 6}]
}
```
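For comparison, this kind of format switch can already be approximated today with a plain `.map` call instead of a dedicated feature type. A minimal sketch, assuming the `digits` column from the first example above and the standard box definitions (COCO boxes are `[x, y, width, height]`, Pascal VOC boxes are `[x_min, y_min, x_max, y_max]`):
```python
def coco_to_voc(example):
    # Rewrite each [x, y, w, h] box as [x_min, y_min, x_max, y_max]
    example["digits"]["bbox"] = [
        [x, y, x + w, y + h] for x, y, w, h in example["digits"]["bbox"]
    ]
    return example

ds = ds.map(coco_to_voc)
```
The `BBox` type proposed here would turn that conversion into a one-line `cast_column` instead.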
Note that `BBox` is a type for a list of bounding boxes, not just one, which would be needed to switch from one format to another using type casting. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6525/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6524/comments | https://api.github.com/repos/huggingface/datasets/issues/6524/events | https://github.com/huggingface/datasets/issues/6524 | 2,053,076,311 | I_kwDODunzps56X3VX | 6,524 | Streaming the Pile: Missing Files | {
"login": "FelixLabelle",
"id": 23347756,
"node_id": "MDQ6VXNlcjIzMzQ3NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/23347756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FelixLabelle",
"html_url": "https://github.com/FelixLabelle",
"followers_url": "https://api.github.com/users/FelixLabelle/followers",
"following_url": "https://api.github.com/users/FelixLabelle/following{/other_user}",
"gists_url": "https://api.github.com/users/FelixLabelle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FelixLabelle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FelixLabelle/subscriptions",
"organizations_url": "https://api.github.com/users/FelixLabelle/orgs",
"repos_url": "https://api.github.com/users/FelixLabelle/repos",
"events_url": "https://api.github.com/users/FelixLabelle/events{/privacy}",
"received_events_url": "https://api.github.com/users/FelixLabelle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello @FelixLabelle,\r\n\r\nAs you can see in the Community tab of the corresponding dataset, it is a known issue: https://huggingface.co/datasets/EleutherAI/pile/discussions/15\r\n\r\nThe data has been taken down due to reported copyright infringement.\r\n\r\nFeel free to continue the discussion there."
] | 2023-12-21T21:25:09 | 2023-12-22T09:17:05 | 2023-12-22T09:17:05 | NONE | null | null | ### Describe the bug
The Pile does not stream; a "File Not Found" error is returned. It looks like the Pile's files have been moved.
### Steps to reproduce the bug
To reproduce, run the following code:
```python
from datasets import load_dataset
dataset = load_dataset('EleutherAI/pile', 'en', split='train', streaming=True)
next(iter(dataset))
```
I get the following error:
`FileNotFoundError: https://the-eye.eu/public/AI/pile/train/00.jsonl.zst`
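As a quick check that the failure comes from the hosting side rather than from `datasets` itself, the shard URL from the error can be probed directly; a minimal sketch using `requests`:
```python
import requests

# Probe the shard URL from the error above; a non-200 status confirms the file is unreachable
url = "https://the-eye.eu/public/AI/pile/train/00.jsonl.zst"
response = requests.head(url, allow_redirects=True, timeout=30)
print(response.status_code)
```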
### Expected behavior
Return the data in a stream.
### Environment info
- `datasets` version: 2.12.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6524/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6523/comments | https://api.github.com/repos/huggingface/datasets/issues/6523/events | https://github.com/huggingface/datasets/pull/6523 | 2,052,643,484 | PR_kwDODunzps5ilV6d | 6,523 | fix tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6523). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005160 / 0.011353 (-0.006192) | 0.003962 / 0.011008 (-0.007046) | 0.064952 / 0.038508 (0.026444) | 0.053291 / 0.023109 (0.030182) | 0.237182 / 0.275898 (-0.038716) | 0.263855 / 0.323480 (-0.059625) | 0.004157 / 0.007986 (-0.003829) | 0.002901 / 0.004328 (-0.001428) | 0.050679 / 0.004250 (0.046428) | 0.044885 / 0.037052 (0.007832) | 0.243806 / 0.258489 (-0.014683) | 0.273828 / 0.293841 (-0.020013) | 0.028681 / 0.128546 (-0.099866) | 0.011086 / 0.075646 (-0.064560) | 0.211987 / 0.419271 (-0.207285) | 0.035881 / 0.043533 (-0.007652) | 0.249618 / 0.255139 (-0.005521) | 0.262880 / 0.283200 (-0.020319) | 0.017788 / 0.141683 (-0.123895) | 1.209060 / 1.452155 (-0.243094) | 1.272143 / 1.492716 (-0.220574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004594 / 0.018006 (-0.013412) | 0.305188 / 0.000490 (0.304698) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019526 / 0.037411 (-0.017886) | 0.062280 / 0.014526 (0.047754) | 0.074983 / 0.176557 (-0.101573) | 0.123466 / 0.737135 (-0.613670) | 0.076240 / 0.296338 (-0.220099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276001 / 0.215209 (0.060792) | 2.689614 / 2.077655 (0.611959) | 1.441092 / 1.504120 (-0.063028) | 1.319775 / 1.541195 (-0.221419) | 1.386904 / 
1.468490 (-0.081587) | 0.561388 / 4.584777 (-4.023389) | 2.386718 / 3.745712 (-1.358994) | 2.813959 / 5.269862 (-2.455903) | 1.727447 / 4.565676 (-2.838230) | 0.061965 / 0.424275 (-0.362310) | 0.004977 / 0.007607 (-0.002630) | 0.335077 / 0.226044 (0.109032) | 3.313860 / 2.268929 (1.044932) | 1.814018 / 55.444624 (-53.630606) | 1.542840 / 6.876477 (-5.333637) | 1.586887 / 2.142072 (-0.555185) | 0.643225 / 4.805227 (-4.162002) | 0.117834 / 6.500664 (-6.382830) | 0.044024 / 0.075469 (-0.031445) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952804 / 1.841788 (-0.888984) | 12.447378 / 8.074308 (4.373070) | 11.281734 / 10.191392 (1.090342) | 0.143407 / 0.680424 (-0.537017) | 0.014749 / 0.534201 (-0.519452) | 0.289298 / 0.579283 (-0.289985) | 0.268217 / 0.434364 (-0.166146) | 0.327995 / 0.540337 (-0.212343) | 0.430302 / 1.386936 (-0.956634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005683 / 0.011353 (-0.005670) | 0.003813 / 0.011008 (-0.007195) | 0.048943 / 0.038508 (0.010435) | 0.060730 / 0.023109 (0.037621) | 0.266925 / 0.275898 (-0.008973) | 0.292553 / 0.323480 (-0.030927) | 0.004236 / 0.007986 (-0.003750) | 0.002790 / 0.004328 (-0.001538) | 0.048962 / 0.004250 (0.044711) | 0.040354 / 0.037052 (0.003302) | 0.266353 / 0.258489 (0.007864) | 0.298397 / 0.293841 (0.004556) | 0.029977 / 0.128546 (-0.098570) | 0.010788 / 0.075646 (-0.064858) | 0.057529 / 0.419271 (-0.361743) | 0.032896 / 0.043533 (-0.010636) | 0.266696 / 0.255139 (0.011557) | 0.283422 / 0.283200 (0.000223) | 0.020939 / 0.141683 (-0.120744) | 1.169867 / 1.452155 (-0.282287) | 1.213586 / 1.492716 (-0.279130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097035 / 0.018006 (0.079029) | 0.306968 / 0.000490 (0.306478) | 0.000234 / 0.000200 (0.000034) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023343 / 0.037411 (-0.014068) | 0.078238 / 0.014526 (0.063712) | 0.091083 / 0.176557 (-0.085474) | 0.131487 / 0.737135 (-0.605649) | 0.092614 / 0.296338 (-0.203724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294454 / 0.215209 (0.079245) | 2.881053 / 2.077655 (0.803398) | 1.623934 / 1.504120 (0.119814) | 1.509001 / 1.541195 (-0.032194) | 1.567541 / 1.468490 (0.099051) | 0.574326 / 4.584777 (-4.010451) | 2.476826 / 3.745712 (-1.268886) | 2.826183 / 5.269862 (-2.443678) | 1.771949 / 4.565676 (-2.793727) | 0.063663 / 0.424275 (-0.360613) | 0.005039 / 0.007607 (-0.002568) | 0.354861 / 0.226044 (0.128816) | 3.397655 / 2.268929 (1.128727) | 1.961958 / 55.444624 (-53.482666) | 1.694795 / 6.876477 (-5.181682) | 1.719459 / 2.142072 (-0.422614) | 0.654512 / 4.805227 (-4.150715) | 0.119285 / 6.500664 (-6.381379) | 0.042146 / 0.075469 (-0.033323) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982187 / 1.841788 (-0.859601) | 12.944627 / 8.074308 (4.870319) | 11.370381 / 10.191392 (1.178989) | 0.142759 / 0.680424 (-0.537665) | 0.016319 / 0.534201 (-0.517882) | 0.291339 / 0.579283 (-0.287944) | 0.276842 / 0.434364 (-0.157522) | 0.324285 / 0.540337 (-0.216052) | 0.426234 / 1.386936 (-0.960702) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e1b82eaa75d2c610e59b463a67d685ec858c0838 \"CML watermark\")\n"
] | 2023-12-21T15:36:21 | 2023-12-21T15:56:54 | 2023-12-21T15:50:38 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6523",
"html_url": "https://github.com/huggingface/datasets/pull/6523",
"diff_url": "https://github.com/huggingface/datasets/pull/6523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6523.patch",
"merged_at": "2023-12-21T15:50:38"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6523/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6522/comments | https://api.github.com/repos/huggingface/datasets/issues/6522/events | https://github.com/huggingface/datasets/issues/6522 | 2,052,332,528 | I_kwDODunzps56VBvw | 6,522 | Loading HF Hub Dataset (private org repo) fails to load all features | {
"login": "versipellis",
"id": 6579034,
"node_id": "MDQ6VXNlcjY1NzkwMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6579034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/versipellis",
"html_url": "https://github.com/versipellis",
"followers_url": "https://api.github.com/users/versipellis/followers",
"following_url": "https://api.github.com/users/versipellis/following{/other_user}",
"gists_url": "https://api.github.com/users/versipellis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/versipellis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versipellis/subscriptions",
"organizations_url": "https://api.github.com/users/versipellis/orgs",
"repos_url": "https://api.github.com/users/versipellis/repos",
"events_url": "https://api.github.com/users/versipellis/events{/privacy}",
"received_events_url": "https://api.github.com/users/versipellis/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-21T12:26:35 | 2023-12-21T13:24:31 | null | NONE | null | null | ### Describe the bug
When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to the Hugging Face Hub (private org repo) and later downloading the `Dataset`, only `input` and `output` load; I believe the expected behavior is for all `Features` to be loaded by default?
### Steps to reproduce the bug
Pushing the data. `data_concat` is a `list` of `dict`s.
```python
for datum in data_concat:
    datum_tags = {d["key"]: d["value"] for d in datum["tags"]}
    split_fraction = ...  # some logic that generates a train/test split number
    if split_fraction < test_fraction:
        data_test.append(datum)
    else:
        data_train.append(datum)
dataset = DatasetDict(
    {
        "train": Dataset.from_list(data_train),
        "test": Dataset.from_list(data_test),
        "full": Dataset.from_list(data_concat),
    },
)
dataset_shuffled = dataset.shuffle(seed=shuffle_seed)
dataset_shuffled.push_to_hub(
    repo_id=hf_repo_id,
    private=True,
    config_name=m,
    revision=revision,
    token=hf_token,
)
```
Loading it later:
```python
dataset = datasets.load_dataset(
path=hf_repo_id,
name=name,
token=hf_token,
)
```
Produces:
```
DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
test: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
full: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
})
```
### Expected behavior
The expected result is below:
```
DatasetDict({
train: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
test: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
full: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
})
```
My workaround is as follows:
```python
import datasets
from datasets import Value

dsinfo = datasets.get_dataset_config_info(
    path=data_files,
    config_name=data_config,
    token=hf_token,
)
allfeatures = dsinfo.features.copy()
if "tags" not in allfeatures:
    allfeatures["tags"] = [{"key": Value(dtype="string", id=None), "value": Value(dtype="string", id=None)}]
dataset = datasets.load_dataset(
    path=data_files,
    name=data_config,
    features=allfeatures,
    token=hf_token,
)
```
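A related check, reusing the variables above, is to inspect the features recorded in the repo's metadata without downloading any data; a sketch via `load_dataset_builder`:
```python
# Inspect the features that the Hub metadata reports for this config
builder = datasets.load_dataset_builder(path=data_files, name=data_config, token=hf_token)
print(builder.info.features)  # if "tags" is absent here, the exported metadata dropped it
```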
Interestingly enough (and perhaps a related bug?), if I don't add the `tags` to `allfeatures` above (i.e. only loading `input` and `output`), it throws an error when executing `load_dataset`:
```
ValueError: Couldn't cast
tags: list<element: struct<key: string, value: string>>
child 0, element: struct<key: string, value: string>
child 0, key: string
child 1, value: string
input: <obfuscated>
output: <obfuscated>
-- schema metadata --
huggingface: '{"info": {"features": {"tags": [{"key": {"dtype": "string",' + 532
to
{'input': <obfuscated>, 'output': <obfuscated>
because column names don't match
```
Traceback for this:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/load.py", line 2152, in load_dataset
builder_instance.download_and_prepare(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 948, in download_and_prepare
self._download_and_prepare(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1043, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1805, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1950, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
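The traceback points at a schema cast in which the Arrow table carries a column that the target schema lacks. A minimal `pyarrow` illustration of that class of failure (not the exact code path `datasets` takes):
```python
import pyarrow as pa

# The table has a "tags" column that the target schema omits, so the cast fails
table = pa.table({"input": ["a"], "output": ["b"], "tags": [[{"key": "k", "value": "v"}]]})
target = pa.schema([("input", pa.string()), ("output", pa.string())])
table.cast(target)  # raises ValueError because the field names do not match
```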
### Environment info
- `datasets` version: 2.15.0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6522/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6521/comments | https://api.github.com/repos/huggingface/datasets/issues/6521/events | https://github.com/huggingface/datasets/issues/6521 | 2,052,229,538 | I_kwDODunzps56Uomi | 6,521 | The order of the splits is not preserved | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"After investigation, I think the issue was introduced by the use of the Parquet export:\r\n- #6448\r\n\r\nI am proposing a fix.\r\n\r\nCC: @lhoestq "
] | 2023-12-21T11:17:27 | 2023-12-22T11:36:15 | 2023-12-22T11:36:15 | MEMBER | null | null | We had a regression and the order of the splits is not preserved: they are sorted alphabetically instead of preserving the original "train", "validation", "test" order.
Check, in branch "main":
```python
In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA")
In [10]: dataset
Out[10]:
DatasetDict({
test: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
train: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 30000
})
validation: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
})
```
Before (2.15.0) it was:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 30000
})
validation: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
test: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
})
```
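Until this is fixed, the original order can be restored manually by rebuilding the `DatasetDict`; a minimal sketch:
```python
from datasets import DatasetDict

# Rebuild the mapping in the intended split order
dataset = DatasetDict({split: dataset[split] for split in ["train", "validation", "test"]})
```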
See issues:
- https://huggingface.co/datasets/adversarial_qa/discussions/3
- https://huggingface.co/datasets/beans/discussions/4
This is a regression because it was previously fixed. See:
- #6196
- #5728 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6521/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6520/comments | https://api.github.com/repos/huggingface/datasets/issues/6520/events | https://github.com/huggingface/datasets/pull/6520 | 2,052,059,078 | PR_kwDODunzps5ijUiw | 6,520 | Support commit_description parameter in push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6520). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005484 / 0.011353 (-0.005869) | 0.003537 / 0.011008 (-0.007471) | 0.062631 / 0.038508 (0.024123) | 0.048037 / 0.023109 (0.024927) | 0.240342 / 0.275898 (-0.035556) | 0.268103 / 0.323480 (-0.055377) | 0.002927 / 0.007986 (-0.005059) | 0.002609 / 0.004328 (-0.001719) | 0.048112 / 0.004250 (0.043862) | 0.046111 / 0.037052 (0.009058) | 0.249249 / 0.258489 (-0.009240) | 0.277723 / 0.293841 (-0.016118) | 0.028374 / 0.128546 (-0.100172) | 0.010900 / 0.075646 (-0.064746) | 0.206252 / 0.419271 (-0.213019) | 0.035262 / 0.043533 (-0.008271) | 0.247438 / 0.255139 (-0.007701) | 0.270003 / 0.283200 (-0.013197) | 0.019157 / 0.141683 (-0.122526) | 1.116833 / 1.452155 (-0.335322) | 1.174495 / 1.492716 (-0.318221) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092490 / 0.018006 (0.074484) | 0.302794 / 0.000490 (0.302304) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018669 / 0.037411 (-0.018743) | 0.061902 / 0.014526 (0.047376) | 0.073612 / 0.176557 (-0.102945) | 0.121196 / 0.737135 (-0.615940) | 0.075960 / 0.296338 (-0.220378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286983 / 0.215209 (0.071774) | 2.836819 / 2.077655 (0.759165) | 1.506635 / 1.504120 (0.002515) | 1.387134 / 1.541195 (-0.154061) | 1.442310 / 
1.468490 (-0.026180) | 0.571281 / 4.584777 (-4.013496) | 2.440220 / 3.745712 (-1.305492) | 2.775306 / 5.269862 (-2.494555) | 1.727047 / 4.565676 (-2.838630) | 0.064955 / 0.424275 (-0.359320) | 0.004982 / 0.007607 (-0.002625) | 0.343153 / 0.226044 (0.117108) | 3.388745 / 2.268929 (1.119817) | 1.878983 / 55.444624 (-53.565641) | 1.592642 / 6.876477 (-5.283835) | 1.601037 / 2.142072 (-0.541035) | 0.636882 / 4.805227 (-4.168345) | 0.117804 / 6.500664 (-6.382861) | 0.042467 / 0.075469 (-0.033002) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941534 / 1.841788 (-0.900254) | 12.093230 / 8.074308 (4.018922) | 10.590854 / 10.191392 (0.399462) | 0.136636 / 0.680424 (-0.543788) | 0.015244 / 0.534201 (-0.518957) | 0.300216 / 0.579283 (-0.279067) | 0.267622 / 0.434364 (-0.166742) | 0.337526 / 0.540337 (-0.202811) | 0.426856 / 1.386936 (-0.960080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005282 / 0.011353 (-0.006071) | 0.003595 / 0.011008 (-0.007413) | 0.049237 / 0.038508 (0.010729) | 0.054057 / 0.023109 (0.030948) | 0.269781 / 0.275898 (-0.006117) | 0.293544 / 0.323480 (-0.029936) | 0.003991 / 0.007986 (-0.003995) | 0.002705 / 0.004328 (-0.001623) | 0.048755 / 0.004250 (0.044505) | 0.040425 / 0.037052 (0.003373) | 0.264753 / 0.258489 (0.006264) | 0.312773 / 0.293841 (0.018932) | 0.030011 / 0.128546 (-0.098535) | 0.010707 / 0.075646 (-0.064939) | 0.058164 / 0.419271 (-0.361107) | 0.033365 / 0.043533 (-0.010168) | 0.268854 / 0.255139 (0.013715) | 0.283618 / 0.283200 (0.000418) | 0.019571 / 0.141683 (-0.122111) | 1.114738 / 1.452155 (-0.337417) | 1.178990 / 1.492716 (-0.313726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092183 / 0.018006 (0.074177) | 0.303797 / 0.000490 (0.303307) | 0.000218 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023088 / 0.037411 (-0.014323) | 0.079813 / 0.014526 (0.065287) | 0.089593 / 0.176557 (-0.086964) | 0.128127 / 0.737135 (-0.609008) | 0.091578 / 0.296338 (-0.204761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300153 / 0.215209 (0.084944) | 2.919532 / 2.077655 (0.841877) | 1.587870 / 1.504120 (0.083750) | 1.459031 / 1.541195 (-0.082164) | 1.483305 / 1.468490 (0.014815) | 0.555865 / 4.584777 (-4.028912) | 2.388350 / 3.745712 (-1.357362) | 2.817947 / 5.269862 (-2.451914) | 1.764446 / 4.565676 (-2.801230) | 0.067142 / 0.424275 (-0.357133) | 0.005148 / 0.007607 (-0.002460) | 0.347998 / 0.226044 (0.121953) | 3.431208 / 2.268929 (1.162280) | 1.942175 / 55.444624 (-53.502450) | 1.676606 / 6.876477 (-5.199871) | 1.692431 / 2.142072 (-0.449641) | 0.645974 / 4.805227 (-4.159253) | 0.117729 / 6.500664 (-6.382935) | 0.041670 / 0.075469 (-0.033799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981554 / 1.841788 (-0.860234) | 12.671959 / 8.074308 (4.597650) | 11.230694 / 10.191392 (1.039302) | 0.132694 / 0.680424 (-0.547730) | 0.015694 / 0.534201 (-0.518507) | 0.290271 / 0.579283 (-0.289013) | 0.279358 / 0.434364 (-0.155006) | 0.326515 / 0.540337 (-0.213823) | 0.421755 / 1.386936 (-0.965181) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0b2147ac644596b66886f398012351641672ee54 \"CML watermark\")\n"
] | 2023-12-21T09:36:11 | 2023-12-21T14:49:47 | 2023-12-21T14:43:35 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6520",
"html_url": "https://github.com/huggingface/datasets/pull/6520",
"diff_url": "https://github.com/huggingface/datasets/pull/6520.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6520.patch",
"merged_at": "2023-12-21T14:43:35"
} | Support `commit_description` parameter in `push_to_hub`.
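A minimal usage sketch (the repo name is hypothetical; `commit_message` already existed, `commit_description` is the parameter added here):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
ds.push_to_hub(
    "username/rotten_tomatoes-copy",  # hypothetical repo_id
    commit_message="Upload train split",
    commit_description="Longer description shown under the commit title on the Hub.",
)
```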
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6520/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6519/comments | https://api.github.com/repos/huggingface/datasets/issues/6519/events | https://github.com/huggingface/datasets/pull/6519 | 2,050,759,824 | PR_kwDODunzps5ie4MA | 6,519 | Support push_to_hub canonical datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6519). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"nice catch @albertvillanova ",
"@huggingface/datasets this PR is ready for review.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005306 / 0.011353 (-0.006047) | 0.003454 / 0.011008 (-0.007555) | 0.062157 / 0.038508 (0.023649) | 0.051945 / 0.023109 (0.028835) | 0.241834 / 0.275898 (-0.034064) | 0.265590 / 0.323480 (-0.057890) | 0.003149 / 0.007986 (-0.004837) | 0.002695 / 0.004328 (-0.001633) | 0.049197 / 0.004250 (0.044947) | 0.045576 / 0.037052 (0.008524) | 0.242866 / 0.258489 (-0.015623) | 0.280963 / 0.293841 (-0.012878) | 0.028466 / 0.128546 (-0.100080) | 0.010670 / 0.075646 (-0.064976) | 0.206501 / 0.419271 (-0.212771) | 0.035314 / 0.043533 (-0.008219) | 0.240893 / 0.255139 (-0.014246) | 0.264762 / 0.283200 (-0.018438) | 0.019988 / 0.141683 (-0.121695) | 1.095222 / 1.452155 (-0.356933) | 1.144051 / 1.492716 (-0.348666) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098034 / 0.018006 (0.080028) | 0.308541 / 0.000490 (0.308051) | 0.000261 / 0.000200 (0.000061) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018646 / 0.037411 (-0.018766) | 0.062881 / 0.014526 (0.048355) | 0.074062 / 0.176557 (-0.102494) | 0.120860 / 0.737135 (-0.616276) | 0.075388 / 0.296338 (-0.220951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282974 / 0.215209 (0.067765) | 2.755589 / 2.077655 (0.677934) | 1.459536 / 1.504120 (-0.044584) | 1.364543 / 1.541195 (-0.176652) | 1.429860 / 
1.468490 (-0.038630) | 0.573277 / 4.584777 (-4.011500) | 2.422983 / 3.745712 (-1.322730) | 3.257258 / 5.269862 (-2.012603) | 1.930053 / 4.565676 (-2.635623) | 0.067476 / 0.424275 (-0.356799) | 0.005612 / 0.007607 (-0.001995) | 0.351538 / 0.226044 (0.125494) | 3.380356 / 2.268929 (1.111427) | 1.837887 / 55.444624 (-53.606738) | 1.537994 / 6.876477 (-5.338483) | 1.623630 / 2.142072 (-0.518442) | 0.662652 / 4.805227 (-4.142576) | 0.127074 / 6.500664 (-6.373590) | 0.049311 / 0.075469 (-0.026158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.151273 / 1.841788 (-0.690515) | 12.766622 / 8.074308 (4.692314) | 10.967610 / 10.191392 (0.776218) | 0.131305 / 0.680424 (-0.549119) | 0.014227 / 0.534201 (-0.519974) | 0.292054 / 0.579283 (-0.287229) | 0.262737 / 0.434364 (-0.171627) | 0.334360 / 0.540337 (-0.205978) | 0.446711 / 1.386936 (-0.940225) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005194 / 0.011353 (-0.006159) | 0.003508 / 0.011008 (-0.007500) | 0.049287 / 0.038508 (0.010779) | 0.052109 / 0.023109 (0.029000) | 0.271501 / 0.275898 (-0.004397) | 0.290959 / 0.323480 (-0.032521) | 0.004347 / 0.007986 (-0.003638) | 0.002659 / 0.004328 (-0.001669) | 0.048769 / 0.004250 (0.044518) | 0.039388 / 0.037052 (0.002336) | 0.272811 / 0.258489 (0.014322) | 0.305632 / 0.293841 (0.011791) | 0.028419 / 0.128546 (-0.100127) | 0.010617 / 0.075646 (-0.065029) | 0.057433 / 0.419271 (-0.361838) | 0.032383 / 0.043533 (-0.011149) | 0.266566 / 0.255139 (0.011427) | 0.290993 / 0.283200 (0.007794) | 0.019939 / 0.141683 (-0.121743) | 1.157623 / 1.452155 (-0.294532) | 1.183298 / 1.492716 (-0.309419) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099074 / 0.018006 (0.081068) | 0.315282 / 0.000490 (0.314792) | 0.000235 / 0.000200 (0.000035) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022692 / 0.037411 (-0.014719) | 0.076455 / 0.014526 (0.061929) | 0.089094 / 0.176557 (-0.087462) | 0.126407 / 0.737135 (-0.610728) | 0.089588 / 0.296338 (-0.206750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.338853 / 0.215209 (0.123644) | 2.809843 / 2.077655 (0.732188) | 1.538262 / 1.504120 (0.034143) | 1.418290 / 1.541195 (-0.122905) | 1.435145 / 1.468490 (-0.033345) | 0.565763 / 4.584777 (-4.019014) | 2.491525 / 3.745712 (-1.254187) | 2.944879 / 5.269862 (-2.324983) | 1.835840 / 4.565676 (-2.729837) | 0.065101 / 0.424275 (-0.359174) | 0.005196 / 0.007607 (-0.002412) | 0.345291 / 0.226044 (0.119247) | 3.399658 / 2.268929 (1.130729) | 1.892321 / 55.444624 (-53.552303) | 1.608293 / 6.876477 (-5.268184) | 1.651188 / 2.142072 (-0.490884) | 0.647806 / 4.805227 (-4.157421) | 0.119318 / 6.500664 (-6.381346) | 0.043058 / 0.075469 (-0.032412) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983956 / 1.841788 (-0.857831) | 13.516125 / 8.074308 (5.441817) | 11.712571 / 10.191392 (1.521179) | 0.134253 / 0.680424 (-0.546171) | 0.015844 / 0.534201 (-0.518357) | 0.292444 / 0.579283 (-0.286839) | 0.282182 / 0.434364 (-0.152182) | 0.329327 / 0.540337 (-0.211010) | 0.419960 / 1.386936 (-0.966976) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a887ee78835573f5d80f9e414e8443b4caff3541 \"CML watermark\")\n"
] | 2023-12-20T15:16:45 | 2023-12-21T14:48:20 | 2023-12-21T14:40:57 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6519",
"html_url": "https://github.com/huggingface/datasets/pull/6519",
"diff_url": "https://github.com/huggingface/datasets/pull/6519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6519.patch",
"merged_at": "2023-12-21T14:40:57"
} | Support `push_to_hub` for canonical datasets.
This is necessary in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet
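A hedged sketch of the call this enables (the dataset name is illustrative, and pushing requires write access to the canonical repo):
```python
from datasets import load_dataset

ds = load_dataset("squad")
# With this change, a bare repo_id targets the canonical dataset repo directly
# instead of being rewritten to "<user>/squad".
ds.push_to_hub("squad")
```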
Note that before this PR, a `repo_id` like "dataset_name" was transformed to "user/dataset_name". This behavior was introduced by:
- #6269 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6519/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6519/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6518/comments | https://api.github.com/repos/huggingface/datasets/issues/6518/events | https://github.com/huggingface/datasets/pull/6518 | 2,050,137,038 | PR_kwDODunzps5icu-W | 6,518 | fix get_metadata_patterns function args error | {
"login": "d710055071",
"id": 12895488,
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d710055071",
"html_url": "https://github.com/d710055071",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"repos_url": "https://api.github.com/users/d710055071/repos",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6518). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"hello!\r\n@albertvillanova \r\nThank you very much for your recognitionγ\r\nWhen can this PR be merged?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005205 / 0.011353 (-0.006148) | 0.003730 / 0.011008 (-0.007278) | 0.063195 / 0.038508 (0.024687) | 0.052329 / 0.023109 (0.029219) | 0.247299 / 0.275898 (-0.028599) | 0.269600 / 0.323480 (-0.053880) | 0.004801 / 0.007986 (-0.003185) | 0.002728 / 0.004328 (-0.001600) | 0.049195 / 0.004250 (0.044944) | 0.044859 / 0.037052 (0.007807) | 0.253047 / 0.258489 (-0.005442) | 0.277253 / 0.293841 (-0.016588) | 0.028370 / 0.128546 (-0.100176) | 0.011095 / 0.075646 (-0.064551) | 0.211090 / 0.419271 (-0.208182) | 0.035944 / 0.043533 (-0.007589) | 0.252755 / 0.255139 (-0.002384) | 0.269466 / 0.283200 (-0.013733) | 0.017514 / 0.141683 (-0.124169) | 1.107815 / 1.452155 (-0.344339) | 1.154989 / 1.492716 (-0.337728) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093925 / 0.018006 (0.075919) | 0.300923 / 0.000490 (0.300433) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018268 / 0.037411 (-0.019143) | 0.060508 / 0.014526 (0.045983) | 0.074564 / 0.176557 (-0.101992) | 0.121523 / 0.737135 (-0.615612) | 0.077394 / 0.296338 (-0.218945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275859 / 0.215209 (0.060650) | 2.707593 / 2.077655 (0.629938) | 1.419178 / 1.504120 (-0.084942) | 1.286737 / 1.541195 (-0.254458) | 1.350504 / 
1.468490 (-0.117986) | 0.570461 / 4.584777 (-4.014316) | 2.400795 / 3.745712 (-1.344917) | 2.840876 / 5.269862 (-2.428986) | 1.724044 / 4.565676 (-2.841633) | 0.063819 / 0.424275 (-0.360456) | 0.004961 / 0.007607 (-0.002647) | 0.342537 / 0.226044 (0.116492) | 3.370942 / 2.268929 (1.102013) | 1.788659 / 55.444624 (-53.655966) | 1.501921 / 6.876477 (-5.374556) | 1.535352 / 2.142072 (-0.606721) | 0.651838 / 4.805227 (-4.153390) | 0.118979 / 6.500664 (-6.381685) | 0.047796 / 0.075469 (-0.027673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949850 / 1.841788 (-0.891937) | 11.581988 / 8.074308 (3.507680) | 10.462837 / 10.191392 (0.271445) | 0.133298 / 0.680424 (-0.547125) | 0.015008 / 0.534201 (-0.519193) | 0.299265 / 0.579283 (-0.280018) | 0.268864 / 0.434364 (-0.165500) | 0.332888 / 0.540337 (-0.207450) | 0.420423 / 1.386936 (-0.966513) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005309 / 0.011353 (-0.006044) | 0.003628 / 0.011008 (-0.007380) | 0.049545 / 0.038508 (0.011036) | 0.054095 / 0.023109 (0.030985) | 0.270679 / 0.275898 (-0.005219) | 0.295744 / 0.323480 (-0.027736) | 0.004131 / 0.007986 (-0.003855) | 0.002732 / 0.004328 (-0.001596) | 0.048714 / 0.004250 (0.044464) | 0.039916 / 0.037052 (0.002863) | 0.272354 / 0.258489 (0.013865) | 0.310553 / 0.293841 (0.016712) | 0.029525 / 0.128546 (-0.099021) | 0.011322 / 0.075646 (-0.064324) | 0.058007 / 0.419271 (-0.361265) | 0.032883 / 0.043533 (-0.010650) | 0.273609 / 0.255139 (0.018470) | 0.291780 / 0.283200 (0.008581) | 0.020538 / 0.141683 (-0.121145) | 1.118031 / 1.452155 (-0.334123) | 1.160777 / 1.492716 (-0.331940) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092966 / 0.018006 (0.074959) | 0.301432 / 0.000490 (0.300943) | 0.000225 / 0.000200 (0.000025) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022736 / 0.037411 (-0.014676) | 0.077655 / 0.014526 (0.063129) | 0.093386 / 0.176557 (-0.083171) | 0.129694 / 0.737135 (-0.607441) | 0.092790 / 0.296338 (-0.203548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299161 / 0.215209 (0.083952) | 2.923300 / 2.077655 (0.845645) | 1.629661 / 1.504120 (0.125541) | 1.510797 / 1.541195 (-0.030398) | 1.507269 / 1.468490 (0.038778) | 0.574346 / 4.584777 (-4.010431) | 2.454396 / 3.745712 (-1.291316) | 2.843402 / 5.269862 (-2.426460) | 1.774815 / 4.565676 (-2.790861) | 0.063601 / 0.424275 (-0.360674) | 0.004977 / 0.007607 (-0.002630) | 0.347693 / 0.226044 (0.121649) | 3.430054 / 2.268929 (1.161126) | 1.987308 / 55.444624 (-53.457316) | 1.682756 / 6.876477 (-5.193721) | 1.688463 / 2.142072 (-0.453609) | 0.646449 / 4.805227 (-4.158778) | 0.117860 / 6.500664 (-6.382804) | 0.041305 / 0.075469 (-0.034164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987355 / 1.841788 (-0.854433) | 12.398721 / 8.074308 (4.324412) | 11.070442 / 10.191392 (0.879050) | 0.134946 / 0.680424 (-0.545477) | 0.016172 / 0.534201 (-0.518029) | 0.293359 / 0.579283 (-0.285924) | 0.282271 / 0.434364 (-0.152093) | 0.331919 / 0.540337 (-0.208418) | 0.432137 / 1.386936 (-0.954799) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2246d3187222ef939aa8e69cd1aa476cf9526945 \"CML watermark\")\n"
] | 2023-12-20T09:06:22 | 2023-12-21T15:14:17 | 2023-12-21T15:07:57 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6518",
"html_url": "https://github.com/huggingface/datasets/pull/6518",
"diff_url": "https://github.com/huggingface/datasets/pull/6518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6518.patch",
"merged_at": "2023-12-21T15:07:57"
} | Fix `get_metadata_patterns` argument error. Fixes https://github.com/huggingface/datasets/issues/6517 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6518/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6517/comments | https://api.github.com/repos/huggingface/datasets/issues/6517/events | https://github.com/huggingface/datasets/issues/6517 | 2,050,121,588 | I_kwDODunzps56Ml90 | 6,517 | Bug get_metadata_patterns arg error | {
"login": "d710055071",
"id": 12895488,
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d710055071",
"html_url": "https://github.com/d710055071",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"repos_url": "https://api.github.com/users/d710055071/repos",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-12-20T08:56:44 | 2023-12-22T00:24:23 | 2023-12-22T00:24:23 | CONTRIBUTOR | null | null | https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69
metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6517/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6516/comments | https://api.github.com/repos/huggingface/datasets/issues/6516/events | https://github.com/huggingface/datasets/pull/6516 | 2,050,033,322 | PR_kwDODunzps5icYX0 | 6,516 | Support huggingface-hub pre-releases | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6516). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005309 / 0.011353 (-0.006044) | 0.003231 / 0.011008 (-0.007777) | 0.062690 / 0.038508 (0.024182) | 0.050811 / 0.023109 (0.027701) | 0.258319 / 0.275898 (-0.017579) | 0.275977 / 0.323480 (-0.047503) | 0.002842 / 0.007986 (-0.005143) | 0.002606 / 0.004328 (-0.001723) | 0.048672 / 0.004250 (0.044421) | 0.038730 / 0.037052 (0.001677) | 0.258531 / 0.258489 (0.000042) | 0.289327 / 0.293841 (-0.004514) | 0.027994 / 0.128546 (-0.100552) | 0.010446 / 0.075646 (-0.065200) | 0.207152 / 0.419271 (-0.212119) | 0.035839 / 0.043533 (-0.007693) | 0.258416 / 0.255139 (0.003277) | 0.274348 / 0.283200 (-0.008851) | 0.019661 / 0.141683 (-0.122022) | 1.103688 / 1.452155 (-0.348466) | 1.207711 / 1.492716 (-0.285006) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090693 / 0.018006 (0.072687) | 0.300648 / 0.000490 (0.300158) | 0.000215 / 0.000200 (0.000015) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018589 / 0.037411 (-0.018822) | 0.061056 / 0.014526 (0.046530) | 0.074512 / 0.176557 (-0.102044) | 0.121260 / 0.737135 (-0.615875) | 0.073111 / 0.296338 (-0.223227) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285811 / 0.215209 (0.070602) | 2.785081 / 2.077655 (0.707426) | 1.469493 / 1.504120 (-0.034627) | 1.346389 / 1.541195 (-0.194806) | 1.391866 / 
1.468490 (-0.076624) | 0.567304 / 4.584777 (-4.017473) | 2.407150 / 3.745712 (-1.338562) | 2.809915 / 5.269862 (-2.459946) | 1.741185 / 4.565676 (-2.824491) | 0.063073 / 0.424275 (-0.361202) | 0.004974 / 0.007607 (-0.002633) | 0.336431 / 0.226044 (0.110386) | 3.331371 / 2.268929 (1.062443) | 1.841466 / 55.444624 (-53.603159) | 1.559065 / 6.876477 (-5.317411) | 1.585033 / 2.142072 (-0.557039) | 0.647469 / 4.805227 (-4.157759) | 0.117488 / 6.500664 (-6.383176) | 0.042535 / 0.075469 (-0.032934) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936409 / 1.841788 (-0.905379) | 11.301514 / 8.074308 (3.227206) | 10.500465 / 10.191392 (0.309073) | 0.131316 / 0.680424 (-0.549107) | 0.014007 / 0.534201 (-0.520194) | 0.286932 / 0.579283 (-0.292351) | 0.263516 / 0.434364 (-0.170848) | 0.340883 / 0.540337 (-0.199454) | 0.443589 / 1.386936 (-0.943347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005204 / 0.011353 (-0.006149) | 0.003472 / 0.011008 (-0.007536) | 0.049235 / 0.038508 (0.010727) | 0.050668 / 0.023109 (0.027559) | 0.270198 / 0.275898 (-0.005700) | 0.293942 / 0.323480 (-0.029538) | 0.003964 / 0.007986 (-0.004022) | 0.002596 / 0.004328 (-0.001733) | 0.048654 / 0.004250 (0.044404) | 0.039411 / 0.037052 (0.002358) | 0.271938 / 0.258489 (0.013449) | 0.304308 / 0.293841 (0.010467) | 0.029042 / 0.128546 (-0.099504) | 0.010414 / 0.075646 (-0.065232) | 0.058273 / 0.419271 (-0.360999) | 0.032507 / 0.043533 (-0.011025) | 0.271671 / 0.255139 (0.016532) | 0.289850 / 0.283200 (0.006650) | 0.017292 / 0.141683 (-0.124391) | 1.126160 / 1.452155 (-0.325995) | 1.177365 / 1.492716 (-0.315351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091158 / 0.018006 (0.073152) | 0.299143 / 0.000490 (0.298653) | 0.000217 / 0.000200 (0.000017) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022558 / 0.037411 (-0.014853) | 0.076139 / 0.014526 (0.061613) | 0.088344 / 0.176557 (-0.088212) | 0.126640 / 0.737135 (-0.610495) | 0.089736 / 0.296338 (-0.206602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295351 / 0.215209 (0.080142) | 2.895779 / 2.077655 (0.818125) | 1.585886 / 1.504120 (0.081766) | 1.458601 / 1.541195 (-0.082594) | 1.468880 / 1.468490 (0.000390) | 0.554686 / 4.584777 (-4.030091) | 2.466276 / 3.745712 (-1.279437) | 2.741938 / 5.269862 (-2.527924) | 1.711793 / 4.565676 (-2.853883) | 0.062928 / 0.424275 (-0.361347) | 0.005177 / 0.007607 (-0.002430) | 0.343908 / 0.226044 (0.117863) | 3.393360 / 2.268929 (1.124431) | 1.928800 / 55.444624 (-53.515824) | 1.652181 / 6.876477 (-5.224296) | 1.643667 / 2.142072 (-0.498405) | 0.632829 / 4.805227 (-4.172398) | 0.114583 / 6.500664 (-6.386081) | 0.041248 / 0.075469 (-0.034221) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986196 / 1.841788 (-0.855592) | 12.006772 / 8.074308 (3.932464) | 10.522661 / 10.191392 (0.331269) | 0.133710 / 0.680424 (-0.546713) | 0.016714 / 0.534201 (-0.517487) | 0.286502 / 0.579283 (-0.292781) | 0.280090 / 0.434364 (-0.154273) | 0.326063 / 0.540337 (-0.214275) | 0.548485 / 1.386936 (-0.838452) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f149204a2a5948287adcade5e90707aa5207a92 \"CML watermark\")\n"
] | 2023-12-20T07:52:29 | 2023-12-20T08:51:34 | 2023-12-20T08:44:44 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6516",
"html_url": "https://github.com/huggingface/datasets/pull/6516",
"diff_url": "https://github.com/huggingface/datasets/pull/6516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6516.patch",
"merged_at": "2023-12-20T08:44:44"
} | Support `huggingface-hub` pre-releases.
This way we will have our CI green when testing `huggingface-hub` release candidates. See: https://github.com/huggingface/datasets/tree/ci-test-huggingface-hub-v0.20.0.rc1
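For context, a hedged illustration of the version semantics involved (the actual change lives in the dependency/CI checks; `packaging` is used here only to show the comparison):
```python
from packaging.version import Version

# Per PEP 440, pre-releases sort below their final release, so "0.20.0rc1"
# fails a ">=0.20.0" pin but satisfies a ">=0.19.4" floor.
assert Version("0.20.0rc1") < Version("0.20.0")
assert Version("0.20.0rc1") >= Version("0.19.4")
```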
Close #6513. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6516/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6516/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6515/comments | https://api.github.com/repos/huggingface/datasets/issues/6515/events | https://github.com/huggingface/datasets/issues/6515 | 2,049,724,251 | I_kwDODunzps56LE9b | 6,515 | Why call http_head() when fsspec_head() succeeds | {
"login": "d710055071",
"id": 12895488,
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d710055071",
"html_url": "https://github.com/d710055071",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"repos_url": "https://api.github.com/users/d710055071/repos",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-12-20T02:25:51 | 2023-12-26T05:35:46 | 2023-12-26T05:35:46 | CONTRIBUTOR | null | null | https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6515/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6514/comments | https://api.github.com/repos/huggingface/datasets/issues/6514/events | https://github.com/huggingface/datasets/pull/6514 | 2,049,600,663 | PR_kwDODunzps5ia6Os | 6,514 | Cache backward compatibility with 2.15.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6514). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"it's hard to tell if this works as expected without a test but i guess it's not trivial to implement such a test.\r\n\r\ni tried to reproduce locally (with this branch merged into the lazy-resolve-and-cache-reload) and it didn't work. \r\nI run:\r\n```\r\n ds = load_dataset(\"polinaeterna/audiofolder_two_configs_in_metadata\", \"v2\", data_files=\"v2/train/*\") \r\n```\r\nand i got this in the cache:\r\n```\r\nv2-374bfde4f55442bc/\r\nβββ 0.0.0\r\n βββ 5a2339ad2bb7caf6a6daf2f213204e3ac03a13a5 # - from this pr\r\n βΒ Β βββ audiofolder_two_configs_in_metadata-train.arrow\r\n βΒ Β βββ dataset_info.json\r\n βββ 5a2339ad2bb7caf6a6daf2f213204e3ac03a13a5_builder.lock\r\n βββ 5a2339ad2bb7caf6a6daf2f213204e3ac03a13a5.incomplete_info.lock\r\n βββ 7896925d64deea5d # from 2.15.0\r\n βΒ Β βββ audiofolder_two_configs_in_metadata-train.arrow\r\n βΒ Β βββ dataset_info.json\r\n βββ 7896925d64deea5d_builder.lock\r\n βββ 7896925d64deea5d.incomplete_info.lock\r\n```\r\nso the first hash (the top-level dir v2-374bfde4f55442bc) matches but the second (after version) doesn't.\r\nmaybe i did something wrong though.\r\n\r\nalso i'm not sure if this is worth too much effort, maybe nobody notices if their datasets will be generated again :D idk",
"I just pushed a fix, it should work just fine now :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004798 / 0.011353 (-0.006555) | 0.003203 / 0.011008 (-0.007805) | 0.062247 / 0.038508 (0.023738) | 0.029906 / 0.023109 (0.006797) | 0.259370 / 0.275898 (-0.016528) | 0.276084 / 0.323480 (-0.047396) | 0.002910 / 0.007986 (-0.005076) | 0.002364 / 0.004328 (-0.001964) | 0.048080 / 0.004250 (0.043830) | 0.041168 / 0.037052 (0.004116) | 0.259833 / 0.258489 (0.001343) | 0.289882 / 0.293841 (-0.003959) | 0.026790 / 0.128546 (-0.101756) | 0.010336 / 0.075646 (-0.065311) | 0.209628 / 0.419271 (-0.209643) | 0.035080 / 0.043533 (-0.008452) | 0.256278 / 0.255139 (0.001139) | 0.279502 / 0.283200 (-0.003697) | 0.019755 / 0.141683 (-0.121928) | 1.121552 / 1.452155 (-0.330602) | 1.174360 / 1.492716 (-0.318356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093510 / 0.018006 (0.075504) | 0.302065 / 0.000490 (0.301575) | 0.000214 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017652 / 0.037411 (-0.019759) | 0.060512 / 0.014526 (0.045986) | 0.072441 / 0.176557 (-0.104115) | 0.118058 / 0.737135 (-0.619078) | 0.072657 / 0.296338 (-0.223682) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283949 / 0.215209 (0.068740) | 2.803275 / 2.077655 (0.725620) | 1.527353 / 1.504120 (0.023233) | 1.408176 / 1.541195 (-0.133019) | 1.375335 / 
1.468490 (-0.093155) | 0.546426 / 4.584777 (-4.038351) | 2.402210 / 3.745712 (-1.343502) | 2.765879 / 5.269862 (-2.503982) | 1.703722 / 4.565676 (-2.861955) | 0.062669 / 0.424275 (-0.361606) | 0.005006 / 0.007607 (-0.002601) | 0.337941 / 0.226044 (0.111897) | 3.385494 / 2.268929 (1.116566) | 1.817360 / 55.444624 (-53.627264) | 1.548594 / 6.876477 (-5.327883) | 1.548610 / 2.142072 (-0.593463) | 0.630188 / 4.805227 (-4.175040) | 0.117079 / 6.500664 (-6.383585) | 0.042077 / 0.075469 (-0.033392) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941606 / 1.841788 (-0.900182) | 11.226277 / 8.074308 (3.151969) | 10.118005 / 10.191392 (-0.073387) | 0.130408 / 0.680424 (-0.550015) | 0.014419 / 0.534201 (-0.519782) | 0.284812 / 0.579283 (-0.294471) | 0.266951 / 0.434364 (-0.167413) | 0.322251 / 0.540337 (-0.218087) | 0.415014 / 1.386936 (-0.971922) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005192 / 0.011353 (-0.006161) | 0.003028 / 0.011008 (-0.007980) | 0.048322 / 0.038508 (0.009814) | 0.030550 / 0.023109 (0.007441) | 0.264360 / 0.275898 (-0.011538) | 0.289544 / 0.323480 (-0.033936) | 0.004053 / 0.007986 (-0.003933) | 0.002480 / 0.004328 (-0.001848) | 0.048215 / 0.004250 (0.043964) | 0.044208 / 0.037052 (0.007156) | 0.263943 / 0.258489 (0.005454) | 0.297648 / 0.293841 (0.003807) | 0.029315 / 0.128546 (-0.099231) | 0.010533 / 0.075646 (-0.065114) | 0.057021 / 0.419271 (-0.362251) | 0.053751 / 0.043533 (0.010218) | 0.265153 / 0.255139 (0.010014) | 0.284988 / 0.283200 (0.001788) | 0.018459 / 0.141683 (-0.123224) | 1.225657 / 1.452155 (-0.226498) | 1.195737 / 1.492716 (-0.296979) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093030 / 0.018006 (0.075024) | 0.301022 / 0.000490 (0.300533) | 0.000228 / 0.000200 (0.000028) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022073 / 0.037411 (-0.015339) | 0.075912 / 0.014526 (0.061386) | 0.087628 / 0.176557 (-0.088929) | 0.125607 / 0.737135 (-0.611529) | 0.088568 / 0.296338 (-0.207770) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303482 / 0.215209 (0.088273) | 2.965987 / 2.077655 (0.888333) | 1.615273 / 1.504120 (0.111153) | 1.482851 / 1.541195 (-0.058344) | 1.562627 / 1.468490 (0.094137) | 0.563626 / 4.584777 (-4.021151) | 2.448741 / 3.745712 (-1.296971) | 2.761006 / 5.269862 (-2.508855) | 1.711242 / 4.565676 (-2.854434) | 0.064593 / 0.424275 (-0.359682) | 0.005044 / 0.007607 (-0.002563) | 0.354131 / 0.226044 (0.128087) | 3.511698 / 2.268929 (1.242770) | 1.951087 / 55.444624 (-53.493538) | 1.682171 / 6.876477 (-5.194305) | 1.666330 / 2.142072 (-0.475742) | 0.654880 / 4.805227 (-4.150347) | 0.118544 / 6.500664 (-6.382120) | 0.040753 / 0.075469 (-0.034717) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967771 / 1.841788 (-0.874017) | 12.017277 / 8.074308 (3.942969) | 10.624947 / 10.191392 (0.433555) | 0.128834 / 0.680424 (-0.551590) | 0.015739 / 0.534201 (-0.518462) | 0.285906 / 0.579283 (-0.293377) | 0.273659 / 0.434364 (-0.160705) | 0.324044 / 0.540337 (-0.216293) | 0.419469 / 1.386936 (-0.967467) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2afbf785f8d0551cdd65a81c5c3228e469613724 \"CML watermark\")\n"
] | 2023-12-19T23:52:25 | 2023-12-21T21:14:11 | 2023-12-21T21:07:55 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6514",
"html_url": "https://github.com/huggingface/datasets/pull/6514",
"diff_url": "https://github.com/huggingface/datasets/pull/6514.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6514.patch",
"merged_at": "2023-12-21T21:07:55"
} | ...for datasets without scripts
It takes into account the cache changes introduced by the following PRs (an illustrative path sketch follows the list):
- https://github.com/huggingface/datasets/pull/6493: switch to `config/version/commit_sha` schema
- https://github.com/huggingface/datasets/pull/6454: fix `DataFilesDict` keys ordering when hashing
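To make the new schema concrete, here is a minimal sketch of how a cache path is composed under the `config/version/commit_sha` layout (all names and the commit sha below are made-up placeholders, not values taken from these PRs):

```python
import os

# Hypothetical illustration of the config/version/commit_sha cache schema from #6493.
cache_root = os.path.expanduser("~/.cache/huggingface/datasets")
dataset_dir = "user___dataset"   # placeholder dataset directory name
config_name = "default"          # placeholder configuration name
version = "0.0.0"                # placeholder dataset version
commit_sha = "0123456789abcdef"  # placeholder commit sha
print(os.path.join(cache_root, dataset_dir, config_name, version, commit_sha))
```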
requires https://github.com/huggingface/datasets/pull/6493 to be merged | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6514/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6513/comments | https://api.github.com/repos/huggingface/datasets/issues/6513/events | https://github.com/huggingface/datasets/issues/6513 | 2,048,869,151 | I_kwDODunzps56H0Mf | 6,513 | Support huggingface-hub 0.20.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-12-19T15:15:46 | 2023-12-20T08:44:45 | 2023-12-20T08:44:45 | MEMBER | null | null | CI to test support for `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1
We need to merge:
- #6510
- #6512
- #6516 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6513/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6512/comments | https://api.github.com/repos/huggingface/datasets/issues/6512/events | https://github.com/huggingface/datasets/pull/6512 | 2,048,795,819 | PR_kwDODunzps5iYI5z | 6,512 | Remove deprecated HfFolder | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005468 / 0.011353 (-0.005885) | 0.003447 / 0.011008 (-0.007561) | 0.062569 / 0.038508 (0.024061) | 0.049427 / 0.023109 (0.026318) | 0.238463 / 0.275898 (-0.037435) | 0.268320 / 0.323480 (-0.055159) | 0.002834 / 0.007986 (-0.005151) | 0.002679 / 0.004328 (-0.001649) | 0.048613 / 0.004250 (0.044363) | 0.038793 / 0.037052 (0.001741) | 0.247710 / 0.258489 (-0.010779) | 0.277557 / 0.293841 (-0.016284) | 0.027134 / 0.128546 (-0.101412) | 0.010346 / 0.075646 (-0.065301) | 0.205782 / 0.419271 (-0.213490) | 0.035549 / 0.043533 (-0.007983) | 0.241667 / 0.255139 (-0.013472) | 0.268358 / 0.283200 (-0.014842) | 0.017119 / 0.141683 (-0.124563) | 1.108487 / 1.452155 (-0.343668) | 1.177519 / 1.492716 (-0.315197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090925 / 0.018006 (0.072919) | 0.310422 / 0.000490 (0.309932) | 0.000212 / 0.000200 (0.000012) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018912 / 0.037411 (-0.018499) | 0.061534 / 0.014526 (0.047008) | 0.073608 / 0.176557 (-0.102949) | 0.119278 / 0.737135 (-0.617858) | 0.074698 / 0.296338 (-0.221640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287224 / 0.215209 (0.072014) | 2.792022 / 2.077655 (0.714367) | 1.474605 / 1.504120 (-0.029515) | 1.348714 / 1.541195 (-0.192481) | 1.381339 / 
1.468490 (-0.087151) | 0.553033 / 4.584777 (-4.031744) | 2.360745 / 3.745712 (-1.384967) | 2.779281 / 5.269862 (-2.490580) | 1.743922 / 4.565676 (-2.821754) | 0.063817 / 0.424275 (-0.360458) | 0.004954 / 0.007607 (-0.002653) | 0.340039 / 0.226044 (0.113994) | 3.336771 / 2.268929 (1.067843) | 1.825573 / 55.444624 (-53.619051) | 1.525362 / 6.876477 (-5.351115) | 1.578793 / 2.142072 (-0.563280) | 0.638432 / 4.805227 (-4.166795) | 0.117601 / 6.500664 (-6.383063) | 0.041890 / 0.075469 (-0.033579) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936896 / 1.841788 (-0.904892) | 11.426979 / 8.074308 (3.352671) | 10.636043 / 10.191392 (0.444651) | 0.136172 / 0.680424 (-0.544252) | 0.014249 / 0.534201 (-0.519952) | 0.287806 / 0.579283 (-0.291477) | 0.266046 / 0.434364 (-0.168318) | 0.326155 / 0.540337 (-0.214183) | 0.455508 / 1.386936 (-0.931428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005199 / 0.011353 (-0.006154) | 0.003476 / 0.011008 (-0.007532) | 0.050519 / 0.038508 (0.012011) | 0.050732 / 0.023109 (0.027623) | 0.270140 / 0.275898 (-0.005758) | 0.295539 / 0.323480 (-0.027941) | 0.004057 / 0.007986 (-0.003928) | 0.002771 / 0.004328 (-0.001558) | 0.049157 / 0.004250 (0.044906) | 0.039863 / 0.037052 (0.002811) | 0.275934 / 0.258489 (0.017445) | 0.306971 / 0.293841 (0.013130) | 0.029405 / 0.128546 (-0.099141) | 0.010746 / 0.075646 (-0.064900) | 0.058427 / 0.419271 (-0.360845) | 0.032448 / 0.043533 (-0.011085) | 0.271851 / 0.255139 (0.016712) | 0.290337 / 0.283200 (0.007138) | 0.019145 / 0.141683 (-0.122538) | 1.112232 / 1.452155 (-0.339922) | 1.215153 / 1.492716 (-0.277564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088590 / 0.018006 (0.070584) | 0.299047 / 0.000490 (0.298558) | 0.000219 / 0.000200 (0.000019) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022755 / 0.037411 (-0.014656) | 0.078720 / 0.014526 (0.064194) | 0.089051 / 0.176557 (-0.087505) | 0.129330 / 0.737135 (-0.607805) | 0.090645 / 0.296338 (-0.205693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294083 / 0.215209 (0.078874) | 2.907195 / 2.077655 (0.829540) | 1.607392 / 1.504120 (0.103272) | 1.481931 / 1.541195 (-0.059263) | 1.486934 / 1.468490 (0.018444) | 0.574093 / 4.584777 (-4.010684) | 2.439775 / 3.745712 (-1.305937) | 2.739818 / 5.269862 (-2.530044) | 1.753922 / 4.565676 (-2.811755) | 0.063738 / 0.424275 (-0.360537) | 0.005219 / 0.007607 (-0.002388) | 0.350342 / 0.226044 (0.124297) | 3.463644 / 2.268929 (1.194716) | 1.971598 / 55.444624 (-53.473026) | 1.671752 / 6.876477 (-5.204724) | 1.686504 / 2.142072 (-0.455569) | 0.655870 / 4.805227 (-4.149357) | 0.117580 / 6.500664 (-6.383084) | 0.041210 / 0.075469 (-0.034259) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996305 / 1.841788 (-0.845482) | 12.426361 / 8.074308 (4.352053) | 10.600309 / 10.191392 (0.408917) | 0.129728 / 0.680424 (-0.550695) | 0.015267 / 0.534201 (-0.518934) | 0.285444 / 0.579283 (-0.293839) | 0.272375 / 0.434364 (-0.161989) | 0.323478 / 0.540337 (-0.216860) | 0.547566 / 1.386936 (-0.839370) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a91582de288d98e94bcb5ab634ca1cfeeff544c5 \"CML watermark\")\n"
] | 2023-12-19T14:40:49 | 2023-12-19T20:21:13 | 2023-12-19T20:14:30 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6512",
"html_url": "https://github.com/huggingface/datasets/pull/6512",
"diff_url": "https://github.com/huggingface/datasets/pull/6512.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6512.patch",
"merged_at": "2023-12-19T20:14:30"
} | ...and use `huggingface_hub.get_token()` instead | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6512/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6511/comments | https://api.github.com/repos/huggingface/datasets/issues/6511/events | https://github.com/huggingface/datasets/pull/6511 | 2,048,465,958 | PR_kwDODunzps5iXAXR | 6,511 | Implement get dataset default config name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets, this PR is ready for review.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005343 / 0.011353 (-0.006010) | 0.003521 / 0.011008 (-0.007487) | 0.061835 / 0.038508 (0.023327) | 0.052633 / 0.023109 (0.029524) | 0.243897 / 0.275898 (-0.032001) | 0.272961 / 0.323480 (-0.050519) | 0.003013 / 0.007986 (-0.004973) | 0.002692 / 0.004328 (-0.001636) | 0.050099 / 0.004250 (0.045848) | 0.045381 / 0.037052 (0.008329) | 0.249981 / 0.258489 (-0.008508) | 0.276789 / 0.293841 (-0.017052) | 0.027929 / 0.128546 (-0.100617) | 0.010933 / 0.075646 (-0.064714) | 0.206757 / 0.419271 (-0.212514) | 0.035334 / 0.043533 (-0.008199) | 0.249411 / 0.255139 (-0.005728) | 0.268893 / 0.283200 (-0.014306) | 0.019175 / 0.141683 (-0.122507) | 1.106932 / 1.452155 (-0.345223) | 1.177819 / 1.492716 (-0.314897) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092895 / 0.018006 (0.074889) | 0.303658 / 0.000490 (0.303169) | 0.000214 / 0.000200 (0.000014) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018978 / 0.037411 (-0.018434) | 0.060459 / 0.014526 (0.045934) | 0.072900 / 0.176557 (-0.103657) | 0.119803 / 0.737135 (-0.617332) | 0.074349 / 0.296338 (-0.221989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283715 / 0.215209 (0.068505) | 2.752394 / 2.077655 (0.674739) | 1.446619 / 1.504120 (-0.057501) | 1.319612 / 1.541195 (-0.221582) | 1.374769 / 
1.468490 (-0.093721) | 0.571543 / 4.584777 (-4.013234) | 2.389106 / 3.745712 (-1.356607) | 2.797837 / 5.269862 (-2.472025) | 1.737615 / 4.565676 (-2.828062) | 0.063268 / 0.424275 (-0.361007) | 0.005118 / 0.007607 (-0.002489) | 0.340238 / 0.226044 (0.114193) | 3.366207 / 2.268929 (1.097278) | 1.845934 / 55.444624 (-53.598690) | 1.540640 / 6.876477 (-5.335837) | 1.585489 / 2.142072 (-0.556584) | 0.641178 / 4.805227 (-4.164049) | 0.118701 / 6.500664 (-6.381964) | 0.042719 / 0.075469 (-0.032750) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946706 / 1.841788 (-0.895082) | 11.846230 / 8.074308 (3.771921) | 10.459268 / 10.191392 (0.267876) | 0.130557 / 0.680424 (-0.549867) | 0.014292 / 0.534201 (-0.519909) | 0.287455 / 0.579283 (-0.291828) | 0.265213 / 0.434364 (-0.169151) | 0.325670 / 0.540337 (-0.214667) | 0.422800 / 1.386936 (-0.964136) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005454 / 0.011353 (-0.005899) | 0.003567 / 0.011008 (-0.007441) | 0.048696 / 0.038508 (0.010188) | 0.058844 / 0.023109 (0.035735) | 0.277011 / 0.275898 (0.001113) | 0.302544 / 0.323480 (-0.020936) | 0.004077 / 0.007986 (-0.003908) | 0.002720 / 0.004328 (-0.001609) | 0.058251 / 0.004250 (0.054001) | 0.040946 / 0.037052 (0.003893) | 0.276261 / 0.258489 (0.017772) | 0.352827 / 0.293841 (0.058986) | 0.029915 / 0.128546 (-0.098632) | 0.010562 / 0.075646 (-0.065084) | 0.057836 / 0.419271 (-0.361436) | 0.033129 / 0.043533 (-0.010404) | 0.276053 / 0.255139 (0.020914) | 0.292045 / 0.283200 (0.008846) | 0.020504 / 0.141683 (-0.121179) | 1.129746 / 1.452155 (-0.322409) | 1.190888 / 1.492716 (-0.301829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095202 / 0.018006 (0.077196) | 0.303956 / 0.000490 (0.303466) | 0.000226 / 0.000200 (0.000026) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021960 / 0.037411 (-0.015451) | 0.076209 / 0.014526 (0.061683) | 0.088813 / 0.176557 (-0.087744) | 0.129061 / 0.737135 (-0.608074) | 0.091202 / 0.296338 (-0.205136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301394 / 0.215209 (0.086185) | 2.948057 / 2.077655 (0.870403) | 1.591371 / 1.504120 (0.087251) | 1.463515 / 1.541195 (-0.077680) | 1.516477 / 1.468490 (0.047987) | 0.577223 / 4.584777 (-4.007554) | 2.506716 / 3.745712 (-1.238996) | 2.833385 / 5.269862 (-2.436477) | 1.808896 / 4.565676 (-2.756781) | 0.063241 / 0.424275 (-0.361034) | 0.005057 / 0.007607 (-0.002550) | 0.350108 / 0.226044 (0.124063) | 3.470252 / 2.268929 (1.201324) | 1.925689 / 55.444624 (-53.518935) | 1.667521 / 6.876477 (-5.208955) | 1.690909 / 2.142072 (-0.451164) | 0.647070 / 4.805227 (-4.158157) | 0.117596 / 6.500664 (-6.383068) | 0.042431 / 0.075469 (-0.033038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977297 / 1.841788 (-0.864490) | 12.947399 / 8.074308 (4.873091) | 10.964949 / 10.191392 (0.773557) | 0.130905 / 0.680424 (-0.549518) | 0.015207 / 0.534201 (-0.518994) | 0.288151 / 0.579283 (-0.291132) | 0.281817 / 0.434364 (-0.152547) | 0.326398 / 0.540337 (-0.213940) | 0.421354 / 1.386936 (-0.965582) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8b04288f0b94c987a278c5bb8459746bc35ba367 \"CML watermark\")\n"
] | 2023-12-19T11:26:19 | 2023-12-21T14:48:57 | 2023-12-21T14:42:41 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6511",
"html_url": "https://github.com/huggingface/datasets/pull/6511",
"diff_url": "https://github.com/huggingface/datasets/pull/6511.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6511.patch",
"merged_at": "2023-12-21T14:42:40"
} | Implement `get_dataset_default_config_name`.
Now that we support setting a configuration as default in `push_to_hub` (see #6500), we need a programmatic way to know in advance which configuration is the default. This will be used in the Space that converts script-based datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet
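A minimal usage sketch (assuming the new function is exported at the top level of `datasets`, as the PR title suggests; the repo id below is a placeholder):

```python
from datasets import get_dataset_default_config_name

# Placeholder repo id; returns the name of the default configuration,
# or None if the dataset does not define one.
default_config = get_dataset_default_config_name("user/dataset")
print(default_config)  # e.g. "default" or None
```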
Follow-up of:
- #6500
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6511/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6511/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6510/comments | https://api.github.com/repos/huggingface/datasets/issues/6510/events | https://github.com/huggingface/datasets/pull/6510 | 2,046,928,742 | PR_kwDODunzps5iRyiV | 6,510 | Replace `list_files_info` with `list_repo_tree` in `push_to_hub` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI errors are unrelated to the changes, so I'm merging.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005161 / 0.011353 (-0.006192) | 0.003494 / 0.011008 (-0.007515) | 0.062601 / 0.038508 (0.024093) | 0.052876 / 0.023109 (0.029767) | 0.255595 / 0.275898 (-0.020303) | 0.283108 / 0.323480 (-0.040371) | 0.003856 / 0.007986 (-0.004130) | 0.002686 / 0.004328 (-0.001642) | 0.048604 / 0.004250 (0.044353) | 0.037886 / 0.037052 (0.000834) | 0.252902 / 0.258489 (-0.005587) | 0.286906 / 0.293841 (-0.006935) | 0.028570 / 0.128546 (-0.099976) | 0.010684 / 0.075646 (-0.064962) | 0.208154 / 0.419271 (-0.211118) | 0.036169 / 0.043533 (-0.007364) | 0.276026 / 0.255139 (0.020887) | 0.272274 / 0.283200 (-0.010925) | 0.017690 / 0.141683 (-0.123993) | 1.202400 / 1.452155 (-0.249755) | 1.231223 / 1.492716 (-0.261494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095229 / 0.018006 (0.077222) | 0.302205 / 0.000490 (0.301716) | 0.000226 / 0.000200 (0.000026) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018877 / 0.037411 (-0.018534) | 0.062286 / 0.014526 (0.047760) | 0.075191 / 0.176557 (-0.101366) | 0.121419 / 0.737135 (-0.615716) | 0.075641 / 0.296338 (-0.220697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282914 / 0.215209 (0.067705) | 2.769156 / 2.077655 (0.691501) | 1.480219 / 1.504120 (-0.023901) | 1.355742 / 1.541195 (-0.185453) | 1.399740 / 
1.468490 (-0.068750) | 0.556365 / 4.584777 (-4.028412) | 2.399679 / 3.745712 (-1.346033) | 2.850510 / 5.269862 (-2.419351) | 1.781428 / 4.565676 (-2.784249) | 0.063045 / 0.424275 (-0.361230) | 0.004931 / 0.007607 (-0.002676) | 0.343743 / 0.226044 (0.117698) | 3.374907 / 2.268929 (1.105978) | 1.857774 / 55.444624 (-53.586851) | 1.577154 / 6.876477 (-5.299323) | 1.626597 / 2.142072 (-0.515475) | 0.653991 / 4.805227 (-4.151236) | 0.121306 / 6.500664 (-6.379358) | 0.042131 / 0.075469 (-0.033339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948826 / 1.841788 (-0.892962) | 11.922497 / 8.074308 (3.848188) | 10.592334 / 10.191392 (0.400942) | 0.129145 / 0.680424 (-0.551279) | 0.014652 / 0.534201 (-0.519549) | 0.286074 / 0.579283 (-0.293210) | 0.265338 / 0.434364 (-0.169026) | 0.346872 / 0.540337 (-0.193466) | 0.450480 / 1.386936 (-0.936456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005305 / 0.011353 (-0.006048) | 0.003583 / 0.011008 (-0.007426) | 0.049855 / 0.038508 (0.011347) | 0.052882 / 0.023109 (0.029773) | 0.268429 / 0.275898 (-0.007469) | 0.293375 / 0.323480 (-0.030105) | 0.004052 / 0.007986 (-0.003934) | 0.002685 / 0.004328 (-0.001644) | 0.049206 / 0.004250 (0.044955) | 0.040187 / 0.037052 (0.003135) | 0.270112 / 0.258489 (0.011623) | 0.306380 / 0.293841 (0.012539) | 0.029161 / 0.128546 (-0.099386) | 0.010948 / 0.075646 (-0.064698) | 0.057721 / 0.419271 (-0.361550) | 0.032628 / 0.043533 (-0.010905) | 0.267458 / 0.255139 (0.012319) | 0.291905 / 0.283200 (0.008705) | 0.018096 / 0.141683 (-0.123587) | 1.112744 / 1.452155 (-0.339410) | 1.161962 / 1.492716 (-0.330754) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097449 / 0.018006 (0.079443) | 0.304270 / 0.000490 (0.303780) | 0.000235 / 0.000200 (0.000035) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023550 / 0.037411 (-0.013861) | 0.078246 / 0.014526 (0.063720) | 0.091229 / 0.176557 (-0.085327) | 0.130624 / 0.737135 (-0.606511) | 0.092767 / 0.296338 (-0.203571) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284962 / 0.215209 (0.069753) | 2.761090 / 2.077655 (0.683435) | 1.545409 / 1.504120 (0.041289) | 1.424573 / 1.541195 (-0.116622) | 1.438869 / 1.468490 (-0.029621) | 0.571281 / 4.584777 (-4.013496) | 2.419493 / 3.745712 (-1.326219) | 2.802611 / 5.269862 (-2.467251) | 1.749880 / 4.565676 (-2.815796) | 0.062566 / 0.424275 (-0.361709) | 0.005243 / 0.007607 (-0.002364) | 0.344653 / 0.226044 (0.118608) | 3.367488 / 2.268929 (1.098559) | 1.925871 / 55.444624 (-53.518754) | 1.624258 / 6.876477 (-5.252219) | 1.663742 / 2.142072 (-0.478330) | 0.634553 / 4.805227 (-4.170675) | 0.116745 / 6.500664 (-6.383919) | 0.041734 / 0.075469 (-0.033735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006808 / 1.841788 (-0.834980) | 12.499711 / 8.074308 (4.425403) | 10.956260 / 10.191392 (0.764868) | 0.132393 / 0.680424 (-0.548031) | 0.015924 / 0.534201 (-0.518277) | 0.289837 / 0.579283 (-0.289446) | 0.281565 / 0.434364 (-0.152799) | 0.337393 / 0.540337 (-0.202945) | 0.560385 / 1.386936 (-0.826551) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f699ab27ef2c0c23dc3a514b5bb155485ff6913 \"CML watermark\")\n"
] | 2023-12-18T15:34:19 | 2023-12-19T18:05:47 | 2023-12-19T17:58:34 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6510",
"html_url": "https://github.com/huggingface/datasets/pull/6510",
"diff_url": "https://github.com/huggingface/datasets/pull/6510.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6510.patch",
"merged_at": "2023-12-19T17:58:34"
} | Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6510/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6509/comments | https://api.github.com/repos/huggingface/datasets/issues/6509/events | https://github.com/huggingface/datasets/pull/6509 | 2,046,720,869 | PR_kwDODunzps5iREyE | 6,509 | Better cast error when generating dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I created `DatatasetGenerationCastError` in `exceptions.py` that inherits from `DatasetGenerationError` (for backward compatibility) that inherits from `DatasetsError`.\r\n\r\nI also added a help message at the end of the error:\r\n\r\n```\r\nPlease either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004991 / 0.011353 (-0.006361) | 0.003362 / 0.011008 (-0.007646) | 0.062093 / 0.038508 (0.023585) | 0.051533 / 0.023109 (0.028424) | 0.247508 / 0.275898 (-0.028390) | 0.275593 / 0.323480 (-0.047886) | 0.003828 / 0.007986 (-0.004158) | 0.002573 / 0.004328 (-0.001755) | 0.047727 / 0.004250 (0.043477) | 0.037029 / 0.037052 (-0.000023) | 0.250359 / 0.258489 (-0.008130) | 0.282640 / 0.293841 (-0.011201) | 0.027853 / 0.128546 (-0.100693) | 0.010247 / 0.075646 (-0.065400) | 0.206826 / 0.419271 (-0.212445) | 0.035837 / 0.043533 (-0.007695) | 0.251795 / 0.255139 (-0.003344) | 0.275654 / 0.283200 (-0.007545) | 0.017722 / 0.141683 (-0.123960) | 1.120287 / 1.452155 (-0.331868) | 1.203087 / 1.492716 (-0.289630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092320 / 0.018006 (0.074314) | 0.300079 / 0.000490 (0.299589) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018193 / 0.037411 (-0.019218) | 0.061310 / 0.014526 (0.046784) | 0.072433 / 0.176557 (-0.104124) | 0.119092 / 0.737135 (-0.618043) | 0.074044 / 0.296338 (-0.222294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297184 / 0.215209 (0.081975) | 2.805197 / 2.077655 (0.727543) | 1.521326 / 1.504120 (0.017206) | 1.374321 / 1.541195 (-0.166874) | 1.388767 / 
1.468490 (-0.079723) | 0.571865 / 4.584777 (-4.012912) | 2.385213 / 3.745712 (-1.360499) | 2.726840 / 5.269862 (-2.543021) | 1.725352 / 4.565676 (-2.840325) | 0.063012 / 0.424275 (-0.361263) | 0.004911 / 0.007607 (-0.002697) | 0.336430 / 0.226044 (0.110385) | 3.390616 / 2.268929 (1.121688) | 1.846398 / 55.444624 (-53.598227) | 1.576797 / 6.876477 (-5.299680) | 1.579445 / 2.142072 (-0.562627) | 0.652515 / 4.805227 (-4.152712) | 0.118393 / 6.500664 (-6.382271) | 0.042155 / 0.075469 (-0.033314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942269 / 1.841788 (-0.899518) | 11.318258 / 8.074308 (3.243950) | 10.299948 / 10.191392 (0.108556) | 0.136088 / 0.680424 (-0.544336) | 0.013682 / 0.534201 (-0.520519) | 0.287549 / 0.579283 (-0.291734) | 0.258346 / 0.434364 (-0.176018) | 0.337146 / 0.540337 (-0.203191) | 0.443922 / 1.386936 (-0.943014) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005302 / 0.011353 (-0.006051) | 0.003234 / 0.011008 (-0.007774) | 0.049159 / 0.038508 (0.010651) | 0.050459 / 0.023109 (0.027350) | 0.273718 / 0.275898 (-0.002180) | 0.296997 / 0.323480 (-0.026483) | 0.003948 / 0.007986 (-0.004038) | 0.002590 / 0.004328 (-0.001739) | 0.048129 / 0.004250 (0.043879) | 0.039369 / 0.037052 (0.002317) | 0.276469 / 0.258489 (0.017980) | 0.306359 / 0.293841 (0.012519) | 0.028864 / 0.128546 (-0.099682) | 0.010253 / 0.075646 (-0.065394) | 0.058264 / 0.419271 (-0.361008) | 0.032451 / 0.043533 (-0.011082) | 0.277336 / 0.255139 (0.022197) | 0.296137 / 0.283200 (0.012937) | 0.018094 / 0.141683 (-0.123589) | 1.119539 / 1.452155 (-0.332615) | 1.163116 / 1.492716 (-0.329600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092578 / 0.018006 (0.074572) | 0.300756 / 0.000490 (0.300267) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022333 / 0.037411 (-0.015078) | 0.076632 / 0.014526 (0.062107) | 0.087829 / 0.176557 (-0.088727) | 0.127686 / 0.737135 (-0.609449) | 0.091314 / 0.296338 (-0.205024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297499 / 0.215209 (0.082290) | 2.889775 / 2.077655 (0.812120) | 1.598976 / 1.504120 (0.094856) | 1.478805 / 1.541195 (-0.062389) | 1.481818 / 1.468490 (0.013328) | 0.557972 / 4.584777 (-4.026804) | 2.453248 / 3.745712 (-1.292464) | 2.771823 / 5.269862 (-2.498039) | 1.721527 / 4.565676 (-2.844150) | 0.062786 / 0.424275 (-0.361489) | 0.005298 / 0.007607 (-0.002309) | 0.346660 / 0.226044 (0.120615) | 3.412262 / 2.268929 (1.143334) | 1.940240 / 55.444624 (-53.504384) | 1.654015 / 6.876477 (-5.222461) | 1.652039 / 2.142072 (-0.490034) | 0.636870 / 4.805227 (-4.168357) | 0.116213 / 6.500664 (-6.384451) | 0.040937 / 0.075469 (-0.034532) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001605 / 1.841788 (-0.840183) | 11.986592 / 8.074308 (3.912284) | 10.231288 / 10.191392 (0.039896) | 0.130242 / 0.680424 (-0.550182) | 0.015764 / 0.534201 (-0.518437) | 0.289257 / 0.579283 (-0.290026) | 0.275996 / 0.434364 (-0.158368) | 0.323089 / 0.540337 (-0.217248) | 0.556383 / 1.386936 (-0.830553) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#773324159ad4afd7931588a710839b76670ddf87 \"CML watermark\")\n"
] | 2023-12-18T13:57:24 | 2023-12-19T09:37:12 | 2023-12-19T09:31:03 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6509",
"html_url": "https://github.com/huggingface/datasets/pull/6509",
"diff_url": "https://github.com/huggingface/datasets/pull/6509.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6509.patch",
"merged_at": "2023-12-19T09:31:03"
} | I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA
Cc @albertvillanova @severo is this new error ok? Or should I use a dedicated error class?
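For reference, a minimal local setup that triggers this class of error (hypothetical file names, not the COIG-CQIA data):

```python
import json
from datasets import load_dataset

# two data files whose columns don't match
with open("a.json", "w") as f:
    json.dump([{"instruction": "hi", "output": "ok"}], f)
with open("b.json", "w") as f:
    json.dump([{"instruction": "hi", "other": "extra"}], f)

# the schema is inferred from the first file, so casting the second one fails
load_dataset("json", data_files=["a.json", "b.json"])
```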
New:
```python
Traceback (most recent call last):
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1920, in _prepare_split_single
writer.write_table(table)
File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2322, in table_cast
return cast_table_to_schema(table, schema)
File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2276, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
instruction: string
other: string
index: string
domain: list<item: string>
child 0, item: string
output: string
task_type: struct<major: list<item: string>, minor: list<item: string>>
child 0, major: list<item: string>
child 0, item: string
child 1, minor: list<item: string>
child 0, item: string
task_name_in_eng: string
input: string
to
{'answer_from': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'human_verified': Value(dtype='bool', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'copyright': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module>
load_dataset("m-a-p/COIG-CQIA")
File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset
builder_instance.download_and_prepare(
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 936, in download_and_prepare
self._download_and_prepare(
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1031, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1791, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1922, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 new columns (other, index, task_name_in_eng) and 3 missing columns (answer_from, copyright, human_verified).
This happened while the json dataset builder was generating data using
hf://datasets/m-a-p/COIG-CQIA/coig_pc/coig_pc_core_sample.json (at revision b7b7ecf290f6515036c7c04bd8537228ac2eb474)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
Previously:
```python
Traceback (most recent call last):
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1931, in _prepare_split_single
writer.write_table(table)
File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2295, in table_cast
return cast_table_to_schema(table, schema)
File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2253, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
task_type: struct<major: list<item: string>, minor: list<item: string>>
child 0, major: list<item: string>
child 0, item: string
child 1, minor: list<item: string>
child 0, item: string
other: string
instruction: string
task_name_in_eng: string
domain: list<item: string>
child 0, item: string
index: string
output: string
input: string
to
{'human_verified': Value(dtype='bool', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'answer_from': Value(dtype='string', id=None), 'copyright': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module>
load_dataset("m-a-p/COIG-CQIA")
File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset
builder_instance.download_and_prepare(
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 949, in download_and_prepare
self._download_and_prepare(
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1044, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1804, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1949, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6509/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6508/comments | https://api.github.com/repos/huggingface/datasets/issues/6508/events | https://github.com/huggingface/datasets/pull/6508 | 2,045,733,273 | PR_kwDODunzps5iNvAu | 6,508 | Read GeoParquet files using parquet reader | {
"login": "weiji14",
"id": 23487320,
"node_id": "MDQ6VXNlcjIzNDg3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/23487320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiji14",
"html_url": "https://github.com/weiji14",
"followers_url": "https://api.github.com/users/weiji14/followers",
"following_url": "https://api.github.com/users/weiji14/following{/other_user}",
"gists_url": "https://api.github.com/users/weiji14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiji14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiji14/subscriptions",
"organizations_url": "https://api.github.com/users/weiji14/orgs",
"repos_url": "https://api.github.com/users/weiji14/repos",
"events_url": "https://api.github.com/users/weiji14/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiji14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6508). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! Do you mind writing a test using a geoparquet file in `tests/io/test_parquet.py` ?\r\n\r\nI'm not too familiar with geoparquet, does it use e.g. pyarrow extension types ? or schema metadata ?",
"> Geometry columns MUST be stored using the BYTE_ARRAY parquet type. They MUST be encoded as [WKB](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry#Well-known_binary).\r\n\r\nhttps://github.com/opengeospatial/geoparquet/blob/main/format-specs/geoparquet.md#geometry-columns\r\n\r\nIt has metadata:\r\n\r\n> File metadata indicating things like the version of this specification used\r\n> Column metadata with additional metadata for each geometry column\r\n\r\nhttps://github.com/opengeospatial/geoparquet/blob/main/format-specs/geoparquet.md#metadata",
"The specification is very short by the way:\r\n\r\nhttps://github.com/opengeospatial/geoparquet/blob/main/format-specs/geoparquet.md",
"https://github.com/opengeospatial/geoparquet/blob/main/format-specs/compatible-parquet.md is also worth reading for this PR",
"> Cool ! Do you mind writing a test using a geoparquet file in `tests/io/test_parquet.py` ?\r\n\r\nYep, let me do that do that later today!\r\n\r\n> I'm not too familiar with geoparquet, does it use e.g. pyarrow extension types ? or schema metadata ?\r\n\r\nGeoParquet is a Parquet file with a `geometry` column that is encoded in a Binary format (technically WKB as @severo mentioned above). It is not a pyarrow extension type (though that would be cool). Regular `parquet` readers such as `pyarrow` would thus see the column as a binary column, while libraries such as `geopandas` which implement a GeoParquet reader would look at the schema metadata.\r\n\r\nE.g. taking this [file](https://huggingface.co/datasets/weiji14/clay_vector_embeddings/resolve/862b1602f326421adc99375912c08603a9f2cc5c/32VLM_v01.gpq) as an example, this is how the 'geo' schema looks like:\r\n\r\n```python\r\nimport pyarrow.parquet as pq\r\n\r\nschema = pq.read_schema(where=\"32VLM_v01.gpq\")\r\nprint(schema.metadata[b\"geo\"])\r\n```\r\n\r\n```\r\n{\r\n \"primary_column\": \"geometry\",\r\n \"columns\": {\r\n \"geometry\": {\r\n \"encoding\": \"WKB\",\r\n \"crs\": {\r\n \"$schema\": \"https://proj.org/schemas/v0.7/projjson.schema.json\",\r\n \"type\": \"GeographicCRS\",\r\n \"name\": \"WGS 84 (CRS84)\",\r\n \"datum_ensemble\": {\r\n \"name\": \"World Geodetic System 1984 ensemble\",\r\n \"members\": [\r\n {\"name\": \"World Geodetic System 1984 (Transit)\"},\r\n {\"name\": \"World Geodetic System 1984 (G730)\"},\r\n {\"name\": \"World Geodetic System 1984 (G873)\"},\r\n {\"name\": \"World Geodetic System 1984 (G1150)\"},\r\n {\"name\": \"World Geodetic System 1984 (G1674)\"},\r\n {\"name\": \"World Geodetic System 1984 (G1762)\"},\r\n {\"name\": \"World Geodetic System 1984 (G2139)\"},\r\n ],\r\n \"ellipsoid\": {\r\n \"name\": \"WGS 84\",\r\n \"semi_major_axis\": 6378137,\r\n \"inverse_flattening\": 298.257223563,\r\n },\r\n \"accuracy\": \"2.0\",\r\n \"id\": {\"authority\": \"EPSG\", \"code\": 6326},\r\n },\r\n \"coordinate_system\": {\r\n \"subtype\": \"ellipsoidal\",\r\n \"axis\": [\r\n {\r\n \"name\": \"Geodetic longitude\",\r\n \"abbreviation\": \"Lon\",\r\n \"direction\": \"east\",\r\n \"unit\": \"degree\",\r\n },\r\n {\r\n \"name\": \"Geodetic latitude\",\r\n \"abbreviation\": \"Lat\",\r\n \"direction\": \"north\",\r\n \"unit\": \"degree\",\r\n },\r\n ],\r\n },\r\n \"scope\": \"Not known.\",\r\n \"area\": \"World.\",\r\n \"bbox\": {\r\n \"south_latitude\": -90,\r\n \"west_longitude\": -180,\r\n \"north_latitude\": 90,\r\n \"east_longitude\": 180,\r\n },\r\n \"id\": {\"authority\": \"OGC\", \"code\": \"CRS84\"},\r\n },\r\n \"geometry_types\": [\"Polygon\"],\r\n \"bbox\": [\r\n 5.370542846111244,\r\n 59.42344573656881,\r\n 7.368267282586697,\r\n 60.42591328670696,\r\n ],\r\n }\r\n },\r\n \"version\": \"1.0.0\",\r\n \"creator\": {\"library\": \"geopandas\", \"version\": \"0.14.1\"},\r\n}\r\n```\r\n\r\nWe can continue the discussion on how to handle this extra 'geo' schema metadata in #6438. I'd like to keep this PR small by just piggy-backing off the default Parquet reader for now, which would just show the 'geometry' column as a binary column.",
"Thanks ! Also if you can make sure that doing `ds.to_parquet(\"path/to/output.geoparquet\")` also saves as a valid geoparquet files (including the schema metadata) that would be awesome.\r\n\r\nIt would also enable `push_to_hub` to save geoparquet files",
"> Thanks ! Also if you can make sure that doing `ds.to_parquet(\"path/to/output.geoparquet\")` also saves as a valid geoparquet files (including the schema metadata) that would be awesome.\r\n> \r\n> It would also enable `push_to_hub` to save geoparquet files\r\n\r\nHmm, it should be possible to let PyArrow save a Parquet file with a geometry WKB column, but saving the GeoParquet schema metadata won't be easy without introducing [`geopandas`](https://github.com/geopandas/geopandas) as a dependency. Does this need to be done in this PR, or can it be a separate one?",
"I see, then let's keep it like this for now.\r\nI just checked and it would require to add support for keeping the schema metadata in `Features` anyway.\r\n\r\nFeel free to fix your code formatting using\r\n\r\n```\r\nmake style\r\n```\r\n\r\nand we can merge this PR :)\r\n\r\n",
"Cool, linted to remove the extra blank line at 7088f585557807a63673cdc58900d7ce56146cf7. :rocket:",
"The previous CI failure at https://github.com/huggingface/datasets/actions/runs/7482863299/job/20668381959#step:6:5299 says `datasets.exceptions.DefunctDatasetError: Dataset 'eli5' is defunct and no longer accessible due to unavailability of the source data` which seems unrelated, might be to do with https://github.com/huggingface/datasets/issues/6605. I've updated the PR branch with changes from `main` again, could someone re-run the tests and merge if ok? Thanks!",
"sorry, it took me some time to fix the CI on the `main` branch\r\n\r\nwill merge once it's green :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005467 / 0.011353 (-0.005886) | 0.003696 / 0.011008 (-0.007313) | 0.063298 / 0.038508 (0.024790) | 0.032209 / 0.023109 (0.009100) | 0.246307 / 0.275898 (-0.029591) | 0.276864 / 0.323480 (-0.046616) | 0.003941 / 0.007986 (-0.004044) | 0.002616 / 0.004328 (-0.001713) | 0.049543 / 0.004250 (0.045292) | 0.044886 / 0.037052 (0.007833) | 0.266502 / 0.258489 (0.008013) | 0.288401 / 0.293841 (-0.005440) | 0.027911 / 0.128546 (-0.100635) | 0.011011 / 0.075646 (-0.064636) | 0.207972 / 0.419271 (-0.211299) | 0.036324 / 0.043533 (-0.007209) | 0.259450 / 0.255139 (0.004311) | 0.267317 / 0.283200 (-0.015883) | 0.018857 / 0.141683 (-0.122826) | 1.145350 / 1.452155 (-0.306805) | 1.204204 / 1.492716 (-0.288513) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103864 / 0.018006 (0.085858) | 0.306941 / 0.000490 (0.306451) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018391 / 0.037411 (-0.019020) | 0.064600 / 0.014526 (0.050074) | 0.075454 / 0.176557 (-0.101102) | 0.120913 / 0.737135 (-0.616223) | 0.076998 / 0.296338 (-0.219341) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279491 / 0.215209 (0.064282) | 2.742471 / 2.077655 (0.664816) | 1.447980 / 1.504120 (-0.056140) | 1.328202 / 1.541195 (-0.212992) | 1.397291 / 
1.468490 (-0.071199) | 0.585726 / 4.584777 (-3.999051) | 2.385132 / 3.745712 (-1.360580) | 2.874888 / 5.269862 (-2.394974) | 1.820177 / 4.565676 (-2.745500) | 0.063876 / 0.424275 (-0.360399) | 0.004946 / 0.007607 (-0.002661) | 0.336445 / 0.226044 (0.110401) | 3.396813 / 2.268929 (1.127885) | 1.832644 / 55.444624 (-53.611981) | 1.581304 / 6.876477 (-5.295172) | 1.607243 / 2.142072 (-0.534829) | 0.662752 / 4.805227 (-4.142476) | 0.119494 / 6.500664 (-6.381170) | 0.042573 / 0.075469 (-0.032896) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936784 / 1.841788 (-0.905003) | 12.154288 / 8.074308 (4.079980) | 10.944835 / 10.191392 (0.753443) | 0.132856 / 0.680424 (-0.547568) | 0.015197 / 0.534201 (-0.519004) | 0.290647 / 0.579283 (-0.288636) | 0.273498 / 0.434364 (-0.160866) | 0.324893 / 0.540337 (-0.215444) | 0.427905 / 1.386936 (-0.959032) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005695 / 0.011353 (-0.005658) | 0.003562 / 0.011008 (-0.007446) | 0.050117 / 0.038508 (0.011608) | 0.033876 / 0.023109 (0.010767) | 0.275514 / 0.275898 (-0.000384) | 0.298460 / 0.323480 (-0.025020) | 0.004240 / 0.007986 (-0.003745) | 0.002738 / 0.004328 (-0.001591) | 0.048518 / 0.004250 (0.044268) | 0.049064 / 0.037052 (0.012012) | 0.287094 / 0.258489 (0.028605) | 0.314281 / 0.293841 (0.020440) | 0.057861 / 0.128546 (-0.070686) | 0.010893 / 0.075646 (-0.064753) | 0.062251 / 0.419271 (-0.357020) | 0.036788 / 0.043533 (-0.006745) | 0.272431 / 0.255139 (0.017292) | 0.292022 / 0.283200 (0.008822) | 0.019874 / 0.141683 (-0.121809) | 1.156939 / 1.452155 (-0.295216) | 1.237966 / 1.492716 (-0.254751) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096156 / 0.018006 (0.078150) | 0.306652 / 0.000490 (0.306162) | 0.000230 / 0.000200 (0.000031) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022965 / 0.037411 (-0.014447) | 0.081349 / 0.014526 (0.066823) | 0.089035 / 0.176557 (-0.087521) | 0.128831 / 0.737135 (-0.608304) | 0.090321 / 0.296338 (-0.206017) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293110 / 0.215209 (0.077901) | 2.884493 / 2.077655 (0.806839) | 1.582522 / 1.504120 (0.078402) | 1.518977 / 1.541195 (-0.022218) | 1.528449 / 1.468490 (0.059959) | 0.577369 / 4.584777 (-4.007408) | 2.473060 / 3.745712 (-1.272652) | 3.104363 / 5.269862 (-2.165499) | 1.916529 / 4.565676 (-2.649147) | 0.064594 / 0.424275 (-0.359682) | 0.005386 / 0.007607 (-0.002221) | 0.353336 / 0.226044 (0.127292) | 3.471914 / 2.268929 (1.202985) | 1.959222 / 55.444624 (-53.485402) | 1.677153 / 6.876477 (-5.199324) | 1.716961 / 2.142072 (-0.425112) | 0.658355 / 4.805227 (-4.146873) | 0.117296 / 6.500664 (-6.383368) | 0.041139 / 0.075469 (-0.034330) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025220 / 1.841788 (-0.816567) | 14.510987 / 8.074308 (6.436679) | 11.851428 / 10.191392 (1.660036) | 0.143759 / 0.680424 (-0.536665) | 0.015644 / 0.534201 (-0.518557) | 0.296824 / 0.579283 (-0.282459) | 0.281566 / 0.434364 (-0.152798) | 0.335094 / 0.540337 (-0.205244) | 0.425199 / 1.386936 (-0.961737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fabc2c8cee8822572115893715b76dfdabac1631 \"CML watermark\")\n"
] | 2023-12-18T04:50:37 | 2024-01-26T18:22:35 | 2024-01-26T16:18:41 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6508",
"html_url": "https://github.com/huggingface/datasets/pull/6508",
"diff_url": "https://github.com/huggingface/datasets/pull/6508.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6508.patch",
"merged_at": "2024-01-26T16:18:41"
} | Let GeoParquet files with the file extension `*.geoparquet` or `*.gpq` be readable by the default parquet reader.
Those two file extensions are the ones most commonly used for GeoParquet files, and are the ones recognized by the `gpq` validator tool at https://github.com/planetlabs/gpq/blob/e5576b4ee7306b4d2259d56c879465a9364dab90/cmd/gpq/command/convert.go#L73-L75
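A minimal usage sketch (hypothetical path; with this change the `.gpq`/`.geoparquet` extensions should map to the parquet builder automatically):

```python
from datasets import load_dataset

# a local folder containing e.g. tiles.gpq; no explicit "parquet" builder needed
ds = load_dataset("path/to/folder_with_geoparquet_files", split="train")
# the geometry column is exposed as plain binary (WKB); geo metadata is not decoded
```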
Addresses https://github.com/huggingface/datasets/issues/6438 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6508/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6508/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6507/comments | https://api.github.com/repos/huggingface/datasets/issues/6507/events | https://github.com/huggingface/datasets/issues/6507 | 2,045,152,928 | I_kwDODunzps555o6g | 6,507 | where is glue_metric.py> @Frankie123421 what was the resolution to this? | {
"login": "Mcccccc1024",
"id": 119146162,
"node_id": "U_kgDOBxoGsg",
"avatar_url": "https://avatars.githubusercontent.com/u/119146162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mcccccc1024",
"html_url": "https://github.com/Mcccccc1024",
"followers_url": "https://api.github.com/users/Mcccccc1024/followers",
"following_url": "https://api.github.com/users/Mcccccc1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Mcccccc1024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mcccccc1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mcccccc1024/subscriptions",
"organizations_url": "https://api.github.com/users/Mcccccc1024/orgs",
"repos_url": "https://api.github.com/users/Mcccccc1024/repos",
"events_url": "https://api.github.com/users/Mcccccc1024/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mcccccc1024/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-12-17T09:58:25 | 2023-12-18T11:42:49 | 2023-12-18T11:42:49 | NONE | null | null | > @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
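A minimal sketch of that resolution (hypothetical local path to the metric script):

```python
from datasets import load_metric

# point load_metric at the metric script rather than the dataset script
metric = load_metric("path/to/glue_metric.py", "mrpc")
```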
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6507/timeline | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/6506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6506/comments | https://api.github.com/repos/huggingface/datasets/issues/6506/events | https://github.com/huggingface/datasets/issues/6506 | 2,044,975,038 | I_kwDODunzps5549e- | 6,506 | Incorrect test set labels for RTE and CoLA datasets via load_dataset | {
"login": "emreonal11",
"id": 73316684,
"node_id": "MDQ6VXNlcjczMzE2Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/73316684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emreonal11",
"html_url": "https://github.com/emreonal11",
"followers_url": "https://api.github.com/users/emreonal11/followers",
"following_url": "https://api.github.com/users/emreonal11/following{/other_user}",
"gists_url": "https://api.github.com/users/emreonal11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emreonal11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emreonal11/subscriptions",
"organizations_url": "https://api.github.com/users/emreonal11/orgs",
"repos_url": "https://api.github.com/users/emreonal11/repos",
"events_url": "https://api.github.com/users/emreonal11/events{/privacy}",
"received_events_url": "https://api.github.com/users/emreonal11/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As this is a specific issue of the \"glue\" dataset, I have transferred it to the dataset Discussion page: https://huggingface.co/datasets/glue/discussions/15\r\n\r\nLet's continue the discussion there!"
] | 2023-12-16T22:06:08 | 2023-12-21T09:57:57 | 2023-12-21T09:57:57 | NONE | null | null | ### Describe the bug
The test set labels for the RTE and CoLA datasets, when loaded via `datasets.load_dataset`, are all -1.
Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes?
### Steps to reproduce the bug
!pip install datasets
```python
from datasets import load_dataset
rte_data = load_dataset('glue', 'rte')
cola_data = load_dataset('glue', 'cola')
print(rte_data['test'][0:30]['label'])
print(cola_data['test'][0:30]['label'])
```
Output:
```
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
```
The non-label test data seems to be fine, e.g. `rte_data['test'][1]` is:
```python
{'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.",
 'sentence2': 'Authorities in Brazil hold 200 people as hostage.',
 'label': -1,
 'idx': 1}
```
Training and validation data are also fine, e.g. `rte_data['train'][0]` is:
```python
{'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.',
 'sentence2': 'Weapons of Mass Destruction Found in Iraq.',
 'label': 1,
 'idx': 0}
```
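If the -1s are indeed the standard placeholder for hidden GLUE test labels, a common workaround for local evaluation is to drop those rows; a minimal sketch (reusing `rte_data` from above):

```python
# sketch: keep only rows whose label was actually released
labeled_test = rte_data['test'].filter(lambda ex: ex['label'] != -1)
print(len(labeled_test))  # 0 here, since every test label is hidden
```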
### Expected behavior
Expected the labels to be binary 0/1 values; got all -1s instead.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6506/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6505/comments | https://api.github.com/repos/huggingface/datasets/issues/6505/events | https://github.com/huggingface/datasets/issues/6505 | 2,044,721,288 | I_kwDODunzps553_iI | 6,505 | Got stuck when I trying to load a dataset | {
"login": "yirenpingsheng",
"id": 18232551,
"node_id": "MDQ6VXNlcjE4MjMyNTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/18232551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yirenpingsheng",
"html_url": "https://github.com/yirenpingsheng",
"followers_url": "https://api.github.com/users/yirenpingsheng/followers",
"following_url": "https://api.github.com/users/yirenpingsheng/following{/other_user}",
"gists_url": "https://api.github.com/users/yirenpingsheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yirenpingsheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yirenpingsheng/subscriptions",
"organizations_url": "https://api.github.com/users/yirenpingsheng/orgs",
"repos_url": "https://api.github.com/users/yirenpingsheng/repos",
"events_url": "https://api.github.com/users/yirenpingsheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/yirenpingsheng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. \r\nMy problems are consistent with [issue #2618](https://github.com/huggingface/datasets/issues/2618). All the huggingface-related libraries I use are the latest versions.\r\n\r\n",
"> I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. My problems are consistent with [issue #2618](https://github.com/huggingface/datasets/issues/2618). All the huggingface-related libraries I use are the latest versions.\r\n\r\nhave you solved this issue yet? i met the same problem on server but everything works on laptop. I think maybe the filelock repo is contradictory with file system.",
"I am having the same issue on a computing cluster but this works on my laptop as well. I instead have this error:\r\n`/home/.conda/envs/py10/lib/python3.10/site-packages/filelock/_unix.py\", line 43, in _acquire\r\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\nOSError: [Errno 5] Input/output error`\r\n\r\nthe load_dataset command does not work on server for local or hosted hugging-face datasets, and I have tried for several files",
"Same here. Is there any solution?"
] | 2023-12-16T11:51:07 | 2024-02-08T21:52:05 | null | NONE | null | null | ### Describe the bug
Hello, everyone. I ran into a problem while trying to load a data file with the `load_dataset` method on a Debian 10 system. The data file is not very large: only 1.63 MB, with 600 records.
Here is my code:
```python
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```
I waited for 20 minutes and got no response. I could not cancel the command with Ctrl+C; I had to kill it with Ctrl+Z. I also tried a txt file, and it likewise hung for a long time.
I can load the same file successfully on my laptop (Windows 10, Python 3.8.5, datasets==2.14.5). It also works on another computer (Ubuntu 20.04.5 LTS, Python 3.10.13, datasets 2.14.7), where it only takes 1-2 minutes.
Could you give me some suggestions? Thank you.
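(For anyone hitting the same hang: a sketch of how one might surface where the load stalls, using the library's own logging. This is a debugging aid, not a known fix.)

```python
import datasets

datasets.logging.set_verbosity_debug()  # print detailed progress while loading
dataset = datasets.load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```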
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```
### Expected behavior
I hope it can load the file successfully.
### Environment info
OS: Debian GNU/Linux 10
Python: Python 3.10.13
Pip list:
Package Version
------------------------- ------------
accelerate 0.25.0
addict 2.4.0
aiofiles 23.2.1
aiohttp 3.9.1
aiosignal 1.3.1
aliyun-python-sdk-core 2.14.0
aliyun-python-sdk-kms 2.16.2
altair 5.2.0
annotated-types 0.6.0
anyio 3.7.1
async-timeout 4.0.3
attrs 23.1.0
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
contourpy 1.2.0
crcmod 1.7
cryptography 41.0.7
cycler 0.12.1
datasets 2.14.7
dill 0.3.7
docstring-parser 0.15
einops 0.7.0
exceptiongroup 1.2.0
fastapi 0.105.0
ffmpy 0.3.1
filelock 3.13.1
fonttools 4.46.0
frozenlist 1.4.1
fsspec 2023.10.0
gast 0.5.4
gradio 3.50.2
gradio_client 0.6.1
h11 0.14.0
httpcore 1.0.2
httpx 0.25.2
huggingface-hub 0.19.4
idna 3.6
importlib-metadata 7.0.0
importlib-resources 6.1.1
jieba 0.42.1
Jinja2 3.1.2
jmespath 0.10.0
joblib 1.3.2
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
kiwisolver 1.4.5
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.8.2
mdurl 0.1.2
modelscope 1.10.0
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
nltk 3.8.1
numpy 1.26.2
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
orjson 3.9.10
oss2 2.18.3
packaging 23.2
pandas 2.1.4
peft 0.7.1
Pillow 10.1.0
pip 23.3.1
platformdirs 4.1.0
protobuf 4.25.1
psutil 5.9.6
pyarrow 14.0.1
pyarrow-hotfix 0.6
pycparser 2.21
pycryptodome 3.19.0
pydantic 2.5.2
pydantic_core 2.14.5
pydub 0.25.1
Pygments 2.17.2
pyparsing 3.1.1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
referencing 0.32.0
regex 2023.10.3
requests 2.31.0
rich 13.7.0
rouge-chinese 1.0.3
rpds-py 0.13.2
safetensors 0.4.1
scipy 1.11.4
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.2.2
shtab 1.6.5
simplejson 3.19.2
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
sse-starlette 1.8.2
starlette 0.27.0
sympy 1.12
tiktoken 0.5.2
tokenizers 0.15.0
tomli 2.0.1
toolz 0.12.0
torch 2.1.2
tqdm 4.66.1
transformers 4.36.1
triton 2.1.0
trl 0.7.4
typing_extensions 4.9.0
tyro 0.6.0
tzdata 2023.3
urllib3 2.1.0
uvicorn 0.24.0.post1
websockets 11.0.3
wheel 0.41.2
xxhash 3.4.1
yapf 0.40.2
yarl 1.9.4
zipp 3.17.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6505/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6505/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6504/comments | https://api.github.com/repos/huggingface/datasets/issues/6504/events | https://github.com/huggingface/datasets/issues/6504 | 2,044,541,154 | I_kwDODunzps553Tji | 6,504 | Error Pushing to Hub | {
"login": "Jiayi-Pan",
"id": 55055083,
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiayi-Pan",
"html_url": "https://github.com/Jiayi-Pan",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-12-16T01:05:22 | 2023-12-16T06:20:53 | 2023-12-16T06:20:53 | NONE | null | null | ### Describe the bug
Error when trying to push a dataset with an `Array2D` feature to the Hub.
### Steps to reproduce the bug
```
import datasets
from datasets import Dataset
dataset_dict = {
"filename": ["apple", "banana"],
"token": [[[1,2],[3,4]],[[1,2],[3,4]]],
"label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = dataset.cast_column("token", datasets.Array2D(shape=(2, 2), dtype="int16"))
dataset.push_to_hub("SequenceModel/imagenet_val_256")
```
Error:
```
...
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
in "<unicode string>", line 8, column 16:
shape: !!python/tuple
^
```
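For what it's worth, the failure looks reproducible with PyYAML alone (my assumption about the root cause: the tuple `shape` of `Array2D` ends up serialized into the README metadata with `yaml.dump`, which `yaml.safe_load` then refuses to reconstruct):

```python
import yaml

text = yaml.dump({"shape": (2, 2)})
print(text)  # shape: !!python/tuple ...
yaml.safe_load(text)  # raises the same ConstructorError on the python/tuple tag
```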
### Expected behavior
Dataset being pushed to hub
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6504/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6503/comments | https://api.github.com/repos/huggingface/datasets/issues/6503/events | https://github.com/huggingface/datasets/pull/6503 | 2,043,847,591 | PR_kwDODunzps5iHgZf | 6,503 | Fix streaming xnli | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005003 / 0.011353 (-0.006350) | 0.003020 / 0.011008 (-0.007988) | 0.061370 / 0.038508 (0.022862) | 0.050996 / 0.023109 (0.027887) | 0.243434 / 0.275898 (-0.032464) | 0.266317 / 0.323480 (-0.057163) | 0.003888 / 0.007986 (-0.004098) | 0.002607 / 0.004328 (-0.001721) | 0.047541 / 0.004250 (0.043290) | 0.037933 / 0.037052 (0.000881) | 0.259695 / 0.258489 (0.001206) | 0.279374 / 0.293841 (-0.014467) | 0.027258 / 0.128546 (-0.101288) | 0.010184 / 0.075646 (-0.065462) | 0.207412 / 0.419271 (-0.211860) | 0.034978 / 0.043533 (-0.008554) | 0.247871 / 0.255139 (-0.007267) | 0.265273 / 0.283200 (-0.017927) | 0.017886 / 0.141683 (-0.123796) | 1.090451 / 1.452155 (-0.361704) | 1.152034 / 1.492716 (-0.340682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094383 / 0.018006 (0.076377) | 0.301151 / 0.000490 (0.300661) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018927 / 0.037411 (-0.018484) | 0.062152 / 0.014526 (0.047626) | 0.072177 / 0.176557 (-0.104380) | 0.119792 / 0.737135 (-0.617343) | 0.073333 / 0.296338 (-0.223005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282671 / 0.215209 (0.067462) | 2.721148 / 2.077655 (0.643494) | 1.472689 / 1.504120 (-0.031431) | 1.355226 / 1.541195 (-0.185969) | 1.375935 / 
1.468490 (-0.092556) | 0.562600 / 4.584777 (-4.022177) | 2.364046 / 3.745712 (-1.381666) | 2.714984 / 5.269862 (-2.554878) | 1.738413 / 4.565676 (-2.827263) | 0.062564 / 0.424275 (-0.361711) | 0.004964 / 0.007607 (-0.002643) | 0.341300 / 0.226044 (0.115255) | 3.345187 / 2.268929 (1.076259) | 1.857822 / 55.444624 (-53.586803) | 1.581002 / 6.876477 (-5.295475) | 1.585919 / 2.142072 (-0.556153) | 0.640105 / 4.805227 (-4.165122) | 0.117880 / 6.500664 (-6.382784) | 0.042032 / 0.075469 (-0.033437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962701 / 1.841788 (-0.879086) | 11.309251 / 8.074308 (3.234943) | 10.462520 / 10.191392 (0.271128) | 0.127399 / 0.680424 (-0.553025) | 0.014549 / 0.534201 (-0.519652) | 0.297017 / 0.579283 (-0.282266) | 0.266152 / 0.434364 (-0.168212) | 0.349252 / 0.540337 (-0.191085) | 0.457015 / 1.386936 (-0.929921) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005341 / 0.011353 (-0.006012) | 0.003108 / 0.011008 (-0.007900) | 0.048862 / 0.038508 (0.010353) | 0.053354 / 0.023109 (0.030245) | 0.274499 / 0.275898 (-0.001399) | 0.296698 / 0.323480 (-0.026782) | 0.003974 / 0.007986 (-0.004012) | 0.002631 / 0.004328 (-0.001697) | 0.048013 / 0.004250 (0.043762) | 0.040416 / 0.037052 (0.003363) | 0.276581 / 0.258489 (0.018092) | 0.301296 / 0.293841 (0.007455) | 0.029049 / 0.128546 (-0.099497) | 0.010253 / 0.075646 (-0.065393) | 0.057157 / 0.419271 (-0.362114) | 0.031830 / 0.043533 (-0.011703) | 0.274341 / 0.255139 (0.019202) | 0.292583 / 0.283200 (0.009383) | 0.018449 / 0.141683 (-0.123234) | 1.145099 / 1.452155 (-0.307055) | 1.192958 / 1.492716 (-0.299758) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091596 / 0.018006 (0.073590) | 0.300917 / 0.000490 (0.300427) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021657 / 0.037411 (-0.015754) | 0.068464 / 0.014526 (0.053938) | 0.079869 / 0.176557 (-0.096687) | 0.117523 / 0.737135 (-0.619613) | 0.081257 / 0.296338 (-0.215082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294876 / 0.215209 (0.079667) | 2.879372 / 2.077655 (0.801718) | 1.619887 / 1.504120 (0.115767) | 1.482154 / 1.541195 (-0.059041) | 1.494656 / 1.468490 (0.026166) | 0.558914 / 4.584777 (-4.025862) | 2.420948 / 3.745712 (-1.324765) | 2.728992 / 5.269862 (-2.540869) | 1.722135 / 4.565676 (-2.843542) | 0.062182 / 0.424275 (-0.362093) | 0.004933 / 0.007607 (-0.002674) | 0.342759 / 0.226044 (0.116715) | 3.424083 / 2.268929 (1.155154) | 1.950673 / 55.444624 (-53.493951) | 1.683126 / 6.876477 (-5.193351) | 1.673135 / 2.142072 (-0.468937) | 0.633711 / 4.805227 (-4.171516) | 0.114898 / 6.500664 (-6.385766) | 0.040332 / 0.075469 (-0.035137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975102 / 1.841788 (-0.866685) | 11.975731 / 8.074308 (3.901423) | 10.961103 / 10.191392 (0.769711) | 0.131152 / 0.680424 (-0.549272) | 0.016268 / 0.534201 (-0.517933) | 0.285031 / 0.579283 (-0.294252) | 0.279556 / 0.434364 (-0.154808) | 0.324183 / 0.540337 (-0.216154) | 0.571404 / 1.386936 (-0.815532) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f67312956fc15572b6a0ca0dfcc0ceb90fbb794 \"CML watermark\")\n"
] | 2023-12-15T14:40:57 | 2023-12-15T14:51:06 | 2023-12-15T14:44:47 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6503",
"html_url": "https://github.com/huggingface/datasets/pull/6503",
"diff_url": "https://github.com/huggingface/datasets/pull/6503.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6503.patch",
"merged_at": "2023-12-15T14:44:46"
} | This code was failing
```python
In [1]: from datasets import load_dataset
In [2]:
...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
...:
...: sample_data = next(iter(ds))["premise"] # pick up one data
...: input_text = list(sample_data.values())
```
```
File ~/hf/datasets/src/datasets/features/translation.py:104, in TranslationVariableLanguages.encode_example(self, translation_dict)
102 return translation_dict
103 elif self.languages and set(translation_dict) - lang_set:
--> 104 raise ValueError(
105 f'Some languages in example ({", ".join(sorted(set(translation_dict) - lang_set))}) are not in valid set ({", ".join(lang_set)}).'
106 )
108 # Convert dictionary into tuples, splitting out cases where there are
109 # multiple translations for a single language.
110 translation_tuples = []
ValueError: Some languages in example (language, translation) are not in valid set (ur, fr, hi, sw, vi, el, de, th, en, tr, zh, ar, bg, ru, es).
```
because in streaming mode we expect features' encode methods to be no-ops if the example is already encoded.
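A sketch of that idempotent behavior (my reading of the intended fix, not the exact diff):

```python
# hedged sketch: an already-encoded example is passed through untouched
def encode_example(translation_dict, languages):
    if set(translation_dict) == {"language", "translation"}:
        return translation_dict  # already encoded (e.g. in streaming mode): no-op
    lang_set = set(languages)
    if languages and set(translation_dict) - lang_set:
        raise ValueError("some languages in the example are not in the valid set")
    # ... tuple-splitting / encoding of the raw {lang: text} dict follows
```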
I fixed `TranslationVariableLanguages` to account for that | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6503/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6502/comments | https://api.github.com/repos/huggingface/datasets/issues/6502/events | https://github.com/huggingface/datasets/pull/6502 | 2,043,771,731 | PR_kwDODunzps5iHPt- | 6,502 | Pickle support for `torch.Generator` objects | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003715 / 0.011008 (-0.007293) | 0.063257 / 0.038508 (0.024749) | 0.060683 / 0.023109 (0.037574) | 0.250885 / 0.275898 (-0.025013) | 0.271685 / 0.323480 (-0.051795) | 0.003051 / 0.007986 (-0.004934) | 0.002799 / 0.004328 (-0.001530) | 0.049113 / 0.004250 (0.044863) | 0.038965 / 0.037052 (0.001912) | 0.252688 / 0.258489 (-0.005801) | 0.282536 / 0.293841 (-0.011305) | 0.028722 / 0.128546 (-0.099824) | 0.010586 / 0.075646 (-0.065060) | 0.205145 / 0.419271 (-0.214127) | 0.036996 / 0.043533 (-0.006537) | 0.248874 / 0.255139 (-0.006265) | 0.266148 / 0.283200 (-0.017051) | 0.018540 / 0.141683 (-0.123143) | 1.120216 / 1.452155 (-0.331938) | 1.191072 / 1.492716 (-0.301644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095721 / 0.018006 (0.077714) | 0.313401 / 0.000490 (0.312911) | 0.000234 / 0.000200 (0.000034) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018604 / 0.037411 (-0.018807) | 0.061571 / 0.014526 (0.047045) | 0.075343 / 0.176557 (-0.101213) | 0.121272 / 0.737135 (-0.615864) | 0.076448 / 0.296338 (-0.219890) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286885 / 0.215209 (0.071676) | 2.809100 / 2.077655 (0.731445) | 1.485365 / 1.504120 (-0.018755) | 1.367672 / 1.541195 (-0.173523) | 1.423570 / 
1.468490 (-0.044920) | 0.571063 / 4.584777 (-4.013714) | 2.385248 / 3.745712 (-1.360464) | 2.855251 / 5.269862 (-2.414610) | 1.799371 / 4.565676 (-2.766306) | 0.063491 / 0.424275 (-0.360784) | 0.004942 / 0.007607 (-0.002665) | 0.346181 / 0.226044 (0.120137) | 3.388123 / 2.268929 (1.119195) | 1.819093 / 55.444624 (-53.625532) | 1.552998 / 6.876477 (-5.323479) | 1.627930 / 2.142072 (-0.514143) | 0.653438 / 4.805227 (-4.151789) | 0.123831 / 6.500664 (-6.376833) | 0.043340 / 0.075469 (-0.032129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952167 / 1.841788 (-0.889621) | 12.149515 / 8.074308 (4.075207) | 10.665085 / 10.191392 (0.473693) | 0.127768 / 0.680424 (-0.552656) | 0.014022 / 0.534201 (-0.520179) | 0.285959 / 0.579283 (-0.293324) | 0.269727 / 0.434364 (-0.164637) | 0.336646 / 0.540337 (-0.203692) | 0.442932 / 1.386936 (-0.944005) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005351 / 0.011353 (-0.006002) | 0.003561 / 0.011008 (-0.007448) | 0.048890 / 0.038508 (0.010382) | 0.054093 / 0.023109 (0.030984) | 0.274397 / 0.275898 (-0.001501) | 0.296980 / 0.323480 (-0.026500) | 0.004126 / 0.007986 (-0.003860) | 0.002751 / 0.004328 (-0.001578) | 0.049131 / 0.004250 (0.044880) | 0.040769 / 0.037052 (0.003716) | 0.279147 / 0.258489 (0.020658) | 0.302014 / 0.293841 (0.008173) | 0.029847 / 0.128546 (-0.098699) | 0.010710 / 0.075646 (-0.064936) | 0.057626 / 0.419271 (-0.361645) | 0.032801 / 0.043533 (-0.010732) | 0.272698 / 0.255139 (0.017559) | 0.289238 / 0.283200 (0.006039) | 0.017876 / 0.141683 (-0.123807) | 1.152059 / 1.452155 (-0.300096) | 1.212289 / 1.492716 (-0.280427) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092914 / 0.018006 (0.074908) | 0.303092 / 0.000490 (0.302603) | 0.000214 / 0.000200 (0.000014) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022074 / 0.037411 (-0.015337) | 0.070109 / 0.014526 (0.055583) | 0.083360 / 0.176557 (-0.093196) | 0.122445 / 0.737135 (-0.614690) | 0.083625 / 0.296338 (-0.212714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282788 / 0.215209 (0.067579) | 2.789229 / 2.077655 (0.711574) | 1.571077 / 1.504120 (0.066957) | 1.452627 / 1.541195 (-0.088567) | 1.493176 / 1.468490 (0.024686) | 0.556892 / 4.584777 (-4.027885) | 2.442771 / 3.745712 (-1.302941) | 2.826316 / 5.269862 (-2.443545) | 1.758276 / 4.565676 (-2.807401) | 0.063039 / 0.424275 (-0.361236) | 0.004928 / 0.007607 (-0.002679) | 0.338247 / 0.226044 (0.112202) | 3.346344 / 2.268929 (1.077416) | 1.952520 / 55.444624 (-53.492104) | 1.664520 / 6.876477 (-5.211956) | 1.701528 / 2.142072 (-0.440544) | 0.634746 / 4.805227 (-4.170481) | 0.116879 / 6.500664 (-6.383786) | 0.040990 / 0.075469 (-0.034479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969521 / 1.841788 (-0.872267) | 12.431395 / 8.074308 (4.357087) | 10.907503 / 10.191392 (0.716111) | 0.131028 / 0.680424 (-0.549396) | 0.015239 / 0.534201 (-0.518962) | 0.290793 / 0.579283 (-0.288490) | 0.275072 / 0.434364 (-0.159292) | 0.331036 / 0.540337 (-0.209301) | 0.567858 / 1.386936 (-0.819078) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#092118fc00f7dd718ab3643739d7b23ff16c9eff \"CML watermark\")\n"
] | 2023-12-15T13:55:12 | 2023-12-15T15:04:33 | 2023-12-15T14:58:22 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6502",
"html_url": "https://github.com/huggingface/datasets/pull/6502",
"diff_url": "https://github.com/huggingface/datasets/pull/6502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6502.patch",
"merged_at": "2023-12-15T14:58:22"
} | Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6502/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6501/comments | https://api.github.com/repos/huggingface/datasets/issues/6501/events | https://github.com/huggingface/datasets/issues/6501 | 2,043,377,240 | I_kwDODunzps55y3ZY | 6,501 | OverflowError: value too large to convert to int32_t | {
"login": "zhangfan-algo",
"id": 47747764,
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangfan-algo",
"html_url": "https://github.com/zhangfan-algo",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-15T10:10:21 | 2023-12-15T10:10:21 | null | NONE | null | null | ### Describe the bug
![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630)
### Steps to reproduce the bug
Just loading the dataset triggers the error.
### Expected behavior
How can I fix it?
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6501/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6500/comments | https://api.github.com/repos/huggingface/datasets/issues/6500/events | https://github.com/huggingface/datasets/pull/6500 | 2,043,258,633 | PR_kwDODunzps5iFc6e | 6,500 | Enable setting config as default when push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6500). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This is ready for review @huggingface/datasets. ",
"Also what if the config is being overwritten and it was the default config and the user doesn't pass `set_default` ?\r\nI'd expect the config to keep being the default one but lmk what you think",
"How can you unset a config as the default one? In the case you mentioned, I would expect the config not being the default one.",
"Maybe by passing `set_default=False` ? (set_default can be None by default)",
"I think that way we are unnecessarily complicating the logic of `push_to_hub` and as I told you, I would expect the contrary: the result of calling `push_to_hub` with a determined set of arguments should always be the same, independently of previous calls and the current state of the config on the Hub. Push to hub should be somehow stateless in that sense, and IMO the user expects that the push overwrites previous config if already present on the Hub. I find very confusing making it to partially update the config on the Hub.",
"That makes sense, having it stateless is simpler and no need to do something too fancy indeed",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005329 / 0.011353 (-0.006024) | 0.002998 / 0.011008 (-0.008010) | 0.063756 / 0.038508 (0.025248) | 0.051713 / 0.023109 (0.028603) | 0.248135 / 0.275898 (-0.027763) | 0.269136 / 0.323480 (-0.054344) | 0.002970 / 0.007986 (-0.005015) | 0.002566 / 0.004328 (-0.001763) | 0.048110 / 0.004250 (0.043859) | 0.038415 / 0.037052 (0.001363) | 0.254012 / 0.258489 (-0.004477) | 0.281915 / 0.293841 (-0.011926) | 0.027503 / 0.128546 (-0.101043) | 0.010370 / 0.075646 (-0.065276) | 0.208965 / 0.419271 (-0.210306) | 0.035508 / 0.043533 (-0.008024) | 0.249116 / 0.255139 (-0.006023) | 0.266350 / 0.283200 (-0.016850) | 0.018440 / 0.141683 (-0.123243) | 1.101089 / 1.452155 (-0.351066) | 1.164870 / 1.492716 (-0.327847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090909 / 0.018006 (0.072903) | 0.298041 / 0.000490 (0.297551) | 0.000211 / 0.000200 (0.000012) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018137 / 0.037411 (-0.019275) | 0.059574 / 0.014526 (0.045048) | 0.071754 / 0.176557 (-0.104803) | 0.117980 / 0.737135 (-0.619155) | 0.072903 / 0.296338 (-0.223435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282844 / 0.215209 (0.067635) | 2.740916 / 2.077655 (0.663261) | 1.444546 / 1.504120 (-0.059574) | 1.321904 / 1.541195 (-0.219291) | 1.356957 / 
1.468490 (-0.111533) | 0.568389 / 4.584777 (-4.016388) | 2.354042 / 3.745712 (-1.391671) | 2.719427 / 5.269862 (-2.550435) | 1.719616 / 4.565676 (-2.846061) | 0.062537 / 0.424275 (-0.361738) | 0.004915 / 0.007607 (-0.002692) | 0.334716 / 0.226044 (0.108672) | 3.299499 / 2.268929 (1.030571) | 1.814629 / 55.444624 (-53.629996) | 1.515245 / 6.876477 (-5.361232) | 1.553085 / 2.142072 (-0.588987) | 0.643859 / 4.805227 (-4.161368) | 0.116650 / 6.500664 (-6.384014) | 0.041432 / 0.075469 (-0.034037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948227 / 1.841788 (-0.893561) | 11.331103 / 8.074308 (3.256795) | 10.209658 / 10.191392 (0.018266) | 0.126721 / 0.680424 (-0.553703) | 0.013638 / 0.534201 (-0.520563) | 0.282540 / 0.579283 (-0.296743) | 0.262635 / 0.434364 (-0.171729) | 0.335357 / 0.540337 (-0.204981) | 0.441798 / 1.386936 (-0.945138) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005200 / 0.011353 (-0.006153) | 0.003012 / 0.011008 (-0.007996) | 0.047571 / 0.038508 (0.009063) | 0.055069 / 0.023109 (0.031959) | 0.271150 / 0.275898 (-0.004748) | 0.294957 / 0.323480 (-0.028523) | 0.003922 / 0.007986 (-0.004064) | 0.002627 / 0.004328 (-0.001702) | 0.047777 / 0.004250 (0.043527) | 0.039507 / 0.037052 (0.002454) | 0.276314 / 0.258489 (0.017825) | 0.300436 / 0.293841 (0.006595) | 0.028951 / 0.128546 (-0.099595) | 0.010583 / 0.075646 (-0.065063) | 0.056535 / 0.419271 (-0.362737) | 0.032654 / 0.043533 (-0.010879) | 0.272945 / 0.255139 (0.017806) | 0.291909 / 0.283200 (0.008709) | 0.017545 / 0.141683 (-0.124138) | 1.195897 / 1.452155 (-0.256258) | 1.171855 / 1.492716 (-0.320861) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091919 / 0.018006 (0.073913) | 0.299297 / 0.000490 (0.298807) | 0.000225 / 0.000200 (0.000025) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022271 / 0.037411 (-0.015140) | 0.068903 / 0.014526 (0.054377) | 0.083767 / 0.176557 (-0.092790) | 0.120239 / 0.737135 (-0.616896) | 0.083448 / 0.296338 (-0.212891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295353 / 0.215209 (0.080144) | 2.911452 / 2.077655 (0.833798) | 1.577941 / 1.504120 (0.073821) | 1.454514 / 1.541195 (-0.086681) | 1.459575 / 1.468490 (-0.008915) | 0.572475 / 4.584777 (-4.012302) | 2.443634 / 3.745712 (-1.302078) | 2.801171 / 5.269862 (-2.468691) | 1.724214 / 4.565676 (-2.841462) | 0.063539 / 0.424275 (-0.360736) | 0.004939 / 0.007607 (-0.002668) | 0.347705 / 0.226044 (0.121660) | 3.489591 / 2.268929 (1.220663) | 1.944952 / 55.444624 (-53.499672) | 1.652810 / 6.876477 (-5.223667) | 1.656361 / 2.142072 (-0.485712) | 0.647052 / 4.805227 (-4.158176) | 0.117286 / 6.500664 (-6.383379) | 0.040979 / 0.075469 (-0.034490) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971761 / 1.841788 (-0.870027) | 11.770547 / 8.074308 (3.696239) | 10.402502 / 10.191392 (0.211110) | 0.128280 / 0.680424 (-0.552144) | 0.015160 / 0.534201 (-0.519041) | 0.286706 / 0.579283 (-0.292578) | 0.274539 / 0.434364 (-0.159825) | 0.324591 / 0.540337 (-0.215747) | 0.573846 / 1.386936 (-0.813090) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3329be80b9abfe83285ef940a590a4e9f68835a3 \"CML watermark\")\n"
] | 2023-12-15T09:17:41 | 2023-12-18T11:56:11 | 2023-12-18T11:50:03 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6500",
"html_url": "https://github.com/huggingface/datasets/pull/6500",
"diff_url": "https://github.com/huggingface/datasets/pull/6500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6500.patch",
"merged_at": "2023-12-18T11:50:03"
} | Fix #6497. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6500/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6500/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6499/comments | https://api.github.com/repos/huggingface/datasets/issues/6499/events | https://github.com/huggingface/datasets/pull/6499 | 2,043,166,976 | PR_kwDODunzps5iFIUF | 6,499 | docs: add reference Git over SSH | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6499). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003546 / 0.011008 (-0.007463) | 0.063335 / 0.038508 (0.024827) | 0.051987 / 0.023109 (0.028878) | 0.240429 / 0.275898 (-0.035469) | 0.260659 / 0.323480 (-0.062820) | 0.003866 / 0.007986 (-0.004120) | 0.002617 / 0.004328 (-0.001712) | 0.048653 / 0.004250 (0.044403) | 0.038176 / 0.037052 (0.001124) | 0.245496 / 0.258489 (-0.012993) | 0.277141 / 0.293841 (-0.016700) | 0.027886 / 0.128546 (-0.100660) | 0.010738 / 0.075646 (-0.064908) | 0.211255 / 0.419271 (-0.208016) | 0.045205 / 0.043533 (0.001672) | 0.243062 / 0.255139 (-0.012077) | 0.262877 / 0.283200 (-0.020323) | 0.023426 / 0.141683 (-0.118257) | 1.092247 / 1.452155 (-0.359908) | 1.161074 / 1.492716 (-0.331642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090488 / 0.018006 (0.072482) | 0.300993 / 0.000490 (0.300504) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018543 / 0.037411 (-0.018868) | 0.061418 / 0.014526 (0.046892) | 0.073242 / 0.176557 (-0.103314) | 0.120757 / 0.737135 (-0.616378) | 0.073967 / 0.296338 (-0.222372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282341 / 0.215209 (0.067132) | 2.741106 / 2.077655 (0.663451) | 1.416573 / 1.504120 (-0.087547) | 1.287904 / 1.541195 (-0.253291) | 1.309425 / 
1.468490 (-0.159065) | 0.582592 / 4.584777 (-4.002184) | 2.404866 / 3.745712 (-1.340846) | 2.895397 / 5.269862 (-2.374464) | 1.799864 / 4.565676 (-2.765812) | 0.064386 / 0.424275 (-0.359889) | 0.004920 / 0.007607 (-0.002687) | 0.330879 / 0.226044 (0.104835) | 3.287064 / 2.268929 (1.018135) | 1.765169 / 55.444624 (-53.679456) | 1.490442 / 6.876477 (-5.386034) | 1.530960 / 2.142072 (-0.611113) | 0.655939 / 4.805227 (-4.149288) | 0.118529 / 6.500664 (-6.382135) | 0.042350 / 0.075469 (-0.033119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959027 / 1.841788 (-0.882761) | 11.911284 / 8.074308 (3.836976) | 10.576898 / 10.191392 (0.385506) | 0.141038 / 0.680424 (-0.539386) | 0.014184 / 0.534201 (-0.520017) | 0.305335 / 0.579283 (-0.273948) | 0.267531 / 0.434364 (-0.166832) | 0.353362 / 0.540337 (-0.186975) | 0.466258 / 1.386936 (-0.920678) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005194 / 0.011353 (-0.006159) | 0.003561 / 0.011008 (-0.007448) | 0.049181 / 0.038508 (0.010673) | 0.056664 / 0.023109 (0.033555) | 0.267142 / 0.275898 (-0.008756) | 0.291871 / 0.323480 (-0.031609) | 0.003996 / 0.007986 (-0.003990) | 0.003147 / 0.004328 (-0.001181) | 0.048527 / 0.004250 (0.044276) | 0.040239 / 0.037052 (0.003187) | 0.269728 / 0.258489 (0.011239) | 0.295531 / 0.293841 (0.001690) | 0.030316 / 0.128546 (-0.098231) | 0.010666 / 0.075646 (-0.064981) | 0.058176 / 0.419271 (-0.361095) | 0.033218 / 0.043533 (-0.010315) | 0.265383 / 0.255139 (0.010244) | 0.285102 / 0.283200 (0.001902) | 0.018295 / 0.141683 (-0.123388) | 1.117830 / 1.452155 (-0.334325) | 1.196919 / 1.492716 (-0.295798) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088547 / 0.018006 (0.070541) | 0.293220 / 0.000490 (0.292730) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022060 / 0.037411 (-0.015351) | 0.071973 / 0.014526 (0.057448) | 0.081721 / 0.176557 (-0.094836) | 0.119990 / 0.737135 (-0.617145) | 0.081639 / 0.296338 (-0.214700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293712 / 0.215209 (0.078503) | 2.872986 / 2.077655 (0.795331) | 1.568944 / 1.504120 (0.064824) | 1.434555 / 1.541195 (-0.106639) | 1.457747 / 1.468490 (-0.010743) | 0.559296 / 4.584777 (-4.025481) | 2.471845 / 3.745712 (-1.273867) | 2.840916 / 5.269862 (-2.428946) | 1.754909 / 4.565676 (-2.810768) | 0.064585 / 0.424275 (-0.359690) | 0.004992 / 0.007607 (-0.002615) | 0.349149 / 0.226044 (0.123104) | 3.385906 / 2.268929 (1.116977) | 1.940644 / 55.444624 (-53.503980) | 1.638300 / 6.876477 (-5.238177) | 1.649939 / 2.142072 (-0.492133) | 0.645680 / 4.805227 (-4.159547) | 0.118080 / 6.500664 (-6.382584) | 0.040643 / 0.075469 (-0.034826) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969965 / 1.841788 (-0.871822) | 12.099766 / 8.074308 (4.025457) | 10.550650 / 10.191392 (0.359258) | 0.131736 / 0.680424 (-0.548688) | 0.015483 / 0.534201 (-0.518718) | 0.289231 / 0.579283 (-0.290052) | 0.287505 / 0.434364 (-0.146858) | 0.327326 / 0.540337 (-0.213011) | 0.570364 / 1.386936 (-0.816572) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#533c38cef16111e9e8154eeb76c207f1f4936ddf \"CML watermark\")\n"
] | 2023-12-15T08:38:31 | 2023-12-15T11:48:47 | 2023-12-15T11:42:38 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6499",
"html_url": "https://github.com/huggingface/datasets/pull/6499",
"diff_url": "https://github.com/huggingface/datasets/pull/6499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6499.patch",
"merged_at": "2023-12-15T11:42:38"
} | see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6499/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6498/comments | https://api.github.com/repos/huggingface/datasets/issues/6498/events | https://github.com/huggingface/datasets/pull/6498 | 2,042,075,969 | PR_kwDODunzps5iBcFj | 6,498 | Fallback on dataset script if user wants to load default config | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6498). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> I was just thinking: what if the user does not pass a config name and the dataset has only a config with a name different from \"default\"?\r\n\r\nYou mean if there is a DEFAULT_CONFIG_NAME defined in the script but the dataset only has one configuration ? We can't easily get the number of configs without running the python code so I don't think we can support detect this case\r\n",
"Most datasets with a script don't define DEFAULT_CONFIG_NAME if there is only one configuration anyway.\r\n\r\nSo there is no issue e.g. for `squad`",
"> I was trying to mean the case where DEFAULT_CONFIG_NAME is None but there is only a single config in BUILDER_CONFIGS, with a name different from \"default\".\r\n\r\nIn this case we can detect if \"DEFAULT_CONFIG_NAME\" is not mentioned and use the Parquet export. If it is mentioned (and maybe it is set to None or to the single config) I consider that it may have multiple configs and fall back on using the script",
"... but the user does not pass the config name.",
"In this case we load the single configuration (this is how a DatasetBuilder works)",
"see \r\n\r\nhttps://github.com/huggingface/datasets/blob/2feaa589de86dd85941301fc8c3fa091731a67c0/src/datasets/builder.py#L532-L532",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005122 / 0.011353 (-0.006231) | 0.003565 / 0.011008 (-0.007443) | 0.062706 / 0.038508 (0.024198) | 0.049314 / 0.023109 (0.026205) | 0.247325 / 0.275898 (-0.028573) | 0.269788 / 0.323480 (-0.053692) | 0.003895 / 0.007986 (-0.004090) | 0.002788 / 0.004328 (-0.001540) | 0.048615 / 0.004250 (0.044365) | 0.037591 / 0.037052 (0.000539) | 0.253495 / 0.258489 (-0.004994) | 0.281200 / 0.293841 (-0.012641) | 0.027712 / 0.128546 (-0.100834) | 0.010901 / 0.075646 (-0.064745) | 0.205577 / 0.419271 (-0.213694) | 0.035989 / 0.043533 (-0.007544) | 0.252978 / 0.255139 (-0.002161) | 0.268042 / 0.283200 (-0.015157) | 0.017857 / 0.141683 (-0.123826) | 1.096633 / 1.452155 (-0.355521) | 1.147026 / 1.492716 (-0.345691) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095609 / 0.018006 (0.077603) | 0.311941 / 0.000490 (0.311451) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019042 / 0.037411 (-0.018369) | 0.060549 / 0.014526 (0.046023) | 0.074761 / 0.176557 (-0.101796) | 0.121729 / 0.737135 (-0.615406) | 0.075661 / 0.296338 (-0.220677) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284774 / 0.215209 (0.069565) | 2.764576 / 2.077655 (0.686921) | 1.489926 / 1.504120 (-0.014194) | 1.387276 / 1.541195 (-0.153919) | 1.400931 / 
1.468490 (-0.067559) | 0.555623 / 4.584777 (-4.029154) | 2.409488 / 3.745712 (-1.336224) | 2.781053 / 5.269862 (-2.488808) | 1.750472 / 4.565676 (-2.815204) | 0.062232 / 0.424275 (-0.362043) | 0.004974 / 0.007607 (-0.002633) | 0.336324 / 0.226044 (0.110280) | 3.286619 / 2.268929 (1.017691) | 1.825070 / 55.444624 (-53.619554) | 1.537993 / 6.876477 (-5.338484) | 1.586520 / 2.142072 (-0.555553) | 0.640090 / 4.805227 (-4.165138) | 0.117637 / 6.500664 (-6.383027) | 0.042318 / 0.075469 (-0.033151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964051 / 1.841788 (-0.877736) | 11.706259 / 8.074308 (3.631951) | 10.752311 / 10.191392 (0.560919) | 0.128117 / 0.680424 (-0.552307) | 0.014001 / 0.534201 (-0.520200) | 0.286255 / 0.579283 (-0.293028) | 0.263810 / 0.434364 (-0.170554) | 0.329347 / 0.540337 (-0.210991) | 0.437349 / 1.386936 (-0.949587) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003586 / 0.011008 (-0.007422) | 0.049339 / 0.038508 (0.010831) | 0.051287 / 0.023109 (0.028178) | 0.274397 / 0.275898 (-0.001501) | 0.292977 / 0.323480 (-0.030503) | 0.004029 / 0.007986 (-0.003957) | 0.002727 / 0.004328 (-0.001602) | 0.048779 / 0.004250 (0.044528) | 0.040075 / 0.037052 (0.003022) | 0.277676 / 0.258489 (0.019187) | 0.301963 / 0.293841 (0.008122) | 0.029340 / 0.128546 (-0.099206) | 0.010714 / 0.075646 (-0.064932) | 0.057253 / 0.419271 (-0.362018) | 0.033426 / 0.043533 (-0.010107) | 0.276673 / 0.255139 (0.021534) | 0.291053 / 0.283200 (0.007854) | 0.017660 / 0.141683 (-0.124023) | 1.122354 / 1.452155 (-0.329800) | 1.180381 / 1.492716 (-0.312335) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091903 / 0.018006 (0.073897) | 0.300720 / 0.000490 (0.300231) | 0.000288 / 0.000200 (0.000088) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021521 / 0.037411 (-0.015890) | 0.068233 / 0.014526 (0.053707) | 0.081245 / 0.176557 (-0.095312) | 0.119996 / 0.737135 (-0.617139) | 0.082483 / 0.296338 (-0.213856) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302776 / 0.215209 (0.087567) | 2.950776 / 2.077655 (0.873122) | 1.631032 / 1.504120 (0.126912) | 1.502021 / 1.541195 (-0.039174) | 1.514213 / 1.468490 (0.045723) | 0.578246 / 4.584777 (-4.006531) | 2.443768 / 3.745712 (-1.301944) | 2.827811 / 5.269862 (-2.442051) | 1.771529 / 4.565676 (-2.794148) | 0.064479 / 0.424275 (-0.359797) | 0.005061 / 0.007607 (-0.002546) | 0.350966 / 0.226044 (0.124922) | 3.458616 / 2.268929 (1.189687) | 1.967917 / 55.444624 (-53.476707) | 1.704661 / 6.876477 (-5.171815) | 1.698895 / 2.142072 (-0.443178) | 0.663259 / 4.805227 (-4.141968) | 0.122140 / 6.500664 (-6.378525) | 0.041099 / 0.075469 (-0.034371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972080 / 1.841788 (-0.869708) | 12.123286 / 8.074308 (4.048978) | 10.819854 / 10.191392 (0.628462) | 0.131486 / 0.680424 (-0.548938) | 0.015785 / 0.534201 (-0.518416) | 0.290048 / 0.579283 (-0.289235) | 0.277822 / 0.434364 (-0.156542) | 0.325949 / 0.540337 (-0.214388) | 0.577681 / 1.386936 (-0.809255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#30f6a2d9af183eba4501f0b8d90e9200bdca6bb1 \"CML watermark\")\n"
] | 2023-12-14T16:46:01 | 2023-12-15T13:16:56 | 2023-12-15T13:10:48 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6498",
"html_url": "https://github.com/huggingface/datasets/pull/6498",
"diff_url": "https://github.com/huggingface/datasets/pull/6498.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6498.patch",
"merged_at": "2023-12-15T13:10:48"
} | Right now this code is failing on `main`:
```python
load_dataset("openbookqa")
```
This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one.
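The fix below boils down to a choice along these lines (an illustrative sketch with made-up names, not the actual code in this PR):
```python
def pick_loading_path(config_name=None, trust_remote_code=False):
    # An explicit config name makes the Parquet export unambiguous.
    if config_name is not None:
        return "parquet-export"
    # Without one, only the dataset script knows the default config,
    # so fall back on the script path, which in turn requires
    # trust_remote_code=True before any remote code is executed.
    if not trust_remote_code:
        raise ValueError("Pass trust_remote_code=True to run the dataset script.")
    return "dataset-script"
```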
I fixed this by simply falling back on using the dataset script (which tells the user to pass `trust_remote_code=True`):
```python
load_dataset("openbookqa", trust_remote_code=True)
```
Note that if the user happened to specify a config name I don't fall back on the script since we can use the Parquet export in this case (no need to know which config is the default)
```python
load_dataset("openbookqa", "main")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6498/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6497/comments | https://api.github.com/repos/huggingface/datasets/issues/6497/events | https://github.com/huggingface/datasets/issues/6497 | 2,041,994,274 | I_kwDODunzps55tlwi | 6,497 | Support setting a default config name in push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 2023-12-14T15:59:03 | 2023-12-18T11:50:04 | 2023-12-18T11:50:04 | MEMBER | null | null | In order to convert script-based datasets to no-script datasets, we need to support setting a default config name for those scripts that define one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6497/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6496/comments | https://api.github.com/repos/huggingface/datasets/issues/6496/events | https://github.com/huggingface/datasets/issues/6496 | 2,041,589,386 | I_kwDODunzps55sC6K | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | {
"login": "GeorgesLorre",
"id": 35808396,
"node_id": "MDQ6VXNlcjM1ODA4Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/35808396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeorgesLorre",
"html_url": "https://github.com/GeorgesLorre",
"followers_url": "https://api.github.com/users/GeorgesLorre/followers",
"following_url": "https://api.github.com/users/GeorgesLorre/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgesLorre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeorgesLorre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgesLorre/subscriptions",
"organizations_url": "https://api.github.com/users/GeorgesLorre/orgs",
"repos_url": "https://api.github.com/users/GeorgesLorre/repos",
"events_url": "https://api.github.com/users/GeorgesLorre/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeorgesLorre/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I transferred from datasets-server, since the issue is more about `datasets` and the integration with `huggingface_hub`."
] | 2023-12-14T11:24:54 | 2023-12-14T12:22:21 | null | NONE | null | null | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```python
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets

huggingface_hub.login(token=os.getenv("HF_TOKEN"))

# Build a small dask dataframe and split it into three partitions
data = {"number": [random.randint(0, 10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)

schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema

# Write the partitions as parquet files straight to the Hub repo
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
Would expect to write to the hub without any problem.
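A plausible cause (my assumption, not confirmed here) is that dask uploads the three partitions in parallel and each upload creates its own commit on the Hub, so the concurrent commits race and one of them fails the precondition check. A minimal workaround sketch under that assumption, reusing the names from the reproducer above, is to force sequential execution:
```python
import dask

# Run dask tasks one at a time so the per-partition uploads
# (and the commits they create) cannot race each other
with dask.config.set(scheduler="synchronous"):
    dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```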
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6496/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6494/comments | https://api.github.com/repos/huggingface/datasets/issues/6494/events | https://github.com/huggingface/datasets/issues/6494 | 2,039,684,839 | I_kwDODunzps55kx7n | 6,494 | Image Data loaded Twice | {
"login": "baowuzhida",
"id": 28867010,
"node_id": "MDQ6VXNlcjI4ODY3MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/28867010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baowuzhida",
"html_url": "https://github.com/baowuzhida",
"followers_url": "https://api.github.com/users/baowuzhida/followers",
"following_url": "https://api.github.com/users/baowuzhida/following{/other_user}",
"gists_url": "https://api.github.com/users/baowuzhida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baowuzhida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baowuzhida/subscriptions",
"organizations_url": "https://api.github.com/users/baowuzhida/orgs",
"repos_url": "https://api.github.com/users/baowuzhida/repos",
"events_url": "https://api.github.com/users/baowuzhida/events{/privacy}",
"received_events_url": "https://api.github.com/users/baowuzhida/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-13T13:11:42 | 2023-12-13T13:11:42 | null | NONE | null | null | ### Describe the bug
![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6)
While following https://huggingface.co/docs/datasets/image_load and trying to load image data from a folder, I noticed that each image appears twice in the returned data. As you can see in the screenshot above, there are only four images in the train folder, but loading yields eight images.
### Steps to reproduce the bug
```python
from datasets import Dataset, load_dataset

dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
# print(dataset["train"][0]["image"] == dataset["train"][1]["image"])
print(dataset)
print(dataset["train"]["image"])
print(len(dataset["train"]["image"]))
```
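To check whether the extra rows point at the same files on disk, decoding can be disabled so each example exposes its source path — a small diagnostic sketch, assuming the same `data/` layout:
```python
from datasets import Image, load_dataset

dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
# With decode=False each example is {"bytes": ..., "path": ...} instead of a PIL image.
undecoded = dataset["train"].cast_column("image", Image(decode=False))
for example in undecoded:
    print(example["image"]["path"])
```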
### Expected behavior
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8
})
})
[<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>]
8
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.17
- Huggingface_hub version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6494/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6495/comments | https://api.github.com/repos/huggingface/datasets/issues/6495/events | https://github.com/huggingface/datasets/issues/6495 | 2,039,708,529 | I_kwDODunzps55k3tx | 6,495 | Newline characters don't behave as expected when calling dataset.info | {
"login": "gerald-wrona",
"id": 32300890,
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerald-wrona",
"html_url": "https://github.com/gerald-wrona",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-12T23:07:51 | 2023-12-13T13:24:22 | null | NONE | null | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@marios
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[Source](https://huggingface.co/docs/datasets/v2.2.1/en/access)
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
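As a workaround until the repr renders those escaped newlines, the info object can be pretty-printed instead — a sketch assuming Python ≥ 3.10, where `pprint` understands dataclasses such as `DatasetInfo`:
```python
from pprint import pprint

from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
# DatasetInfo is a dataclass, so pprint (Python 3.10+) renders it field by field.
pprint(dataset.info)
```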
### Expected behavior
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n',
citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398',
license='',
features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')},
download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}},
download_size=1494541,
post_processing_size=None,
dataset_size=1492156,
size_in_bytes=2986697
) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6495/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6493/comments | https://api.github.com/repos/huggingface/datasets/issues/6493/events | https://github.com/huggingface/datasets/pull/6493 | 2,038,221,490 | PR_kwDODunzps5h0XJK | 6,493 | Lazy data files resolution and offline cache reload | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6493). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Naive question: is there any breaking change when loading?\r\n\r\nNo breaking changes except that the cache folders are different\r\n\r\ne.g. for glue sst2 (has parquet export)\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/glue/sst2/1.0.0/fd8e86499fa5c264fcaad392a8f49ddf58bf4037\r\nOn main\r\n~/.cache/huggingface/datasets/glue/sst2/0.0.0/74a75637ac4acd3f\r\nOn 2.15.0\r\n~/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\n```\r\n\r\ne.g. for wikimedia/wikipedia 20231101.ab (has metadata configs)\r\n\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/4cb9b0d719291f1a10f96f67d609c5d442980dc9\r\nOn main (takes ages to load)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/cfa627e27933df13\r\nOn 2.15.0 (takes ages to load)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/e92ee7a91c466564\r\n```\r\n\r\n\r\ne.g. for lhoestq/demo1 (no metadata configs)\r\n\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default/0.0.0/87ecf163bedca9d80598b528940a9c4f99e14c11\r\nOn main\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default-8a4a0b7a240d3c5e/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d\r\nOn 2.15.0\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default-59d4029e0bb36ae0/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d\r\n```",
"There was a last bug I just fixed: if you modify a dataset and reload it from the hub it won't download the new version - I think I need to use another hash to name the cache directory\r\nedit: fixed",
"I switched to using the git commit sha for the cache directory, which is now `config/version/commit_sha` :) much cleaner than before.\r\n\r\nAnd for local file it's a hash that takes into account the resolved files (and their last modified dates)",
"I also ran the `transformers` CI on this branch and it's green",
"FYI `huggingface_hub` will have a release on tuesday/wednesday (will speed up load_dataset data files resolution which is now needed for datasets loaded from parquet export) so we can aim on merging this around the same time and do a release on thursday",
"Merging this one, and hopefully the cache backward compatibility PR soon too :)\r\n\r\nThen it will be release time",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005444 / 0.011353 (-0.005909) | 0.003562 / 0.011008 (-0.007446) | 0.063183 / 0.038508 (0.024675) | 0.048885 / 0.023109 (0.025776) | 0.248422 / 0.275898 (-0.027476) | 0.277844 / 0.323480 (-0.045636) | 0.003019 / 0.007986 (-0.004966) | 0.002660 / 0.004328 (-0.001669) | 0.048928 / 0.004250 (0.044677) | 0.044850 / 0.037052 (0.007798) | 0.248505 / 0.258489 (-0.009984) | 0.282231 / 0.293841 (-0.011610) | 0.028302 / 0.128546 (-0.100244) | 0.010829 / 0.075646 (-0.064818) | 0.206738 / 0.419271 (-0.212533) | 0.035485 / 0.043533 (-0.008048) | 0.244575 / 0.255139 (-0.010564) | 0.281411 / 0.283200 (-0.001789) | 0.019563 / 0.141683 (-0.122120) | 1.113769 / 1.452155 (-0.338386) | 1.176831 / 1.492716 (-0.315885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004718 / 0.018006 (-0.013288) | 0.304103 / 0.000490 (0.303614) | 0.000214 / 0.000200 (0.000014) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019642 / 0.037411 (-0.017769) | 0.060275 / 0.014526 (0.045749) | 0.073072 / 0.176557 (-0.103484) | 0.119789 / 0.737135 (-0.617346) | 0.074535 / 0.296338 (-0.221804) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278799 / 0.215209 (0.063590) | 2.725320 / 2.077655 (0.647665) | 1.419048 / 1.504120 (-0.085071) | 1.335041 / 1.541195 (-0.206154) | 1.373029 / 
1.468490 (-0.095461) | 0.566774 / 4.584777 (-4.018003) | 2.383796 / 3.745712 (-1.361916) | 2.734804 / 5.269862 (-2.535057) | 1.712277 / 4.565676 (-2.853399) | 0.062119 / 0.424275 (-0.362156) | 0.004949 / 0.007607 (-0.002658) | 0.336126 / 0.226044 (0.110082) | 3.298602 / 2.268929 (1.029674) | 1.842815 / 55.444624 (-53.601809) | 1.544028 / 6.876477 (-5.332449) | 1.566717 / 2.142072 (-0.575355) | 0.643006 / 4.805227 (-4.162221) | 0.118241 / 6.500664 (-6.382423) | 0.042453 / 0.075469 (-0.033016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949015 / 1.841788 (-0.892773) | 11.717958 / 8.074308 (3.643649) | 10.482448 / 10.191392 (0.291056) | 0.128564 / 0.680424 (-0.551860) | 0.014792 / 0.534201 (-0.519408) | 0.288636 / 0.579283 (-0.290647) | 0.263345 / 0.434364 (-0.171019) | 0.325753 / 0.540337 (-0.214584) | 0.421294 / 1.386936 (-0.965642) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005367 / 0.011353 (-0.005985) | 0.003802 / 0.011008 (-0.007206) | 0.049322 / 0.038508 (0.010814) | 0.055201 / 0.023109 (0.032092) | 0.287811 / 0.275898 (0.011913) | 0.305141 / 0.323480 (-0.018339) | 0.004095 / 0.007986 (-0.003890) | 0.002733 / 0.004328 (-0.001595) | 0.049508 / 0.004250 (0.045258) | 0.039199 / 0.037052 (0.002147) | 0.282719 / 0.258489 (0.024230) | 0.311156 / 0.293841 (0.017315) | 0.029469 / 0.128546 (-0.099077) | 0.010709 / 0.075646 (-0.064937) | 0.057646 / 0.419271 (-0.361626) | 0.032696 / 0.043533 (-0.010837) | 0.285087 / 0.255139 (0.029948) | 0.294142 / 0.283200 (0.010942) | 0.019779 / 0.141683 (-0.121904) | 1.176844 / 1.452155 (-0.275310) | 1.190925 / 1.492716 (-0.301792) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092885 / 0.018006 (0.074879) | 0.301129 / 0.000490 (0.300640) | 0.000232 / 0.000200 (0.000032) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023202 / 0.037411 (-0.014210) | 0.076850 / 0.014526 (0.062325) | 0.090058 / 0.176557 (-0.086499) | 0.128091 / 0.737135 (-0.609045) | 0.091098 / 0.296338 (-0.205240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292973 / 0.215209 (0.077764) | 2.876022 / 2.077655 (0.798367) | 1.672115 / 1.504120 (0.167995) | 1.555103 / 1.541195 (0.013909) | 1.559832 / 1.468490 (0.091342) | 0.558017 / 4.584777 (-4.026760) | 2.428448 / 3.745712 (-1.317264) | 2.812024 / 5.269862 (-2.457837) | 1.738470 / 4.565676 (-2.827207) | 0.062669 / 0.424275 (-0.361607) | 0.005071 / 0.007607 (-0.002536) | 0.351804 / 0.226044 (0.125759) | 3.412207 / 2.268929 (1.143279) | 2.023478 / 55.444624 (-53.421147) | 1.761281 / 6.876477 (-5.115195) | 1.770789 / 2.142072 (-0.371283) | 0.643062 / 4.805227 (-4.162165) | 0.116616 / 6.500664 (-6.384048) | 0.041816 / 0.075469 (-0.033653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988430 / 1.841788 (-0.853357) | 12.278636 / 8.074308 (4.204328) | 11.066185 / 10.191392 (0.874793) | 0.141191 / 0.680424 (-0.539233) | 0.015547 / 0.534201 (-0.518654) | 0.288045 / 0.579283 (-0.291238) | 0.279651 / 0.434364 (-0.154713) | 0.329869 / 0.540337 (-0.210469) | 0.420391 / 1.386936 (-0.966545) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef3b5dd3633995c95d77f35fb17f89ff44990bc4 \"CML watermark\")\n"
] | 2023-12-12T17:15:17 | 2023-12-21T15:19:20 | 2023-12-21T15:13:11 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6493",
"html_url": "https://github.com/huggingface/datasets/pull/6493",
"diff_url": "https://github.com/huggingface/datasets/pull/6493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6493.patch",
"merged_at": "2023-12-21T15:13:11"
} | Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459
This PR should be merged instead of the two individually, since they are conflicting
## Offline cache reload
It can reload datasets that were pushed to the Hub if they exist in the cache.
Example:
```python
>>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp")
>>> load_dataset("lhoestq/tmp")
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
and later, without connection:
```python
>>> load_dataset("lhoestq/tmp")
Using the latest cached version of the dataset since lhoestq/tmp couldn't be found on the Hugging Face Hub
Found the latest cached dataset configuration 'default' at /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/default/0.0.0/da0e902a945afeb9 (last modified on Wed Dec 13 14:55:52 2023).
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
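Offline behavior can also be forced explicitly with the `HF_DATASETS_OFFLINE` environment variable, e.g.:
```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

ds = load_dataset("lhoestq/tmp")  # resolved from the local cache only
```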
- Updated `CachedDatasetModuleFactory` to look for datasets in the cache at `<namespace>___<dataset_name>/<config_id>`
- Since the metadata configs parameters are not available in offline mode, we don't know which folder to load (config_id and hash change), so I simply load the latest one
- I instantiate a BuilderConfig even if there is no metadata config with the right config_name
- Its config_id is equal to the config_name to be able to retrieve it in the cache (no more suffix for configs from metadata configs)
- We can reload this config in offline mode by specifying the right config_name (same as online!)
- Consequences of this change:
- A custom builder config with config_id = config_name + a hash of the user's parameters is only created when the user passes parameters
- the hash used to name the cache folder takes into account the metadata config and the dataset info, so that the right cache can be reloaded when there is internet connection without redownloading the data or resolving the data files. For local directories I hash the builder configs and dataset info, and for datasets on the hub I use the commit sha as hash.
- cache directories now look like `config/version/commit_sha` for hub datasets which is clean :)
Fix https://github.com/huggingface/datasets/issues/3547
## Lazy data files resolution
This makes the following code run in ~2 sec instead of >10 sec:
```python
from datasets import load_dataset
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```
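To reproduce the timing above, the call can simply be wrapped — a rough measurement sketch; exact numbers depend on network and Hub latency:
```python
import time

from datasets import load_dataset

start = time.perf_counter()
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
print(f"loaded in {time.perf_counter() - start:.1f}s")
```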
For some datasets with many configs and files it can be up to 100x faster.
This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts.
The data files are only resolved in the builder `__init__`. To do so, I added `DataFilesPatternsList` and `DataFilesPatternsDict`, which have a `.resolve()` method returning resolved `DataFilesList` and `DataFilesDict` objects.
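The intended usage shape, roughly — class and method names come from this PR's description, but the exact signatures below are assumptions, not the merged API:
```python
from datasets.data_files import DataFilesPatternsDict  # added in this PR

# Hypothetical illustration: patterns are stored cheaply at module-creation time...
patterns = DataFilesPatternsDict.from_patterns({"train": ["data/train-*"]})
# ...and only resolved to concrete files inside the builder __init__.
data_files = patterns.resolve(base_path=".")  # -> DataFilesDict
```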
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6493/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6492/comments | https://api.github.com/repos/huggingface/datasets/issues/6492/events | https://github.com/huggingface/datasets/pull/6492 | 2,037,987,267 | PR_kwDODunzps5hzjhQ | 6,492 | Make push_to_hub return CommitInfo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6492). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This PR is ready to review @huggingface/datasets.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005093 / 0.011353 (-0.006259) | 0.003695 / 0.011008 (-0.007313) | 0.064648 / 0.038508 (0.026140) | 0.054677 / 0.023109 (0.031568) | 0.242007 / 0.275898 (-0.033891) | 0.265216 / 0.323480 (-0.058264) | 0.003847 / 0.007986 (-0.004138) | 0.003773 / 0.004328 (-0.000556) | 0.048595 / 0.004250 (0.044345) | 0.038122 / 0.037052 (0.001070) | 0.245698 / 0.258489 (-0.012791) | 0.278095 / 0.293841 (-0.015746) | 0.027488 / 0.128546 (-0.101058) | 0.011002 / 0.075646 (-0.064644) | 0.211443 / 0.419271 (-0.207829) | 0.035664 / 0.043533 (-0.007869) | 0.244754 / 0.255139 (-0.010385) | 0.261078 / 0.283200 (-0.022121) | 0.017768 / 0.141683 (-0.123915) | 1.130765 / 1.452155 (-0.321390) | 1.189825 / 1.492716 (-0.302891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093027 / 0.018006 (0.075021) | 0.302193 / 0.000490 (0.301703) | 0.000207 / 0.000200 (0.000007) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018413 / 0.037411 (-0.018999) | 0.062715 / 0.014526 (0.048190) | 0.073287 / 0.176557 (-0.103269) | 0.120394 / 0.737135 (-0.616741) | 0.077573 / 0.296338 (-0.218765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284445 / 0.215209 (0.069236) | 2.780718 / 2.077655 (0.703063) | 1.460988 / 1.504120 (-0.043132) | 1.345799 / 1.541195 (-0.195395) | 1.399892 / 
1.468490 (-0.068598) | 0.576051 / 4.584777 (-4.008726) | 2.418792 / 3.745712 (-1.326921) | 2.901330 / 5.269862 (-2.368532) | 1.765083 / 4.565676 (-2.800593) | 0.063555 / 0.424275 (-0.360720) | 0.004991 / 0.007607 (-0.002616) | 0.339657 / 0.226044 (0.113613) | 3.372963 / 2.268929 (1.104034) | 1.853667 / 55.444624 (-53.590958) | 1.552022 / 6.876477 (-5.324454) | 1.616452 / 2.142072 (-0.525620) | 0.652309 / 4.805227 (-4.152919) | 0.121125 / 6.500664 (-6.379539) | 0.042420 / 0.075469 (-0.033049) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954514 / 1.841788 (-0.887274) | 11.853736 / 8.074308 (3.779428) | 10.624571 / 10.191392 (0.433179) | 0.134118 / 0.680424 (-0.546306) | 0.014200 / 0.534201 (-0.520001) | 0.290106 / 0.579283 (-0.289177) | 0.270637 / 0.434364 (-0.163727) | 0.336155 / 0.540337 (-0.204182) | 0.443962 / 1.386936 (-0.942974) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005282 / 0.011353 (-0.006071) | 0.003526 / 0.011008 (-0.007482) | 0.048994 / 0.038508 (0.010486) | 0.055345 / 0.023109 (0.032236) | 0.271587 / 0.275898 (-0.004311) | 0.294676 / 0.323480 (-0.028804) | 0.003989 / 0.007986 (-0.003996) | 0.002594 / 0.004328 (-0.001735) | 0.048310 / 0.004250 (0.044059) | 0.039945 / 0.037052 (0.002893) | 0.277304 / 0.258489 (0.018815) | 0.312017 / 0.293841 (0.018176) | 0.028364 / 0.128546 (-0.100182) | 0.010683 / 0.075646 (-0.064963) | 0.057990 / 0.419271 (-0.361281) | 0.032418 / 0.043533 (-0.011115) | 0.273835 / 0.255139 (0.018697) | 0.288585 / 0.283200 (0.005385) | 0.018964 / 0.141683 (-0.122719) | 1.148863 / 1.452155 (-0.303292) | 1.195684 / 1.492716 (-0.297032) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091967 / 0.018006 (0.073960) | 0.303236 / 0.000490 (0.302747) | 0.000214 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021960 / 0.037411 (-0.015452) | 0.068744 / 0.014526 (0.054218) | 0.081167 / 0.176557 (-0.095390) | 0.119623 / 0.737135 (-0.617513) | 0.084965 / 0.296338 (-0.211373) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297740 / 0.215209 (0.082531) | 2.924856 / 2.077655 (0.847201) | 1.602080 / 1.504120 (0.097960) | 1.494083 / 1.541195 (-0.047112) | 1.544662 / 1.468490 (0.076172) | 0.581212 / 4.584777 (-4.003565) | 2.451064 / 3.745712 (-1.294648) | 2.875213 / 5.269862 (-2.394649) | 1.780777 / 4.565676 (-2.784900) | 0.063751 / 0.424275 (-0.360524) | 0.004967 / 0.007607 (-0.002641) | 0.350321 / 0.226044 (0.124276) | 3.449585 / 2.268929 (1.180657) | 1.977666 / 55.444624 (-53.466958) | 1.685125 / 6.876477 (-5.191351) | 1.734466 / 2.142072 (-0.407606) | 0.657477 / 4.805227 (-4.147750) | 0.116767 / 6.500664 (-6.383898) | 0.041400 / 0.075469 (-0.034069) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985751 / 1.841788 (-0.856037) | 12.300065 / 8.074308 (4.225756) | 10.608238 / 10.191392 (0.416846) | 0.139907 / 0.680424 (-0.540517) | 0.015379 / 0.534201 (-0.518822) | 0.283528 / 0.579283 (-0.295755) | 0.278751 / 0.434364 (-0.155613) | 0.328811 / 0.540337 (-0.211527) | 0.584041 / 1.386936 (-0.802895) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef0f986518bd252c5314a7e3a419dedcbb166630 \"CML watermark\")\n"
] | 2023-12-12T15:18:16 | 2023-12-13T14:29:01 | 2023-12-13T14:22:41 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6492",
"html_url": "https://github.com/huggingface/datasets/pull/6492",
"diff_url": "https://github.com/huggingface/datasets/pull/6492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6492.patch",
"merged_at": "2023-12-13T14:22:41"
} | Make `push_to_hub` return `CommitInfo`.
This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID.
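A usage sketch of what this enables (attribute names as on `huggingface_hub`'s `CommitInfo`; the repo name is a placeholder):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})
commit_info = ds.push_to_hub("user/my-dataset", create_pr=True)
print(commit_info.pr_url)  # URL of the pull request that was opened
print(commit_info.oid)     # sha of the created commit
```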
CC: @severo for the use case in https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/discussions/4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6492/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6492/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6491/comments | https://api.github.com/repos/huggingface/datasets/issues/6491/events | https://github.com/huggingface/datasets/pull/6491 | 2,037,690,643 | PR_kwDODunzps5hyiTY | 6,491 | Fix metrics dead link | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6491). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005193 / 0.011353 (-0.006160) | 0.003246 / 0.011008 (-0.007762) | 0.063053 / 0.038508 (0.024545) | 0.049636 / 0.023109 (0.026527) | 0.240990 / 0.275898 (-0.034908) | 0.263732 / 0.323480 (-0.059747) | 0.004062 / 0.007986 (-0.003923) | 0.002681 / 0.004328 (-0.001648) | 0.048527 / 0.004250 (0.044277) | 0.044159 / 0.037052 (0.007107) | 0.248031 / 0.258489 (-0.010458) | 0.275705 / 0.293841 (-0.018136) | 0.028210 / 0.128546 (-0.100336) | 0.010314 / 0.075646 (-0.065332) | 0.209887 / 0.419271 (-0.209384) | 0.035649 / 0.043533 (-0.007884) | 0.251321 / 0.255139 (-0.003818) | 0.266672 / 0.283200 (-0.016528) | 0.017382 / 0.141683 (-0.124301) | 1.088937 / 1.452155 (-0.363217) | 1.143692 / 1.492716 (-0.349024) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092558 / 0.018006 (0.074552) | 0.301648 / 0.000490 (0.301159) | 0.000208 / 0.000200 (0.000008) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018305 / 0.037411 (-0.019106) | 0.059836 / 0.014526 (0.045310) | 0.072926 / 0.176557 (-0.103631) | 0.119826 / 0.737135 (-0.617309) | 0.074357 / 0.296338 (-0.221982) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279051 / 0.215209 (0.063842) | 2.711402 / 2.077655 (0.633747) | 1.431782 / 1.504120 (-0.072338) | 1.316592 / 1.541195 (-0.224603) | 1.352062 / 
1.468490 (-0.116428) | 0.562553 / 4.584777 (-4.022224) | 2.387719 / 3.745712 (-1.357993) | 2.693330 / 5.269862 (-2.576532) | 1.682040 / 4.565676 (-2.883636) | 0.061832 / 0.424275 (-0.362443) | 0.005066 / 0.007607 (-0.002541) | 0.332730 / 0.226044 (0.106685) | 3.315503 / 2.268929 (1.046575) | 1.787129 / 55.444624 (-53.657496) | 1.508955 / 6.876477 (-5.367522) | 1.512620 / 2.142072 (-0.629453) | 0.637120 / 4.805227 (-4.168107) | 0.116005 / 6.500664 (-6.384660) | 0.041973 / 0.075469 (-0.033496) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936996 / 1.841788 (-0.904792) | 11.485975 / 8.074308 (3.411667) | 10.604481 / 10.191392 (0.413089) | 0.130803 / 0.680424 (-0.549621) | 0.014561 / 0.534201 (-0.519640) | 0.285905 / 0.579283 (-0.293378) | 0.271573 / 0.434364 (-0.162791) | 0.329206 / 0.540337 (-0.211132) | 0.411977 / 1.386936 (-0.974959) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005333 / 0.011353 (-0.006020) | 0.003519 / 0.011008 (-0.007489) | 0.050880 / 0.038508 (0.012372) | 0.053681 / 0.023109 (0.030571) | 0.269359 / 0.275898 (-0.006539) | 0.291498 / 0.323480 (-0.031982) | 0.004006 / 0.007986 (-0.003979) | 0.002676 / 0.004328 (-0.001653) | 0.049652 / 0.004250 (0.045401) | 0.040588 / 0.037052 (0.003536) | 0.271701 / 0.258489 (0.013212) | 0.308384 / 0.293841 (0.014543) | 0.028713 / 0.128546 (-0.099833) | 0.010423 / 0.075646 (-0.065223) | 0.058099 / 0.419271 (-0.361172) | 0.032372 / 0.043533 (-0.011161) | 0.269395 / 0.255139 (0.014256) | 0.292252 / 0.283200 (0.009052) | 0.020038 / 0.141683 (-0.121645) | 1.124761 / 1.452155 (-0.327393) | 1.177609 / 1.492716 (-0.315107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092187 / 0.018006 (0.074181) | 0.301936 / 0.000490 (0.301446) | 0.000230 / 0.000200 (0.000030) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022932 / 0.037411 (-0.014480) | 0.076552 / 0.014526 (0.062027) | 0.088729 / 0.176557 (-0.087827) | 0.127198 / 0.737135 (-0.609937) | 0.091902 / 0.296338 (-0.204436) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299964 / 0.215209 (0.084755) | 2.929352 / 2.077655 (0.851697) | 1.598715 / 1.504120 (0.094595) | 1.462438 / 1.541195 (-0.078756) | 1.474308 / 1.468490 (0.005818) | 0.567120 / 4.584777 (-4.017657) | 2.481757 / 3.745712 (-1.263955) | 2.795375 / 5.269862 (-2.474487) | 1.740346 / 4.565676 (-2.825331) | 0.064048 / 0.424275 (-0.360227) | 0.004995 / 0.007607 (-0.002612) | 0.349084 / 0.226044 (0.123040) | 3.417679 / 2.268929 (1.148750) | 1.910615 / 55.444624 (-53.534009) | 1.694120 / 6.876477 (-5.182356) | 1.658654 / 2.142072 (-0.483419) | 0.638158 / 4.805227 (-4.167069) | 0.115509 / 6.500664 (-6.385156) | 0.040650 / 0.075469 (-0.034819) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988081 / 1.841788 (-0.853707) | 12.210089 / 8.074308 (4.135781) | 11.090203 / 10.191392 (0.898811) | 0.131861 / 0.680424 (-0.548563) | 0.015461 / 0.534201 (-0.518740) | 0.287737 / 0.579283 (-0.291546) | 0.284170 / 0.434364 (-0.150194) | 0.324949 / 0.540337 (-0.215388) | 0.414912 / 1.386936 (-0.972024) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf71653947cecd84050daf0448dc5a73c2c071f3 \"CML watermark\")\n"
] | 2023-12-12T12:51:49 | 2023-12-21T15:15:08 | 2023-12-21T15:08:53 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6491",
"html_url": "https://github.com/huggingface/datasets/pull/6491",
"diff_url": "https://github.com/huggingface/datasets/pull/6491.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6491.patch",
"merged_at": "2023-12-21T15:08:53"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6491/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6490/comments | https://api.github.com/repos/huggingface/datasets/issues/6490/events | https://github.com/huggingface/datasets/issues/6490 | 2,037,204,892 | I_kwDODunzps55bUec | 6,490 | `load_dataset(...,save_infos=True)` not working without loading script | {
"login": "morganveyret",
"id": 114978051,
"node_id": "U_kgDOBtptAw",
"avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morganveyret",
"html_url": "https://github.com/morganveyret",
"followers_url": "https://api.github.com/users/morganveyret/followers",
"following_url": "https://api.github.com/users/morganveyret/following{/other_user}",
"gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions",
"organizations_url": "https://api.github.com/users/morganveyret/orgs",
"repos_url": "https://api.github.com/users/morganveyret/repos",
"events_url": "https://api.github.com/users/morganveyret/events{/privacy}",
"received_events_url": "https://api.github.com/users/morganveyret/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema "
] | 2023-12-12T08:09:18 | 2023-12-12T08:36:22 | null | NONE | null | null | ### Describe the bug
It seems that saving a dataset's infos back into the card file does not work for datasets without a loading script.
After tracking the problem a bit, it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory.
Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`) this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.Json`).
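A minimal illustration of that mechanism, assuming the packaged `Json` builder — the printed path lands inside `site-packages`, matching step 5 below:
```python
import inspect

from datasets.packaged_modules.json.json import Json

# inspect.getfile() resolves through the class's module, so for a builder class
# created dynamically from Json it points at the packaged module's file:
print(inspect.getfile(Json))
# e.g. .../site-packages/datasets/packaged_modules/json/json.py
```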
### Steps to reproduce the bug
1. Have a local dataset without any loading script
2. Make sure there are no dataset infos in the README.md
3. Load with `save_infos=True`
4. No change in the dataset README.md
5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`)
### Expected behavior
The dataset README.md should be updated and no file should be created in the python environment.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.6.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6490/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6489/comments | https://api.github.com/repos/huggingface/datasets/issues/6489/events | https://github.com/huggingface/datasets/issues/6489 | 2,036,743,777 | I_kwDODunzps55Zj5h | 6,489 | load_dataset imagefolder for aws s3 path | {
"login": "segalinc",
"id": 9353106,
"node_id": "MDQ6VXNlcjkzNTMxMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/segalinc",
"html_url": "https://github.com/segalinc",
"followers_url": "https://api.github.com/users/segalinc/followers",
"following_url": "https://api.github.com/users/segalinc/following{/other_user}",
"gists_url": "https://api.github.com/users/segalinc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/segalinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/segalinc/subscriptions",
"organizations_url": "https://api.github.com/users/segalinc/orgs",
"repos_url": "https://api.github.com/users/segalinc/repos",
"events_url": "https://api.github.com/users/segalinc/events{/privacy}",
"received_events_url": "https://api.github.com/users/segalinc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2023-12-12T00:08:43 | 2023-12-12T00:09:27 | null | NONE | null | null | ### Feature request
I would like to load a dataset from S3 using the imagefolder option
Something like:
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
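In the meantime, a workaround sketch is to mirror the S3 prefix locally with `s3fs` first (bucket and prefix names below are placeholders):
```python
import s3fs
from datasets import load_dataset

fs = s3fs.S3FileSystem()
# Copy the images down once, then point imagefolder at the local copy.
fs.get("my-bucket/lsun/train/bedroom", "bedroom_local", recursive=True)
dataset = load_dataset("imagefolder", data_dir="bedroom_local")
```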
### Motivation
no need of data_files
### Your contribution
no experience with this | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6489/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6488/comments | https://api.github.com/repos/huggingface/datasets/issues/6488/events | https://github.com/huggingface/datasets/issues/6488 | 2,035,899,898 | I_kwDODunzps55WV36 | 6,488 | 429 Client Error | {
"login": "sasaadi",
"id": 7882383,
"node_id": "MDQ6VXNlcjc4ODIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sasaadi",
"html_url": "https://github.com/sasaadi",
"followers_url": "https://api.github.com/users/sasaadi/followers",
"following_url": "https://api.github.com/users/sasaadi/following{/other_user}",
"gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions",
"organizations_url": "https://api.github.com/users/sasaadi/orgs",
"repos_url": "https://api.github.com/users/sasaadi/repos",
"events_url": "https://api.github.com/users/sasaadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sasaadi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Transferring repos as this is a datasets issue ",
"I'm getting a similar issue even though I've already downloaded the dataset π
\r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/HuggingFaceM4/WebSight\r\n```"
] | 2023-12-11T15:06:01 | 2024-01-18T02:05:15 | null | NONE | null | null | Hello, I was downloading the following dataset and after 20% of the data was downloaded, I started getting error 429. It has not been resolved for a few days. How should I resolve it?
Thanks
Dataset:
https://huggingface.co/datasets/cerebras/SlimPajama-627B
Error:
`requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
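In the meantime, a hedged retry sketch with exponential backoff (not an official API, just plain Python around `load_dataset`):
```python
import time
from datasets import load_dataset

dataset = None
for attempt in range(5):
    try:
        dataset = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
        break
    except Exception as err:  # e.g. HTTPError 429 from the Hub
        wait = 10 * 2 ** attempt
        print(f"Rate limited ({err}); retrying in {wait}s")
        time.sleep(wait)
```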
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6488/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6487/comments | https://api.github.com/repos/huggingface/datasets/issues/6487/events | https://github.com/huggingface/datasets/pull/6487 | 2,035,424,254 | PR_kwDODunzps5hqyfV | 6,487 | Update builder hash with info | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Closing this one in favor of https://github.com/huggingface/datasets/pull/6458/commits/565c294fc12bc547730a023a610ed4f92313d8fb in https://github.com/huggingface/datasets/pull/6458"
] | 2023-12-11T11:09:16 | 2024-01-11T06:35:07 | 2023-12-11T11:41:34 | MEMBER | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6487",
"html_url": "https://github.com/huggingface/datasets/pull/6487",
"diff_url": "https://github.com/huggingface/datasets/pull/6487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6487.patch",
"merged_at": null
} | Currently if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change.
This is problematic because you want to regenerate a dataset if you change the features or the split sizes (e.g. after `push_to_hub`).
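A rough sketch of the idea (the names here are assumptions, not the actual implementation): fold a hash of the exported info into the config id so the cache directory changes with it:
```python
from datasets.fingerprint import Hasher

# hypothetical: `exported_info` is the DatasetInfo parsed from the README YAML
suffix = Hasher.hash(exported_info)
config_id = f"{builder_config.name}-{suffix}"  # cache path now tracks the info
```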
Ideally we should take the resolved files into account as well, but this will be for another PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6487/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6486/comments | https://api.github.com/repos/huggingface/datasets/issues/6486/events | https://github.com/huggingface/datasets/pull/6486 | 2,035,206,206 | PR_kwDODunzps5hqCSc | 6,486 | Fix docs phrasing about supported formats when sharing a dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6486). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005042 / 0.011353 (-0.006311) | 0.003452 / 0.011008 (-0.007557) | 0.061845 / 0.038508 (0.023337) | 0.052042 / 0.023109 (0.028933) | 0.241791 / 0.275898 (-0.034107) | 0.264639 / 0.323480 (-0.058841) | 0.003940 / 0.007986 (-0.004045) | 0.002768 / 0.004328 (-0.001560) | 0.047851 / 0.004250 (0.043600) | 0.037599 / 0.037052 (0.000547) | 0.251462 / 0.258489 (-0.007028) | 0.274737 / 0.293841 (-0.019104) | 0.027723 / 0.128546 (-0.100823) | 0.010510 / 0.075646 (-0.065137) | 0.205581 / 0.419271 (-0.213691) | 0.035504 / 0.043533 (-0.008029) | 0.242380 / 0.255139 (-0.012759) | 0.259791 / 0.283200 (-0.023409) | 0.017752 / 0.141683 (-0.123931) | 1.089289 / 1.452155 (-0.362865) | 1.161958 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094288 / 0.018006 (0.076282) | 0.303253 / 0.000490 (0.302763) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.060411 / 0.014526 (0.045885) | 0.074294 / 0.176557 (-0.102262) | 0.122934 / 0.737135 (-0.614201) | 0.074710 / 0.296338 (-0.221629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286394 / 0.215209 (0.071185) | 2.806145 / 2.077655 (0.728490) | 1.497071 / 1.504120 (-0.007049) | 1.362254 / 1.541195 (-0.178940) | 1.389642 / 
1.468490 (-0.078848) | 0.554503 / 4.584777 (-4.030274) | 2.348029 / 3.745712 (-1.397684) | 2.780862 / 5.269862 (-2.489000) | 1.728058 / 4.565676 (-2.837619) | 0.062617 / 0.424275 (-0.361658) | 0.004901 / 0.007607 (-0.002707) | 0.346267 / 0.226044 (0.120223) | 3.363744 / 2.268929 (1.094815) | 1.826994 / 55.444624 (-53.617630) | 1.560656 / 6.876477 (-5.315820) | 1.561083 / 2.142072 (-0.580990) | 0.643395 / 4.805227 (-4.161832) | 0.116206 / 6.500664 (-6.384458) | 0.042008 / 0.075469 (-0.033461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953416 / 1.841788 (-0.888371) | 11.461665 / 8.074308 (3.387357) | 10.623865 / 10.191392 (0.432473) | 0.128071 / 0.680424 (-0.552353) | 0.014277 / 0.534201 (-0.519924) | 0.288810 / 0.579283 (-0.290474) | 0.267575 / 0.434364 (-0.166788) | 0.327422 / 0.540337 (-0.212916) | 0.435151 / 1.386936 (-0.951785) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005242 / 0.011353 (-0.006111) | 0.003515 / 0.011008 (-0.007493) | 0.048483 / 0.038508 (0.009975) | 0.051684 / 0.023109 (0.028575) | 0.276564 / 0.275898 (0.000666) | 0.297582 / 0.323480 (-0.025898) | 0.004117 / 0.007986 (-0.003869) | 0.002610 / 0.004328 (-0.001719) | 0.047811 / 0.004250 (0.043561) | 0.040622 / 0.037052 (0.003569) | 0.280265 / 0.258489 (0.021776) | 0.311719 / 0.293841 (0.017878) | 0.028811 / 0.128546 (-0.099735) | 0.010600 / 0.075646 (-0.065047) | 0.056660 / 0.419271 (-0.362611) | 0.032638 / 0.043533 (-0.010894) | 0.276434 / 0.255139 (0.021295) | 0.299095 / 0.283200 (0.015896) | 0.018483 / 0.141683 (-0.123200) | 1.156382 / 1.452155 (-0.295773) | 1.252205 / 1.492716 (-0.240511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097868 / 0.018006 (0.079862) | 0.309438 / 0.000490 (0.308948) | 0.000229 / 0.000200 (0.000029) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021838 / 0.037411 (-0.015573) | 0.068358 / 0.014526 (0.053832) | 0.080432 / 0.176557 (-0.096125) | 0.119788 / 0.737135 (-0.617348) | 0.081742 / 0.296338 (-0.214597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301239 / 0.215209 (0.086030) | 2.962242 / 2.077655 (0.884587) | 1.693918 / 1.504120 (0.189798) | 1.573663 / 1.541195 (0.032468) | 1.583125 / 1.468490 (0.114635) | 0.557267 / 4.584777 (-4.027510) | 2.440048 / 3.745712 (-1.305664) | 2.727572 / 5.269862 (-2.542290) | 1.713557 / 4.565676 (-2.852120) | 0.062526 / 0.424275 (-0.361749) | 0.004982 / 0.007607 (-0.002625) | 0.353850 / 0.226044 (0.127806) | 3.530887 / 2.268929 (1.261958) | 2.047864 / 55.444624 (-53.396761) | 1.770776 / 6.876477 (-5.105701) | 1.757621 / 2.142072 (-0.384451) | 0.633847 / 4.805227 (-4.171381) | 0.114055 / 6.500664 (-6.386609) | 0.040078 / 0.075469 (-0.035391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983721 / 1.841788 (-0.858066) | 11.896537 / 8.074308 (3.822229) | 10.529883 / 10.191392 (0.338491) | 0.129593 / 0.680424 (-0.550831) | 0.016213 / 0.534201 (-0.517988) | 0.289623 / 0.579283 (-0.289660) | 0.280073 / 0.434364 (-0.154291) | 0.327446 / 0.540337 (-0.212892) | 0.574847 / 1.386936 (-0.812089) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2684a98fe38e0c87bb11e050586004108e32b79d \"CML watermark\")\n"
] | 2023-12-11T09:21:22 | 2023-12-13T14:21:29 | 2023-12-13T14:15:21 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6486",
"html_url": "https://github.com/huggingface/datasets/pull/6486",
"diff_url": "https://github.com/huggingface/datasets/pull/6486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6486.patch",
"merged_at": "2023-12-13T14:15:21"
} | Fix docs phrasing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6486/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6485/comments | https://api.github.com/repos/huggingface/datasets/issues/6485/events | https://github.com/huggingface/datasets/issues/6485 | 2,035,141,884 | I_kwDODunzps55Tcz8 | 6,485 | FileNotFoundError: [Errno 2] No such file or directory: 'nul' | {
"login": "amanyara",
"id": 73683903,
"node_id": "MDQ6VXNlcjczNjgzOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/73683903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanyara",
"html_url": "https://github.com/amanyara",
"followers_url": "https://api.github.com/users/amanyara/followers",
"following_url": "https://api.github.com/users/amanyara/following{/other_user}",
"gists_url": "https://api.github.com/users/amanyara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanyara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanyara/subscriptions",
"organizations_url": "https://api.github.com/users/amanyara/orgs",
"repos_url": "https://api.github.com/users/amanyara/repos",
"events_url": "https://api.github.com/users/amanyara/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanyara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! It seems like the problem is your environment. Maybe this issue can help: https://github.com/pytest-dev/pytest/issues/9519. "
] | 2023-12-11T08:52:13 | 2023-12-14T08:09:08 | 2023-12-14T08:09:08 | NONE | null | null | ### Describe the bug
It seems that something is wrong with my (terribly buggy) environment. When I run this code, `import datasets`,
I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'nul'
![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d)
![image](https://github.com/huggingface/datasets/assets/73683903/0496adff-a7a7-4dcb-929e-ec11ede71f04)
### Steps to reproduce the bug
1. `import datasets`
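A quick diagnostic sketch: on Windows `os.devnull` is the literal name `nul`, so if this also fails, the problem is likely the environment rather than `datasets`:
```python
import os

print(os.devnull)  # expected: "nul" on Windows
with open(os.devnull, "w") as f:  # fails with the same error in broken envs
    f.write("ok")
```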
### Expected behavior
I just ran a single line of code and got stuck on this bug.
### Environment info
OS: Windows 10
Datasets==2.15.0
python=3.10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6485/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6483/comments | https://api.github.com/repos/huggingface/datasets/issues/6483/events | https://github.com/huggingface/datasets/issues/6483 | 2,032,946,981 | I_kwDODunzps55LE8l | 6,483 | Iterable Dataset: rename column clashes with remove column | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Column \"text\" doesn't exist anymore so you can't remove it",
"You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset.features) - COLUMNS_TO_KEEP)\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Fixed code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n#Β rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\ndataset_features = dataset.features.keys()\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Whoops π
Thanks for the swift reply both! Works like a charm!"
] | 2023-12-08T16:11:30 | 2023-12-08T16:27:16 | 2023-12-08T16:27:04 | CONTRIBUTOR | null | null | ### Describe the bug
Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)
However, renaming and then removing columns on an iterable dataset doesn't work here: the pipeline still expects the original text column, so we can't combine the datasets.
### Steps to reproduce the bug
```python
from datasets import load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)
# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")
# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```
Traceback:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 17
14 COLUMNS_TO_KEEP = {"audio", "sentence"}
15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))
File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
1350 yield formatter.format_row(pa_table)
1351 return
-> 1353 for key, example in ex_iterable:
1354 if self.features:
1355 # `IterableDataset` automatically fills missing columns with None.
1356 # This is done with `_apply_feature_types_on_example`.
1357 example = _apply_feature_types_on_example(
1358 example, self.features, token_per_repo_id=self._token_per_repo_id
1359 )
File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
650 yield from ArrowExamplesIterable(self._iter_arrow, {})
651 else:
--> 652 yield from self._iter()
File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
727 if self.remove_columns:
728 for c in self.remove_columns:
--> 729 del transformed_example[c]
730 yield key, transformed_example
731 current_idx += 1
KeyError: 'text'
```
=> we see that `datasets` is looking for the column "text", even though we've renamed it to "sentence" and then removed the unwanted "text" column from our dataset.
### Expected behavior
Should be able to rename and remove columns from iterable dataset.
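If your `datasets` version has it, `IterableDataset.select_columns` sidesteps the rename/remove ordering issue entirely (a sketch, untested here):
```python
dataset = dataset.rename_column("text", "sentence").select_columns(["audio", "sentence"])
```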
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6483/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6484/comments | https://api.github.com/repos/huggingface/datasets/issues/6484/events | https://github.com/huggingface/datasets/issues/6484 | 2,033,333,294 | I_kwDODunzps55MjQu | 6,484 | [Feature Request] Dataset versioning | {
"login": "kenfus",
"id": 47979198,
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenfus",
"html_url": "https://github.com/kenfus",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"repos_url": "https://api.github.com/users/kenfus/repos",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ",
"Hi! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-08T16:01:35 | 2023-12-11T19:13:46 | null | NONE | null | null | **Is your feature request related to a problem? Please describe.**
I am working on a project where I would like to test different preprocessing methods for my ML data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword: it was not redownloading the data and kept reading in cached data until I put `download_mode="force_redownload"`, even though the revision was different.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ...) and give it a new revision (v1.2.3), maybe like this (a tagging sketch follows this list):
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version and does not load a different revision, and all future `map`, `filter`, ... operations are done on this dataset rather than loaded from a cache produced by a different revision.
- if I rerun the script, the caching should be smart enough at every step not to reuse a mapping operation from a different revision.
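As a stopgap for the tagging part, a sketch using `huggingface_hub` (repo id from the example above; untested here):
```python
from huggingface_hub import HfApi

dataset_audio.push_to_hub("kenfus/xy")  # push the new data first
HfApi().create_tag("kenfus/xy", tag="v1.0.2", repo_type="dataset")  # then pin it
```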
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
print("Loading prepared dataset from disk...")
dataset_prepared = load_from_disk(prepared_dataset_path)
else:
print("Loading dataset from HuggingFace Datasets...")
dataset = load_dataset(
PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
)
print("Preparing dataset...")
dataset_prepared = dataset.map(
prepare_dataset,
remove_columns=["audio", "transcription"],
num_proc=os.cpu_count(),
load_from_cache_file=False,
)
dataset_prepared.save_to_disk(prepared_dataset_path)
del dataset
if CHECK_DATASET:
## CHECK DATASET
dataset_prepared = dataset_prepared.map(
check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
)
dataset_filtered = dataset_prepared.filter(
lambda example: not example["incorrect_dimension"],
load_from_cache_file=False,
)
for example in dataset_prepared.filter(
lambda example: example["incorrect_dimension"], load_from_cache_file=False
):
print(example["path"])
print(
f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
)
print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6484/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6482/comments | https://api.github.com/repos/huggingface/datasets/issues/6482/events | https://github.com/huggingface/datasets/pull/6482 | 2,032,675,918 | PR_kwDODunzps5hhl23 | 6,482 | Fix max lock length on unix | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I'm getting `AttributeError: module 'os' has no attribute 'statvfs'` on windows - reverting",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005294 / 0.011353 (-0.006059) | 0.003562 / 0.011008 (-0.007446) | 0.062030 / 0.038508 (0.023522) | 0.053335 / 0.023109 (0.030226) | 0.233303 / 0.275898 (-0.042595) | 0.252029 / 0.323480 (-0.071451) | 0.002835 / 0.007986 (-0.005151) | 0.002732 / 0.004328 (-0.001597) | 0.047973 / 0.004250 (0.043723) | 0.038380 / 0.037052 (0.001328) | 0.235028 / 0.258489 (-0.023461) | 0.265555 / 0.293841 (-0.028286) | 0.027136 / 0.128546 (-0.101410) | 0.010806 / 0.075646 (-0.064840) | 0.205040 / 0.419271 (-0.214231) | 0.035063 / 0.043533 (-0.008470) | 0.236351 / 0.255139 (-0.018788) | 0.254556 / 0.283200 (-0.028643) | 0.019528 / 0.141683 (-0.122155) | 1.099012 / 1.452155 (-0.353142) | 1.156250 / 1.492716 (-0.336466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093952 / 0.018006 (0.075946) | 0.304181 / 0.000490 (0.303692) | 0.000227 / 0.000200 (0.000027) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018568 / 0.037411 (-0.018844) | 0.060323 / 0.014526 (0.045798) | 0.073010 / 0.176557 (-0.103546) | 0.121723 / 0.737135 (-0.615412) | 0.075668 / 0.296338 (-0.220670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288429 / 0.215209 (0.073220) | 2.797834 / 2.077655 (0.720180) | 1.480957 / 1.504120 (-0.023163) | 1.360872 / 1.541195 (-0.180323) | 1.406828 / 
1.468490 (-0.061663) | 0.587596 / 4.584777 (-3.997181) | 2.533997 / 3.745712 (-1.211715) | 2.906697 / 5.269862 (-2.363164) | 1.801753 / 4.565676 (-2.763923) | 0.064360 / 0.424275 (-0.359915) | 0.005016 / 0.007607 (-0.002591) | 0.347334 / 0.226044 (0.121290) | 3.426344 / 2.268929 (1.157416) | 1.856014 / 55.444624 (-53.588610) | 1.581774 / 6.876477 (-5.294703) | 1.640036 / 2.142072 (-0.502037) | 0.656096 / 4.805227 (-4.149131) | 0.120212 / 6.500664 (-6.380452) | 0.044003 / 0.075469 (-0.031466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943933 / 1.841788 (-0.897855) | 11.846572 / 8.074308 (3.772263) | 10.330705 / 10.191392 (0.139313) | 0.129767 / 0.680424 (-0.550657) | 0.013508 / 0.534201 (-0.520693) | 0.289672 / 0.579283 (-0.289611) | 0.266427 / 0.434364 (-0.167937) | 0.342766 / 0.540337 (-0.197571) | 0.452068 / 1.386936 (-0.934868) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005308 / 0.011353 (-0.006045) | 0.003712 / 0.011008 (-0.007296) | 0.048848 / 0.038508 (0.010340) | 0.055156 / 0.023109 (0.032047) | 0.271942 / 0.275898 (-0.003956) | 0.293166 / 0.323480 (-0.030314) | 0.004056 / 0.007986 (-0.003930) | 0.002722 / 0.004328 (-0.001606) | 0.048418 / 0.004250 (0.044167) | 0.039320 / 0.037052 (0.002268) | 0.277184 / 0.258489 (0.018695) | 0.312398 / 0.293841 (0.018557) | 0.029392 / 0.128546 (-0.099155) | 0.011314 / 0.075646 (-0.064332) | 0.057883 / 0.419271 (-0.361389) | 0.032603 / 0.043533 (-0.010930) | 0.273025 / 0.255139 (0.017886) | 0.289265 / 0.283200 (0.006065) | 0.017553 / 0.141683 (-0.124129) | 1.127725 / 1.452155 (-0.324430) | 1.202293 / 1.492716 (-0.290423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097179 / 0.018006 (0.079173) | 0.309712 / 0.000490 (0.309222) | 0.000269 / 0.000200 (0.000069) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024742 / 0.037411 (-0.012670) | 0.070097 / 0.014526 (0.055571) | 0.082273 / 0.176557 (-0.094283) | 0.121696 / 0.737135 (-0.615439) | 0.082983 / 0.296338 (-0.213355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292688 / 0.215209 (0.077479) | 2.853436 / 2.077655 (0.775781) | 1.588999 / 1.504120 (0.084879) | 1.454547 / 1.541195 (-0.086648) | 1.476342 / 1.468490 (0.007852) | 0.559464 / 4.584777 (-4.025313) | 2.564597 / 3.745712 (-1.181115) | 2.900460 / 5.269862 (-2.369402) | 1.782156 / 4.565676 (-2.783520) | 0.061768 / 0.424275 (-0.362507) | 0.005042 / 0.007607 (-0.002565) | 0.345168 / 0.226044 (0.119124) | 3.412273 / 2.268929 (1.143344) | 1.953154 / 55.444624 (-53.491470) | 1.667347 / 6.876477 (-5.209130) | 1.685138 / 2.142072 (-0.456934) | 0.643270 / 4.805227 (-4.161958) | 0.115955 / 6.500664 (-6.384709) | 0.041090 / 0.075469 (-0.034379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976324 / 1.841788 (-0.865464) | 12.252294 / 8.074308 (4.177986) | 10.598062 / 10.191392 (0.406670) | 0.129779 / 0.680424 (-0.550644) | 0.015697 / 0.534201 (-0.518504) | 0.287241 / 0.579283 (-0.292042) | 0.287331 / 0.434364 (-0.147033) | 0.331710 / 0.540337 (-0.208628) | 0.574571 / 1.386936 (-0.812365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#702344140461b7a111139860c944d3dd0a2689e3 \"CML watermark\")\n"
] | 2023-12-08T13:39:30 | 2023-12-12T11:53:32 | 2023-12-12T11:47:27 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6482",
"html_url": "https://github.com/huggingface/datasets/pull/6482",
"diff_url": "https://github.com/huggingface/datasets/pull/6482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6482.patch",
"merged_at": "2023-12-12T11:47:27"
} | reported in https://github.com/huggingface/datasets/pull/6482 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6482/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6482/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6481/comments | https://api.github.com/repos/huggingface/datasets/issues/6481/events | https://github.com/huggingface/datasets/issues/6481 | 2,032,650,003 | I_kwDODunzps55J8cT | 6,481 | using torchrun, save_to_disk suddenly shows SIGTERM | {
"login": "Ariya12138",
"id": 85916625,
"node_id": "MDQ6VXNlcjg1OTE2NjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ariya12138",
"html_url": "https://github.com/Ariya12138",
"followers_url": "https://api.github.com/users/Ariya12138/followers",
"following_url": "https://api.github.com/users/Ariya12138/following{/other_user}",
"gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions",
"organizations_url": "https://api.github.com/users/Ariya12138/orgs",
"repos_url": "https://api.github.com/users/Ariya12138/repos",
"events_url": "https://api.github.com/users/Ariya12138/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ariya12138/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-08T13:22:03 | 2023-12-08T13:22:03 | null | NONE | null | null | ### Describe the bug
When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages:
Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard.
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
### Steps to reproduce the bug
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
Saving the dataset (14/70 shards): 20%|ββ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-12-08_20:09:04
rank : 0 (local_rank: 0)
exitcode : -7 (pid: 2224967)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 2224967
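For what it's worth, Signal 7 (SIGBUS) usually points at exhausted shared memory or a full disk rather than `datasets` itself; a hedged mitigation sketch is to write smaller shards from a single writer process:
```python
# hypothetical mitigation: cap shard size and keep a single writer process
ds_shard.save_to_disk(ds_shard_filepaths[rank], max_shard_size="500MB", num_proc=1)
```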
### Expected behavior
I hope it can save successfully without any issues, but it seems there is a problem.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6481/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6480/comments | https://api.github.com/repos/huggingface/datasets/issues/6480/events | https://github.com/huggingface/datasets/pull/6480 | 2,031,116,653 | PR_kwDODunzps5hcS7P | 6,480 | Add IterableDataset `__repr__` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005392 / 0.011353 (-0.005960) | 0.003120 / 0.011008 (-0.007888) | 0.062017 / 0.038508 (0.023509) | 0.048824 / 0.023109 (0.025715) | 0.232300 / 0.275898 (-0.043598) | 0.262045 / 0.323480 (-0.061435) | 0.002909 / 0.007986 (-0.005077) | 0.003916 / 0.004328 (-0.000413) | 0.049469 / 0.004250 (0.045218) | 0.038965 / 0.037052 (0.001913) | 0.247841 / 0.258489 (-0.010648) | 0.268259 / 0.293841 (-0.025582) | 0.027588 / 0.128546 (-0.100958) | 0.010334 / 0.075646 (-0.065312) | 0.205811 / 0.419271 (-0.213460) | 0.035456 / 0.043533 (-0.008077) | 0.242774 / 0.255139 (-0.012365) | 0.260377 / 0.283200 (-0.022823) | 0.017469 / 0.141683 (-0.124214) | 1.199665 / 1.452155 (-0.252489) | 1.259316 / 1.492716 (-0.233400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092357 / 0.018006 (0.074350) | 0.303745 / 0.000490 (0.303255) | 0.000212 / 0.000200 (0.000012) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018820 / 0.037411 (-0.018592) | 0.061548 / 0.014526 (0.047022) | 0.072527 / 0.176557 (-0.104030) | 0.119696 / 0.737135 (-0.617440) | 0.074153 / 0.296338 (-0.222185) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283952 / 0.215209 (0.068743) | 2.769844 / 2.077655 (0.692189) | 1.526100 / 1.504120 (0.021980) | 1.417584 / 1.541195 (-0.123611) | 1.440523 / 
1.468490 (-0.027967) | 0.556994 / 4.584777 (-4.027783) | 2.400392 / 3.745712 (-1.345320) | 2.727794 / 5.269862 (-2.542068) | 1.724671 / 4.565676 (-2.841006) | 0.062111 / 0.424275 (-0.362164) | 0.004925 / 0.007607 (-0.002682) | 0.342748 / 0.226044 (0.116704) | 3.376790 / 2.268929 (1.107862) | 1.856498 / 55.444624 (-53.588127) | 1.574143 / 6.876477 (-5.302334) | 1.591828 / 2.142072 (-0.550245) | 0.644416 / 4.805227 (-4.160811) | 0.116862 / 6.500664 (-6.383802) | 0.041484 / 0.075469 (-0.033985) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975704 / 1.841788 (-0.866084) | 11.196447 / 8.074308 (3.122139) | 10.567518 / 10.191392 (0.376126) | 0.126786 / 0.680424 (-0.553638) | 0.013768 / 0.534201 (-0.520433) | 0.284531 / 0.579283 (-0.294752) | 0.260855 / 0.434364 (-0.173509) | 0.328888 / 0.540337 (-0.211450) | 0.439911 / 1.386936 (-0.947025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005108 / 0.011353 (-0.006245) | 0.003006 / 0.011008 (-0.008003) | 0.048673 / 0.038508 (0.010165) | 0.051066 / 0.023109 (0.027957) | 0.279578 / 0.275898 (0.003680) | 0.298356 / 0.323480 (-0.025123) | 0.003965 / 0.007986 (-0.004020) | 0.002662 / 0.004328 (-0.001667) | 0.049037 / 0.004250 (0.044786) | 0.039385 / 0.037052 (0.002333) | 0.284545 / 0.258489 (0.026055) | 0.314240 / 0.293841 (0.020399) | 0.028493 / 0.128546 (-0.100053) | 0.010400 / 0.075646 (-0.065247) | 0.057375 / 0.419271 (-0.361896) | 0.032382 / 0.043533 (-0.011151) | 0.283163 / 0.255139 (0.028024) | 0.298967 / 0.283200 (0.015768) | 0.017564 / 0.141683 (-0.124119) | 1.172425 / 1.452155 (-0.279730) | 1.219975 / 1.492716 (-0.272742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090664 / 0.018006 (0.072658) | 0.298419 / 0.000490 (0.297929) | 0.000211 / 0.000200 (0.000011) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.068274 / 0.014526 (0.053748) | 0.080820 / 0.176557 (-0.095736) | 0.119809 / 0.737135 (-0.617326) | 0.081612 / 0.296338 (-0.214727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303346 / 0.215209 (0.088137) | 2.971648 / 2.077655 (0.893993) | 1.634828 / 1.504120 (0.130708) | 1.510851 / 1.541195 (-0.030344) | 1.515236 / 1.468490 (0.046745) | 0.558487 / 4.584777 (-4.026289) | 2.436263 / 3.745712 (-1.309449) | 2.718525 / 5.269862 (-2.551336) | 1.727421 / 4.565676 (-2.838255) | 0.061396 / 0.424275 (-0.362879) | 0.004951 / 0.007607 (-0.002656) | 0.352950 / 0.226044 (0.126906) | 3.473766 / 2.268929 (1.204838) | 1.971299 / 55.444624 (-53.473325) | 1.712173 / 6.876477 (-5.164304) | 1.711334 / 2.142072 (-0.430738) | 0.627291 / 4.805227 (-4.177936) | 0.113779 / 6.500664 (-6.386885) | 0.046561 / 0.075469 (-0.028908) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989507 / 1.841788 (-0.852280) | 11.777883 / 8.074308 (3.703575) | 10.525453 / 10.191392 (0.334061) | 0.129118 / 0.680424 (-0.551306) | 0.014989 / 0.534201 (-0.519212) | 0.282324 / 0.579283 (-0.296959) | 0.280688 / 0.434364 (-0.153676) | 0.322579 / 0.540337 (-0.217758) | 0.554327 / 1.386936 (-0.832609) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79e94fcdf3d4378ddcdf7e130bb1ae23d99c6fce \"CML watermark\")\n"
] | 2023-12-07T16:31:50 | 2023-12-08T13:33:06 | 2023-12-08T13:26:54 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6480",
"html_url": "https://github.com/huggingface/datasets/pull/6480",
"diff_url": "https://github.com/huggingface/datasets/pull/6480.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6480.patch",
"merged_at": "2023-12-08T13:26:54"
} | Example for glue sst2:
Dataset
```
DatasetDict({
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1821
})
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 67349
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 872
})
})
```
IterableDataset (new)
```
IterableDatasetDict({
test: IterableDataset({
features: ['sentence', 'label', 'idx'],
n_shards: 1
})
train: IterableDataset({
features: ['sentence', 'label', 'idx'],
n_shards: 1
})
validation: IterableDataset({
features: ['sentence', 'label', 'idx'],
n_shards: 1
})
})
```
IterableDataset (before)
```
{'test': <datasets.iterable_dataset.IterableDataset object at 0x130d421f0>, 'train': <datasets.iterable_dataset.IterableDataset object at 0x136f3aaf0>, 'validation': <datasets.iterable_dataset.IterableDataset object at 0x136f4b100>}
{'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6480/timeline | null | true |
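The reprs shown in the PR 6480 description above can be reproduced by loading GLUE/SST-2 in regular and streaming mode. A minimal sketch, assuming the standard `glue`/`sst2` configuration names on the Hub:

```python
from datasets import load_dataset

# Regular mode downloads the data and yields a DatasetDict (reports num_rows).
dsd = load_dataset("glue", "sst2")
print(dsd)

# Streaming mode yields an IterableDatasetDict; with this PR its repr shows
# features and n_shards instead of bare object addresses.
idsd = load_dataset("glue", "sst2", streaming=True)
print(idsd)

# First streamed example, matching the "before" output quoted above.
print(next(iter(idsd["train"])))  # {'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0}
```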
https://api.github.com/repos/huggingface/datasets/issues/6479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6479/comments | https://api.github.com/repos/huggingface/datasets/issues/6479/events | https://github.com/huggingface/datasets/pull/6479 | 2,029,040,121 | PR_kwDODunzps5hVLom | 6,479 | More robust preupload retry mechanism | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6479). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005683) | 0.003684 / 0.011008 (-0.007324) | 0.063477 / 0.038508 (0.024969) | 0.068760 / 0.023109 (0.045651) | 0.252741 / 0.275898 (-0.023157) | 0.286499 / 0.323480 (-0.036981) | 0.003311 / 0.007986 (-0.004674) | 0.003487 / 0.004328 (-0.000842) | 0.049636 / 0.004250 (0.045385) | 0.040983 / 0.037052 (0.003931) | 0.262230 / 0.258489 (0.003740) | 0.292131 / 0.293841 (-0.001710) | 0.028231 / 0.128546 (-0.100315) | 0.010912 / 0.075646 (-0.064734) | 0.211248 / 0.419271 (-0.208023) | 0.036679 / 0.043533 (-0.006854) | 0.258139 / 0.255139 (0.003000) | 0.277568 / 0.283200 (-0.005631) | 0.019576 / 0.141683 (-0.122107) | 1.102588 / 1.452155 (-0.349567) | 1.178587 / 1.492716 (-0.314130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098968 / 0.018006 (0.080962) | 0.298777 / 0.000490 (0.298287) | 0.000220 / 0.000200 (0.000020) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020408 / 0.037411 (-0.017003) | 0.062832 / 0.014526 (0.048306) | 0.076047 / 0.176557 (-0.100509) | 0.125209 / 0.737135 (-0.611926) | 0.079098 / 0.296338 (-0.217240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285603 / 0.215209 (0.070394) | 2.811530 / 2.077655 (0.733875) | 1.481012 / 1.504120 (-0.023108) | 1.362740 / 1.541195 (-0.178455) | 1.448999 / 
1.468490 (-0.019491) | 0.557740 / 4.584777 (-4.027037) | 2.391377 / 3.745712 (-1.354335) | 2.973181 / 5.269862 (-2.296681) | 1.837147 / 4.565676 (-2.728530) | 0.064445 / 0.424275 (-0.359831) | 0.004992 / 0.007607 (-0.002615) | 0.339207 / 0.226044 (0.113162) | 3.378508 / 2.268929 (1.109580) | 1.843969 / 55.444624 (-53.600655) | 1.597794 / 6.876477 (-5.278682) | 1.657665 / 2.142072 (-0.484407) | 0.654267 / 4.805227 (-4.150961) | 0.120408 / 6.500664 (-6.380256) | 0.045298 / 0.075469 (-0.030171) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949030 / 1.841788 (-0.892758) | 12.922161 / 8.074308 (4.847852) | 11.115660 / 10.191392 (0.924268) | 0.130556 / 0.680424 (-0.549868) | 0.016278 / 0.534201 (-0.517923) | 0.288137 / 0.579283 (-0.291146) | 0.265978 / 0.434364 (-0.168386) | 0.331491 / 0.540337 (-0.208847) | 0.437782 / 1.386936 (-0.949154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005342 / 0.011353 (-0.006010) | 0.003636 / 0.011008 (-0.007373) | 0.049527 / 0.038508 (0.011019) | 0.054856 / 0.023109 (0.031746) | 0.271922 / 0.275898 (-0.003976) | 0.295654 / 0.323480 (-0.027826) | 0.004023 / 0.007986 (-0.003963) | 0.002814 / 0.004328 (-0.001515) | 0.048963 / 0.004250 (0.044712) | 0.039936 / 0.037052 (0.002884) | 0.274336 / 0.258489 (0.015847) | 0.310100 / 0.293841 (0.016259) | 0.030006 / 0.128546 (-0.098540) | 0.010750 / 0.075646 (-0.064896) | 0.057989 / 0.419271 (-0.361283) | 0.033692 / 0.043533 (-0.009841) | 0.274084 / 0.255139 (0.018945) | 0.289428 / 0.283200 (0.006229) | 0.018739 / 0.141683 (-0.122944) | 1.126224 / 1.452155 (-0.325931) | 1.171595 / 1.492716 (-0.321121) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093983 / 0.018006 (0.075977) | 0.298516 / 0.000490 (0.298026) | 0.000221 / 0.000200 (0.000022) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022498 / 0.037411 (-0.014914) | 0.071909 / 0.014526 (0.057383) | 0.083940 / 0.176557 (-0.092617) | 0.121059 / 0.737135 (-0.616076) | 0.084141 / 0.296338 (-0.212198) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301792 / 0.215209 (0.086583) | 2.971971 / 2.077655 (0.894317) | 1.618718 / 1.504120 (0.114598) | 1.495816 / 1.541195 (-0.045379) | 1.546709 / 1.468490 (0.078219) | 0.571448 / 4.584777 (-4.013329) | 2.459182 / 3.745712 (-1.286531) | 2.937584 / 5.269862 (-2.332278) | 1.804670 / 4.565676 (-2.761007) | 0.062264 / 0.424275 (-0.362011) | 0.004915 / 0.007607 (-0.002692) | 0.355054 / 0.226044 (0.129009) | 3.490468 / 2.268929 (1.221539) | 1.978948 / 55.444624 (-53.465677) | 1.701020 / 6.876477 (-5.175457) | 1.744684 / 2.142072 (-0.397388) | 0.635880 / 4.805227 (-4.169347) | 0.115933 / 6.500664 (-6.384732) | 0.042646 / 0.075469 (-0.032823) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999486 / 1.841788 (-0.842302) | 13.373854 / 8.074308 (5.299546) | 10.959784 / 10.191392 (0.768392) | 0.131032 / 0.680424 (-0.549392) | 0.015059 / 0.534201 (-0.519142) | 0.289892 / 0.579283 (-0.289391) | 0.279383 / 0.434364 (-0.154981) | 0.337670 / 0.540337 (-0.202668) | 0.597102 / 1.386936 (-0.789834) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dd9044cdaabc1f9abce02c1b71bdb48fd3525d4e \"CML watermark\")\n"
] | 2023-12-06T17:19:38 | 2023-12-06T19:47:29 | 2023-12-06T19:41:06 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6479",
"html_url": "https://github.com/huggingface/datasets/pull/6479",
"diff_url": "https://github.com/huggingface/datasets/pull/6479.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6479.patch",
"merged_at": "2023-12-06T19:41:06"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6479/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6478/comments | https://api.github.com/repos/huggingface/datasets/issues/6478/events | https://github.com/huggingface/datasets/issues/6478 | 2,028,071,596 | I_kwDODunzps544eqs | 6,478 | How to load data from lakefs | {
"login": "d710055071",
"id": 12895488,
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d710055071",
"html_url": "https://github.com/d710055071",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"repos_url": "https://api.github.com/users/d710055071/repos",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"You can create a `pandas` DataFrame following [this](https://lakefs.io/data-version-control/dvc-using-python/) tutorial, and then convert this DataFrame to a `Dataset` with `datasets.Dataset.from_pandas`. For larger datasets (to memory map them), you can use `Dataset.from_generator` with a generator function that reads lakeFS files with `s3fs`.",
"@mariosasko hello,\r\nThis can achieve and https://huggingface.co/datasets Does the same effect apply to the dataset? For example, downloading while using"
] | 2023-12-06T09:04:11 | 2023-12-07T02:19:44 | null | CONTRIBUTOR | null | null | My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if you could provide code examples or some references.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6478/timeline | null | false |
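A minimal sketch of the lakeFS loading approach suggested in the comments of issue 6478 above, assuming the data is line-delimited JSON served through lakeFS's S3-compatible gateway. The endpoint, credentials, repository, and object path below are placeholders, not values from the issue:

```python
import json

import s3fs
from datasets import Dataset

# lakeFS addresses objects as <repository>/<branch>/<path> via its S3 gateway.
fs = s3fs.S3FileSystem(
    key="ACCESS_KEY_ID",         # placeholder credential
    secret="SECRET_ACCESS_KEY",  # placeholder credential
    client_kwargs={"endpoint_url": "https://lakefs.example.com"},  # placeholder endpoint
)

def rows():
    # Stream the file line by line so the whole dataset never sits in RAM.
    with fs.open("my-repo/main/data/train.jsonl", "r") as f:  # placeholder path
        for line in f:
            yield json.loads(line)

# from_generator writes the rows to an Arrow cache file, so the resulting
# dataset is memory-mapped rather than loaded fully into memory.
ds = Dataset.from_generator(rows)
```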
https://api.github.com/repos/huggingface/datasets/issues/6477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6477/comments | https://api.github.com/repos/huggingface/datasets/issues/6477/events | https://github.com/huggingface/datasets/pull/6477 | 2,028,022,374 | PR_kwDODunzps5hRq_N | 6,477 | Fix PermissionError on Windows CI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6477). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005383 / 0.011353 (-0.005969) | 0.003644 / 0.011008 (-0.007364) | 0.063375 / 0.038508 (0.024866) | 0.055567 / 0.023109 (0.032457) | 0.261376 / 0.275898 (-0.014522) | 0.283731 / 0.323480 (-0.039749) | 0.004022 / 0.007986 (-0.003964) | 0.002780 / 0.004328 (-0.001549) | 0.049407 / 0.004250 (0.045156) | 0.038208 / 0.037052 (0.001156) | 0.256275 / 0.258489 (-0.002214) | 0.293203 / 0.293841 (-0.000638) | 0.028411 / 0.128546 (-0.100135) | 0.010753 / 0.075646 (-0.064894) | 0.210420 / 0.419271 (-0.208851) | 0.036062 / 0.043533 (-0.007471) | 0.260455 / 0.255139 (0.005317) | 0.294991 / 0.283200 (0.011791) | 0.019020 / 0.141683 (-0.122662) | 1.118334 / 1.452155 (-0.333821) | 1.227391 / 1.492716 (-0.265325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094700 / 0.018006 (0.076694) | 0.302378 / 0.000490 (0.301888) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018745 / 0.037411 (-0.018667) | 0.061103 / 0.014526 (0.046578) | 0.075369 / 0.176557 (-0.101188) | 0.121573 / 0.737135 (-0.615563) | 0.076898 / 0.296338 (-0.219440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.774298 / 2.077655 (0.696644) | 1.483557 / 1.504120 (-0.020563) | 1.365091 / 1.541195 (-0.176104) | 1.390170 / 
1.468490 (-0.078320) | 0.561179 / 4.584777 (-4.023598) | 2.401654 / 3.745712 (-1.344058) | 2.782628 / 5.269862 (-2.487233) | 1.731497 / 4.565676 (-2.834179) | 0.061798 / 0.424275 (-0.362477) | 0.004998 / 0.007607 (-0.002609) | 0.336920 / 0.226044 (0.110875) | 3.371891 / 2.268929 (1.102963) | 1.832173 / 55.444624 (-53.612452) | 1.573515 / 6.876477 (-5.302962) | 1.595609 / 2.142072 (-0.546463) | 0.647652 / 4.805227 (-4.157575) | 0.118501 / 6.500664 (-6.382164) | 0.042521 / 0.075469 (-0.032948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939310 / 1.841788 (-0.902478) | 11.459855 / 8.074308 (3.385547) | 10.677954 / 10.191392 (0.486562) | 0.141029 / 0.680424 (-0.539395) | 0.014321 / 0.534201 (-0.519880) | 0.306679 / 0.579283 (-0.272604) | 0.262303 / 0.434364 (-0.172061) | 0.327422 / 0.540337 (-0.212915) | 0.436159 / 1.386936 (-0.950777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003646 / 0.011008 (-0.007362) | 0.049272 / 0.038508 (0.010764) | 0.075367 / 0.023109 (0.052257) | 0.275959 / 0.275898 (0.000061) | 0.296317 / 0.323480 (-0.027163) | 0.004129 / 0.007986 (-0.003857) | 0.002731 / 0.004328 (-0.001597) | 0.048475 / 0.004250 (0.044225) | 0.041571 / 0.037052 (0.004518) | 0.277993 / 0.258489 (0.019504) | 0.298709 / 0.293841 (0.004868) | 0.033117 / 0.128546 (-0.095429) | 0.010914 / 0.075646 (-0.064732) | 0.057599 / 0.419271 (-0.361673) | 0.033354 / 0.043533 (-0.010179) | 0.275669 / 0.255139 (0.020530) | 0.288451 / 0.283200 (0.005251) | 0.019953 / 0.141683 (-0.121729) | 1.148608 / 1.452155 (-0.303547) | 1.184818 / 1.492716 (-0.307898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099566 / 0.018006 (0.081560) | 0.344935 / 0.000490 (0.344445) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021925 / 0.037411 (-0.015486) | 0.068623 / 0.014526 (0.054097) | 0.081533 / 0.176557 (-0.095024) | 0.120996 / 0.737135 (-0.616139) | 0.082495 / 0.296338 (-0.213844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294990 / 0.215209 (0.079781) | 2.892344 / 2.077655 (0.814690) | 1.611090 / 1.504120 (0.106970) | 1.496072 / 1.541195 (-0.045123) | 1.486069 / 1.468490 (0.017579) | 0.569769 / 4.584777 (-4.015008) | 2.477623 / 3.745712 (-1.268089) | 2.819576 / 5.269862 (-2.450286) | 1.745717 / 4.565676 (-2.819959) | 0.063763 / 0.424275 (-0.360512) | 0.004970 / 0.007607 (-0.002637) | 0.344879 / 0.226044 (0.118834) | 3.452795 / 2.268929 (1.183867) | 1.964468 / 55.444624 (-53.480156) | 1.674526 / 6.876477 (-5.201951) | 1.679716 / 2.142072 (-0.462356) | 0.650005 / 4.805227 (-4.155222) | 0.117019 / 6.500664 (-6.383646) | 0.048297 / 0.075469 (-0.027172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965422 / 1.841788 (-0.876366) | 11.989414 / 8.074308 (3.915106) | 10.938462 / 10.191392 (0.747070) | 0.140089 / 0.680424 (-0.540334) | 0.015533 / 0.534201 (-0.518668) | 0.292188 / 0.579283 (-0.287095) | 0.277903 / 0.434364 (-0.156461) | 0.326164 / 0.540337 (-0.214173) | 0.565674 / 1.386936 (-0.821262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d78f07091bc42c41bea068bf1b6116e2bde46a6f \"CML watermark\")\n"
] | 2023-12-06T08:34:53 | 2023-12-06T09:24:11 | 2023-12-06T09:17:52 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6477",
"html_url": "https://github.com/huggingface/datasets/pull/6477",
"diff_url": "https://github.com/huggingface/datasets/pull/6477.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6477.patch",
"merged_at": "2023-12-06T09:17:52"
} | Fix #6476. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6477/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6476/comments | https://api.github.com/repos/huggingface/datasets/issues/6476/events | https://github.com/huggingface/datasets/issues/6476 | 2,028,018,596 | I_kwDODunzps544Ruk | 6,476 | CI on windows is broken: PermissionError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 2023-12-06T08:32:53 | 2023-12-06T09:17:53 | 2023-12-06T09:17:53 | MEMBER | null | null | See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394
```
FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6476/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6475/comments | https://api.github.com/repos/huggingface/datasets/issues/6475/events | https://github.com/huggingface/datasets/issues/6475 | 2,027,373,734 | I_kwDODunzps5410Sm | 6,475 | laion2B-en failed to load on Windows with PrefetchVirtualMemory failed | {
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"~~You will see this error if the cache dir filepath contains relative `..` paths. Use `os.path.realpath(_CACHE_DIR)` before passing it to the `load_dataset` function.~~",
"This is a real issue and not related to paths.",
"Based on the StackOverflow answer, this causes the error to go away:\r\n```diff\r\ndiff --git a/table.py b/table.py\r\n--- a/table.py\t\r\n+++ b/table.py\t(date 1701824849806)\r\n@@ -47,7 +47,7 @@\r\n \r\n \r\n def _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:\r\n- memory_mapped_stream = pa.memory_map(filename)\r\n+ memory_mapped_stream = pa.memory_map(filename, \"r+\")\r\n return pa.ipc.open_stream(memory_mapped_stream)\r\n```\r\nBut now loading the dataset goes very, very slowly, which is unexpected.",
"I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...",
"Hi! \r\n\r\nInstead of generating one (potentially large) Arrow file, we shard the generated data into 500 MB shards because memory-mapping large Arrow files can be problematic on some systems. Maybe deleting the dataset's cache and increasing the shard size (controlled with the `datasets.config.MAX_SHARD_SIZE` variable; e.g. to \"4GB\") can fix the issue for you.\r\n\r\n> I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...\r\n\r\nOur `.arrow` files are in the [Arrow streaming format](https://arrow.apache.org/docs/python/ipc.html#using-streams). To load them as a `polars` DataFrame, do the following:\r\n```python\r\ndf = pl.from_arrow(Dataset.from_from(path_to_arrow_file).data.table)\r\n```\r\n\r\nWe plan to switch to the IPC version eventually.\r\n",
"Hmm, I have a feeling this works fine on Linux, and is a real bug for however `datasets` is doing the sharding on Windows. I will follow up, but I think this is a real bug."
] | 2023-12-06T00:07:34 | 2023-12-06T23:26:23 | null | NONE | null | null | ### Describe the bug
I have downloaded laion2B-en, and I'm receiving the following error trying to load it:
```
Resolving data files: 100%|ββββββββββ| 128/128 [00:00<00:00, 1173.79it/s]
Traceback (most recent call last):
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <module>
count = compute_frequencies()
^^^^^^^^^^^^^^^^^^^^^
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 17, in compute_frequencies
laion2b_dataset = load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\load.py", line 2165, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1187, in as_dataset
datasets = map_nested(
^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\utils\py_utils.py", line 456, in map_nested
return function(data_struct)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1217, in _build_single_dataset
ds = self._as_dataset(
^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1291, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 244, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 265, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 200, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 336, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 357, in read_table
return table_cls.from_file(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 1059, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 66, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow\ipc.pxi", line 757, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
OSError: [WinError 8] PrefetchVirtualMemory failed. Detail: [Windows error 8] Not enough memory resources are available to process this command.
```
This error is probably a red herring: https://stackoverflow.com/questions/50263929/numpy-memmap-returns-not-enough-memory-while-there-are-plenty-available
In other words, the issue is related to asking for a memory mapping of length N > M, where M is the length of the file, on Windows. This gracefully succeeds on Linux.
I have 1024 arrow files in my cache instead of 128 like in the repository for it. Probably related. I don't know why `datasets` reorganized/rewrote the dataset in my cache to be 1024 slices instead of the original 128.
### Steps to reproduce the bug
```
# as a huggingface developer, you may already have laion2B-en somewhere
_CACHE_DIR = "."
from datasets import load_dataset
load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False)
```
### Expected behavior
This should correctly load as a memory mapped Arrow dataset.
### Environment info
- `datasets` version: 2.15.0
- Platform: Windows-10-10.0.20348-SP0 (this is windows 2022)
- Python version: 3.11.4
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.10.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6475/timeline | reopened | false |
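A hedged sketch of the workarounds discussed in issue 6475 above: sharding the cache into larger pieces via `datasets.config.MAX_SHARD_SIZE`, reading one cached shard directly as an Arrow stream, and converting a shard to polars with the corrected `Dataset.from_file` call. The shard path is a placeholder, and the shard-size setting only takes effect when the cache is regenerated:

```python
import datasets
import polars as pl
import pyarrow as pa
from datasets import Dataset, load_dataset

# Fewer, larger shards mean fewer memory maps (the default shard size is 500 MB).
# This applies only while the cache is (re)built, so delete the old cache first.
datasets.config.MAX_SHARD_SIZE = "4GB"
ds = load_dataset("laion/laion2B-en", split="train", keep_in_memory=False)

# The cached .arrow shards use the Arrow *streaming* format, not the IPC file
# format, which is why generic readers reject them; read one directly like this:
source = pa.memory_map("path/to/shard.arrow")  # placeholder path
table = pa.ipc.open_stream(source).read_all()

# Or convert a single shard to a polars DataFrame via the datasets API:
df = pl.from_arrow(Dataset.from_file("path/to/shard.arrow").data.table)
```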
https://api.github.com/repos/huggingface/datasets/issues/6474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6474/comments | https://api.github.com/repos/huggingface/datasets/issues/6474/events | https://github.com/huggingface/datasets/pull/6474 | 2,027,006,715 | PR_kwDODunzps5hONZc | 6,474 | Deprecate Beam API and download from HF GCS bucket | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2023-12-05T19:51:33 | 2024-02-02T16:03:32 | null | CONTRIBUTOR | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6474",
"html_url": "https://github.com/huggingface/datasets/pull/6474",
"diff_url": "https://github.com/huggingface/datasets/pull/6474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6474.patch",
"merged_at": null
} | Deprecate the Beam API and downloading from the HF GCS bucket.
TODO:
- [ ] Deprecate the Beam-based [`wikipedia`](https://huggingface.co/datasets/wikipedia) in favor of [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) ([Hub PR](https://huggingface.co/datasets/wikipedia/discussions/19))
- [ ] Make [`natural_questions`](https://huggingface.co/datasets/natural_questions) a no-code dataset ([Hub PR](https://huggingface.co/datasets/natural_questions/discussions/7))
- [ ] Make [`wiki40b`](https://huggingface.co/datasets/wiki40b) a no-code dataset ([Hub PR](https://huggingface.co/datasets/wiki40b/discussions/5))
- [ ] Make [`wiki_dpr`](https://huggingface.co/datasets/wiki_dpr) an Arrow-based dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6474/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6474/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6473/comments | https://api.github.com/repos/huggingface/datasets/issues/6473/events | https://github.com/huggingface/datasets/pull/6473 | 2,026,495,084 | PR_kwDODunzps5hMbvz | 6,473 | Fix CI quality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6473). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005270 / 0.011353 (-0.006083) | 0.003471 / 0.011008 (-0.007537) | 0.061942 / 0.038508 (0.023434) | 0.052671 / 0.023109 (0.029562) | 0.250541 / 0.275898 (-0.025357) | 0.270677 / 0.323480 (-0.052803) | 0.002933 / 0.007986 (-0.005053) | 0.003264 / 0.004328 (-0.001064) | 0.048055 / 0.004250 (0.043804) | 0.037459 / 0.037052 (0.000407) | 0.254926 / 0.258489 (-0.003563) | 0.292547 / 0.293841 (-0.001294) | 0.027959 / 0.128546 (-0.100587) | 0.010762 / 0.075646 (-0.064884) | 0.204961 / 0.419271 (-0.214310) | 0.035488 / 0.043533 (-0.008045) | 0.254102 / 0.255139 (-0.001037) | 0.273654 / 0.283200 (-0.009546) | 0.018126 / 0.141683 (-0.123556) | 1.082330 / 1.452155 (-0.369825) | 1.147179 / 1.492716 (-0.345538) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093223 / 0.018006 (0.075217) | 0.301912 / 0.000490 (0.301422) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018407 / 0.037411 (-0.019004) | 0.060412 / 0.014526 (0.045886) | 0.074063 / 0.176557 (-0.102494) | 0.118743 / 0.737135 (-0.618392) | 0.076484 / 0.296338 (-0.219854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289929 / 0.215209 (0.074720) | 2.825096 / 2.077655 (0.747442) | 1.511444 / 1.504120 (0.007324) | 1.394812 / 1.541195 (-0.146383) | 1.419751 / 
1.468490 (-0.048739) | 0.569995 / 4.584777 (-4.014782) | 2.402586 / 3.745712 (-1.343126) | 2.826223 / 5.269862 (-2.443639) | 1.751554 / 4.565676 (-2.814123) | 0.064266 / 0.424275 (-0.360009) | 0.005047 / 0.007607 (-0.002561) | 0.341513 / 0.226044 (0.115469) | 3.372106 / 2.268929 (1.103177) | 1.872693 / 55.444624 (-53.571931) | 1.588200 / 6.876477 (-5.288276) | 1.630800 / 2.142072 (-0.511272) | 0.654266 / 4.805227 (-4.150961) | 0.124292 / 6.500664 (-6.376372) | 0.042876 / 0.075469 (-0.032593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948406 / 1.841788 (-0.893382) | 11.652947 / 8.074308 (3.578639) | 10.218195 / 10.191392 (0.026803) | 0.128447 / 0.680424 (-0.551976) | 0.014092 / 0.534201 (-0.520109) | 0.287631 / 0.579283 (-0.291652) | 0.264843 / 0.434364 (-0.169521) | 0.329997 / 0.540337 (-0.210340) | 0.439597 / 1.386936 (-0.947339) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005418 / 0.011353 (-0.005935) | 0.003589 / 0.011008 (-0.007419) | 0.050074 / 0.038508 (0.011566) | 0.052566 / 0.023109 (0.029456) | 0.293447 / 0.275898 (0.017549) | 0.320518 / 0.323480 (-0.002962) | 0.004094 / 0.007986 (-0.003892) | 0.002690 / 0.004328 (-0.001639) | 0.048200 / 0.004250 (0.043949) | 0.040692 / 0.037052 (0.003640) | 0.297086 / 0.258489 (0.038597) | 0.323827 / 0.293841 (0.029986) | 0.029511 / 0.128546 (-0.099035) | 0.011079 / 0.075646 (-0.064568) | 0.058562 / 0.419271 (-0.360709) | 0.032897 / 0.043533 (-0.010636) | 0.297244 / 0.255139 (0.042105) | 0.316812 / 0.283200 (0.033612) | 0.018468 / 0.141683 (-0.123215) | 1.140948 / 1.452155 (-0.311207) | 1.195453 / 1.492716 (-0.297263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092677 / 0.018006 (0.074671) | 0.300775 / 0.000490 (0.300285) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021617 / 0.037411 (-0.015794) | 0.077135 / 0.014526 (0.062610) | 0.079848 / 0.176557 (-0.096709) | 0.118475 / 0.737135 (-0.618661) | 0.081174 / 0.296338 (-0.215164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294424 / 0.215209 (0.079215) | 2.863989 / 2.077655 (0.786334) | 1.590604 / 1.504120 (0.086484) | 1.474345 / 1.541195 (-0.066849) | 1.482120 / 1.468490 (0.013630) | 0.567829 / 4.584777 (-4.016948) | 2.493782 / 3.745712 (-1.251930) | 2.823460 / 5.269862 (-2.446402) | 1.732677 / 4.565676 (-2.833000) | 0.065518 / 0.424275 (-0.358757) | 0.004923 / 0.007607 (-0.002684) | 0.349313 / 0.226044 (0.123268) | 3.428618 / 2.268929 (1.159689) | 1.970641 / 55.444624 (-53.473983) | 1.655884 / 6.876477 (-5.220593) | 1.657151 / 2.142072 (-0.484921) | 0.661208 / 4.805227 (-4.144019) | 0.119129 / 6.500664 (-6.381535) | 0.040770 / 0.075469 (-0.034699) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964865 / 1.841788 (-0.876923) | 12.050218 / 8.074308 (3.975910) | 10.458749 / 10.191392 (0.267357) | 0.141856 / 0.680424 (-0.538568) | 0.015091 / 0.534201 (-0.519109) | 0.288897 / 0.579283 (-0.290387) | 0.275343 / 0.434364 (-0.159021) | 0.328363 / 0.540337 (-0.211975) | 0.579243 / 1.386936 (-0.807693) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7721021e284859ea0952444bae6300a0d00794f \"CML watermark\")\n"
] | 2023-12-05T15:36:23 | 2023-12-05T18:14:50 | 2023-12-05T18:08:41 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6473",
"html_url": "https://github.com/huggingface/datasets/pull/6473",
"diff_url": "https://github.com/huggingface/datasets/pull/6473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6473.patch",
"merged_at": "2023-12-05T18:08:41"
} | Fix #6472. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6473/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6472/comments | https://api.github.com/repos/huggingface/datasets/issues/6472/events | https://github.com/huggingface/datasets/issues/6472 | 2,026,493,439 | I_kwDODunzps54ydX_ | 6,472 | CI quality is broken | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 2023-12-05T15:35:34 | 2023-12-06T08:17:34 | 2023-12-05T18:08:43 | MEMBER | null | null | See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359
```
Would reformat: src/datasets/features/image.py
1 file would be reformatted, 253 files left unchanged
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6472/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6471/comments | https://api.github.com/repos/huggingface/datasets/issues/6471/events | https://github.com/huggingface/datasets/pull/6471 | 2,026,100,761 | PR_kwDODunzps5hLEni | 6,471 | Remove delete doc CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6471). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005573 / 0.011353 (-0.005780) | 0.003449 / 0.011008 (-0.007559) | 0.063323 / 0.038508 (0.024815) | 0.049369 / 0.023109 (0.026260) | 0.254280 / 0.275898 (-0.021618) | 0.267721 / 0.323480 (-0.055759) | 0.002894 / 0.007986 (-0.005092) | 0.002646 / 0.004328 (-0.001683) | 0.049284 / 0.004250 (0.045033) | 0.037947 / 0.037052 (0.000895) | 0.251654 / 0.258489 (-0.006836) | 0.279729 / 0.293841 (-0.014112) | 0.028022 / 0.128546 (-0.100525) | 0.010653 / 0.075646 (-0.064993) | 0.208567 / 0.419271 (-0.210704) | 0.035863 / 0.043533 (-0.007670) | 0.248522 / 0.255139 (-0.006617) | 0.270274 / 0.283200 (-0.012925) | 0.019683 / 0.141683 (-0.122000) | 1.136342 / 1.452155 (-0.315812) | 1.206757 / 1.492716 (-0.285960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094682 / 0.018006 (0.076676) | 0.304092 / 0.000490 (0.303602) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018606 / 0.037411 (-0.018805) | 0.060568 / 0.014526 (0.046042) | 0.074067 / 0.176557 (-0.102490) | 0.118979 / 0.737135 (-0.618156) | 0.075676 / 0.296338 (-0.220663) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290452 / 0.215209 (0.075243) | 2.848868 / 2.077655 (0.771213) | 1.534932 / 1.504120 (0.030812) | 1.386717 / 1.541195 (-0.154478) | 1.416645 / 
1.468490 (-0.051845) | 0.569020 / 4.584777 (-4.015757) | 2.421168 / 3.745712 (-1.324545) | 2.781358 / 5.269862 (-2.488503) | 1.758495 / 4.565676 (-2.807182) | 0.063851 / 0.424275 (-0.360424) | 0.004968 / 0.007607 (-0.002639) | 0.339198 / 0.226044 (0.113154) | 3.356392 / 2.268929 (1.087464) | 1.858145 / 55.444624 (-53.586479) | 1.589000 / 6.876477 (-5.287477) | 1.569175 / 2.142072 (-0.572897) | 0.650571 / 4.805227 (-4.154657) | 0.120288 / 6.500664 (-6.380376) | 0.042489 / 0.075469 (-0.032980) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939963 / 1.841788 (-0.901824) | 11.493612 / 8.074308 (3.419304) | 10.353780 / 10.191392 (0.162388) | 0.141945 / 0.680424 (-0.538479) | 0.014397 / 0.534201 (-0.519804) | 0.286971 / 0.579283 (-0.292312) | 0.266787 / 0.434364 (-0.167577) | 0.330385 / 0.540337 (-0.209952) | 0.438542 / 1.386936 (-0.948394) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005360 / 0.011353 (-0.005993) | 0.003720 / 0.011008 (-0.007288) | 0.048790 / 0.038508 (0.010282) | 0.050256 / 0.023109 (0.027147) | 0.275445 / 0.275898 (-0.000453) | 0.297725 / 0.323480 (-0.025755) | 0.004077 / 0.007986 (-0.003909) | 0.002759 / 0.004328 (-0.001569) | 0.047653 / 0.004250 (0.043403) | 0.040205 / 0.037052 (0.003153) | 0.281028 / 0.258489 (0.022539) | 0.304682 / 0.293841 (0.010841) | 0.030158 / 0.128546 (-0.098388) | 0.010957 / 0.075646 (-0.064689) | 0.058193 / 0.419271 (-0.361079) | 0.033277 / 0.043533 (-0.010256) | 0.279501 / 0.255139 (0.024362) | 0.295381 / 0.283200 (0.012181) | 0.017889 / 0.141683 (-0.123794) | 1.121354 / 1.452155 (-0.330801) | 1.225702 / 1.492716 (-0.267014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093385 / 0.018006 (0.075378) | 0.304642 / 0.000490 (0.304152) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021456 / 0.037411 (-0.015955) | 0.068536 / 0.014526 (0.054010) | 0.080867 / 0.176557 (-0.095689) | 0.119093 / 0.737135 (-0.618042) | 0.081875 / 0.296338 (-0.214464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304434 / 0.215209 (0.089225) | 2.990303 / 2.077655 (0.912649) | 1.616959 / 1.504120 (0.112839) | 1.493256 / 1.541195 (-0.047939) | 1.542857 / 1.468490 (0.074367) | 0.575517 / 4.584777 (-4.009260) | 2.455165 / 3.745712 (-1.290547) | 2.810089 / 5.269862 (-2.459773) | 1.756502 / 4.565676 (-2.809175) | 0.064801 / 0.424275 (-0.359475) | 0.004969 / 0.007607 (-0.002638) | 0.360227 / 0.226044 (0.134183) | 3.575029 / 2.268929 (1.306100) | 1.989955 / 55.444624 (-53.454669) | 1.705306 / 6.876477 (-5.171171) | 1.688523 / 2.142072 (-0.453550) | 0.663266 / 4.805227 (-4.141962) | 0.121852 / 6.500664 (-6.378812) | 0.041853 / 0.075469 (-0.033616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983535 / 1.841788 (-0.858252) | 11.827656 / 8.074308 (3.753348) | 10.663265 / 10.191392 (0.471873) | 0.145942 / 0.680424 (-0.534482) | 0.016004 / 0.534201 (-0.518197) | 0.288907 / 0.579283 (-0.290376) | 0.279100 / 0.434364 (-0.155264) | 0.328061 / 0.540337 (-0.212276) | 0.570253 / 1.386936 (-0.816683) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b52cbc18919869460557e15028e7f489eae8afc7 \"CML watermark\")\n"
] | 2023-12-05T12:37:50 | 2023-12-05T12:44:59 | 2023-12-05T12:38:50 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6471",
"html_url": "https://github.com/huggingface/datasets/pull/6471",
"diff_url": "https://github.com/huggingface/datasets/pull/6471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6471.patch",
"merged_at": "2023-12-05T12:38:50"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6471/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6470/comments | https://api.github.com/repos/huggingface/datasets/issues/6470/events | https://github.com/huggingface/datasets/issues/6470 | 2,024,724,319 | I_kwDODunzps54rtdf | 6,470 | If an image in a dataset is corrupted, we get unescapable error | {
"login": "chigozienri",
"id": 14337872,
"node_id": "MDQ6VXNlcjE0MzM3ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/14337872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigozienri",
"html_url": "https://github.com/chigozienri",
"followers_url": "https://api.github.com/users/chigozienri/followers",
"following_url": "https://api.github.com/users/chigozienri/following{/other_user}",
"gists_url": "https://api.github.com/users/chigozienri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chigozienri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chigozienri/subscriptions",
"organizations_url": "https://api.github.com/users/chigozienri/orgs",
"repos_url": "https://api.github.com/users/chigozienri/repos",
"events_url": "https://api.github.com/users/chigozienri/events{/privacy}",
"received_events_url": "https://api.github.com/users/chigozienri/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-04T20:58:49 | 2023-12-04T20:58:49 | null | NONE | null | null | ### Describe the bug
Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1
### Steps to reproduce the bug
```
from datasets import load_dataset, VerificationMode
dataset = load_dataset(
'sasha/birdsnap',
split="train",
verification_mode=VerificationMode.ALL_CHECKS,
streaming=True # I recommend using streaming=True when reproducing, as this dataset is large
)
for idx, row in enumerate(dataset):
# Iterating to 9287 took 7 minutes for me
# If you already have the data locally cached and set streaming=False, you see the same error just with dataset[9287]
pass
# error at 9287 OSError: image file is truncated (45 bytes not processed)
# note that we can't avoid the error using a try/except + continue inside the loop
```
### Expected behavior
It should be possible to catch and skip errors when casting to Image() without killing the whole loop
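In the meantime, here is a sketch of one possible workaround (not the fix requested above, just a stopgap): load the image column without decoding it, then decode manually inside a `try`/`except` so a corrupted file can be skipped. It assumes the column is named `image` and that streamed rows carry the raw bytes.
```python
import io

from PIL import Image as PILImage
from datasets import Image, load_dataset

dataset = load_dataset('sasha/birdsnap', split="train", streaming=True)
# Disable automatic decoding; each row's "image" becomes {"bytes": ..., "path": ...}
dataset = dataset.cast_column("image", Image(decode=False))

for idx, row in enumerate(dataset):
    try:
        img = PILImage.open(io.BytesIO(row["image"]["bytes"]))
        img.load()  # force full decoding so truncation errors surface inside the try block
    except (OSError, TypeError):
        continue  # skip the corrupted (or bytes-less) example
```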
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6470/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6469/comments | https://api.github.com/repos/huggingface/datasets/issues/6469/events | https://github.com/huggingface/datasets/pull/6469 | 2,023,695,839 | PR_kwDODunzps5hC6xf | 6,469 | Don't expand_info in HF glob | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6469). All of your documentation changes will be reflected on that endpoint.",
"Merging this one for now, but lmk if you had other optimizations in mind for the next version of `huggingface_hub`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004998 / 0.011353 (-0.006355) | 0.003523 / 0.011008 (-0.007486) | 0.064932 / 0.038508 (0.026424) | 0.050107 / 0.023109 (0.026998) | 0.253715 / 0.275898 (-0.022183) | 0.275364 / 0.323480 (-0.048116) | 0.003902 / 0.007986 (-0.004084) | 0.002716 / 0.004328 (-0.001612) | 0.048458 / 0.004250 (0.044208) | 0.037802 / 0.037052 (0.000750) | 0.262328 / 0.258489 (0.003839) | 0.285911 / 0.293841 (-0.007930) | 0.027112 / 0.128546 (-0.101435) | 0.010780 / 0.075646 (-0.064867) | 0.206447 / 0.419271 (-0.212824) | 0.035771 / 0.043533 (-0.007761) | 0.255031 / 0.255139 (-0.000108) | 0.270530 / 0.283200 (-0.012670) | 0.017152 / 0.141683 (-0.124530) | 1.094734 / 1.452155 (-0.357421) | 1.163480 / 1.492716 (-0.329237) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092944 / 0.018006 (0.074938) | 0.301042 / 0.000490 (0.300553) | 0.000238 / 0.000200 (0.000038) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019090 / 0.037411 (-0.018321) | 0.061046 / 0.014526 (0.046520) | 0.073330 / 0.176557 (-0.103227) | 0.121124 / 0.737135 (-0.616012) | 0.080544 / 0.296338 (-0.215795) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.323866 / 0.215209 (0.108657) | 2.797727 / 2.077655 (0.720072) | 1.502994 / 1.504120 (-0.001126) | 1.376177 / 1.541195 (-0.165018) | 1.422741 / 
1.468490 (-0.045749) | 0.562990 / 4.584777 (-4.021786) | 2.431781 / 3.745712 (-1.313931) | 2.783226 / 5.269862 (-2.486635) | 1.788055 / 4.565676 (-2.777621) | 0.064206 / 0.424275 (-0.360069) | 0.004989 / 0.007607 (-0.002618) | 0.338282 / 0.226044 (0.112237) | 3.356226 / 2.268929 (1.087297) | 1.855644 / 55.444624 (-53.588980) | 1.580876 / 6.876477 (-5.295601) | 1.617418 / 2.142072 (-0.524655) | 0.636816 / 4.805227 (-4.168411) | 0.117680 / 6.500664 (-6.382985) | 0.042560 / 0.075469 (-0.032909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956410 / 1.841788 (-0.885377) | 11.764886 / 8.074308 (3.690578) | 10.535801 / 10.191392 (0.344409) | 0.137797 / 0.680424 (-0.542627) | 0.014368 / 0.534201 (-0.519833) | 0.286213 / 0.579283 (-0.293070) | 0.267093 / 0.434364 (-0.167271) | 0.334802 / 0.540337 (-0.205535) | 0.441866 / 1.386936 (-0.945070) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005348 / 0.011353 (-0.006005) | 0.003551 / 0.011008 (-0.007458) | 0.049226 / 0.038508 (0.010718) | 0.052072 / 0.023109 (0.028963) | 0.268025 / 0.275898 (-0.007873) | 0.289968 / 0.323480 (-0.033512) | 0.004034 / 0.007986 (-0.003952) | 0.002675 / 0.004328 (-0.001653) | 0.048099 / 0.004250 (0.043848) | 0.040141 / 0.037052 (0.003089) | 0.272974 / 0.258489 (0.014485) | 0.296097 / 0.293841 (0.002256) | 0.028972 / 0.128546 (-0.099575) | 0.010689 / 0.075646 (-0.064957) | 0.057853 / 0.419271 (-0.361418) | 0.032488 / 0.043533 (-0.011045) | 0.272018 / 0.255139 (0.016879) | 0.287179 / 0.283200 (0.003980) | 0.018446 / 0.141683 (-0.123237) | 1.140346 / 1.452155 (-0.311809) | 1.247743 / 1.492716 (-0.244974) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091987 / 0.018006 (0.073980) | 0.300527 / 0.000490 (0.300037) | 0.000224 / 0.000200 (0.000024) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021390 / 0.037411 (-0.016021) | 0.068768 / 0.014526 (0.054242) | 0.080798 / 0.176557 (-0.095759) | 0.119081 / 0.737135 (-0.618054) | 0.082461 / 0.296338 (-0.213878) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286631 / 0.215209 (0.071422) | 2.804633 / 2.077655 (0.726978) | 1.574122 / 1.504120 (0.070002) | 1.459994 / 1.541195 (-0.081201) | 1.499739 / 1.468490 (0.031249) | 0.579595 / 4.584777 (-4.005182) | 2.426407 / 3.745712 (-1.319306) | 2.917994 / 5.269862 (-2.351868) | 1.846439 / 4.565676 (-2.719238) | 0.063274 / 0.424275 (-0.361001) | 0.005028 / 0.007607 (-0.002579) | 0.341114 / 0.226044 (0.115070) | 3.402677 / 2.268929 (1.133748) | 1.940980 / 55.444624 (-53.503645) | 1.651902 / 6.876477 (-5.224575) | 1.677037 / 2.142072 (-0.465036) | 0.651576 / 4.805227 (-4.153651) | 0.116398 / 6.500664 (-6.384266) | 0.041060 / 0.075469 (-0.034409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973278 / 1.841788 (-0.868509) | 12.248332 / 8.074308 (4.174024) | 10.830627 / 10.191392 (0.639235) | 0.143146 / 0.680424 (-0.537278) | 0.016249 / 0.534201 (-0.517952) | 0.298563 / 0.579283 (-0.280720) | 0.278643 / 0.434364 (-0.155721) | 0.338206 / 0.540337 (-0.202132) | 0.589485 / 1.386936 (-0.797451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#da29ac32c57e079199c173e4404342cc105ed774 \"CML watermark\")\n"
] | 2023-12-04T12:00:37 | 2023-12-15T13:18:37 | 2023-12-15T13:12:30 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6469",
"html_url": "https://github.com/huggingface/datasets/pull/6469",
"diff_url": "https://github.com/huggingface/datasets/pull/6469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6469.patch",
"merged_at": "2023-12-15T13:12:30"
} | Finally fix https://github.com/huggingface/datasets/issues/5537 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6469/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6468/comments | https://api.github.com/repos/huggingface/datasets/issues/6468/events | https://github.com/huggingface/datasets/pull/6468 | 2,023,617,877 | PR_kwDODunzps5hCpbN | 6,468 | Use auth to get parquet export | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6468). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003510 / 0.011008 (-0.007499) | 0.062939 / 0.038508 (0.024431) | 0.049191 / 0.023109 (0.026082) | 0.259088 / 0.275898 (-0.016810) | 0.273523 / 0.323480 (-0.049957) | 0.003902 / 0.007986 (-0.004083) | 0.002699 / 0.004328 (-0.001630) | 0.049077 / 0.004250 (0.044827) | 0.037174 / 0.037052 (0.000121) | 0.256467 / 0.258489 (-0.002022) | 0.291235 / 0.293841 (-0.002606) | 0.028119 / 0.128546 (-0.100427) | 0.010404 / 0.075646 (-0.065243) | 0.205825 / 0.419271 (-0.213446) | 0.035741 / 0.043533 (-0.007792) | 0.253219 / 0.255139 (-0.001920) | 0.274986 / 0.283200 (-0.008214) | 0.018379 / 0.141683 (-0.123304) | 1.131139 / 1.452155 (-0.321016) | 1.175875 / 1.492716 (-0.316841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090717 / 0.018006 (0.072710) | 0.299285 / 0.000490 (0.298796) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018678 / 0.037411 (-0.018733) | 0.060558 / 0.014526 (0.046032) | 0.073828 / 0.176557 (-0.102728) | 0.119302 / 0.737135 (-0.617833) | 0.075261 / 0.296338 (-0.221078) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277018 / 0.215209 (0.061809) | 2.713255 / 2.077655 (0.635601) | 1.427512 / 1.504120 (-0.076608) | 1.311374 / 1.541195 (-0.229821) | 1.348756 / 
1.468490 (-0.119734) | 0.561777 / 4.584777 (-4.023000) | 2.393578 / 3.745712 (-1.352134) | 2.798109 / 5.269862 (-2.471753) | 1.754808 / 4.565676 (-2.810869) | 0.062302 / 0.424275 (-0.361973) | 0.004948 / 0.007607 (-0.002659) | 0.328468 / 0.226044 (0.102423) | 3.246558 / 2.268929 (0.977629) | 1.786816 / 55.444624 (-53.657808) | 1.482937 / 6.876477 (-5.393540) | 1.516109 / 2.142072 (-0.625963) | 0.634457 / 4.805227 (-4.170770) | 0.116505 / 6.500664 (-6.384159) | 0.042162 / 0.075469 (-0.033308) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935312 / 1.841788 (-0.906476) | 11.540599 / 8.074308 (3.466291) | 10.512593 / 10.191392 (0.321201) | 0.129638 / 0.680424 (-0.550786) | 0.013994 / 0.534201 (-0.520207) | 0.291490 / 0.579283 (-0.287793) | 0.263641 / 0.434364 (-0.170722) | 0.328718 / 0.540337 (-0.211619) | 0.437598 / 1.386936 (-0.949338) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005192 / 0.011353 (-0.006161) | 0.003454 / 0.011008 (-0.007554) | 0.049448 / 0.038508 (0.010940) | 0.050968 / 0.023109 (0.027859) | 0.273702 / 0.275898 (-0.002196) | 0.296934 / 0.323480 (-0.026545) | 0.004066 / 0.007986 (-0.003920) | 0.002611 / 0.004328 (-0.001718) | 0.048284 / 0.004250 (0.044034) | 0.041399 / 0.037052 (0.004346) | 0.283000 / 0.258489 (0.024511) | 0.302553 / 0.293841 (0.008712) | 0.029086 / 0.128546 (-0.099460) | 0.010510 / 0.075646 (-0.065137) | 0.058097 / 0.419271 (-0.361175) | 0.032992 / 0.043533 (-0.010541) | 0.271752 / 0.255139 (0.016613) | 0.293535 / 0.283200 (0.010335) | 0.016958 / 0.141683 (-0.124725) | 1.130126 / 1.452155 (-0.322028) | 1.187228 / 1.492716 (-0.305488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092321 / 0.018006 (0.074315) | 0.302599 / 0.000490 (0.302109) | 0.000215 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021837 / 0.037411 (-0.015574) | 0.071148 / 0.014526 (0.056622) | 0.082448 / 0.176557 (-0.094108) | 0.128083 / 0.737135 (-0.609053) | 0.090864 / 0.296338 (-0.205474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296248 / 0.215209 (0.081039) | 2.881130 / 2.077655 (0.803476) | 1.580360 / 1.504120 (0.076240) | 1.454642 / 1.541195 (-0.086553) | 1.461453 / 1.468490 (-0.007037) | 0.567500 / 4.584777 (-4.017277) | 2.493708 / 3.745712 (-1.252004) | 2.756623 / 5.269862 (-2.513239) | 1.771319 / 4.565676 (-2.794358) | 0.062287 / 0.424275 (-0.361988) | 0.004917 / 0.007607 (-0.002691) | 0.348034 / 0.226044 (0.121990) | 3.426938 / 2.268929 (1.158010) | 1.954190 / 55.444624 (-53.490435) | 1.660870 / 6.876477 (-5.215607) | 1.675118 / 2.142072 (-0.466955) | 0.636843 / 4.805227 (-4.168384) | 0.115028 / 6.500664 (-6.385636) | 0.040702 / 0.075469 (-0.034767) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988076 / 1.841788 (-0.853711) | 11.890867 / 8.074308 (3.816559) | 10.621169 / 10.191392 (0.429777) | 0.131568 / 0.680424 (-0.548856) | 0.014994 / 0.534201 (-0.519207) | 0.288900 / 0.579283 (-0.290384) | 0.272092 / 0.434364 (-0.162272) | 0.329397 / 0.540337 (-0.210940) | 0.569337 / 1.386936 (-0.817599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ae3b4a2268adc2f21568ff63891e9a83530c7e29 \"CML watermark\")\n"
] | 2023-12-04T11:18:27 | 2023-12-04T17:21:22 | 2023-12-04T17:15:11 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6468",
"html_url": "https://github.com/huggingface/datasets/pull/6468",
"diff_url": "https://github.com/huggingface/datasets/pull/6468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6468.patch",
"merged_at": "2023-12-04T17:15:11"
} | added `token` to the `_datasets_server` functions | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6468/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6467/comments | https://api.github.com/repos/huggingface/datasets/issues/6467/events | https://github.com/huggingface/datasets/issues/6467 | 2,023,174,233 | I_kwDODunzps54lzBZ | 6,467 | New version release request | {
"login": "LZHgrla",
"id": 36994684,
"node_id": "MDQ6VXNlcjM2OTk0Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LZHgrla",
"html_url": "https://github.com/LZHgrla",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions",
"organizations_url": "https://api.github.com/users/LZHgrla/orgs",
"repos_url": "https://api.github.com/users/LZHgrla/repos",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"received_events_url": "https://api.github.com/users/LZHgrla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | [
"We will publish it soon (we usually do it in intervals of 1-2 months, so probably next week)",
"Thanks!"
] | 2023-12-04T07:08:26 | 2023-12-04T15:42:22 | 2023-12-04T15:42:22 | CONTRIBUTOR | null | null | ### Feature request
Hi!
I am using `datasets` in the library `xtuner` and am highly interested in the features introduced since v2.15.0.
To avoid installing from source in our PyPI wheels, we are eagerly waiting for the new release. So, does your team have a release plan for v2.15.1, and could you please share it with us?
Thanks very much!
### Motivation
.
### Your contribution
. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6467/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6466/comments | https://api.github.com/repos/huggingface/datasets/issues/6466/events | https://github.com/huggingface/datasets/issues/6466 | 2,022,601,176 | I_kwDODunzps54jnHY | 6,466 | Can't align optional features of struct | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Friendly bump, I would be happy to work on this issue once I get the go-ahead from the dev team. ",
"Thanks for the PR!\r\n\r\nI'm struggling with this as well and would love to see this PR merged. My case is slightly different, with keys completely missing rather than being `None`:\r\n\r\n```\r\nds = Dataset.from_dict({'speaker': [{'name': 'Ben'}]})\r\nds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})\r\nprint(concatenate_datasets([ds, ds2]).features)\r\nprint(concatenate_datasets([ds, ds2]).to_dict())\r\n```\r\n\r\nI would expect this to work as well because other Dataset functions already handle this situation well. For example, this works just as expected:\r\n\r\n```\r\nds = Dataset.from_dict({'n': [1,2]})\r\nds_mapped = ds.map(lambda x: {\r\n 'speaker': {'name': 'Ben'} if x['n'] == 1 else {'name': 'Fred', 'email': '[email protected]'}\r\n})\r\nprint(ds_mapped)\r\n```"
] | 2023-12-03T15:57:07 | 2024-02-08T14:38:34 | 2024-02-08T14:38:34 | CONTRIBUTOR | null | null | ### Describe the bug
Hello!
I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional.
I have a column named `speaker`, and this holds some information about a speaker.
```python
@dataclass
class Speaker:
name: str
email: Optional[str]
```
If I have two datasets, and one of them happens to have `email` set to None everywhere, then I get `The features can't be aligned because the key email of features`
### Steps to reproduce the bug
You can run the following script:
```python
ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})
concatenate_datasets([ds, ds2])
>>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null").
```
### Expected behavior
I think this should work; if two top-level columns were in the same situation it would properly cast to `string`.
```python
ds = Dataset.from_dict({'email': [None, None]})
ds2 = Dataset.from_dict({'email': ['[email protected]', '[email protected]']})
concatenate_datasets([ds, ds2])
>>> # Works!
```
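As a stopgap until this is fixed, one manual workaround seems to be casting the all-`None` dataset to the other dataset's features before concatenating, since Arrow can cast a null-typed column to any type. A minimal sketch (assuming `ds2`'s schema is the one you want to keep):
```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})

# Reuse ds2's features so the null-typed `email` field is cast to string,
# after which the two schemas align and concatenation succeeds.
ds = ds.cast(ds2.features)
combined = concatenate_datasets([ds, ds2])
print(combined['speaker'])
```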
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
- `fsspec` version: 2023.6.0
I would be happy to fix this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6466/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6466/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6465/comments | https://api.github.com/repos/huggingface/datasets/issues/6465/events | https://github.com/huggingface/datasets/issues/6465 | 2,022,212,468 | I_kwDODunzps54iIN0 | 6,465 | `load_dataset` uses out-of-date cache instead of re-downloading a changed dataset | {
"login": "mnoukhov",
"id": 3391297,
"node_id": "MDQ6VXNlcjMzOTEyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnoukhov",
"html_url": "https://github.com/mnoukhov",
"followers_url": "https://api.github.com/users/mnoukhov/followers",
"following_url": "https://api.github.com/users/mnoukhov/following{/other_user}",
"gists_url": "https://api.github.com/users/mnoukhov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnoukhov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnoukhov/subscriptions",
"organizations_url": "https://api.github.com/users/mnoukhov/orgs",
"repos_url": "https://api.github.com/users/mnoukhov/repos",
"events_url": "https://api.github.com/users/mnoukhov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnoukhov/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-02T21:35:17 | 2023-12-04T16:13:10 | null | NONE | null | null | ### Describe the bug
When a dataset is updated on the Hub, `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset.
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. change the dataset and re-upload
4. redownload
```python
import time
from datasets import Dataset, DatasetDict, DownloadMode, load_dataset
username = "YOUR_USERNAME_HERE"
initial = Dataset.from_dict({"foo": [1, 2, 3]})
print(f"Intial {initial['foo']}")
initial_ds = DatasetDict({"train": initial})
initial_ds.push_to_hub("test")
time.sleep(1)
download = load_dataset(f"{username}/test", split="train")
changed = download.map(lambda x: {"foo": x["foo"] + 1})
print(f"Changed {changed['foo']}")
changed.push_to_hub("test")
time.sleep(1)
download_again = load_dataset(f"{username}/test", split="train")
print(f"Download Changed {download_again['foo']}")
# >>> gives the outdated [1,2,3] when it should be the changed [2,3,4]
```
The redownloaded dataset should be the changed dataset, but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset:
```python
download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD)
print(f"Force Download Changed {download_again_force['foo']}")
# >>> [2,3,4]
```
### Expected behavior
I assumed there would be some sort of hashing that checks for changes in the dataset and re-downloads if the hashes don't match.
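Until such a check exists, here is a minimal sketch of a manual version (the sha bookkeeping is hypothetical, not part of `datasets`): record the repo's commit sha from a previous run, compare it to the current sha on the Hub via `huggingface_hub`, and force a redownload only when they differ.
```python
from datasets import DownloadMode, load_dataset
from huggingface_hub import HfApi

repo_id = f"{username}/test"  # `username` as defined in the script above
remote_sha = HfApi().dataset_info(repo_id).sha  # latest commit sha on the Hub

last_seen_sha = ...  # e.g. read back from a small local file you maintain yourself
mode = (
    DownloadMode.FORCE_REDOWNLOAD
    if remote_sha != last_seen_sha
    else DownloadMode.REUSE_DATASET_IF_EXISTS
)
fresh = load_dataset(repo_id, split="train", download_mode=mode)
```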
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17
- Python version: 3.8.17
- `huggingface_hub` version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6465/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6464/comments | https://api.github.com/repos/huggingface/datasets/issues/6464/events | https://github.com/huggingface/datasets/pull/6464 | 2,020,860,462 | PR_kwDODunzps5g5djo | 6,464 | Add concurrent loading of shards to datasets.load_from_disk | {
"login": "kkoutini",
"id": 51880718,
"node_id": "MDQ6VXNlcjUxODgwNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkoutini",
"html_url": "https://github.com/kkoutini",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions",
"organizations_url": "https://api.github.com/users/kkoutini/orgs",
"repos_url": "https://api.github.com/users/kkoutini/repos",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkoutini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If we use multithreading no need to ask for `num_proc`. And maybe we the same numbers of threads as tqdm by default (IIRC it's `max(32, cpu_count() + 4)`) - you can even use `tqdm.contrib.concurrent.thread_map` directly to simplify the code\r\n\r\nAlso you can ignore the `IN_MEMORY_MAX_SIZE` config for this. This parameter is kinda legacy.\r\n\r\nHave you been able to run the benchmark on a fresh node ? The speed up doesn't seem that big in your first report",
"I got some fresh nodes with the 32 threads I'm loading the dataset with around 315 seconds (without any preloading). Sequentially, it used to take around 1865 seconds. \r\nOk I'll roll back the changes and switch to `tqdm.contrib.concurrent.thread_map` without the `num_proc` parameter. ",
"I switched to `tqdm.contrib.concurrent.thread_map` the code looks much simpler now.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the update ! Btw you should tell Jack Morris that you added this :) see https://x.com/jxmnop/status/1749812573984461145?s=20 \r\n\r\nThe CI fail is unrelated to this PR - I'm trying to fix it on `main` right now",
"> Thanks for the update ! Btw you should tell Jack Morris that you added this :) see https://x.com/jxmnop/status/1749812573984461145?s=20\r\n> \r\n> The CI fail is unrelated to this PR - I'm trying to fix it on `main` right now\r\n\r\nThank you! I'll let him know :)",
"great work guys! letting you know here too",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005268 / 0.011353 (-0.006085) | 0.003520 / 0.011008 (-0.007488) | 0.063247 / 0.038508 (0.024739) | 0.032337 / 0.023109 (0.009228) | 0.243251 / 0.275898 (-0.032647) | 0.265816 / 0.323480 (-0.057664) | 0.002960 / 0.007986 (-0.005025) | 0.002733 / 0.004328 (-0.001595) | 0.048965 / 0.004250 (0.044715) | 0.044341 / 0.037052 (0.007289) | 0.260352 / 0.258489 (0.001863) | 0.288546 / 0.293841 (-0.005295) | 0.027903 / 0.128546 (-0.100643) | 0.010897 / 0.075646 (-0.064749) | 0.210852 / 0.419271 (-0.208419) | 0.036302 / 0.043533 (-0.007231) | 0.247440 / 0.255139 (-0.007699) | 0.263024 / 0.283200 (-0.020176) | 0.017732 / 0.141683 (-0.123951) | 1.144206 / 1.452155 (-0.307949) | 1.206135 / 1.492716 (-0.286581) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098404 / 0.018006 (0.080398) | 0.310268 / 0.000490 (0.309778) | 0.000231 / 0.000200 (0.000031) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018342 / 0.037411 (-0.019070) | 0.060620 / 0.014526 (0.046094) | 0.074248 / 0.176557 (-0.102308) | 0.121025 / 0.737135 (-0.616110) | 0.075331 / 0.296338 (-0.221008) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293721 / 0.215209 (0.078512) | 2.854259 / 2.077655 (0.776605) | 1.520735 / 1.504120 (0.016615) | 1.393490 / 1.541195 (-0.147705) | 1.494905 / 
1.468490 (0.026415) | 0.573812 / 4.584777 (-4.010965) | 2.418383 / 3.745712 (-1.327329) | 2.803916 / 5.269862 (-2.465945) | 1.741646 / 4.565676 (-2.824030) | 0.063341 / 0.424275 (-0.360934) | 0.004950 / 0.007607 (-0.002658) | 0.341758 / 0.226044 (0.115714) | 3.392918 / 2.268929 (1.123989) | 1.867037 / 55.444624 (-53.577587) | 1.571381 / 6.876477 (-5.305096) | 1.582883 / 2.142072 (-0.559190) | 0.663660 / 4.805227 (-4.141567) | 0.119587 / 6.500664 (-6.381077) | 0.042071 / 0.075469 (-0.033398) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940976 / 1.841788 (-0.900811) | 11.841958 / 8.074308 (3.767650) | 10.510954 / 10.191392 (0.319562) | 0.131927 / 0.680424 (-0.548497) | 0.015373 / 0.534201 (-0.518828) | 0.294245 / 0.579283 (-0.285038) | 0.269355 / 0.434364 (-0.165009) | 0.330173 / 0.540337 (-0.210165) | 0.436809 / 1.386936 (-0.950127) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005609 / 0.011353 (-0.005744) | 0.003800 / 0.011008 (-0.007208) | 0.055693 / 0.038508 (0.017185) | 0.032606 / 0.023109 (0.009497) | 0.302372 / 0.275898 (0.026474) | 0.370530 / 0.323480 (0.047050) | 0.004291 / 0.007986 (-0.003694) | 0.002783 / 0.004328 (-0.001546) | 0.049351 / 0.004250 (0.045101) | 0.048186 / 0.037052 (0.011133) | 0.290022 / 0.258489 (0.031533) | 0.323358 / 0.293841 (0.029517) | 0.053929 / 0.128546 (-0.074617) | 0.011251 / 0.075646 (-0.064395) | 0.058885 / 0.419271 (-0.360387) | 0.033833 / 0.043533 (-0.009699) | 0.283546 / 0.255139 (0.028407) | 0.292416 / 0.283200 (0.009216) | 0.017682 / 0.141683 (-0.124001) | 1.141791 / 1.452155 (-0.310364) | 1.202540 / 1.492716 (-0.290177) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101240 / 0.018006 (0.083233) | 0.313274 / 0.000490 (0.312784) | 0.000255 / 0.000200 (0.000055) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023144 / 0.037411 (-0.014268) | 0.078418 / 0.014526 (0.063892) | 0.089716 / 0.176557 (-0.086840) | 0.129065 / 0.737135 (-0.608070) | 0.090976 / 0.296338 (-0.205362) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294585 / 0.215209 (0.079376) | 2.921350 / 2.077655 (0.843695) | 1.600977 / 1.504120 (0.096857) | 1.483218 / 1.541195 (-0.057977) | 1.533599 / 1.468490 (0.065109) | 0.580064 / 4.584777 (-4.004712) | 2.463501 / 3.745712 (-1.282211) | 2.905853 / 5.269862 (-2.364009) | 1.799701 / 4.565676 (-2.765975) | 0.065057 / 0.424275 (-0.359218) | 0.005080 / 0.007607 (-0.002527) | 0.352292 / 0.226044 (0.126248) | 3.429664 / 2.268929 (1.160735) | 1.970752 / 55.444624 (-53.473872) | 1.697151 / 6.876477 (-5.179326) | 1.751678 / 2.142072 (-0.390394) | 0.679264 / 4.805227 (-4.125963) | 0.118197 / 6.500664 (-6.382467) | 0.041834 / 0.075469 (-0.033635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985756 / 1.841788 (-0.856032) | 13.335160 / 8.074308 (5.260852) | 11.524807 / 10.191392 (1.333415) | 0.134892 / 0.680424 (-0.545532) | 0.016855 / 0.534201 (-0.517346) | 0.294599 / 0.579283 (-0.284685) | 0.285988 / 0.434364 (-0.148376) | 0.331423 / 0.540337 (-0.208914) | 0.418765 / 1.386936 (-0.968171) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#65434e449b6bb6c57121d9518d92abe9a97e0bb0 \"CML watermark\")\n"
] | 2023-12-01T13:13:53 | 2024-01-26T15:17:43 | 2024-01-26T15:10:26 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6464",
"html_url": "https://github.com/huggingface/datasets/pull/6464",
"diff_url": "https://github.com/huggingface/datasets/pull/6464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6464.patch",
"merged_at": "2024-01-26T15:10:26"
} | In some file systems (like Lustre), memory-mapping Arrow files takes time. This can be accelerated by performing the mmap in parallel across processes or threads (see the sketch after the notes below).
- Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252).
- I'm not sure if using threads would respect the `IN_MEMORY_MAX_SIZE` config.
- I'm not sure if we need to expose `num_proc` from `BaseReader.read` to `DatasetBuilder.as_dataset`, since `DatasetBuilder.as_dataset` is used in many places besides `load_dataset`.
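For reference, a minimal sketch of the threaded approach using `tqdm.contrib.concurrent.thread_map` (the helper names and shard-path handling are illustrative, not the exact diff in this PR):
```python
import pyarrow as pa
from tqdm.contrib.concurrent import thread_map

def _memory_map_shard(path):
    # Mirrors how `datasets` opens Arrow shards: mmap the file, then read
    # the IPC stream into a pyarrow Table (zero-copy). On Lustre, the mmap
    # itself is the slow part, so it pays off to run these calls in threads.
    source = pa.memory_map(path)
    return pa.ipc.open_stream(source).read_all()

def load_shards(shard_paths, num_threads=32):
    # thread_map shows a progress bar and returns the tables in input order.
    return thread_map(
        _memory_map_shard,
        shard_paths,
        max_workers=num_threads,
        desc="Loading the dataset from disk",
        unit="shards",
    )
```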
### Tests on a Lustre file system (on a shared partial node):
Loading 1231 shards of ~2 GB each.
The files were pre-loaded in another process before the script ran (I couldn't get a fresh node).
```python
import logging
from time import perf_counter
import datasets
logger = datasets.logging.get_logger(__name__)
datasets.logging.set_verbosity_info()
logging.basicConfig(level=logging.DEBUG, format="%(message)s")
class catchtime:
    # context to measure loading time: https://stackoverflow.com/questions/33987060/python-context-manager-that-measures-time
    def __init__(self, debug_print="Time", logger=logger):
        self.debug_print = debug_print
        self.logger = logger

    def __enter__(self):
        self.start = perf_counter()
        return self

    def __exit__(self, type, value, traceback):
        self.time = perf_counter() - self.start
        readout = f"{self.debug_print}: {self.time:.3f} seconds"
        self.logger.info(readout)
dataset_path = ""

# warmup
with catchtime("Loading in parallel", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=16)

# num_proc=16
with catchtime("Loading in parallel", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=16)

# num_proc=32
with catchtime("Loading in parallel", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=32)

# num_proc=1
with catchtime("Loading in conseq", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=1)
```
#### Run 1
```
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:28<00:00, 13.96shards/s]
Loading in parallel: 88.690 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:48<00:00, 11.31shards/s]
Loading in parallel: 109.339 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 32 threads: 100%|ββββββββββ| 1231/1231 [01:06<00:00, 18.56shards/s]
Loading in parallel: 66.931 seconds
open file: .../dataset_dict.json
Loading the dataset from disk: 100%|ββββββββββ| 1231/1231 [05:09<00:00, 3.98shards/s]
Loading in conseq: 309.792 seconds
```
#### Run 2
```
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:38<00:00, 12.53shards/s]
Loading in parallel: 98.831 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [02:01<00:00, 10.16shards/s]
Loading in parallel: 121.669 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 32 threads: 100%|ββββββββββ| 1231/1231 [01:07<00:00, 18.18shards/s]
Loading in parallel: 68.192 seconds
open file: .../dataset_dict.json
Loading the dataset from disk: 100%|ββββββββββ| 1231/1231 [05:19<00:00, 3.86shards/s]
Loading in conseq: 319.759 seconds
```
#### Run 3
```
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:36<00:00, 12.74shards/s]
Loading in parallel: 96.936 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [02:00<00:00, 10.24shards/s]
Loading in parallel: 120.761 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 32 threads: 100%|ββββββββββ| 1231/1231 [01:08<00:00, 18.04shards/s]
Loading in parallel: 68.666 seconds
open file: .../dataset_dict.json
Loading the dataset from disk: 100%|ββββββββββ| 1231/1231 [05:35<00:00, 3.67shards/s]
Loading in conseq: 335.777 seconds
```
fix #2252
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6464/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6464/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6463/comments | https://api.github.com/repos/huggingface/datasets/issues/6463/events | https://github.com/huggingface/datasets/pull/6463 | 2,020,702,967 | PR_kwDODunzps5g46_4 | 6,463 | Disable benchmarks in PRs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's a way to detect regressions in performance sensitive methods like map, and find the commit that lead to the regression",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005357 / 0.011353 (-0.005996) | 0.003295 / 0.011008 (-0.007713) | 0.062354 / 0.038508 (0.023846) | 0.054207 / 0.023109 (0.031098) | 0.240030 / 0.275898 (-0.035869) | 0.267863 / 0.323480 (-0.055617) | 0.002925 / 0.007986 (-0.005061) | 0.002634 / 0.004328 (-0.001695) | 0.047952 / 0.004250 (0.043702) | 0.038424 / 0.037052 (0.001372) | 0.248059 / 0.258489 (-0.010430) | 0.271923 / 0.293841 (-0.021918) | 0.027513 / 0.128546 (-0.101034) | 0.010344 / 0.075646 (-0.065302) | 0.210864 / 0.419271 (-0.208407) | 0.035911 / 0.043533 (-0.007622) | 0.245166 / 0.255139 (-0.009973) | 0.260914 / 0.283200 (-0.022285) | 0.016709 / 0.141683 (-0.124974) | 1.098324 / 1.452155 (-0.353830) | 1.162638 / 1.492716 (-0.330079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094419 / 0.018006 (0.076413) | 0.303209 / 0.000490 (0.302719) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018350 / 0.037411 (-0.019061) | 0.060625 / 0.014526 (0.046099) | 0.072545 / 0.176557 (-0.104012) | 0.120905 / 0.737135 (-0.616231) | 0.073858 / 0.296338 (-0.222480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282011 / 0.215209 (0.066802) | 2.758741 / 2.077655 (0.681086) | 1.431691 / 1.504120 (-0.072429) | 1.315883 / 1.541195 (-0.225312) | 1.344235 / 
1.468490 (-0.124255) | 0.562117 / 4.584777 (-4.022660) | 2.385641 / 3.745712 (-1.360071) | 2.785402 / 5.269862 (-2.484460) | 1.753912 / 4.565676 (-2.811764) | 0.064054 / 0.424275 (-0.360221) | 0.005050 / 0.007607 (-0.002557) | 0.336452 / 0.226044 (0.110407) | 3.302481 / 2.268929 (1.033553) | 1.794105 / 55.444624 (-53.650519) | 1.519346 / 6.876477 (-5.357131) | 1.514911 / 2.142072 (-0.627161) | 0.655779 / 4.805227 (-4.149449) | 0.117913 / 6.500664 (-6.382751) | 0.042229 / 0.075469 (-0.033240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935196 / 1.841788 (-0.906591) | 11.490113 / 8.074308 (3.415805) | 10.542446 / 10.191392 (0.351054) | 0.129614 / 0.680424 (-0.550810) | 0.014919 / 0.534201 (-0.519282) | 0.288448 / 0.579283 (-0.290835) | 0.266929 / 0.434364 (-0.167435) | 0.328830 / 0.540337 (-0.211507) | 0.475510 / 1.386936 (-0.911426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005469 / 0.011353 (-0.005884) | 0.003798 / 0.011008 (-0.007210) | 0.049129 / 0.038508 (0.010621) | 0.055490 / 0.023109 (0.032380) | 0.265828 / 0.275898 (-0.010070) | 0.286031 / 0.323480 (-0.037448) | 0.004075 / 0.007986 (-0.003910) | 0.002668 / 0.004328 (-0.001660) | 0.047823 / 0.004250 (0.043573) | 0.041946 / 0.037052 (0.004894) | 0.270359 / 0.258489 (0.011869) | 0.294287 / 0.293841 (0.000446) | 0.029643 / 0.128546 (-0.098903) | 0.010523 / 0.075646 (-0.065123) | 0.057370 / 0.419271 (-0.361902) | 0.033149 / 0.043533 (-0.010384) | 0.264408 / 0.255139 (0.009269) | 0.280413 / 0.283200 (-0.002787) | 0.018313 / 0.141683 (-0.123370) | 1.105982 / 1.452155 (-0.346173) | 1.182486 / 1.492716 (-0.310230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092643 / 0.018006 (0.074637) | 0.301320 / 0.000490 (0.300831) | 0.000221 / 0.000200 (0.000021) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021253 / 0.037411 (-0.016158) | 0.068052 / 0.014526 (0.053527) | 0.080821 / 0.176557 (-0.095736) | 0.119320 / 0.737135 (-0.617816) | 0.081952 / 0.296338 (-0.214387) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288536 / 0.215209 (0.073327) | 2.819900 / 2.077655 (0.742245) | 1.545210 / 1.504120 (0.041090) | 1.422047 / 1.541195 (-0.119147) | 1.439158 / 1.468490 (-0.029332) | 0.564910 / 4.584777 (-4.019867) | 2.430474 / 3.745712 (-1.315238) | 2.763979 / 5.269862 (-2.505882) | 1.732203 / 4.565676 (-2.833474) | 0.062692 / 0.424275 (-0.361583) | 0.004936 / 0.007607 (-0.002671) | 0.341626 / 0.226044 (0.115582) | 3.366623 / 2.268929 (1.097694) | 1.917198 / 55.444624 (-53.527426) | 1.637635 / 6.876477 (-5.238842) | 1.625953 / 2.142072 (-0.516119) | 0.634936 / 4.805227 (-4.170291) | 0.115336 / 6.500664 (-6.385328) | 0.040946 / 0.075469 (-0.034524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964865 / 1.841788 (-0.876922) | 12.077233 / 8.074308 (4.002925) | 10.664120 / 10.191392 (0.472728) | 0.132084 / 0.680424 (-0.548340) | 0.015931 / 0.534201 (-0.518270) | 0.289181 / 0.579283 (-0.290102) | 0.276943 / 0.434364 (-0.157420) | 0.324884 / 0.540337 (-0.215453) | 0.552570 / 1.386936 (-0.834366) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ac3f2b3f6d867673e41a0253f9e1ad48db68a8e \"CML watermark\")\n"
] | 2023-12-01T11:35:30 | 2023-12-01T12:09:09 | 2023-12-01T12:03:04 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6463",
"html_url": "https://github.com/huggingface/datasets/pull/6463",
"diff_url": "https://github.com/huggingface/datasets/pull/6463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6463.patch",
"merged_at": "2023-12-01T12:03:04"
} | This disables the benchmark comments on PRs in order to keep PR pages less spammy / more readable.
Having the benchmarks on commits on `main` is enough imo. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6463/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6463/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6462/comments | https://api.github.com/repos/huggingface/datasets/issues/6462/events | https://github.com/huggingface/datasets/pull/6462 | 2,019,238,388 | PR_kwDODunzps5gz68T | 6,462 | Missing DatasetNotFoundError | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005594 / 0.011353 (-0.005759) | 0.003672 / 0.011008 (-0.007337) | 0.062796 / 0.038508 (0.024288) | 0.059432 / 0.023109 (0.036323) | 0.253976 / 0.275898 (-0.021922) | 0.281155 / 0.323480 (-0.042325) | 0.003023 / 0.007986 (-0.004962) | 0.003320 / 0.004328 (-0.001008) | 0.049059 / 0.004250 (0.044809) | 0.040252 / 0.037052 (0.003200) | 0.259526 / 0.258489 (0.001037) | 0.318798 / 0.293841 (0.024957) | 0.027883 / 0.128546 (-0.100663) | 0.010883 / 0.075646 (-0.064763) | 0.206948 / 0.419271 (-0.212323) | 0.036335 / 0.043533 (-0.007198) | 0.253209 / 0.255139 (-0.001930) | 0.275173 / 0.283200 (-0.008026) | 0.020365 / 0.141683 (-0.121318) | 1.121630 / 1.452155 (-0.330524) | 1.174680 / 1.492716 (-0.318036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098372 / 0.018006 (0.080366) | 0.309949 / 0.000490 (0.309460) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019495 / 0.037411 (-0.017916) | 0.062321 / 0.014526 (0.047795) | 0.074525 / 0.176557 (-0.102031) | 0.121832 / 0.737135 (-0.615303) | 0.077612 / 0.296338 (-0.218727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288156 / 0.215209 (0.072947) | 2.816411 / 2.077655 (0.738756) | 1.497926 / 1.504120 (-0.006193) | 1.378137 / 1.541195 (-0.163058) | 1.446466 / 
1.468490 (-0.022024) | 0.566195 / 4.584777 (-4.018582) | 2.391933 / 3.745712 (-1.353780) | 2.929290 / 5.269862 (-2.340572) | 1.828215 / 4.565676 (-2.737462) | 0.063312 / 0.424275 (-0.360963) | 0.005199 / 0.007607 (-0.002408) | 0.342883 / 0.226044 (0.116838) | 3.378388 / 2.268929 (1.109459) | 1.865710 / 55.444624 (-53.578915) | 1.573442 / 6.876477 (-5.303035) | 1.631228 / 2.142072 (-0.510845) | 0.651614 / 4.805227 (-4.153613) | 0.118177 / 6.500664 (-6.382487) | 0.043303 / 0.075469 (-0.032166) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950694 / 1.841788 (-0.891094) | 12.559851 / 8.074308 (4.485543) | 10.751123 / 10.191392 (0.559731) | 0.143107 / 0.680424 (-0.537317) | 0.014469 / 0.534201 (-0.519732) | 0.289531 / 0.579283 (-0.289752) | 0.267316 / 0.434364 (-0.167047) | 0.327748 / 0.540337 (-0.212590) | 0.437758 / 1.386936 (-0.949178) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005684) | 0.003831 / 0.011008 (-0.007177) | 0.049096 / 0.038508 (0.010588) | 0.061408 / 0.023109 (0.038299) | 0.274571 / 0.275898 (-0.001327) | 0.299978 / 0.323480 (-0.023501) | 0.004216 / 0.007986 (-0.003769) | 0.002848 / 0.004328 (-0.001480) | 0.048755 / 0.004250 (0.044504) | 0.042576 / 0.037052 (0.005524) | 0.276781 / 0.258489 (0.018292) | 0.300903 / 0.293841 (0.007062) | 0.030243 / 0.128546 (-0.098303) | 0.010967 / 0.075646 (-0.064679) | 0.057879 / 0.419271 (-0.361392) | 0.033206 / 0.043533 (-0.010327) | 0.277620 / 0.255139 (0.022481) | 0.296263 / 0.283200 (0.013064) | 0.019022 / 0.141683 (-0.122660) | 1.125615 / 1.452155 (-0.326539) | 1.278016 / 1.492716 (-0.214700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096836 / 0.018006 (0.078830) | 0.307491 / 0.000490 (0.307001) | 0.000230 / 0.000200 (0.000030) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021552 / 0.037411 (-0.015859) | 0.071099 / 0.014526 (0.056573) | 0.082432 / 0.176557 (-0.094124) | 0.121826 / 0.737135 (-0.615310) | 0.084902 / 0.296338 (-0.211437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.328113 / 0.215209 (0.112904) | 2.989613 / 2.077655 (0.911959) | 1.604904 / 1.504120 (0.100784) | 1.485459 / 1.541195 (-0.055735) | 1.524829 / 1.468490 (0.056339) | 0.580589 / 4.584777 (-4.004188) | 2.440087 / 3.745712 (-1.305625) | 2.944697 / 5.269862 (-2.325164) | 1.832728 / 4.565676 (-2.732949) | 0.064423 / 0.424275 (-0.359852) | 0.004991 / 0.007607 (-0.002616) | 0.357878 / 0.226044 (0.131834) | 3.515415 / 2.268929 (1.246487) | 1.964492 / 55.444624 (-53.480132) | 1.684058 / 6.876477 (-5.192418) | 1.730294 / 2.142072 (-0.411778) | 0.661228 / 4.805227 (-4.143999) | 0.122894 / 6.500664 (-6.377770) | 0.041776 / 0.075469 (-0.033693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969849 / 1.841788 (-0.871939) | 12.897067 / 8.074308 (4.822758) | 10.908200 / 10.191392 (0.716808) | 0.141139 / 0.680424 (-0.539285) | 0.015377 / 0.534201 (-0.518824) | 0.288625 / 0.579283 (-0.290658) | 0.279020 / 0.434364 (-0.155344) | 0.328386 / 0.540337 (-0.211951) | 0.590833 / 1.386936 (-0.796103) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#39ea60eaabb05d8ee38c072f375816cf87fce1a9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004986 / 0.011353 (-0.006367) | 0.003070 / 0.011008 (-0.007938) | 0.062433 / 0.038508 (0.023925) | 0.050639 / 0.023109 (0.027530) | 0.241807 / 0.275898 (-0.034091) | 0.262517 / 0.323480 (-0.060963) | 0.003826 / 0.007986 (-0.004160) | 0.002602 / 0.004328 (-0.001727) | 0.048508 / 0.004250 (0.044257) | 0.037276 / 0.037052 (0.000224) | 0.245757 / 0.258489 (-0.012732) | 0.272969 / 0.293841 (-0.020871) | 0.027139 / 0.128546 (-0.101407) | 0.010265 / 0.075646 (-0.065381) | 0.207279 / 0.419271 (-0.211992) | 0.035312 / 0.043533 (-0.008221) | 0.247535 / 0.255139 (-0.007604) | 0.260668 / 0.283200 (-0.022532) | 0.016496 / 0.141683 (-0.125187) | 1.137510 / 1.452155 (-0.314645) | 1.167870 / 1.492716 (-0.324847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091743 / 0.018006 (0.073736) | 0.298649 / 0.000490 (0.298159) | 0.000208 / 0.000200 (0.000009) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019053 / 0.037411 (-0.018359) | 0.060300 / 0.014526 (0.045774) | 0.072154 / 0.176557 (-0.104402) | 0.120293 / 0.737135 (-0.616842) | 0.073923 / 0.296338 (-0.222415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283058 / 0.215209 (0.067849) | 2.769503 / 2.077655 (0.691849) | 1.457016 / 1.504120 (-0.047104) | 1.335753 / 1.541195 (-0.205441) | 1.325986 / 
1.468490 (-0.142504) | 0.562553 / 4.584777 (-4.022224) | 2.406144 / 3.745712 (-1.339568) | 2.778063 / 5.269862 (-2.491799) | 1.782199 / 4.565676 (-2.783477) | 0.062490 / 0.424275 (-0.361785) | 0.004912 / 0.007607 (-0.002695) | 0.338500 / 0.226044 (0.112456) | 3.309746 / 2.268929 (1.040818) | 1.819693 / 55.444624 (-53.624931) | 1.510295 / 6.876477 (-5.366182) | 1.578402 / 2.142072 (-0.563671) | 0.637517 / 4.805227 (-4.167710) | 0.117018 / 6.500664 (-6.383647) | 0.048149 / 0.075469 (-0.027320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939424 / 1.841788 (-0.902364) | 11.494891 / 8.074308 (3.420583) | 10.115194 / 10.191392 (-0.076198) | 0.126751 / 0.680424 (-0.553673) | 0.013567 / 0.534201 (-0.520634) | 0.282501 / 0.579283 (-0.296782) | 0.260594 / 0.434364 (-0.173770) | 0.325940 / 0.540337 (-0.214397) | 0.426186 / 1.386936 (-0.960750) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005405 / 0.011353 (-0.005948) | 0.003557 / 0.011008 (-0.007451) | 0.051139 / 0.038508 (0.012631) | 0.053446 / 0.023109 (0.030337) | 0.268051 / 0.275898 (-0.007847) | 0.292343 / 0.323480 (-0.031136) | 0.004716 / 0.007986 (-0.003269) | 0.002677 / 0.004328 (-0.001651) | 0.047634 / 0.004250 (0.043384) | 0.041062 / 0.037052 (0.004009) | 0.269225 / 0.258489 (0.010736) | 0.297462 / 0.293841 (0.003621) | 0.029292 / 0.128546 (-0.099254) | 0.010947 / 0.075646 (-0.064699) | 0.057845 / 0.419271 (-0.361426) | 0.032793 / 0.043533 (-0.010740) | 0.265308 / 0.255139 (0.010169) | 0.288242 / 0.283200 (0.005043) | 0.018311 / 0.141683 (-0.123372) | 1.140957 / 1.452155 (-0.311197) | 1.204883 / 1.492716 (-0.287833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091375 / 0.018006 (0.073368) | 0.285922 / 0.000490 (0.285432) | 0.000238 / 0.000200 (0.000038) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021277 / 0.037411 (-0.016134) | 0.068853 / 0.014526 (0.054328) | 0.081002 / 0.176557 (-0.095555) | 0.120998 / 0.737135 (-0.616138) | 0.082741 / 0.296338 (-0.213598) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299398 / 0.215209 (0.084189) | 2.909622 / 2.077655 (0.831967) | 1.624381 / 1.504120 (0.120261) | 1.501683 / 1.541195 (-0.039512) | 1.523045 / 1.468490 (0.054555) | 0.548960 / 4.584777 (-4.035817) | 2.413297 / 3.745712 (-1.332415) | 2.817852 / 5.269862 (-2.452010) | 1.754407 / 4.565676 (-2.811270) | 0.061912 / 0.424275 (-0.362363) | 0.004880 / 0.007607 (-0.002727) | 0.353989 / 0.226044 (0.127944) | 3.496147 / 2.268929 (1.227219) | 2.003026 / 55.444624 (-53.441598) | 1.702013 / 6.876477 (-5.174463) | 1.680935 / 2.142072 (-0.461137) | 0.630183 / 4.805227 (-4.175044) | 0.113786 / 6.500664 (-6.386878) | 0.040061 / 0.075469 (-0.035408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957218 / 1.841788 (-0.884569) | 11.914469 / 8.074308 (3.840160) | 10.488896 / 10.191392 (0.297504) | 0.129292 / 0.680424 (-0.551132) | 0.016603 / 0.534201 (-0.517598) | 0.287367 / 0.579283 (-0.291916) | 0.271332 / 0.434364 (-0.163032) | 0.325577 / 0.540337 (-0.214761) | 0.560553 / 1.386936 (-0.826383) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d31e434bbeafdf6a70cb80539342d8fe5f5fd27 \"CML watermark\")\n"
] | 2023-11-30T18:09:43 | 2023-11-30T18:36:40 | 2023-11-30T18:30:30 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6462",
"html_url": "https://github.com/huggingface/datasets/pull/6462",
"diff_url": "https://github.com/huggingface/datasets/pull/6462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6462.patch",
"merged_at": "2023-11-30T18:30:30"
} | Continuation of https://github.com/huggingface/datasets/pull/6431.
This should fix the CI in https://github.com/huggingface/datasets/pull/6458 too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6462/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6461/comments | https://api.github.com/repos/huggingface/datasets/issues/6461/events | https://github.com/huggingface/datasets/pull/6461 | 2,018,850,731 | PR_kwDODunzps5gykvO | 6,461 | Fix shard retry mechanism in `push_to_hub` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Wauplin Maybe `504` should be added to the `retry_on_status_codes` tuple [here](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300) to guard against https://github.com/huggingface/datasets/issues/3872",
"We could but I'm not sure to have witness a 504 on S3 before. The issue reported in https://github.com/huggingface/datasets/issues/3872 is a 504 on the `/upload` endpoint on the Hub and this is not an endpoint that is retried on [this line](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300).",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005110 / 0.011353 (-0.006243) | 0.003307 / 0.011008 (-0.007701) | 0.062601 / 0.038508 (0.024093) | 0.049644 / 0.023109 (0.026534) | 0.243195 / 0.275898 (-0.032703) | 0.273543 / 0.323480 (-0.049936) | 0.003862 / 0.007986 (-0.004123) | 0.002624 / 0.004328 (-0.001705) | 0.048273 / 0.004250 (0.044023) | 0.037820 / 0.037052 (0.000768) | 0.249134 / 0.258489 (-0.009355) | 0.319359 / 0.293841 (0.025518) | 0.027816 / 0.128546 (-0.100730) | 0.010422 / 0.075646 (-0.065225) | 0.206607 / 0.419271 (-0.212665) | 0.035719 / 0.043533 (-0.007814) | 0.250300 / 0.255139 (-0.004839) | 0.290377 / 0.283200 (0.007177) | 0.018459 / 0.141683 (-0.123224) | 1.114664 / 1.452155 (-0.337490) | 1.171429 / 1.492716 (-0.321288) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091483 / 0.018006 (0.073477) | 0.302770 / 0.000490 (0.302281) | 0.000203 / 0.000200 (0.000003) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018870 / 0.037411 (-0.018541) | 0.062692 / 0.014526 (0.048166) | 0.075381 / 0.176557 (-0.101176) | 0.122338 / 0.737135 (-0.614797) | 0.075608 / 0.296338 (-0.220730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288115 / 0.215209 (0.072906) | 2.816183 / 2.077655 (0.738528) | 1.535601 / 1.504120 (0.031481) | 1.409546 / 1.541195 (-0.131648) | 1.438569 / 
1.468490 (-0.029921) | 0.561797 / 4.584777 (-4.022980) | 2.373921 / 3.745712 (-1.371791) | 2.739437 / 5.269862 (-2.530424) | 1.750921 / 4.565676 (-2.814755) | 0.062114 / 0.424275 (-0.362161) | 0.004965 / 0.007607 (-0.002642) | 0.348614 / 0.226044 (0.122569) | 3.519631 / 2.268929 (1.250703) | 1.910797 / 55.444624 (-53.533827) | 1.610541 / 6.876477 (-5.265936) | 1.617972 / 2.142072 (-0.524100) | 0.639421 / 4.805227 (-4.165806) | 0.117371 / 6.500664 (-6.383293) | 0.041851 / 0.075469 (-0.033618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945563 / 1.841788 (-0.896224) | 11.362399 / 8.074308 (3.288090) | 10.468468 / 10.191392 (0.277075) | 0.128925 / 0.680424 (-0.551499) | 0.013892 / 0.534201 (-0.520309) | 0.285487 / 0.579283 (-0.293796) | 0.269295 / 0.434364 (-0.165069) | 0.324843 / 0.540337 (-0.215495) | 0.438452 / 1.386936 (-0.948484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003162 / 0.011008 (-0.007846) | 0.048177 / 0.038508 (0.009669) | 0.048708 / 0.023109 (0.025599) | 0.271663 / 0.275898 (-0.004235) | 0.289948 / 0.323480 (-0.033532) | 0.003955 / 0.007986 (-0.004030) | 0.002616 / 0.004328 (-0.001713) | 0.047510 / 0.004250 (0.043260) | 0.039938 / 0.037052 (0.002886) | 0.277449 / 0.258489 (0.018960) | 0.300315 / 0.293841 (0.006474) | 0.029263 / 0.128546 (-0.099283) | 0.010403 / 0.075646 (-0.065244) | 0.056682 / 0.419271 (-0.362590) | 0.032757 / 0.043533 (-0.010776) | 0.273291 / 0.255139 (0.018152) | 0.289023 / 0.283200 (0.005824) | 0.017843 / 0.141683 (-0.123840) | 1.124762 / 1.452155 (-0.327393) | 1.176646 / 1.492716 (-0.316070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004568 / 0.018006 (-0.013438) | 0.300715 / 0.000490 (0.300225) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021528 / 0.037411 (-0.015883) | 0.068317 / 0.014526 (0.053792) | 0.081358 / 0.176557 (-0.095199) | 0.119297 / 0.737135 (-0.617838) | 0.082445 / 0.296338 (-0.213893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289681 / 0.215209 (0.074472) | 2.843862 / 2.077655 (0.766208) | 1.574257 / 1.504120 (0.070137) | 1.454026 / 1.541195 (-0.087169) | 1.478379 / 1.468490 (0.009889) | 0.558259 / 4.584777 (-4.026518) | 2.513261 / 3.745712 (-1.232451) | 2.759751 / 5.269862 (-2.510111) | 1.730335 / 4.565676 (-2.835341) | 0.063805 / 0.424275 (-0.360470) | 0.004991 / 0.007607 (-0.002616) | 0.346586 / 0.226044 (0.120542) | 3.369163 / 2.268929 (1.100234) | 1.934734 / 55.444624 (-53.509890) | 1.658864 / 6.876477 (-5.217613) | 1.645621 / 2.142072 (-0.496452) | 0.636633 / 4.805227 (-4.168594) | 0.116839 / 6.500664 (-6.383825) | 0.040863 / 0.075469 (-0.034606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960925 / 1.841788 (-0.880863) | 11.769189 / 8.074308 (3.694881) | 10.713662 / 10.191392 (0.522270) | 0.140510 / 0.680424 (-0.539914) | 0.015424 / 0.534201 (-0.518777) | 0.288039 / 0.579283 (-0.291244) | 0.277623 / 0.434364 (-0.156741) | 0.322622 / 0.540337 (-0.217716) | 0.539805 / 1.386936 (-0.847131) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#07ad81c15bd3b954defe779fc37ba5f432f5ff2a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005852) | 0.003754 / 0.011008 (-0.007254) | 0.062628 / 0.038508 (0.024120) | 0.059951 / 0.023109 (0.036842) | 0.254851 / 0.275898 (-0.021047) | 0.272133 / 0.323480 (-0.051347) | 0.003962 / 0.007986 (-0.004024) | 0.002759 / 0.004328 (-0.001569) | 0.048412 / 0.004250 (0.044161) | 0.039349 / 0.037052 (0.002297) | 0.253093 / 0.258489 (-0.005397) | 0.287048 / 0.293841 (-0.006793) | 0.027197 / 0.128546 (-0.101349) | 0.010828 / 0.075646 (-0.064819) | 0.206371 / 0.419271 (-0.212901) | 0.035881 / 0.043533 (-0.007652) | 0.254905 / 0.255139 (-0.000234) | 0.273819 / 0.283200 (-0.009381) | 0.018041 / 0.141683 (-0.123642) | 1.103970 / 1.452155 (-0.348185) | 1.166340 / 1.492716 (-0.326377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093196 / 0.018006 (0.075190) | 0.302690 / 0.000490 (0.302200) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019552 / 0.037411 (-0.017860) | 0.062337 / 0.014526 (0.047811) | 0.074070 / 0.176557 (-0.102486) | 0.120998 / 0.737135 (-0.616137) | 0.076265 / 0.296338 (-0.220074) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272637 / 0.215209 (0.057427) | 2.693350 / 2.077655 (0.615696) | 1.398020 / 1.504120 (-0.106100) | 1.285706 / 1.541195 (-0.255488) | 1.342810 / 
1.468490 (-0.125680) | 0.565378 / 4.584777 (-4.019399) | 2.390131 / 3.745712 (-1.355581) | 2.892137 / 5.269862 (-2.377725) | 1.819840 / 4.565676 (-2.745836) | 0.062789 / 0.424275 (-0.361486) | 0.004920 / 0.007607 (-0.002687) | 0.329281 / 0.226044 (0.103237) | 3.261664 / 2.268929 (0.992735) | 1.775102 / 55.444624 (-53.669523) | 1.514341 / 6.876477 (-5.362136) | 1.530805 / 2.142072 (-0.611267) | 0.641009 / 4.805227 (-4.164218) | 0.118626 / 6.500664 (-6.382038) | 0.042732 / 0.075469 (-0.032737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933179 / 1.841788 (-0.908609) | 12.085247 / 8.074308 (4.010939) | 10.541596 / 10.191392 (0.350204) | 0.140141 / 0.680424 (-0.540283) | 0.014646 / 0.534201 (-0.519555) | 0.289640 / 0.579283 (-0.289643) | 0.281042 / 0.434364 (-0.153322) | 0.326462 / 0.540337 (-0.213876) | 0.441981 / 1.386936 (-0.944955) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005259 / 0.011353 (-0.006094) | 0.003766 / 0.011008 (-0.007242) | 0.048782 / 0.038508 (0.010273) | 0.064946 / 0.023109 (0.041836) | 0.264529 / 0.275898 (-0.011369) | 0.289675 / 0.323480 (-0.033805) | 0.004057 / 0.007986 (-0.003928) | 0.002805 / 0.004328 (-0.001523) | 0.047709 / 0.004250 (0.043459) | 0.041149 / 0.037052 (0.004096) | 0.271254 / 0.258489 (0.012765) | 0.296685 / 0.293841 (0.002844) | 0.029486 / 0.128546 (-0.099060) | 0.010608 / 0.075646 (-0.065038) | 0.056392 / 0.419271 (-0.362879) | 0.033181 / 0.043533 (-0.010352) | 0.267029 / 0.255139 (0.011890) | 0.284987 / 0.283200 (0.001787) | 0.018045 / 0.141683 (-0.123637) | 1.137358 / 1.452155 (-0.314796) | 1.184007 / 1.492716 (-0.308709) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004603 / 0.018006 (-0.013403) | 0.303901 / 0.000490 (0.303411) | 0.000225 / 0.000200 (0.000025) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021957 / 0.037411 (-0.015454) | 0.069427 / 0.014526 (0.054901) | 0.082394 / 0.176557 (-0.094163) | 0.120745 / 0.737135 (-0.616390) | 0.084571 / 0.296338 (-0.211767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292832 / 0.215209 (0.077623) | 2.824295 / 2.077655 (0.746640) | 1.563273 / 1.504120 (0.059153) | 1.440202 / 1.541195 (-0.100992) | 1.489810 / 1.468490 (0.021320) | 0.561120 / 4.584777 (-4.023657) | 2.439045 / 3.745712 (-1.306667) | 2.867139 / 5.269862 (-2.402722) | 1.793812 / 4.565676 (-2.771865) | 0.062797 / 0.424275 (-0.361478) | 0.005033 / 0.007607 (-0.002574) | 0.343648 / 0.226044 (0.117604) | 3.432285 / 2.268929 (1.163357) | 1.918175 / 55.444624 (-53.526449) | 1.637245 / 6.876477 (-5.239232) | 1.709246 / 2.142072 (-0.432826) | 0.634744 / 4.805227 (-4.170483) | 0.115782 / 6.500664 (-6.384882) | 0.041228 / 0.075469 (-0.034241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962369 / 1.841788 (-0.879418) | 12.750819 / 8.074308 (4.676511) | 10.927356 / 10.191392 (0.735964) | 0.143454 / 0.680424 (-0.536970) | 0.015348 / 0.534201 (-0.518853) | 0.291207 / 0.579283 (-0.288076) | 0.276924 / 0.434364 (-0.157440) | 0.327287 / 0.540337 (-0.213050) | 0.577439 / 1.386936 (-0.809497) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#544ad95f6b6da7fee44a2bc838e15a5e0156c946 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003475 / 0.011008 (-0.007533) | 0.061985 / 0.038508 (0.023477) | 0.048539 / 0.023109 (0.025430) | 0.229935 / 0.275898 (-0.045963) | 0.255247 / 0.323480 (-0.068233) | 0.003919 / 0.007986 (-0.004066) | 0.002664 / 0.004328 (-0.001664) | 0.048892 / 0.004250 (0.044642) | 0.037381 / 0.037052 (0.000328) | 0.238517 / 0.258489 (-0.019972) | 0.284069 / 0.293841 (-0.009772) | 0.027513 / 0.128546 (-0.101033) | 0.010778 / 0.075646 (-0.064868) | 0.205004 / 0.419271 (-0.214268) | 0.035553 / 0.043533 (-0.007980) | 0.230117 / 0.255139 (-0.025022) | 0.251150 / 0.283200 (-0.032050) | 0.017951 / 0.141683 (-0.123732) | 1.145548 / 1.452155 (-0.306607) | 1.191659 / 1.492716 (-0.301057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092335 / 0.018006 (0.074329) | 0.300264 / 0.000490 (0.299774) | 0.000206 / 0.000200 (0.000006) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018608 / 0.037411 (-0.018804) | 0.060376 / 0.014526 (0.045850) | 0.073551 / 0.176557 (-0.103006) | 0.118840 / 0.737135 (-0.618295) | 0.074447 / 0.296338 (-0.221892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287033 / 0.215209 (0.071824) | 2.770958 / 2.077655 (0.693303) | 1.443986 / 1.504120 (-0.060134) | 1.314627 / 1.541195 (-0.226567) | 1.342287 / 
1.468490 (-0.126203) | 0.559607 / 4.584777 (-4.025170) | 2.409678 / 3.745712 (-1.336034) | 2.772566 / 5.269862 (-2.497295) | 1.743511 / 4.565676 (-2.822165) | 0.062277 / 0.424275 (-0.361998) | 0.004952 / 0.007607 (-0.002655) | 0.330581 / 0.226044 (0.104537) | 3.280385 / 2.268929 (1.011456) | 1.809599 / 55.444624 (-53.635025) | 1.532186 / 6.876477 (-5.344290) | 1.529689 / 2.142072 (-0.612383) | 0.645213 / 4.805227 (-4.160014) | 0.117564 / 6.500664 (-6.383100) | 0.041657 / 0.075469 (-0.033812) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943912 / 1.841788 (-0.897876) | 11.414317 / 8.074308 (3.340009) | 10.394915 / 10.191392 (0.203523) | 0.129271 / 0.680424 (-0.551153) | 0.013934 / 0.534201 (-0.520267) | 0.288217 / 0.579283 (-0.291066) | 0.267171 / 0.434364 (-0.167193) | 0.327112 / 0.540337 (-0.213225) | 0.446680 / 1.386936 (-0.940256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005200 / 0.011353 (-0.006152) | 0.003453 / 0.011008 (-0.007555) | 0.048736 / 0.038508 (0.010228) | 0.051073 / 0.023109 (0.027964) | 0.276591 / 0.275898 (0.000693) | 0.294495 / 0.323480 (-0.028985) | 0.004069 / 0.007986 (-0.003917) | 0.002945 / 0.004328 (-0.001383) | 0.047090 / 0.004250 (0.042839) | 0.040445 / 0.037052 (0.003393) | 0.278464 / 0.258489 (0.019975) | 0.304020 / 0.293841 (0.010179) | 0.028811 / 0.128546 (-0.099736) | 0.010388 / 0.075646 (-0.065259) | 0.057214 / 0.419271 (-0.362057) | 0.032588 / 0.043533 (-0.010945) | 0.277694 / 0.255139 (0.022555) | 0.294979 / 0.283200 (0.011779) | 0.018384 / 0.141683 (-0.123299) | 1.162332 / 1.452155 (-0.289822) | 1.188355 / 1.492716 (-0.304361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090501 / 0.018006 (0.072495) | 0.303122 / 0.000490 (0.302632) | 0.000222 / 0.000200 (0.000022) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022536 / 0.037411 (-0.014876) | 0.068452 / 0.014526 (0.053926) | 0.080932 / 0.176557 (-0.095625) | 0.119185 / 0.737135 (-0.617950) | 0.081513 / 0.296338 (-0.214825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291522 / 0.215209 (0.076313) | 2.849467 / 2.077655 (0.771812) | 1.597395 / 1.504120 (0.093275) | 1.512872 / 1.541195 (-0.028323) | 1.488144 / 1.468490 (0.019654) | 0.572436 / 4.584777 (-4.012341) | 2.440129 / 3.745712 (-1.305583) | 2.788045 / 5.269862 (-2.481817) | 1.754246 / 4.565676 (-2.811430) | 0.066706 / 0.424275 (-0.357569) | 0.005035 / 0.007607 (-0.002573) | 0.336621 / 0.226044 (0.110576) | 3.322820 / 2.268929 (1.053891) | 1.940494 / 55.444624 (-53.504130) | 1.670022 / 6.876477 (-5.206454) | 1.666353 / 2.142072 (-0.475720) | 0.646180 / 4.805227 (-4.159047) | 0.116676 / 6.500664 (-6.383988) | 0.040559 / 0.075469 (-0.034910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971396 / 1.841788 (-0.870392) | 11.782426 / 8.074308 (3.708118) | 10.672034 / 10.191392 (0.480642) | 0.137658 / 0.680424 (-0.542766) | 0.016210 / 0.534201 (-0.517991) | 0.288302 / 0.579283 (-0.290981) | 0.280775 / 0.434364 (-0.153589) | 0.326962 / 0.540337 (-0.213375) | 0.558511 / 1.386936 (-0.828425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#76020180407d7ea9a0b535758d8d1b241fd19d8c \"CML watermark\")\n"
] | 2023-11-30T14:57:14 | 2023-12-01T17:57:39 | 2023-12-01T17:51:33 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6461",
"html_url": "https://github.com/huggingface/datasets/pull/6461",
"diff_url": "https://github.com/huggingface/datasets/pull/6461.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6461.patch",
"merged_at": "2023-12-01T17:51:33"
} | When it fails, `preupload_lfs_files` raises a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) that chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that (a sketch of the idea appears after this record).
Fix https://github.com/huggingface/datasets/issues/6392 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6461/timeline | null | true |
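A minimal sketch of the error-handling change the PR body describes: because `preupload_lfs_files` wraps the original HTTP error in a `RuntimeError`, a retry loop has to inspect the chained cause (`err.__cause__`) rather than the outer exception type. The helper names `is_retryable` and `upload_with_retry`, the status-code set, and the backoff parameters are illustrative assumptions, not the actual `datasets` implementation.

```python
import time

import requests

RETRYABLE_STATUS_CODES = {500, 502, 503, 504}


def is_retryable(err: BaseException) -> bool:
    # The hub wraps the original HTTP error in a RuntimeError, so look at
    # the chained cause rather than the outer exception type.
    cause = err.__cause__ if isinstance(err, RuntimeError) else err
    return (
        isinstance(cause, requests.HTTPError)
        and cause.response is not None
        and cause.response.status_code in RETRYABLE_STATUS_CODES
    )


def upload_with_retry(upload_fn, max_retries: int = 5, base_wait: float = 1.0):
    # Exponential backoff around an upload callable such as preupload_lfs_files.
    for attempt in range(max_retries):
        try:
            return upload_fn()
        except Exception as err:
            if attempt == max_retries - 1 or not is_retryable(err):
                raise
            time.sleep(base_wait * 2**attempt)
```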
https://api.github.com/repos/huggingface/datasets/issues/6460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6460/comments | https://api.github.com/repos/huggingface/datasets/issues/6460/events | https://github.com/huggingface/datasets/issues/6460 | 2,017,433,899 | I_kwDODunzps54P5kr | 6,460 | jsonlines files don't load with `load_dataset` | {
"login": "serenalotreck",
"id": 41377532,
"node_id": "MDQ6VXNlcjQxMzc3NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serenalotreck",
"html_url": "https://github.com/serenalotreck",
"followers_url": "https://api.github.com/users/serenalotreck/followers",
"following_url": "https://api.github.com/users/serenalotreck/following{/other_user}",
"gists_url": "https://api.github.com/users/serenalotreck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serenalotreck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serenalotreck/subscriptions",
"organizations_url": "https://api.github.com/users/serenalotreck/orgs",
"repos_url": "https://api.github.com/users/serenalotreck/repos",
"events_url": "https://api.github.com/users/serenalotreck/events{/privacy}",
"received_events_url": "https://api.github.com/users/serenalotreck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @serenalotreck,\r\n\r\nWe use Apache Arrow `pyarrow` to read jsonlines and it throws an error when trying to load your data files:\r\n```python\r\nIn [1]: import pyarrow as pa\r\n\r\nIn [2]: data = pa.json.read_json(\"train.jsonl\")\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-14-e9b104832528> in <module>\r\n----> 1 data = pa.json.read_json(\"train.jsonl\")\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0\r\n```\r\n\r\nI think it has to do with the data structure of the fields \"ner\" (and also \"relations\"):\r\n```json\r\n\"ner\": [\r\n [\r\n [0, 4, \"Biochemical_process\"], \r\n [15, 16, \"Protein\"]\r\n ], \r\n```\r\nArrow interprets this data structure as an array, an arrays contain just a single data type: \r\n- when reading sequentially, it finds first the `0` and infers that the data is of type `number`;\r\n- when it finds the string `\"Biochemical_process\"`, it cannot cast it to number and throws the `ArrowInvalid` error\r\n\r\nOne solution could be to change the data structure of your data files. Any other ideas, @huggingface/datasets ?",
"Hi @albertvillanova, \r\n\r\nThanks for the explanation! To the best of my knowledge, arrays in a json [can contain multiple data types](https://docs.actian.com/ingres/11.2/index.html#page/SQLRef/Data_Types.htm), and I'm able to read these files with the `jsonlines` package. Is the requirement for arrays to only have one data type specific to PyArrow?\r\n\r\nI'd prefer to keep the data structure as is, since it's a specific input requirement for the models this data was generated for. Any thoughts on how to enable the use of `load_dataset` with this dataset would be great!",
"Hi again @serenalotreck,\r\n\r\nYes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n\r\nAs this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co/datasets/slotreck/pickle/discussions/1\r\n\r\nLet's continue the discussion there! :hugs: ",
"> Hi again @serenalotreck,\r\n> \r\n> Yes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n> \r\n> As this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co/datasets/slotreck/pickle/discussions/1\r\n> \r\n> Let's continue the discussion there! π€\r\n\r\nThis is really terrible. My JSONL format data is very simple, but I still report this error\r\n![image](https://github.com/huggingface/datasets/assets/58240629/e3fed922-ced4-406c-b5bc-90a4b891c4ee)\r\nThe error message is as followsοΌ\r\n File \"pyarrow/_json.pyx\", line 290, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column(/inputs) changed from string to number in row 208\r\n"
] | 2023-11-29T21:20:11 | 2023-12-29T02:58:29 | 2023-12-05T13:30:53 | NONE | null | null | ### Describe the bug
While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`.
### Steps to reproduce the bug
Code:
```
from datasets import load_dataset
dset = load_dataset('slotreck/pickle')
```
Traceback:
```
Downloading readme: 100%|████████████████████████████████| 925/925 [00:00<00:00, 3.11MB/s]
Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...
Downloading data: 100%|████████████████████████████████| 589k/589k [00:00<00:00, 18.9MB/s]
Downloading data: 100%|████████████████████████████████| 104k/104k [00:00<00:00, 4.61MB/s]
Downloading data: 100%|████████████████████████████████| 170k/170k [00:00<00:00, 7.71MB/s]
Downloading data files: 100%|████████████████████████████████| 3/3 [00:00<00:00, 3.77it/s]
Extracting data files: 100%|████████████████████████████████| 3/3 [00:00<00:00, 523.92it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables
dataset = json.load(f)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables
raise e
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset
storage_options=storage_options,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare
**download_and_prepare_kwargs,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
For the dataset to be loaded without error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6460/timeline | completed | false |
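To make the failure mode from this issue (and the maintainer's explanation in the comments) concrete, here is a minimal sketch that shows why pyarrow rejects the nested `"ner"` arrays and one possible workaround: casting every inner value to a string so each nested array holds a single Arrow type. The file name `train_fixed.jsonl` and the `stringify_spans` helper are illustrative assumptions, not part of the original report.

```python
import json

from datasets import load_dataset

# One record whose "ner" spans mix integers and strings, e.g.
# [0, 4, "Biochemical_process"]. Arrow infers the inner list type from the
# first value (number) and fails when it later meets a string.
record = {
    "ner": [[[0, 4, "Biochemical_process"], [15, 16, "Protein"]]],
}


def stringify_spans(example):
    # Workaround: make every inner value a string so the nested arrays are
    # homogeneously typed; offsets can be cast back to int downstream.
    example["ner"] = [
        [[str(value) for value in span] for span in sentence]
        for sentence in example["ner"]
    ]
    return example


with open("train_fixed.jsonl", "w") as f:
    f.write(json.dumps(stringify_spans(record)) + "\n")

# With homogeneous inner types, the JSON builder loads without the
# ArrowInvalid "changed from number to string" error.
dset = load_dataset("json", data_files={"train": "train_fixed.jsonl"})
print(dset["train"][0]["ner"])
```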