html_url: string (48-51 chars) | title: string (1-290 chars) | comments: sequence (0-30 items) | body: string (0-228k chars) | number: int64 (2-7.08k) |
---|---|---|---|---|
https://github.com/huggingface/datasets/issues/6973 | IndexError during training with Squad dataset and T5-small model | [
"add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704",
"Closing this issue because it was a reported and fixed in transformers."
] | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1. Install the required libraries: !pip install transformers datasets
2. Run the following code:
!pip install transformers datasets
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding
# Load a small, publicly available dataset
from datasets import load_dataset
dataset = load_dataset("squad", split="train[:100]") # Use a small subset for testing
# Load a pre-trained model and tokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Define a basic data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
per_device_train_batch_size=2,
num_train_epochs=1,
)
# Create a trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
data_collator=data_collator,
)
# Train the model
trainer.train()
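The first comment above suggests adding `remove_unused_columns=False` to the training arguments; a minimal sketch of that change (based on that comment, not a verified fix), with the rest of the script unchanged:
```python
# Keep the raw dataset columns instead of letting the Trainer drop the ones
# it does not recognize, which can otherwise leave an empty dataset
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    remove_unused_columns=False,
)
```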
### Expected behavior
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-23-f13a4b23c001>](https://localhost:8080/#) in <cell line: 34>()
32
33 # Train the model
---> 34 trainer.train()
10 frames
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
427 if isinstance(key, int):
428 if (key < 0 and key + size < 0) or (key >= size):
--> 429 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
430 return
431 elif isinstance(key, slice):
IndexError: Invalid key: 42 is out of bounds for size 0
### Environment info
transformers version: 4.41.2
datasets version: 1.18.4
Python version: 3.10.12
| 6,973 |
https://github.com/huggingface/datasets/issues/6967 | Method to load Laion400m | [] | ### Feature request
Large datasets like Laion400m are provided as embeddings. The provided methods in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99
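For context, a minimal sketch of how such embedding shards can be pulled into a `Dataset` by hand today (the file name and shape are assumptions for illustration):
```python
import numpy as np
from datasets import Dataset

# Load one embedding shard and expose it as a Dataset column (illustrative only)
embeddings = np.load("img_emb_00.npy")  # assumed shape: (num_images, embedding_dim)
ds = Dataset.from_dict({"img_emb": embeddings.tolist()})
print(ds)
```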
### Motivation
Trial and experimentation are the key pivot of HF. It would be great if HF could load embedding files seamlessly.
### Your contribution
I can write the loader with some help. | 6,967 |
https://github.com/huggingface/datasets/issues/6961 | Manual downloads should count as downloads | [
"We're unlikely to add more features/support for datasets with python loading scripts, which include datasets with manual download. Sorry for the inconvenience"
] | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
This would ensure that downloads are accurately reported to end users.
### Your contribution
N/A | 6,961 |
https://github.com/huggingface/datasets/issues/6958 | My Private Dataset doesn't exist on the Hub or cannot be accessed | [
"I can load public dataset, but for my private dataset it fails",
"https://huggingface.co/docs/datasets/upload_dataset",
"I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.\r\n\r\n![image](https://github.com/huggingface/datasets/assets/39621324/4aceef59-0c65-4161-9665-676d25d73225)\r\n\r\nIt just works fine.",
"It seems that everything is in a mass huh....\r\n\r\n![image](https://github.com/huggingface/datasets/assets/39621324/fb2fe12c-4f0a-4bf6-9656-63ba50347b10)\r\n",
"https://huggingface.co/datasets/rajpurkar/squad/blob/main/squad.py fails again",
"https://github.com/huggingface/datasets/blob/main/templates/new_dataset_script.py#L81 can not use this, too complex. I just need a def to load my file to a dict",
"I am facing the same issue. Did you find a fix?",
"You should authenticate to be able to access private or gated repos: https://huggingface.co/docs/hub/datasets-gated#access-gated-datasets-as-a-user"
] | ### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed
>>> dataset = load_dataset("xxxx", token=True)
404 error 404 Client Error. (Request ID: Root=xxxx)
Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2593, in load_dataset
builder_instance = load_dataset_builder(
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2265, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1910, in dataset_module_factory
raise e1 from None
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed
```
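As the last comment points out, private or gated repos require authentication; a minimal sketch (the repo id and token below are placeholders):
```python
from huggingface_hub import login
from datasets import load_dataset

login(token="hf_xxx")  # placeholder: a valid User Access Token with read access
ds = load_dataset("your-username/your-private-dataset", token=True)
```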
### Steps to reproduce the bug
123
### Expected behavior
123
### Environment info
123 | 6,958 |
https://github.com/huggingface/datasets/issues/6953 | Remove canonical datasets from docs | [
"Canonical datasets are no longer mentioned in the docs."
] | Remove canonical datasets from docs, now that we no longer have canonical datasets. | 6,953 |
https://github.com/huggingface/datasets/issues/6951 | load_dataset() should load all subsets, if no specific subset is specified | [
"@xianbaoqian ",
"Feel free to open a PR in `m-a-p/COIG-CQIA` to define a default subset. Currently there is no default.\r\n\r\nYou can find some documentation at https://huggingface.co/docs/hub/datasets-manual-configuration#multiple-configurations",
"@lhoestq \r\n\r\nWhilst having a default subset readily available (e.g. `all`) by the dataset author is an ideal solution, it is not always the reality.\r\n\r\nWithout the ability to fork the dataset, this can be problematic.\r\n\r\nAs far as I know, it is not possible at all to specify multiple subsets in a generalized programmatic way without hard coding subset names for a specific dataset.\r\n\r\nEven the ability to fetch subset names and loop over them would be sufficient.",
"Please note that each subset can have different feature columns, thus making it impossible to load them all into a unique Dataset instance.\r\n\r\nThat is why subsets were created: to support different but related datasets to coexist in a single dataset repository.\r\n\r\nIf you would like to programmatically get the list of subset names, you can use `datasets.get_dataset_config_names`: https://huggingface.co/docs/datasets/v2.20.0/en/load_hub#configurations"
] | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>()
1 from datasets import load_dataset
----> 2 dataset = load_dataset("m-a-p/COIG-CQIA")
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)
582 if not config_kwargs:
583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')"
--> 584 raise ValueError(
585 "Config name is missing."
586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}"
ValueError: Config name is missing.
Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu']
Example of usage:
`load_dataset('coig-cqia', 'chinese_traditional')`
```
This means a dataset cannot contain all the subsets at the same time. I guess one workaround is to manually specify the subset files like in [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy.
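A related workaround, based on `datasets.get_dataset_config_names` mentioned in the comments (a sketch; subsets can have different feature columns, so they are kept as separate datasets here):
```python
from datasets import get_dataset_config_names, load_dataset

# Fetch all subset (config) names and load each subset separately
config_names = get_dataset_config_names("m-a-p/COIG-CQIA")
subsets = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in config_names}
```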
### Motivation
Ideally, if no subset is specified, the API should just try to load all subsets. This would make it much easier to handle datasets w/ subsets.
### Your contribution
Not sure since I'm not familiar w/ the lib src. | 6,951 |
https://github.com/huggingface/datasets/issues/6950 | `Dataset.with_format` behaves inconsistently with documentation | [
"Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956",
"Fixed."
] | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists.
> In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor.
> A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor.
But I get a single tensor by default, which is inconsistent with the description.
Actually the current behavior seems more reasonable to me. Therefore, the document needs to be modified.
### Steps to reproduce the bug
```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([[1, 2],
[3, 4]])}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
array([[1, 2],
[3, 4]])>}
```
### Expected behavior
```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': [tensor([1, 2]), tensor([3, 4])]}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.RaggedTensor [[1, 2], [3, 4]]>}
```
### Environment info
datasets==2.19.1
torch==2.1.0
tensorflow==2.13.1 | 6,950 |
https://github.com/huggingface/datasets/issues/6949 | load_dataset error | [
"Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ",
"> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n> \r\n> Could you please share your \"train.json\" file, so that we can try to reproduce the issue you have?\r\n\r\nThank you for your reply. I can load it normally in another server. Is it possible that the disk of my server is a network disk in the LAN, so it will be downloaded from the LAN and get stuck?"
] | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, and why is it still stuck after loading for several hours? In fact, my JSON file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset
3. data = load_dataset('json', data_files='train.json')
### Expected behavior
It is able to load my json correctly
### Environment info
datasets==2.19.2 | 6,949 |
https://github.com/huggingface/datasets/issues/6948 | to_tf_dataset: Visible devices cannot be modified after being initialized | [] | ### Describe the bug
When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, the following error is raised once per worker specified in ``num_workers``.
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/datasets/utils/tf_utils.py", line 438, in worker_loop
tf.config.set_visible_devices([], "GPU") # Make sure workers don't try to allocate GPU memory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/framework/config.py", line 566, in set_visible_devices
context.context().set_visible_devices(devices, device_type)
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/eager/context.py", line 1737, in set_visible_devices
raise RuntimeError(
RuntimeError: Visible devices cannot be modified after being initialized
### Steps to reproduce the bug
1. Download a dataset using HuggingFace load_dataset
2. Define a function that transforms the data in some way to be used in the collate_fn argument
3. Provide a ``batch_size`` and ``num_workers`` value in the ``to_tf_dataset`` function
4. Either retrieve directly or use tfds benchmark to test the dataset
``` python
from datasets import load_dataset
import tensorflow_datasets as tfds
from keras_cv.layers import Resizing

def data_loader(examples):
    # Resize each example's image to 256x256 inside the collate_fn
    x = Resizing(256, 256, crop_to_aspect_ratio=True)(examples[0]['image'])
    return {"image": x}

ds = load_dataset("logasja/FDF", split="test")
ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2)
tfds.benchmark(ds)
```
### Expected behavior
Use multiple processes to apply transformations from the collate_fn to the tf dataset on the CPU.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1023-oracle-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 6,948 |
https://github.com/huggingface/datasets/issues/6947 | FileNotFoundError: error when loading C4 dataset | [
"same problem here",
"Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\nDownloading readme: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\nDownloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\nGenerating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDataset({\r\n features: ['text', 'timestamp', 'url'],\r\n num_rows: 45576\r\n})\r\n```",
"> Hello,\r\n> \r\n> Are you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n> \r\n> * [Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasetsΒ #6925](https://github.com/huggingface/datasets/pull/6925)\r\n> \r\n> I can't reproduce the error:\r\n> \r\n> ```python\r\n> In [1]: from datasets import load_dataset\r\n> \r\n> In [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\n> Downloading readme: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\n> Downloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\n> Generating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n> \r\n> In [3]: ds\r\n> Out[3]: \r\n> Dataset({\r\n> features: ['text', 'timestamp', 'url'],\r\n> num_rows: 45576\r\n> })\r\n> ```\r\nThank you for your reply,ExpectedMoreSplits was encountered in datasets version 2.12.2. After I updated the version, that is, datasets version 2.19.2, I encountered the FileNotFoundError problem mentioned above.",
"That might be due to a corrupted cache.\r\n\r\nPlease, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n```python\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```\r\n\r\nIt the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n",
"> That might be due to a corrupted cache.\r\n> \r\n> Please, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n> \r\n> ```python\r\n> ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n> ```\r\n> \r\n> It the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n\r\nThe two methods you mentioned above can not solve this problem, but the command line interface shows Downloading readme: 41.1kB [00:00, 281kB/s], and then FileNotFoundError appears. It is worth noting that I have no problem loading other datasets with the initial method, such as wikitext datasets",
"> The two methods you mentioned above can not solve this problem, but the command line interface shows Downloading readme: 41.1kB [00:00, 281kB/s], and then FileNotFoundError appears.\r\n\r\nSame issue encountered.\r\n",
"I really think the issue is caused by a corrupted cache, between versions 2.12.0 (there does not exist 2.12.2 version) and 2.19.2.\r\n\r\nAre you sure you removed all the corresponding corrupted directories within the cache?\r\n\r\nYou can easily check if the issue is caused by a corrupted cache by removing the entire cache:\r\n```shell\r\nmv ~/.cache/huggingface ~/.cache/huggingface.bak\r\n```\r\nand then reloading the dataset:\r\n```python\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```",
"@albertvillanova Thanks for the reply. I tried removing the entire cache and reloading the dataset as you suggest. However, the same issue still exists. \r\n\r\nAs a test, I switch to a new platform, which (is a Windows system and) hasn't downloaded huggingface dataset before, and the dataset is loaded successfully. So I think \"a corrupted cache\" explanation makes sense. I wonder, besides `~/.cache/huggingface`, is there any other directory that may save the cache thing?\r\n\r\nAs a side note, I am using `datasets==2.20.0` and proxy `export HF_ENDPOINT=https://hf-mirror.com`.",
"Ho @ZhangGe6,\r\n\r\nAs far as I know, that directory is the only one where the cache is saved, unless you configured another one. You can check it:\r\n```python\r\nimport datasets.config\r\n\r\nprint(datasets.config.HF_CACHE_HOME)\r\n# ~/.cache/huggingface\r\n\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n# ~/.cache/huggingface/datasets\r\n\r\nprint(datasets.config.HF_MODULES_CACHE)\r\n# ~/.cache/huggingface/modules\r\n\r\nprint(datasets.config.DOWNLOADED_DATASETS_PATH)\r\n# ~/.cache/huggingface/datasets/downloads\r\n\r\nprint(datasets.config.EXTRACTED_DATASETS_PATH)\r\n# ~/.cache/huggingface/datasets/downloads/extracted\r\n```\r\n\r\nAdditionally, `datasets` uses `huggingface_hub`, but its cache directory should also be inside `~/.cache/huggingface`, unless you configured another one. You can check it:\r\n```python\r\nimport huggingface_hub.constants\r\n\r\nprint(huggingface_hub.constants.HF_HOME)\r\n# ~/.cache/huggingface\r\n\r\nprint(huggingface_hub.constants.HF_HUB_CACHE)\r\n# ~/.cache/huggingface/hub\r\n```",
"@albertvillanova I checked the directories you listed, and find that they are the same as the ones you provided. I am going to find more clues and will update what I find here.",
"I've had a similar problem, and for some reason decreasing the number of workers in the dataloader solved it",
"Same issue.\r\n",
"Hi folks. Finally, I find it is a network issue that causes huggingface hub unreachable (in China).\r\n\r\nTo run the following script \r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```\r\nWithout setting `export HF_ENDPOINT=https://hf-mirror.com`, I get the following error log\r\n```bash\r\nTraceback (most recent call last):\r\n File \".\\demo.py\", line 8, in <module>\r\n ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2594, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2266, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 1914, in dataset_module_factory\r\n raise e1 from None\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 1845, in dataset_module_factory\r\n raise ConnectionError(f\"Couldn't reach '{path}' on the Hub ({e.__class__.__name__})\") from e\r\nConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError)\r\n```\r\nAfter setting `export HF_ENDPOINT=https://hf-mirror.com`, I get the following error, which is exactly the same as what we are debugging in this issue\r\n```bash\r\nDownloading readme: 41.1kB [00:00, 41.1MB/s]\r\nTraceback (most recent call last):\r\n File \".\\demo.py\", line 8, in <module>\r\n ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2594, in loa builder_instance = load_dataset_builder(\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2266, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find a dataset script at C:\\Users\\ZhangGe\\Desktop\\allenai\\c4\\c4.py or any data file in the same directory. 
Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454eed extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', \r\n'.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns',pm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', \r\n'.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']\r\n```\r\n\r\n**Using a proxy software that avoids the internet access restrictions imposed by China, I can download the dataset using the same script**\r\n```bash\r\nDownloading readme: 100%|βββββββββββββββββββββββββββββββββββββββββββ| 41.1k/41.1k [00:00<00:00, 312kB/s] \r\nDownloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββ| 40.7M/40.7M [00:19<00:00, 2.07MB/s] \r\nGenerating validation split: 45576 examples [00:00, 54883.48 examples/s]\r\n```\r\nSo `allenai/c4` is still unreachable even after setting `export HF_ENDPOINT=https://hf-mirror.com`.",
"I have created an issue to inform the maintainers of `hf-mirror`οΌhttps://github.com/padeoe/hf-mirror-site/issues/30",
"Thanks for the investigation: so finally it is an issue with the specific endpoint you are using.\r\n\r\nYou properly opened an issue in their repo, so they can fix it.\r\n\r\nI am closing this issue here."
] | ### Describe the bug
Can't load the C4 dataset.
When I switch the datasets package to 2.12.2, I instead get: datasets.utils.info_utils.ExpectedMoreSplits: {'train'}
How can I fix this?
### Steps to reproduce the bug
1. from datasets import load_dataset
2. dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')
3. raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
### Expected behavior
The data was successfully imported
### Environment info
python version 3.9
datasets version 2.19.2 | 6,947 |
https://github.com/huggingface/datasets/issues/6942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | [] | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
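As described in the linked ruff issue, a bare `# flake8: noqa` exempts the whole file from every rule, including import sorting; a hypothetical sketch of the kind of change needed (an assumption about the fix, not the merged solution):
```python
# Before: blanket exemption, which also turns off import sorting for the file
# flake8: noqa

# After: limit the exemption to the specific rule codes that need it
# flake8: noqa: F401
```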
We should re-enable import sorting on those files. | 6,942 |
https://github.com/huggingface/datasets/issues/6941 | Supporting FFCV: Fast Forward Computer Vision | [] | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be the fastest image loading method.
### Your contribution
no | 6,941 |
https://github.com/huggingface/datasets/issues/6940 | Enable Sharding to Equal Sized Shards | [] | ### Feature request
Add an option when sharding a dataset to have all shards the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).". However, when using FSDP we want the shards to have the same size. This requires the user to manually handle this situation, but it will be nice if we had an option to shard the dataset into equally sized shards.
### Your contribution
For now just a PR. I can also add code that does what is needed, but probably not efficient.
Shard to equal size by duplication:
```
remainder = len(dataset) % num_shards
num_missing_examples = num_shards - remainder
duplicated = dataset.select(list(range(num_missing_examples)))
dataset = concatenate_datasets([dataset, duplicated])
shard = dataset.shard(num_shards, shard_idx)
```
Or by truncation:
```
shard = dataset.shard(num_shards, shard_idx)
num_examples_per_shard = len(dataset) // num_shards
shard = shard.select(list(range(num_examples_per_shard)))
``` | 6,940 |
https://github.com/huggingface/datasets/issues/6939 | ExpectedMoreSplits error when using data_dir | [] | As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`:
```python
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'test'}
``` | 6,939 |
https://github.com/huggingface/datasets/issues/6937 | JSON loader implicitly coerces floats to integers | [] | The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
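A minimal sketch of a reproduction (assuming a hypothetical JSON Lines file `floats.jsonl` whose lines look like `{"col": 0.0}`, `{"col": 1.0}`, `{"col": 2.0}`):
```python
from datasets import load_dataset

# The column holds whole-number floats, but may come back typed as int64
ds = load_dataset("json", data_files="floats.jsonl", split="train")
print(ds.features)
print(ds["col"])
```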
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===================================
___________________________ test_statistics_endpoint ___________________________
normal_user_public_json_dataset = 'DVUser/tmp-dataset-17170199043860'
def test_statistics_endpoint(normal_user_public_json_dataset: str) -> None:
dataset = normal_user_public_json_dataset
config, split = get_default_config_split()
statistics_response = poll_until_ready_and_assert(
relative_url=f"/statistics?dataset={dataset}&config={config}&split={split}",
check_x_revision=True,
dataset=dataset,
)
content = statistics_response.json()
assert len(content) == 3
assert sorted(content) == ["num_examples", "partial", "statistics"], statistics_response
statistics = content["statistics"]
num_examples = content["num_examples"]
partial = content["partial"]
assert isinstance(statistics, list), statistics
assert len(statistics) == 6
assert num_examples == 4
assert partial is False
string_label_column = statistics[0]
assert "column_name" in string_label_column
assert "column_statistics" in string_label_column
assert "column_type" in string_label_column
assert string_label_column["column_name"] == "col_1"
assert string_label_column["column_type"] == "string_label" # 4 unique values -> label
assert isinstance(string_label_column["column_statistics"], dict)
assert string_label_column["column_statistics"] == {
"nan_count": 0,
"nan_proportion": 0.0,
"no_label_count": 0,
"no_label_proportion": 0.0,
"n_unique": 4,
"frequencies": {
"There goes another one.": 1,
"Vader turns round and round in circles as his ship spins into space.": 1,
"We count thirty Rebel ships, Lord Vader.": 1,
"The wingman spots the pirateship coming at him and warns the Dark Lord": 1,
},
}
int_column = statistics[1]
assert "column_name" in int_column
assert "column_statistics" in int_column
assert "column_type" in int_column
assert int_column["column_name"] == "col_2"
assert int_column["column_type"] == "int"
assert isinstance(int_column["column_statistics"], dict)
assert int_column["column_statistics"] == {
"histogram": {"bin_edges": [0, 1, 2, 3, 3], "hist": [1, 1, 1, 1]},
"max": 3,
"mean": 1.5,
"median": 1.5,
"min": 0,
"nan_count": 0,
"nan_proportion": 0.0,
"std": 1.29099,
}
float_column = statistics[2]
assert "column_name" in float_column
assert "column_statistics" in float_column
assert "column_type" in float_column
assert float_column["column_name"] == "col_3"
> assert float_column["column_type"] == "float"
E AssertionError: assert 'int' == 'float'
E - float
E + int
tests/test_14_statistics.py:72: AssertionError
=========================== short test summary info ============================
FAILED tests/test_14_statistics.py::test_statistics_endpoint - AssertionError: assert 'int' == 'float'
- float
+ int
```
This bug was introduced after:
- #6914
We have reported the issue to pandas:
- https://github.com/pandas-dev/pandas/issues/58866 | 6,937 |
https://github.com/huggingface/datasets/issues/6936 | save_to_disk() freezes when saving on s3 bucket with multiprocessing | [
"I got the same issue. Any updates so far for this issue?"
] | ### Describe the bug
I'm trying to save a `Dataset` using the `save_to_disk()` function with:
- `num_proc > 1`
- `dataset_path` being a s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/"
The hf progress bar shows up but the saving does not seem to start.
When using one processor only (`num_proc=1`), everything works fine.
When saving the dataset on local disk (as opposed to s3 bucket) with `num_proc > 1`, everything works fine.
Thank you for your help! :)
### Steps to reproduce the bug
I tried without any storage options:
```
from datasets import load_dataset
sandbox_ds = load_dataset("openai_humaneval")
sandbox_ds["test"].save_to_disk(
"s3://bucket-name/test_multiprocessing_saving/",
num_proc=4,
)
```
and with the specific s3fs storage options:
```
from datasets import load_dataset
from s3fs import S3FileSystem
def get_s3fs():
return S3FileSystem()
sandbox_ds = load_dataset("openai_humaneval")
sandbox_ds["test"].save_to_disk(
"s3://bucket-name/test_multiprocessing_saving/",
num_proc=4,
storage_options=get_s3fs().storage_options, # also tried: storage_options=S3FileSystem().storage_options
)
```
I'm guessing I might use `storage_options` parameter wrongly, but I didn't find anything online that made it work.
**NB**: Behavior is the same when trying to save the whole `DatasetDict`.
### Expected behavior
Progress bar fills in and saving is carried out.
### Environment info
`datasets==2.18.0` | 6,936 |
https://github.com/huggingface/datasets/issues/6935 | Support for pathlib.Path in datasets 2.19.0 | [] | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets import Dataset
import pathlib
path = pathlib.Path("./my_out_path")
Dataset.from_dict(
    {"text": ["hello world"], "label": [777], "split": ["train"]}
).save_to_disk(path)
```
This results in an error when using datasets 2.19:
```
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/Users/jb/scratch/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1515, in save_to_disk
fs, _ = url_to_fs(dataset_path, **(storage_options or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 383, in url_to_fs
chain = _un_chain(url, kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 323, in _un_chain
if "::" in path
^^^^^^^^^^^^
TypeError: argument of type 'PosixPath' is not iterable
```
Converting to str works, however.
```
Dataset.from_dict(
{"text": ["hello world"], "label": [777], "split": ["train"]}
).save_to_disk(str(path))
```
### Expected behavior
My dataset gets saved to disk without an error.
### Environment info
aiohttp==3.9.5
aiosignal==1.3.1
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
datasets==2.19.0
dill==0.3.8
filelock==3.14.0
frozenlist==1.4.1
fsspec==2024.3.1
huggingface-hub==0.23.2
idna==3.7
multidict==6.0.5
multiprocess==0.70.16
numpy==1.26.4
packaging==24.0
pandas==2.2.2
pyarrow==16.1.0
pyarrow-hotfix==0.6
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
requests==2.32.3
six==1.16.0
tqdm==4.66.4
typing_extensions==4.12.0
tzdata==2024.1
urllib3==2.2.1
xxhash==3.4.1
yarl==1.9.4 | 6,935 |
https://github.com/huggingface/datasets/issues/6930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | [
"How do you solve it ?\r\n",
"> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n"
] | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}.
However, running dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') works fine. What is the issue here?
### Steps to reproduce the bug
run code:
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
from datasets import load_dataset
en = load_dataset("allenai/c4", "en", streaming=True)
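The comments indicate the error was ultimately caused by a switched Python environment; a quick sanity check (a sketch for verifying which interpreter and datasets version are actually in use):
```python
import sys
import datasets

print(sys.executable)        # which Python environment is running
print(datasets.__version__)  # which datasets version that environment has installed
```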
### Expected behavior
Successfully loaded the dataset.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
| 6,930 |
https://github.com/huggingface/datasets/issues/6929 | Avoid downloading the whole dataset when only README.md has been touched on hub. | [
"you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757",
"@severo : great !"
] | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current re-download behaviour of the load_dataset function is triggered whenever the hash of the latest commit on the Hugging Face Hub changes, but is there a clever way to download the dataset again **if and only if** the data files were modified?
### Motivation
The current behaviour is a waste of network bandwidth / disk space / research time.
### Your contribution
I don't have time to submit a PR, but I hope a simple solution will emerge from this issue ! | 6,929 |
https://github.com/huggingface/datasets/issues/6924 | Caching map result of DatasetDict. | [] | Hi!
I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins.
Changing num_proc induces a recomputation of the map; I'm not sure why, and whether this is expected behavior.
here it says, that cached files are loaded sequentially:
https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006
it seems like I can pass in a fingerprint, and load it directly:
https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125
**Environment Setup:**
- Python 3.11.9
- datasets 2.19.1 conda-forge
- Linux 6.1.83-1.el9.elrepo.x86_64
**MRE**
```python
# raw_datasets and tokenize_function are identical ("fixed") across both calls
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=9,
remove_columns=['text'],
load_from_cache_file= True,
desc="Running tokenizer on dataset line_by_line",
)
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=5,
remove_columns=['text'],
load_from_cache_file= True,
desc="Running tokenizer on dataset line_by_line",
)
``` | 6,924 |
https://github.com/huggingface/datasets/issues/6923 | Export Parquet Tablet Audio-Set is null bytes in Arrow | [] | ### Describe the bug
Exporting the processed audio inside the table with the dataset.to_parquet function produces pyarrow objects of the form {bytes: null, path: "Some/Path"}.
At the same time, the same dataset uploaded to the hub contains the actual byte arrays.
![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e)
![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021)
### Steps to reproduce the bug
1. Get the dataset from audio files and cast it
2. Export and push the dataset
3. Compare the locally saved dataset with the one uploaded to the Hub and note the difference
```py
from datasets import Dataset, Audio
df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(16000))
df.to_parquet("./datasets.parquet")
df.push_to_hub(repo_id="************", token="**********************")
```
You can use "try replicate case" for this
[replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip)
### Expected behavior
Two Parquet tables identical in content. Isn't that obvious?
### Environment info
Python 3.11+ (I try did it in 3.12 and got same result ) | 6,923 |
https://github.com/huggingface/datasets/issues/6919 | Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> | [] | ### Describe the bug
I wrote a notebook to load an existing dataset, process it, and upload as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11)
47 | - 4
48 | - 4
49 | - 8
50 | - !!binary |
----------------^
51 | TwAAAA==
52 | '1': !!python/object/apply:nump ...
```
My dataset has a `train` and `validation` dataset. These are the features:
```
{'c1': Value(dtype='string', id=None),
'c2': Value(dtype='string', id=None),
'c3': [{'value': Value(dtype='string', id=None),
'start': Value(dtype='int64', id=None),
'end': Value(dtype='int64', id=None),
'label': Value(dtype='string', id=None)}],
'c4': Value(dtype='string', id=None),
'c5': Value(dtype='string', id=None),
'c6': Value(dtype='string', id=None),
'c7': Value(dtype='string', id=None),
'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None),
'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```
This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with:
```
ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
```
### Steps to reproduce the bug
1. Start with any token classification dataset.
2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`.
3. Cast the label column from `Sequence` to `Sequence(ClassLabel))` with:
```
labels = ['O', 'B-TEST', 'I-TEST']
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```
4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")`
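Based on the `!!python/object/apply:nump ...` fragment in the error, the label names may be numpy strings rather than plain Python strings; a hedged workaround sketch (an assumption, not a confirmed fix):
```python
from datasets import ClassLabel, Sequence

# Assumption: `labels` may hold numpy.str_ values; coerce them to plain str first
labels = [str(label) for label in labels]
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```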
### Expected behavior
I expected `push_to_hub` to successfully push my dataset to the hub without error.
### Environment info
Python 3.11.9
datasets==2.19.1
transformers==4.41.1
PyYAML==6.0.1 | 6,919 |
https://github.com/huggingface/datasets/issues/6918 | NonMatchingSplitsSizesError when using data_dir | [
"Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.",
"I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714"
] | ### Describe the bug
Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset.
This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on the data in the directory specified using the data_dir argument.
This is recent behavior. Until the past few weeks loading using the data_dir argument worked without any issue.
### Steps to reproduce the bug
Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp
The dataset contains two directories "data1" and "data2", each with a file called "train.parquet" with a 2 x 5 table.
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")
Generates:
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
Cell In[3], line 2
1 from datasets import load_dataset
----> 2 dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")
File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2606 return builder_instance.as_streaming_dataset(split=split)
2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
2610 download_config=download_config,
2611 download_mode=download_mode,
2612 verification_mode=verification_mode,
2613 num_proc=num_proc,
2614 storage_options=storage_options,
2615 )
2617 # Build dataset for splits
2618 keep_in_memory = (
2619 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2620 )
File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
1025 if num_proc is not None:
1026 prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
1028 dl_manager=dl_manager,
1029 verification_mode=verification_mode,
1030 **prepare_split_kwargs,
1031 **download_and_prepare_kwargs,
1032 )
1033 # Sync info
1034 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1137 dl_manager.manage_extracted_files()
1139 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1140 verify_splits(self.info.splits, split_dict)
1142 # Update the info object with the splits.
1143 self.info.splits = split_dict
File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits)
95 bad_splits = [
96 {"expected": expected_splits[name], "recorded": recorded_splits[name]}
97 for name in expected_splits
98 if expected_splits[name].num_examples != recorded_splits[name].num_examples
99 ]
100 if len(bad_splits) > 0:
--> 101 raise NonMatchingSplitsSizesError(str(bad_splits))
102 logger.info("All the splits matched successfully.")
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}]
__________
By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message:
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp")
### Expected behavior
Should load the 5 x 2 table from data1/train.parquet without error message.
### Environment info
Used Codespaces to simplify environment (see details below), but bug is present across various configurations.
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | 6,918 |
https://github.com/huggingface/datasets/issues/6917 | WinError 32 The process cannot access the file during load_dataset | [] | ### Describe the bug
When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```
I get an error:
`PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'
`
<details><summary>Full stacktrace</summary>
<p>
```python
AttributeError Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1857 _time = time.time()
-> 1858 for _, table in generator:
1859 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files)
[58](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:58) def _generate_tables(self, files):
---> [59](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:59) schema = self.config.features.arrow_schema if self.config.features is not None else None
[60](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:60) if self.config.features is not None and self.config.columns is not None:
AttributeError: 'list' object has no attribute 'arrow_schema'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
[1881](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1881) num_shards = shard_id + 1
-> [1882](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1882) num_examples, num_bytes = writer.finalize()
[1883](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1883) writer.close()
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)
[583](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/arrow_writer.py:583) # If schema is known, infer features even if no examples were written
--> [584](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/arrow_writer.py:584) if self.pa_writer is None and self.schema:
...
--> [627](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:627) os.unlink(fullname)
[628](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:628) except OSError:
[629](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:629) onerror(os.unlink, fullname, sys.exc_info())
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'
```
</p>
</details>
### Steps to reproduce the bug
Steps to reproduce:
Just execute these lines
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```
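Incidentally, the nested `AttributeError: 'list' object has no attribute 'arrow_schema'` suggests that `features` is expected to be a `datasets.Features` object rather than a list of column names. A sketch of that call (the exact schema of the en-fr config is my assumption):
```python
from datasets import load_dataset, Features, Value, Translation

features = Features({
    "id": Value("string"),
    "translation": Translation(languages=["en", "fr"]),
})
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=features)
```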
### Expected behavior
I expect the dataset to be loaded without any errors.
### Environment info
| Package| Version|
|--------|--------|
| transformers| 4.37.2|
| python| 3.9.19|
| pytorch| 2.3.0|
| datasets|2.12.0 |
| arrow | 1.2.3|
I am using Conda on Windows 11. | 6,917 |
https://github.com/huggingface/datasets/issues/6916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | [] | ### Describe the bug
I currently have a dataset that has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a test and a training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 })
```
2. Push it to huggingface
```python
dataset.push_to_hub(dataset_name)
```
3. On the Hugging Face dataset repo, the dataset then appears to be split:
![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09)
4. Indeed, when loading the dataset from this repo, the dataset is split into a test and a training set.
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True)
dataset
```
output:
```
IterableDatasetDict({
train: IterableDataset({
features: ['input', 'output', 'Attack', '__index_level_0__'],
n_shards: 2
})
test: IterableDataset({
features: ['input', 'output', 'Attack', '__index_level_0__'],
n_shards: 1
})
```
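For reference, a sketch of two partial workarounds (the parameters are real, but whether they change what the Hub viewer displays is my assumption): push under one explicit split name, and read back only that split:
```python
# push the unsplit data explicitly under the "train" split
dataset.push_to_hub(dataset_name, split="train")

# when loading, request a single split only
dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", split="train", streaming=True)
```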
### Expected behavior
The dataset should not be split, since no split was requested.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | 6,916 |
https://github.com/huggingface/datasets/issues/6913 | Column order is nondeterministic when loading from JSON | [] | As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects.
For example, when loading a JSON file with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],
the resulting dataset may have columns:
- [ID, Topic, Language], or
- [Topic, Language, ID], or
- [Topic, ID, Language],...
This issue is caused by the use of a Python set (which does not preserve the order):
https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168
introduced in
- #5772 | 6,913 |
https://github.com/huggingface/datasets/issues/6912 | Add MedImg for streaming | [
"@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?",
"Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)",
"> Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n> \r\n> Then your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)\r\n\r\nThe dataset is several TB in total, which I do not have the resources to handle."
] | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your contribution
MedImg can be found [here](https://www.cuilab.cn/medimg/#). | 6,912 |
https://github.com/huggingface/datasets/issues/6908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | [
"I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.'}\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", download_mode=\"force_redownload\"); ds\r\nDownloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 13.3M/13.3M [00:00<00:00, 18.7MB/s]\r\nGenerating train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 10000/10000 [00:00<00:00, 78548.55 examples/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nLooking at your error traceback, I notice that the code line numbers do not correspond to the ones of datasets 2.19.1.\r\n\r\nAdditionally, I can't reproduce the issue with `HfFileSystem`:\r\n```python\r\nIn [1]: from huggingface_hub import HfFileSystem\r\n\r\nIn [2]: fs = HfFileSystem()\r\n\r\nIn [3]: with fs.open(\"datasets/stas/c4-en-10k/c4-en-10k.py\", \"rb\") as f:\r\n ...: data = f.read()\r\n ...: \r\n\r\nIn [4]: data[:20]\r\nOut[4]: b'# coding=utf-8\\n# Cop'\r\n```\r\n\r\nCould you please verify the `datasets` and `huggingface_hub` versions you are indeed using?\r\n```python\r\nimport datasets; print(datasets.__version__)\r\n\r\nimport huggingface_hub; print(huggingface_hub.__version__)\r\n```",
"Thanks for your reply! After I update the datasets version from 2.15.0 back to 2.19.1 again, it seems everything work well. Sorry for bordering you!"
] | ### Describe the bug
When updating the datasets library to version 2.16+ (I tested it on 2.16, 2.19.0 and 2.19.1), I use the following code to load the stas/c4-en-10k dataset
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
and it then raises a UnicodeDecodeError like
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
builder_instance = load_dataset_builder(
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory
raise e1 from None
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
I found that fs.open returns a gzip-compressed file, which is then parsed as plain text with a UTF-8 decoder.
```python
import gzip
from huggingface_hub import HfFileSystem

fs = HfFileSystem(endpoint='https://huggingface.co')
with fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb") as f:
    data = f.read()  # data is gzip bytes beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...'
data2 = gzip.decompress(data).decode("utf-8")  # data2 is what we want: '# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...'
```
### Steps to reproduce the bug
1. Install datasets between version 2.16 and 2.19
2. Use `datasets.load_dataset` method to load `stas/c4-en-10k` dataset.
### Expected behavior
Load dataset normally.
### Environment info
Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35
Python = 3.10.14
Datasets = 2.19 | 6,908 |
https://github.com/huggingface/datasets/issues/6907 | Support the deserialization of json lines files comprised of lists | [
"Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new revision.\r\n\r\nWith that said, for a static dataset that is not regularly updated like mine, and particularly for extremely large datasets with millions or billions of rows, using arrays could have a meaningful impact, and so there is probably still value in supporting this structure, provided the effort is not too much."
] | ### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields.
Essentially, a line in my json lines file used to look like this:
```json
{"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""}
```
And now it looks like this:
```json
["","","","","","","",""]
```
This saves 65 bytes per document and allows me to serialise and deserialise documents very quickly via `msgspec`.
After making this change, I found that `datasets` was incapable of deserialising my Corpus without a custom loading script, even if I ensured that the `dataset_info` field in my dataset card contained the desired names of my features.
I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries.
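In the meantime, a workaround sketch that reads array-style JSON Lines without a loading script (the file name and column order below are placeholders for illustration):
```python
import json
from datasets import Dataset

column_names = ["version_id", "type", "jurisdiction", "source",
                "citation", "url", "when_scraped", "text"]

def gen(path="corpus.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            # each line is a JSON array; zip it back onto the column names
            yield dict(zip(column_names, json.loads(line)))

ds = Dataset.from_generator(gen)
```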
### Motivation
The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that:
> In the next major release, the new safety features of π€ Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script.
I would rather not require my users to pass `trust_remote_code=True` which means that I will need built-in support for this format.
### Your contribution
I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go. | 6,907 |
https://github.com/huggingface/datasets/issues/6906 | irc_disentangle - Issue with splitting data | [
"Thank you I will try this out!\r\n\r\nOn Tue, Jun 11, 2024 at 3:55β―AM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I add a \"streaming=True\" after the name of the dataset, and it\r\n> works.....hope it can help you\r\n>\r\n> And if you install the version datasets==2.15.0, this bug will not happen.\r\n> I don't know why, but all of them works\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#issuecomment-2160041812>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7AMBT2MNO34SC3Z5G3ZG2UOXAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCNRQGA2DCOBRGI>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"I still find out that there are some strange bug in v2.15.0 of datasets. it seems like that the *.arrow file cannot be established. it may be an index of the subsets. well I still try to debug it. but, one of the most efficient way may be using the google colab to build this index in the ~/huggingface/datasets, and than download them to replace the local file.....lol......it works!",
"Yeah I did try what you suggested and it didnβt work. I was able to get it\r\non a local from someone who access the dataset in the past. Let me know\r\nwhen you end up fixing this bug.\r\n\r\nOn Tue, Jun 11, 2024 at 10:33β―PM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I still find out that there are some strange bug in v2.15.0 of datasets.\r\n> it seems like that the *.arrow file cannot be established. it may be an\r\n> index of the subsets. well I still try to debug it. but, one of the most\r\n> efficient way may be using the google colab to build this index in the\r\n> ~/huggingface/datasets, and than download them to replace the local\r\n> file.....lol......it works!\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#issuecomment-2161988798>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7BCJE2LOCWRVWPMNODZG6XPJAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCNRRHE4DQNZZHA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Could you please provide more information, as required by the Bug template: https://github.com/huggingface/datasets/issues/new?assignees=&labels=&projects=&template=bug-report.yml\r\n\r\nWithout all that information, it is very difficult for us to understand the underlying issue and to give a pertinent answer.\r\n\r\nWhat are the versions of the libraries you are using? Datasets, pyarrow, fsspec,...\r\n> Environment info\r\n> Please share your environemnt info with us. You can run the command datasets-cli env and copy-paste its output below.\r\n\r\nWhat is the output you get after executing these code lines?\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset('irc_disentangle')\r\nds\r\n```\r\n\r\n",
"We have made the following fixes:\r\n- [Fix source data URL](https://huggingface.co/datasets/jkkummerfeld/irc_disentangle/discussions/4)\r\n- [Convert dataset to Parquet](https://huggingface.co/datasets/jkkummerfeld/irc_disentangle/discussions/5)",
"Thank you for the fixes. Sorry I lost this conversation in my inbox.\r\n\r\nOn Mon, Jul 8, 2024 at 2:18β―AM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6906 <https://github.com/huggingface/datasets/issues/6906> as\r\n> completed.\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#event-13418330895>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7HREJDE5BZSOEJFJI3ZLIVLNAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJTGQYTQMZTGA4DSNI>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | ### Describe the bug
I am trying to access your dataset through Python using `datasets.load_dataset("irc_disentangle")` and I am getting this error message:
ValueError: Instruction "train" corresponds to no data!
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds
### Expected behavior
The data is supposed to load into `ds` and be accessible as such:
ds['train'][1050], ds['train'][1055]
### Environment info
I tried Python 3.12 and 3.10 | 6,906 |
https://github.com/huggingface/datasets/issues/6905 | Extraction protocol for arrow files is not defined | [] | ### Describe the bug
Passing files with the `.arrow` extension into the `data_files` argument is very slow, at least when `streaming=True`.
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820)
The method first looks at some base known extensions, where `arrow` is not defined, so it proceeds to determine the compression with the magic-number method, which is slow when dealing with a lot of files stored in S3. Looking at this predefined list, I don't see `arrow` in there either, so in the end it returns None:
```
MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = {
bytes.fromhex("504B0304"): "zip",
bytes.fromhex("504B0506"): "zip", # empty archive
bytes.fromhex("504B0708"): "zip", # spanned archive
bytes.fromhex("425A68"): "bz2",
bytes.fromhex("1F8B"): "gzip",
bytes.fromhex("FD377A585A00"): "xz",
bytes.fromhex("04224D18"): "lz4",
bytes.fromhex("28B52FFD"): "zstd",
}
```
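For illustration, a minimal sketch of the requested behaviour (the names below are assumptions, not the library's actual code): treating `.arrow` as a known uncompressed extension would skip the magic-number sniffing entirely:
```python
# hypothetical sketch of the short-circuit this issue asks for
BASE_KNOWN_EXTENSIONS = {"txt", "csv", "json", "jsonl", "tsv", "parquet", "arrow"}

def get_extraction_protocol(urlpath: str):
    extension = urlpath.split(".")[-1].lower()
    if extension in BASE_KNOWN_EXTENSIONS:
        return None  # no compression, nothing to extract
    # ...otherwise fall back to reading the first bytes and matching magic numbers
```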
### Expected behavior
My expectation is that `arrow` would be in the known extensions list, so the method would return None without going through the magic-number check.
### Environment info
datasets 2.19.0 | 6,905 |
https://github.com/huggingface/datasets/issues/6903 | Add the option of saving in parquet instead of arrow | [
"I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for.\r\n\r\nLet me know if I'm wrong ",
"No, it does not save the metadata json.\r\n\r\nWe have to recode all meta json load/save\r\nwith another custome functions.\r\n\r\nsave_to_disk\r\nand load should have option with\r\nβParquetβ instead of βarrowβ\r\n\r\nsince βarrowβ is never user for production \r\n(only parquet).\r\n\r\nThanks !\r\n\r\n> On May 17, 2024, at 5:38, FrΓ©dΓ©ric Branchaud-Charron ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> I think Dataset.to_parquet is what you're looking for.\r\n> \r\n> Let me know if I'm wrong\r\n> \r\n> β\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n",
"You can use `to_parquet` and `ds.info.write_to_directory()` to save the dataset info",
"Ok,\r\n\r\nWhat about loading ?\r\n\r\nShould we do in 2 steps ?\r\n\r\n\r\n\r\n> On Jun 14, 2024, at 1:09, Quentin Lhoest ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> You can use to_parquet and ds.info.write_to_directory() to save the dataset info\r\n> \r\n> β\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n",
"Yes, and there is DatasetInfo.from_directory(). to reload the info",
"Isnβt easier to combine both\r\ninto load_dataset and save_dataset\r\nwith parquet options.\r\n\r\n2) another question,\r\nHow can we download large dataset into disk directly without loading all in memory (!)\r\n\r\n\r\n\r\n\r\n> On Jun 14, 2024, at 19:54, Quentin Lhoest ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> Yes, and there is DatasetInfo.from_directory(). to reload the info\r\n> \r\n> β\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n",
"`load_dataset` doesn't load the dataset in memory, it progressively writes to disk in Arrow format and then memory maps the Arrow files. This allows to load datasets bigger than memory and without filling your RAM",
"Sure.\r\nHow memory map is managed ?\r\nManaged by the OS ?\r\n\r\nWhy the need of save_dataset() ?\r\n\r\n\r\n\r\n> On Jun 15, 2024, at 0:06, Quentin Lhoest ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> load_dataset doesn't load the dataset in memory, it progressively writes to disk in Arrow format and then memory maps the Arrow files. This allows to load datasets bigger than memory and without filling your RAM\r\n> \r\n> β\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n"
] | ### Feature request
In `dataset.save_to_disk('/path/to/save/dataset')`,
add the option to save in Parquet format:
`dataset.save_to_disk('/path/to/save/dataset', format="parquet")`
because Arrow is not used for production big data (only Parquet).
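In the meantime, a workaround sketch with the existing API (`Dataset.to_parquet` and `DatasetInfo.write_to_directory` are existing methods; the paths are placeholders):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
ds.to_parquet("/path/to/save/dataset/data.parquet")   # data in Parquet format
ds.info.write_to_directory("/path/to/save/dataset")   # dataset_info.json next to it
```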
### Motivation
Because Arrow is not used for production big data (only Parquet).
### Your contribution
I can do the testing ! | 6,903 |
https://github.com/huggingface/datasets/issues/6901 | HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos | [] | CLI convert_to_parquet cannot create "script" branch on 3rd party repos.
It can only create it on repos where the user executing the script has write access.
Otherwise, a 403 Forbidden HTTPError is raised:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main
service.run()
File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run
create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
hf_raise_for_status(response)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status
raise HfHubHTTPError(message, response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696)
403 Forbidden: Forbidden: cannot write to script.
Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script.
If you are trying to create or update content,make sure you have a token with the `write` role.
``` | 6,901 |
https://github.com/huggingface/datasets/issues/6900 | [WebDataset] KeyError with user-defined `Features` when a field is missing in an example | [
"@lhoestq How difficult of fix is this?",
"It shouldn't be difficult, I think it's just a matter of adding the missing fields from `self.config.features` in `example` here: before it iterates on image_field_names and audio_field_names. A missing field should have a value set to None\r\n\r\nhttps://github.com/huggingface/datasets/blob/768cb35ede5a6c35fa7545aa3671f3e321c96440/src/datasets/packaged_modules/webdataset/webdataset.py#L113-L116",
"@lhoestq So like this then?\r\n\r\n``` \r\ndef _generate_examples(self, tar_paths, tar_iterators):\r\n image_field_names = [\r\n field_name for field_name, feature in self.info.features.items() if isinstance(feature, datasets.Image)\r\n ]\r\n audio_field_names = [\r\n field_name for field_name, feature in self.info.features.items() if isinstance(feature, datasets.Audio)\r\n ]\r\n\t\r\n all_field_names = list(self.config.features.keys())\r\n \r\n for tar_idx, (tar_path, tar_iterator) in enumerate(zip(tar_paths, tar_iterators)):\r\n for example_idx, example in enumerate(self._get_pipeline_from_tar(tar_path, tar_iterator)):\r\n for field_name in all_field_names:\r\n if field_name not in example:\r\n if field_name in self.config.features:\r\n example[field_name] = self.config.features[field_name]\r\n else:\r\n example[field_name] = None\r\n \r\n # Process image and audio fields\r\n for field_name in image_field_names + audio_field_names:\r\n if example[field_name] is not None:\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n \r\n yield f\"{tar_idx}_{example_idx}\", example\r\n```\r\n\r\nOr should we avoid trying add the missing values and just set them to None?\r\n\r\n```\r\n for field_name in all_field_names:\r\n if field_name not in example:\r\n example[field_name] = None\r\n```",
"Yup this is the solution !\r\n\r\n```python\r\n for field_name in all_field_names:\r\n if field_name not in example:\r\n example[field_name] = None\r\n```",
"@lhoestq Awesome, thanks! I made a PR with the fixes"
] | reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1
```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
``` | 6,900 |
https://github.com/huggingface/datasets/issues/6899 | List of dictionary features get standardized | [] | ### Describe the bug
Hi, I'm trying to create an HF dataset from a list using `Dataset.from_list`.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature.
How can I keep the same set of keys as in the original list for each dictionary under a feature?
### Steps to reproduce the bug
```
from datasets import Dataset
# Define a function to generate a sample with "tools" feature
def generate_sample():
# Generate random sample data
sample_data = {
"text": "Sample text",
"feature_1": []
}
# Add feature_1 with random keys for this sample
feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # Example feature_1 with random keys
sample_data["feature_1"].extend(feature_1)
return sample_data
# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]
# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```
### Expected behavior
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```
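A possible workaround sketch (my suggestion, not a library feature): serialize each heterogeneous dictionary to a JSON string so the schema stays a plain list of strings, and decode on access:
```python
import json
from datasets import Dataset

samples = [{
    "text": "Sample text",
    "feature_1": [json.dumps({"key1": "value1"}), json.dumps({"key2": "value2"})],
}]
dataset = Dataset.from_list(samples)

feature_1 = [json.loads(s) for s in dataset[0]["feature_1"]]
# [{'key1': 'value1'}, {'key2': 'value2'}]
```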
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | 6,899 |
https://github.com/huggingface/datasets/issues/6897 | datasets template guide :: issue in documentation YAML | [
"Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML error message at the top of the page: \r\n![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/28409eb4-99e7-4b24-8eaa-21a65a8f23b2)\r\n\r\nI am proposing a change to make the YAML error disappear.",
"thanks albert! i looked at it for a while to figure it out. i think the `raw` view option is the correct way to look at it?"
] | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the document remains functional
### Expected behavior
I think the YAML block should be displayed or ignored.
### Environment info
N/A | 6,897 |
https://github.com/huggingface/datasets/issues/6896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | [] | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
<ipython-input-1-d6a3c721d3b8> in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small")
3 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150
2151 # Download and prepare data
-> 2152 builder_instance.download_and_prepare(
2153 download_config=download_config,
2154 download_mode=download_mode,
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
946 if num_proc is not None:
947 prepare_split_kwargs["num_proc"] = num_proc
--> 948 self._download_and_prepare(
949 dl_manager=dl_manager,
950 verification_mode=verification_mode,
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1059
1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1061 verify_splits(self.info.splits, split_dict)
1062
1063 # Update the info object with the splits.
/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
98 ]
99 if len(bad_splits) > 0:
--> 100 raise NonMatchingSplitsSizesError(str(bad_splits))
101 logger.info("All the splits matched successfully.")
102
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}]
```
I think I updated this dataset at some point, so it might be related to #6271.
It works fine as late as `2.10.0`, but not from `2.13.0` onwards.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("pysentimiento/spanish-tweets-small")
```
You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg)
### Expected behavior
Load the dataset without any error
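As a side note, a workaround sketch that should bypass stale split metadata (`download_mode` and `verification_mode` are real `load_dataset` options; whether they resolve this particular regression is an assumption):
```python
from datasets import load_dataset

ds = load_dataset(
    "pysentimiento/spanish-tweets-small",
    download_mode="force_redownload",
    verification_mode="no_checks",
)
```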
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- PyArrow version: 14.0.2
- Pandas version: 2.0.3 | 6,896 |
https://github.com/huggingface/datasets/issues/6894 | Better document defaults of to_json | [] | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | 6,894 |
https://github.com/huggingface/datasets/issues/6891 | Unable to load JSON saved using `to_json` | [
"Hi @DarshanDeshpande,\r\n\r\nPlease note that the default format of the method `Dataset.to_json` is [JSON-Lines](https://jsonlines.org/): it passes `orient=\"records\", lines=True` to `pandas.DataFrame.to_json`. This format is specially useful for large datasets, since unlike regular JSON files, it does not require loading all the data into memory at once, but can be done iteratively by batches.\r\n\r\nIn order to read this file using the `json` library, you should parse line by line:\r\n```python\r\nwith open(\"full_dataset.json\", \"r\") as f:\r\n data = [json.loads(line) for line in f]\r\nlen(data)\r\n```\r\nMaybe we should explain this better in our docs.",
"Now we explain this better in out docs:\r\n- #6895"
] | ### Describe the bug
Datasets stored in the JSON format cannot be loaded using `json.load()`
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
dataset = load_dataset("squad")
train_dataset, test_dataset = dataset["train"], dataset["validation"]
test_dataset.to_json("full_dataset.json")
# This works
loaded_test = load_dataset("json", data_files="full_dataset.json")
# This fails
loaded_test = json.load(open("full_dataset.json", "r"))
```
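For reference, a workaround sketch (one way to get a single JSON array on disk, not necessarily the recommended path): go through pandas when a `json.load()`-compatible file is needed:
```python
# writes a regular JSON array instead of JSON Lines
test_dataset.to_pandas().to_json("full_dataset.json", orient="records")
loaded_test = json.load(open("full_dataset.json", "r"))  # parses as a list of records
```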
### Expected behavior
The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`.
### Environment info
Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing | 6,891 |
https://github.com/huggingface/datasets/issues/6890 | add `with_transform` and/or `set_transform` to IterableDataset | [] | ### Feature request
When working with a really large dataset, it would save a lot of time (and compute resources) to use `with_transform` or `set_transform` from the `Dataset` class instead of waiting for the entire dataset to map.
### Motivation
I don't want to wait for a really long dataset to map; this would give IterableDataset an extra advantage over the Dataset class,
reducing time and resource usage.
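For context, a partial workaround sketch with the current API (lazy `map` on a streaming dataset, which is not quite the same as `set_transform`'s formatting hook — that comparison is my own):
```python
from datasets import load_dataset

ids = load_dataset("imdb", split="train", streaming=True)  # IterableDataset
# applied lazily, example by example, as the stream is consumed
ids = ids.map(lambda ex: {"text_len": len(ex["text"])})
```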
### Your contribution
I am a little busy with my job search lately, but would post about this feature in my social media.
Apologies again (my dad is going to kick me out soon); if I ever have some free time I will contribute to making this a reality, but that's going to be hard. | 6,890 |
https://github.com/huggingface/datasets/issues/6887 | FAISS load to None | [
"Hello,\r\n\r\nI'm not sure I understand. \r\nThe return value of `ds.load_faiss_index` is None as expected.\r\n\r\nI see that loading an Index on a dataset that doesn't have an `embedding` column doesn't raise an Issue. Is that the issue?\r\n\r\nSo `ds` doesn't have an `embedding` column, but we load an index that looks for it. But this will raise an issue only when calling `ds.search`."
] | ### Describe the bug
I've used FAISS with Datasets and saved the index to disk.
Then, when loading the saved FAISS index, there is no error, but the call returns None:
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64)
ds_with_embeddings.add_faiss_index(column='embeddings')
ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss')
```
# 2.
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
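For reference, a usage sketch: `load_faiss_index` attaches the index in place and returns None; the index is then used through `get_nearest_examples` (the `query_embedding` below is a placeholder NumPy vector produced by the same model):
```python
ds.load_faiss_index("embeddings", "my_index.faiss")
# query_embedding: a NumPy vector from the same embedding model (placeholder)
scores, retrieved = ds.get_nearest_examples("embeddings", query_embedding, k=10)
```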
### Expected behavior
Add column in Datasets.
### Environment info
Google Colab, SageMaker Notebook | 6,887 |
https://github.com/huggingface/datasets/issues/6886 | load_dataset with data_dir and cache_dir set fail with not supported | [] | ### Describe the bug
with python 3.11 I execute:
```py
from transformers import Wav2Vec2Processor, Data2VecAudioModel
import torch
from torch import nn
from datasets import load_dataset, concatenate_datasets
# load demo audio and set processor
dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache")
```
This fails in the last line with
```log
Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7)
Traceback (most recent call last):
File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module>
dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset
raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
```
### Steps to reproduce the bug
I setup an venv with requirements.txt
```txt
transformers==4.40.2
torch==2.2.2
datasets==2.16.0
fsspec==2023.9.2
```
pip freeze is:
```
aiohttp==3.9.5
aiosignal==1.3.1
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
datasets==2.16.0
dill==0.3.7
filelock==3.14.0
frozenlist==1.4.1
fsspec==2023.9.2
huggingface-hub==0.23.0
idna==3.7
Jinja2==3.1.4
MarkupSafe==2.1.5
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.15
networkx==3.3
numpy==1.26.4
packaging==24.0
pandas==2.2.2
pyarrow==16.0.0
pyarrow-hotfix==0.6
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
regex==2024.4.28
requests==2.31.0
safetensors==0.4.3
six==1.16.0
sympy==1.12
tokenizers==0.19.1
torch==2.2.2
tqdm==4.66.4
transformers==4.40.2
typing_extensions==4.11.0
tzdata==2024.1
urllib3==2.2.1
xxhash==3.4.1
yarl==1.9.4
```
I execute this on a M1 Mac.
### Expected behavior
I don't understand the error message. Why is "local" caching not supported? Would it be possible to add a hint to the error message on how to solve this issue?
### Environment info
source ....
python -u example.py | 6,886 |
https://github.com/huggingface/datasets/issues/6884 | CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device' | [] | After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error:
```Python traceback
AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'?
```
See: https://github.com/huggingface/datasets/actions/runs/8997488610/job/24715736153
```Python traceback
___________________ FormatterTest.test_jax_formatter_device ____________________
[gw1] linux -- Python 3.10.14 /opt/hostedtoolcache/Python/3.10.14/x64/bin/python
self = <tests.test_formatting.FormatterTest testMethod=test_jax_formatter_device>
@require_jax
def test_jax_formatter_device(self):
import jax
from datasets.formatting import JaxFormatter
pa_table = self._create_dummy_table()
device = jax.devices()[0]
formatter = JaxFormatter(device=str(device))
row = formatter.format_row(pa_table)
> assert row["a"].device() == device
E AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'?
tests/test_formatting.py:630: AttributeError
``` | 6,884 |
https://github.com/huggingface/datasets/issues/6882 | Connection Error When Using By-pass Proxies | [
"Changing the supplier of the proxy will solve this problem, or you can visit and follow the instructions in https://hf-mirror.com "
] | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))"
I have already read the documentation provided by Hugging Face, but I don't think there are detailed instructions on how to set up proxies for this library.
### Steps to reproduce the bug
1. Turn on any proxy software like Clash / ShadowsocksR, etc.
2. Export the system variables to the port provided by your proxy software in WSL (other applications can use the proxy fine, except the datasets library).
3. Load any dataset from Hugging Face online.
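For reference, a sketch of passing the proxy to `datasets` explicitly through `DownloadConfig` (the port 7890 is an assumption — use whatever port Clash exposes):
```python
from datasets import load_dataset, DownloadConfig

proxies = {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}
dataset = load_dataset("imdb", download_config=DownloadConfig(proxies=proxies))
```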
### Expected behavior
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
Cell In[33], line 3
      1 from datasets import load_metric
----> 3 metric = load_metric("seqeval")
File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
     44     warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)
     45     _emitted_deprecation_warnings.add(func_hash)
---> 46 return deprecated_function(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs)
   2101     warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning)
   2103 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
-> 2104 metric_module = metric_module_factory(
   2105     path,
   2106     revision=revision,
   2107     download_config=download_config,
   2108     download_mode=download_mode,
   2109     trust_remote_code=trust_remote_code,
   2110 ).module_path
   2111 metric_cls = import_main_class(metric_module, dataset=False)
   2112 metric = metric_cls(
   2113     config_name=config_name,
   2114     process_id=process_id,
...
--> 633     raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
    634 elif response is not None:
    635     raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))")))
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | 6,882 |
https://github.com/huggingface/datasets/issues/6881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | [
"@albertvillanova @lhoestq just ran into it and requiring newer pillow isn't a solution as it breaks Pillow-SIMD which is behind Pillow quite a few versions but necessary for training with reasonable throughput. \r\n\r\nA couple things here... \r\n\r\n1. This can be done with a method that isn't an issue for any somewhat recent Pillow\r\n`image = ImageOps.exif_transpose(image)`\r\n\r\n2. I'd rather this not be done for me automatically. Sometimes exif data is correct, sometimes it's not. Sometimes I might want to correct the orientation, sometimes I might not. \r\n\r\nIn any case if I've preprocessed the images properly myself I don't want to incur overhead, possible further fp seeks, parsing, to load the exif that's not loaded and parsed when you just open and decode the image.",
"Hi @rwightman, thanks for your feedback.\r\n\r\nFirst, as a side note comment, please note that you are depending on Pillow-SIMD and that library seems no longer maintained:\r\n- it has not been updated for more than a year: last commit to main was on June 20, 2023: https://github.com/uploadcare/pillow-simd/commit/faae977a00472275690664fe27e21df4e4e8ce07\r\n- in PyPI, the last release was more than 2 years ago, on January 4, 2022: https://pypi.org/project/Pillow-SIMD/#history\r\n\r\nIn relation with your suggestions for the `datasets` library, the changes were introduced by this PR:\r\n- #6739\r\n\r\nI agree maybe we should have given the option whether to perform this operation or not.",
"@albertvillanova \r\n\r\nHuh, thought I'd just installed the current datasets when I ran into this, maybe it was behind...\r\n\r\nI'm aware the support for SIMD is a problem, but it's up to 8x faster than non SIMD Pillow and really necessary in many training situations or you have lots of idle GPUs. The current situation is unfortunate but most changes since 9.0 aren't all that important for 'decoding jpegs and resizing'"
] | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1391 # `IterableDataset` automatically fills missing columns with None.
1392 # This is done with `_apply_feature_types_on_example`.
-> 1393 example = _apply_feature_types_on_example(
1394 example, self.features, token_per_repo_id=self._token_per_repo_id
1395 )
~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id)
1080 encoded_example = features.encode_example(example)
1081 # Decode example for Audio feature, e.g.
-> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
1083 return decoded_example
1084
~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id)
1974
-> 1975 return {
1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0)
1974
1975 return {
-> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
1978 else value
~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id)
1339 # we pass the token to read and decode files from private repositories in streaming mode
1340 if obj is not None and schema.decode:
-> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1342 return obj
1343
~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id)
187 image = PIL.Image.open(BytesIO(bytes_))
188 image.load() # to avoid "Too many open files" errors
--> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
190 image = PIL.ImageOps.exif_transpose(image)
191 if self.mode and self.mode != image.mode:
~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name)
75 )
76 return categories[name]
---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
78
79
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
### Environment info
Since datasets 2.19.0 | 6,881 |
https://github.com/huggingface/datasets/issues/6880 | Webdataset: KeyError: 'png' on some datasets when streaming | [
"The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b` as the grouping `__key__`, and `png` as the additional key to be added to the example\r\n\r\nTo get the expected behavior, the basenames of the files within the TARs should be fixed so that they only contain a single dot, the one separating the file extension.",
"I reopen it because I think we should try to give a clearer error message with a specific error code.\r\n\r\nFor now, it's hard for the user to understand where the error comes from (not everybody knows the subtleties of the webdataset filename structure).\r\n\r\n(we can transfer it to https://github.com/huggingface/dataset-viewer if it fits better there)",
"same with .jpg -> https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions\r\n\r\n```\r\nError code: DatasetGenerationError\r\nException: DatasetGenerationError\r\nMessage: An error occurred while generating the dataset\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1748, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in wrapped\r\n for item in generator(*args, **kwargs):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py\", line 109, in _generate_examples\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n KeyError: 'jpg'\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1316, in compute_config_parquet_and_info_response\r\n parquet_operations, partial = stream_convert_to_parquet(\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 909, in stream_convert_to_parquet\r\n builder._prepare_split(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1627, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1784, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n",
"More details in the spec (https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit#heading=h.hkptaq2kct2s)\r\n\r\n> The prefix of a file is all directory components of the file plus the file name component up to the first β.β in the file name.\r\n> The last extension (i.e., the portion after the last β.β) in a file name determines the file type.\r\n\r\n> Example:\r\n\timages17/image194.left.jpg\r\n\timages17/image194.right.jpg\r\n\timages17/image194.json\r\n\timages17/image12.left.jpg\r\n\timages17/image12.json\r\n\timages17/image12.right.jpg\r\n\timages3/image1459.left.jpg\r\n> \tβ¦\r\n> When reading this with a WebDataset library, you would get the following two dictionaries back in sequence:\r\n\r\n { β__key__β: βimages17/image194β, βleft.jpgβ: bβ...β, βright.jpgβ: bβ...β, βjsonβ: bβ...β}\r\n { β__key__β: βimages17/image12β, βleft.jpgβ: bβ...β, βright.jpgβ: bβ...β, βjsonβ: bβ...β}\r\n",
"OK, the issue is different in the latter case: some files are suffixed as `.jpeg`, and others as `.jpg` :)\r\n\r\nIs it a limitation of the webdataset format, or of the datasets library @lhoestq? And could we be able to give a clearer error?"
] | reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
 1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
 619M/619M [00:11<00:00, 57.4MB/s]
Generating train split:
 970/0 [00:02<00:00, 534.94 examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1747 _time = time.time()
-> 1748 for key, record in generator:
1749 if max_shard_size is not None and writer._num_bytes > max_shard_size:
7 frames
[/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py](https://localhost:8080/#) in _generate_examples(self, tar_paths, tar_iterators)
108 for field_name in image_field_names + audio_field_names:
--> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
110 yield f"{tar_idx}_{example_idx}", example
KeyError: 'png'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-2-8e0fbb7badc9>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("tbone5563/tar_images")
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2607
2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
2610 download_config=download_config,
2611 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
1025 if num_proc is not None:
1026 prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
1028 dl_manager=dl_manager,
1029 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1787
1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1789 super()._download_and_prepare(
1790 dl_manager,
1791 verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1120 try:
1121 # Prepare split will record examples associated to the split
-> 1122 self._prepare_split(split_generator, **prepare_split_kwargs)
1123 except OSError as e:
1124 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1625 job_id = 0
1626 with pbar:
-> 1627 for job_id, done, content in self._prepare_split_single(
1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1629 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1783 e = e.__context__
-> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1785
1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
``` | 6,880 |
https://github.com/huggingface/datasets/issues/6879 | Batched mapping does not raise an error if values for an existing column are empty | [] | ### Describe the bug
Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.
This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows.
### Steps to reproduce the bug
MWE:
```
import datasets
data = datasets.Dataset.from_dict({"test": [1]})
def mapping_fn(examples):
return {"test": [], "y": [1]}
data = data.map(mapping_fn, batched=True)
print(len(data))
```
Note that when returning `"x": []`, the error is raised correctly, as it is when returning `"test": [1, 2]`.
### Expected behavior
Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`.
Any exception would be acceptable.
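For completeness, a minimal workaround sketch on top of the MWE above (my own validation, not something `datasets` provides): wrap the mapping function and raise if the returned columns do not all have the same length.
```python
def checked(fn):
    # Wrap a batched mapping function and raise if the returned columns
    # do not all have the same length (the check that is currently skipped).
    def wrapper(examples):
        out = fn(examples)
        lengths = {name: len(column) for name, column in out.items()}
        if len(set(lengths.values())) > 1:
            raise ValueError(f"Mismatched column lengths: {lengths}")
        return out
    return wrapper

data = data.map(checked(mapping_fn), batched=True)  # raises ValueError for the MWE above
```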
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 6,879 |
https://github.com/huggingface/datasets/issues/6877 | OSError: [Errno 24] Too many open files | [
"ulimit -n 8192 can solve this problem",
"> ulimit -n 8192 can solve this problem\r\n\r\nWould there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library",
"> > ulimit -n 8192 can solve this problem\r\n> \r\n> Would there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library\r\n\r\n I think we could modify the _prepare_split_single function",
"I fixed it with https://github.com/huggingface/datasets/pull/6893, feel free to re-open if you're still having the issue :)",
"> I fixed it with #6893, feel free to re-open if you're still having the issue :)\r\n\r\nThanks a lot!"
] | ### Describe the bug
I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb)
When trying to load it using the `load_dataset` function I get the following error
```python
>>> from datasets import load_dataset
>>> d = load_dataset('mteb/biblenlp-corpus-mmteb')
Downloading readme: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 201k/201k [00:00<00:00, 1.07MB/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:00<00:00, 1069.15it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:00<00:00, 436182.33it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:00<00:00, 2228.75it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:00<00:00, 646478.73it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:00<00:00, 831032.24it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:00<00:00, 517645.51it/s]
Downloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:33<00:00, 24.87files/s]
Downloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:30<00:00, 27.48files/s]
Downloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 828/828 [00:30<00:00, 26.94files/s]
Generating train split: 1571592 examples [00:03, 461438.97 examples/s]
Generating test split: 11163 examples [00:00, 118190.72 examples/s]
Traceback (most recent call last):
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File ".env/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables
with open(file, "rb") as f:
^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/datasets/streaming.py", line 75, in wrapper
return function(*args, download_config=download_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 1224, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open
return self.__enter__()
^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__
f = self.fs.open(self.path, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open
f = self._open(
^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/datasets/filesystems/compression.py", line 81, in _open
return self.file.open()
^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open
return self.__enter__()
^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__
f = self.fs.open(self.path, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open
f = self._open(
^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 197, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 322, in __init__
self._open()
File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 327, in _open
self.f = open(self.path, mode=self.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/downloads/3a347186abfc0f9c924dde0221d246db758c7232c0101523f04a87c17d696618'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 981, in incomplete_dir
yield tmp_dir
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".env/lib/python3.12/site-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1007, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File ".env/lib/python3.12/site-packages/datasets/builder.py", line 988, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/usr/lib/python3.12/shutil.py", line 785, in rmtree
_rmtree_safe_fd(fd, path, onexc)
File "/usr/lib/python3.12/shutil.py", line 661, in _rmtree_safe_fd
onexc(os.scandir, path, err)
File "/usr/lib/python3.12/shutil.py", line 657, in _rmtree_safe_fd
with os.scandir(topfd) as scandir_it:
^^^^^^^^^^^^^^^^^
OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/mteb___biblenlp-corpus-mmteb/default/0.0.0/3912ed967b0834547f35b2da9470c4976b357c9a.incomplete'
```
I looked for the maximum number of open files on my machine (Ubuntu 24.04) and it seems to be 1024, but even when I try to load a single split (`load_dataset('mteb/biblenlp-corpus-mmteb', split='train')`) I get the same error.
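A minimal sketch for raising the soft open-file limit from within Python (the equivalent of `ulimit -n`; Unix-only, and 8192 is an arbitrary value capped at the OS hard limit):
```python
import resource

# Raise the soft limit on open file descriptors for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```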
### Steps to reproduce the bug
```python
from datasets import load_dataset
d = load_dataset('mteb/biblenlp-corpus-mmteb')
```
### Expected behavior
Load the dataset without error
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| 6,877 |
https://github.com/huggingface/datasets/issues/6869 | Download is broken for dict of dicts: FileNotFoundError | [] | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
```
Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-7-0e0d76d25b09> in <module>
----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
.../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls)
255 start_time = datetime.now()
256 with stack_multiprocessing_download_progress_bars():
--> 257 downloaded_path_or_paths = map_nested(
258 download_func,
259 url_or_urls,
.../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc)
506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1)
507 iterable = list(iter_batched(iterable, batch_size))
--> 508 mapped = [
509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
507 iterable = list(iter_batched(iterable, batch_size))
508 mapped = [
--> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
511 ]
.../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config)
311 )
312 else:
--> 313 return [
314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
.../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0)
312 else:
313 return [
--> 314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
316 ]
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config)
321 # append the relative path to the base_path
322 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 323 out = cached_path(url_or_filename, download_config=download_config)
324 out = tracked_str(out)
325 out.set_origin(url_or_filename)
.../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
220 elif is_local_path(url_or_filename):
221 # File, but it doesn't exist.
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
223 else:
224 # Something unknown
FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist
```
Related to:
- #6850
| 6,869 |
https://github.com/huggingface/datasets/issues/6868 | datasets.BuilderConfig does not work. | [
"I guess the issue is caused by the customization of BuilderConfig that you use from the repo [https://github.com/BeyonderXX/InstructUIE](https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py). You should report to them.\r\n\r\nI see you already opened an issue in their repo:\r\n- https://github.com/BeyonderXX/InstructUIE/issues/40"
] | ### Describe the bug
I customized a `BuilderConfig` and a `GeneratorBasedBuilder`.
Here is the code for the `BuilderConfig`:
```
class UIEConfig(datasets.BuilderConfig):
def __init__(
self,
*args,
data_dir=None,
instruction_file=None,
instruction_strategy=None,
task_config_dir=None,
num_examples=None,
max_num_instances_per_task=None,
max_num_instances_per_eval_task=None,
over_sampling=None,
**kwargs
):
super().__init__(*args, **kwargs)
self.data_dir = data_dir
self.num_examples = num_examples
self.over_sampling = over_sampling
self.instructions = self._parse_instruction(instruction_file)
self.task_configs = self._parse_task_config(task_config_dir)
self.instruction_strategy = instruction_strategy
self.max_num_instances_per_task = max_num_instances_per_task
self.max_num_instances_per_eval_task = max_num_instances_per_eval_task
```
And here is the code for the `GeneratorBasedBuilder`:
```
class UIEInstructions(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("2.0.0")
BUILDER_CONFIG_CLASS = UIEConfig
BUILDER_CONFIGS = [
UIEConfig(name="default", description="Default config for NaturalInstructions")
]
DEFAULT_CONFIG_NAME = "default"
```
Here is the `load_dataset` call:
```
raw_datasets = load_dataset(
os.path.join(CURRENT_DIR, "uie_dataset.py"),
data_dir=data_args.data_dir,
task_config_dir=data_args.task_config_dir,
instruction_file=data_args.instruction_file,
instruction_strategy=data_args.instruction_strategy,
cache_dir=data_cache_dir, # for debug, change dataset size, otherwise open it
max_num_instances_per_task=data_args.max_num_instances_per_task,
max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task,
num_examples=data_args.num_examples,
over_sampling=data_args.over_sampling
)
```
Finally, I get the following error:
```
BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key.
```
I debugged the code and found that the parameters I added do not seem to take effect.
### Steps to reproduce the bug
https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py
### Expected behavior
```
BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key.
```
### Environment info
torch 2.3.0+cu118
transformers 4.40.1
python 3.8 | 6,868 |
https://github.com/huggingface/datasets/issues/6867 | Improve performance of JSON loader | [
"Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.",
"Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/eval-set-scores/Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback.json) is not in \"records\" orient; instead it has the following structure:\r\n```json\r\n{\r\n \"chat_template\": \"tulu\",\r\n \"id\": [30, 34, 35,...],\r\n \"model\": \"Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback\",\r\n \"model_type\": \"Seq. Classifier\",\r\n \"results\": [1, 1, 1, ...],\r\n \"scores_chosen\": [4.421875, 1.8916015625, 3.8515625,...],\r\n \"scores_rejected\": [-2.416015625, -1.47265625, -0.9912109375,...],\r\n \"subset\": [\"alpacaeval-easy\", \"alpacaeval-easy\", \"alpacaeval-easy\",...]\r\n \"text_chosen\": [\"<s>[INST] How do I detail a...\",...],\r\n \"text_rejected\": [\"<s>[INST] How do I detail a...\",...]\r\n}\r\n```\r\n\r\nNote that \"records\" orient should be a list (not a dict) with each row as one item of the list:\r\n```json\r\n[\r\n {\"chat_template\": \"tulu\", \"id\": 30,... },\r\n {\"chat_template\": \"tulu\", \"id\": 34,... },\r\n ...\r\n]\r\n```",
"We use a mix (which is a mess), here's an example with the records orient\r\nhttps://huggingface.co/datasets/allenai/reward-bench-results/blob/main/best-of-n/alpaca_eval/tulu-13b/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5.json\r\n\r\nThere are more in that folder, ~40mb maybe?",
"@albertvillanova here's a snippet so you don't need to click\r\n```\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 0\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.076171875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 1\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.87890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 2\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.287109375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 3\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 1.6337890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 4\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 5.27734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 5\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.0625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 6\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.29296875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 7\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 6.77734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 8\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.853515625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 9\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.86328125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 10\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 11\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.70703125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 12\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.45703125\r\n}\r\n```",
"Thanks again for your feedback, @natolambert.\r\n\r\nHowever, strictly speaking, the last file is not in JSON format but in kind of JSON-Lines like format (although not properly either because there are multiple newline characters within each object). Not even pandas can read that file format.\r\n\r\nAnyway, for JSON-Lines, I would expect that `datasets` and `pandas` have the same performance for JSON Lines files, as both use `pyarrow` under the hood...\r\n\r\nA proper JSON file in records orient should be a list (a JSON array): the first character should be `[`.\r\n\r\nAnyway, I am generating a JSON file from your JSON-Lines file to test performance."
] | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performant:
> - https://github.com/ultrajson/ultrajson#benchmarks
> - https://github.com/ijl/orjson#performance
I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library.
However:
- We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson`
- Even if the above were not the case, we could always include `ujson` as an optional extra dependency and check at runtime whether it is installed to decide which library to use, either `json` or `ujson` | 6,867 |
https://github.com/huggingface/datasets/issues/6866 | DataFilesNotFoundError for datasets in the open-llm-leaderboard | [
"Potentially related:\r\n* #6864\r\n* #6850\r\n* #6848\r\n* #6819",
"Hi @jerome-white, thnaks for reporting.\r\n\r\nHowever, I cannot reproduce your issue:\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n\r\n>>> get_dataset_config_names(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5\")\r\n['harness_arc_challenge_25',\r\n 'harness_gsm8k_5',\r\n 'harness_hellaswag_10',\r\n 'harness_hendrycksTest_5',\r\n 'harness_hendrycksTest_abstract_algebra_5',\r\n 'harness_hendrycksTest_anatomy_5',\r\n 'harness_hendrycksTest_astronomy_5',\r\n 'harness_hendrycksTest_business_ethics_5',\r\n 'harness_hendrycksTest_clinical_knowledge_5',\r\n 'harness_hendrycksTest_college_biology_5',\r\n 'harness_hendrycksTest_college_chemistry_5',\r\n 'harness_hendrycksTest_college_computer_science_5',\r\n 'harness_hendrycksTest_college_mathematics_5',\r\n 'harness_hendrycksTest_college_medicine_5',\r\n 'harness_hendrycksTest_college_physics_5',\r\n 'harness_hendrycksTest_computer_security_5',\r\n 'harness_hendrycksTest_conceptual_physics_5',\r\n 'harness_hendrycksTest_econometrics_5',\r\n 'harness_hendrycksTest_electrical_engineering_5',\r\n 'harness_hendrycksTest_elementary_mathematics_5',\r\n 'harness_hendrycksTest_formal_logic_5',\r\n 'harness_hendrycksTest_global_facts_5',\r\n 'harness_hendrycksTest_high_school_biology_5',\r\n 'harness_hendrycksTest_high_school_chemistry_5',\r\n 'harness_hendrycksTest_high_school_computer_science_5',\r\n 'harness_hendrycksTest_high_school_european_history_5',\r\n 'harness_hendrycksTest_high_school_geography_5',\r\n 'harness_hendrycksTest_high_school_government_and_politics_5',\r\n 'harness_hendrycksTest_high_school_macroeconomics_5',\r\n 'harness_hendrycksTest_high_school_mathematics_5',\r\n 'harness_hendrycksTest_high_school_microeconomics_5',\r\n 'harness_hendrycksTest_high_school_physics_5',\r\n 'harness_hendrycksTest_high_school_psychology_5',\r\n 'harness_hendrycksTest_high_school_statistics_5',\r\n 'harness_hendrycksTest_high_school_us_history_5',\r\n 'harness_hendrycksTest_high_school_world_history_5',\r\n 'harness_hendrycksTest_human_aging_5',\r\n 'harness_hendrycksTest_human_sexuality_5',\r\n 'harness_hendrycksTest_international_law_5',\r\n 'harness_hendrycksTest_jurisprudence_5',\r\n 'harness_hendrycksTest_logical_fallacies_5',\r\n 'harness_hendrycksTest_machine_learning_5',\r\n 'harness_hendrycksTest_management_5',\r\n 'harness_hendrycksTest_marketing_5',\r\n 'harness_hendrycksTest_medical_genetics_5',\r\n 'harness_hendrycksTest_miscellaneous_5',\r\n 'harness_hendrycksTest_moral_disputes_5',\r\n 'harness_hendrycksTest_moral_scenarios_5',\r\n 'harness_hendrycksTest_nutrition_5',\r\n 'harness_hendrycksTest_philosophy_5',\r\n 'harness_hendrycksTest_prehistory_5',\r\n 'harness_hendrycksTest_professional_accounting_5',\r\n 'harness_hendrycksTest_professional_law_5',\r\n 'harness_hendrycksTest_professional_medicine_5',\r\n 'harness_hendrycksTest_professional_psychology_5',\r\n 'harness_hendrycksTest_public_relations_5',\r\n 'harness_hendrycksTest_security_studies_5',\r\n 'harness_hendrycksTest_sociology_5',\r\n 'harness_hendrycksTest_us_foreign_policy_5',\r\n 'harness_hendrycksTest_virology_5',\r\n 'harness_hendrycksTest_world_religions_5',\r\n 'harness_truthfulqa_mc_0',\r\n 'harness_winogrande_5',\r\n 'results']\r\n```\r\n\r\nMaybe it was just a temporary issue...",
"> Maybe it was just a temporary issue...\r\n\r\nPerhaps. I've changed my workflow to use the hub's `HfFileSystem`, so for now this is no longer a blocker for me. I'll reopen the issue if that changes."
] | ### Describe the bug
When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost every day; yesterday was the first time I started seeing this.
### Steps to reproduce the bug
This snippet has three cells:
1. Loads the modules
2. Tries to get config names
3. Tries to load the dataset
I've chosen "davidkim205"'s Rhea-72b-v0.5 model because it is one of the best performers on the leaderboard and should likely have no dataset issues:
```python
In [1]: from datasets import load_dataset, get_dataset_config_names
In [2]: get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea
...: -72b-v0.5")
---------------------------------------------------------------------------
DataFilesNotFoundError Traceback (most recent call last)
Cell In[2], line 1
----> 1 get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5")
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/inspect.py:347, in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs)
291 def get_dataset_config_names(
292 path: str,
293 revision: Optional[Union[str, Version]] = None,
(...)
298 **download_kwargs,
299 ):
300 """Get the list of available config names for a particular dataset.
301
302 Args:
(...)
345 ```
346 """
--> 347 dataset_module = dataset_module_factory(
348 path,
349 revision=revision,
350 download_config=download_config,
351 download_mode=download_mode,
352 dynamic_modules_path=dynamic_modules_path,
353 data_files=data_files,
354 **download_kwargs,
355 )
356 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))
357 return list(builder_cls.builder_configs.keys()) or [
358 dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default")
359 ]
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
1812 return LocalDatasetModuleFactoryWithScript(
1813 combined_path,
1814 download_mode=download_mode,
1815 dynamic_modules_path=dynamic_modules_path,
1816 trust_remote_code=trust_remote_code,
1817 ).get_module()
1818 elif os.path.isdir(path):
1819 return LocalDatasetModuleFactoryWithoutScript(
1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode
-> 1821 ).get_module()
1822 # Try remotely
1823 elif is_relative_path(path) and path.count("/") <= 1:
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self)
1033 patterns = get_data_patterns(base_path)
1034 data_files = DataFilesDict.from_patterns(
1035 patterns,
1036 base_path=base_path,
1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS,
1038 )
-> 1039 module_name, default_builder_kwargs = infer_module_for_data_files(
1040 data_files=data_files,
1041 path=self.path,
1042 )
1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
1044 # Collect metadata files if the module supports them
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
596 if not module_name:
--> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
598 return module_name, default_builder_kwargs
DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5
In [3]: data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-
...: v0.5", "harness_winogrande_5")
---------------------------------------------------------------------------
DataFilesNotFoundError Traceback (most recent call last)
Cell In[3], line 1
----> 1 data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5", "harness_winogrande_5")
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2582 verification_mode = VerificationMode(
2583 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
2584 )
2586 # Create a dataset builder
-> 2587 builder_instance = load_dataset_builder(
2588 path=path,
2589 name=name,
2590 data_dir=data_dir,
2591 data_files=data_files,
2592 cache_dir=cache_dir,
2593 features=features,
2594 download_config=download_config,
2595 download_mode=download_mode,
2596 revision=revision,
2597 token=token,
2598 storage_options=storage_options,
2599 trust_remote_code=trust_remote_code,
2600 _require_default_config_name=name is None,
2601 **config_kwargs,
2602 )
2604 # Return iterable dataset in case of streaming
2605 if streaming:
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
2257 download_config = download_config.copy() if download_config else DownloadConfig()
2258 download_config.storage_options.update(storage_options)
-> 2259 dataset_module = dataset_module_factory(
2260 path,
2261 revision=revision,
2262 download_config=download_config,
2263 download_mode=download_mode,
2264 data_dir=data_dir,
2265 data_files=data_files,
2266 cache_dir=cache_dir,
2267 trust_remote_code=trust_remote_code,
2268 _require_default_config_name=_require_default_config_name,
2269 _require_custom_configs=bool(config_kwargs),
2270 )
2271 # Get dataset builder class from the processing script
2272 builder_kwargs = dataset_module.builder_kwargs
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
1812 return LocalDatasetModuleFactoryWithScript(
1813 combined_path,
1814 download_mode=download_mode,
1815 dynamic_modules_path=dynamic_modules_path,
1816 trust_remote_code=trust_remote_code,
1817 ).get_module()
1818 elif os.path.isdir(path):
1819 return LocalDatasetModuleFactoryWithoutScript(
1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode
-> 1821 ).get_module()
1822 # Try remotely
1823 elif is_relative_path(path) and path.count("/") <= 1:
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self)
1033 patterns = get_data_patterns(base_path)
1034 data_files = DataFilesDict.from_patterns(
1035 patterns,
1036 base_path=base_path,
1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS,
1038 )
-> 1039 module_name, default_builder_kwargs = infer_module_for_data_files(
1040 data_files=data_files,
1041 path=self.path,
1042 )
1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
1044 # Collect metadata files if the module supports them
File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
596 if not module_name:
--> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
598 return module_name, default_builder_kwargs
DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5
```
### Expected behavior
No exceptions from `get_dataset_config_names` or `load_dataset`
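In the meantime, a rough sketch of a possible workaround (reading the repo's files directly through `HfFileSystem`; the glob pattern is an assumption about the repo layout):
```python
import pandas as pd
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
# List the parquet files of the dataset repo and read one of them directly.
files = fs.glob("datasets/open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5/**/*.parquet")
with fs.open(files[0]) as f:
    df = pd.read_parquet(f)
```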
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | 6,866 |
https://github.com/huggingface/datasets/issues/6865 | Example on Semantic segmentation contains bug | [] | ### Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows a wrong example with torchvision transforms.
Specifically, as one can see in the screenshot below, the object boundaries have weird colors.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee">
The original example with `albumentations` is correct:
<img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3">
That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong for segmentation labels - you simply cannot mix the two. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations.
The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to a https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object.
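A minimal sketch of that approach (shapes, dtypes and the number of classes are made up for illustration):
```python
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

transforms = v2.Compose([
    v2.Resize((256, 256)),
    v2.RandomHorizontalFlip(p=0.5),
])

# Dummy image and label map standing in for one dataset example.
image = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)
label = tv_tensors.Mask(torch.randint(0, 20, (512, 512), dtype=torch.uint8))

# Because the label is wrapped in tv_tensors.Mask, v2.Resize applies
# nearest-neighbor interpolation to it while the image stays bilinear.
image, label = transforms(image, label)
```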
### Steps to reproduce the bug
Go to the website.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef">
https://huggingface.co/docs/datasets/en/semantic_segmentation
### Expected behavior
Results similar to the `albumentations` example. Alternatively, remove the torchvision part altogether, or use `kornia` instead.
### Environment info
Irrelevant | 6,865 |
https://github.com/huggingface/datasets/issues/6864 | Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub | [
"Hi @vinodrajendran001, thanks for reporting.\r\n\r\nIndeed the dataset no longer exists on the Hub. The URL https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts gives 404 Not Found error."
] | ### Describe the bug
The dataset `rewardsignal/reddit_writing_prompts` is missing from the Hugging Face Hub.
### Steps to reproduce the bug
```
from datasets import load_dataset
prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]')
```
### Expected behavior
DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed
### Environment info
Nothing to do with versions | 6,864 |
https://github.com/huggingface/datasets/issues/6863 | Revert temporary pin huggingface-hub < 0.23.0 | [] | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | 6,863 |
https://github.com/huggingface/datasets/issues/6860 | CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download" | [
"I think this needs to be fixed on transformers.\r\n\r\nCC: @Wauplin ",
"See:\r\n- https://github.com/huggingface/transformers/issues/30618",
"Opened https://github.com/huggingface/transformers/pull/30620"
] | CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
``` | 6,860 |
https://github.com/huggingface/datasets/issues/6858 | Segmentation fault | [
"I downloaded the jsonl file and extract it manually. \r\nThe issue seems to be related to pyarrow.json \r\n\r\n\r\n\r\npython3 -q -X faulthandler -c \"from datasets import load_dataset; load_dataset('json', data_files='/Users/scampion/Downloads/1998-09.jsonl')\"\r\nGenerating train split: 0 examples [00:00, ? examples/s]Fatal Python error: Segmentation fault\r\n\r\nThread 0x00007000000c1000 (most recent call first):\r\n <no Python frame>\r\n\r\nThread 0x00007000024df000 (most recent call first):\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 331 in wait\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 629 in wait\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/tqdm/_monitor.py\", line 60 in run\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1045 in _bootstrap_inner\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1002 in _bootstrap\r\n\r\nThread 0x00007ff845c66640 (most recent call first):\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py\", line 122 in _generate_tables\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1995 in _prepare_split_single\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1882 in _prepare_split\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1122 in _download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1027 in download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/load.py\", line 2609 in load_dataset\r\n File \"<string>\", line 1 in <module>\r\n\r\nExtension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pyarrow._hdfsio, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, charset_normalizer.md, 
yaml._yaml, pyarrow._parquet, pyarrow._fs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash._xxhash, pyarrow._json (total: 72)\r\n[1] 56678 segmentation fault python3 -q -X faulthandler -c\r\n/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\n(venv_test)",
"The error comes from data where one line contains \"null\""
] | ### Describe the bug
Using various versions of `datasets`, I'm no longer able to load that dataset without a segmentation fault.
Several other files are also affected.
### Steps to reproduce the bug
```bash
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate
# Install the latest version
pip install datasets
# Load that dataset
python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')"
```
### Expected behavior
Data must be loaded
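In the meantime, a rough workaround sketch, assuming (as noted in the comments) that a bare `null` line in the JSONL is the trigger; the local path is hypothetical:
```python
import json

src = "1998-09.jsonl"  # hypothetical path to the locally extracted file
with open(src, encoding="utf-8") as f:
    lines = [line for line in f if line.strip() and json.loads(line) is not None]
with open("1998-09.clean.jsonl", "w", encoding="utf-8") as f:
    f.writelines(lines)
```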
### Environment info
datasets==2.19.0
Python 3.11.7
Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64 | 6,858 |
https://github.com/huggingface/datasets/issues/6856 | CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character | [
"After investigation, I have found that when a local file is uploaded to the Hub, the new line character is no longer transformed to \"\\n\": on Windows machine now it is kept as \"\\r\\n\".\r\n\r\nAny idea why this changed?\r\nCC: @lhoestq "
] | CI fails on Windows for test_delete_from_hub after the merge of:
- #6820
This is weird because the CI was green in the PR branch before merging to main.
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_files:\r\n - split: train\r\n path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n - split: train\n path: cats/train/*\n---\n')
Full diff:
[
CommitOperationDelete(
path_in_repo='dogs/train/0000.csv',
is_folder=False,
),
CommitOperationAdd(
path_in_repo='README.md',
- path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n '
? --------
+ path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_f'
? ++ ++ ++
- b' - split: train\n path: cats/train/*\n---\n',
? ^^^^^^ -
+ b'iles:\r\n - split: train\r\n path: cats/train/*\r'
? ++++++++++ ++ ^
+ b'\n---\r\n',
),
]
``` | 6,856 |
https://github.com/huggingface/datasets/issues/6854 | Wrong example of usage when config name is missing for community script-datasets | [] | As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the example of usage shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config name. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs". | 6,854 |
https://github.com/huggingface/datasets/issues/6853 | Support soft links for load_datasets imagefolder | [] | ### Feature request
`load_dataset` from a folder of images doesn't seem to support soft links (symlinks). It would be nice if it did, especially during methods development where image folders are being curated.
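A rough sketch of the desired usage (paths are hypothetical):
```python
import os
from datasets import load_dataset

# Hypothetical layout: the originals live elsewhere; the curated imagefolder
# only contains symlinks pointing at them.
os.makedirs("curated/train/cats", exist_ok=True)
os.symlink("/data/source_a/cat_001.jpg", "curated/train/cats/cat_001.jpg")

# Desired behavior: the imagefolder loader follows the symlinks
# instead of ignoring them.
ds = load_dataset("imagefolder", data_dir="curated")
```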
### Motivation
Images come from a wide variety of sources, and we'd like to be able to soft link directly from the originating folders instead of copying. Keeping copies of the files risks image versioning issues and doubles the required disk space.
### Your contribution
N/A | 6,853 |
https://github.com/huggingface/datasets/issues/6852 | Write token isn't working while pushing to datasets | [] | ### Describe the bug
<img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc">
As you can see, I logged in to my account and the write token is valid.
But I can't upload from my main account and I am getting that error. It worked fine on my test account at the first try.
(I refreshed the token and tried a new token, but it still doesn't work.)
### Steps to reproduce the bug
1. I loaded a dataset.
2. I logged in using both cli and huggingface_hub
3. I pushed to my own dataset
(It went well without any issues on my test account)
### Expected behavior
It should have gone smoothly; this is not even my first time uploading to Hugging Face datasets.
### Environment info
colab, dataset (tried multiple versions) | 6,852 |
https://github.com/huggingface/datasets/issues/6851 | load_dataset('emotion') UnicodeDecodeError | [] | ### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
Success.
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.19.0 | 6,851 |
https://github.com/huggingface/datasets/issues/6850 | Problem loading voxpopuli dataset | [
"Version 2.18 works without problem.",
"@Namangarg110 @mohsen-goodarzi The bug appears because the number of urls is less than 16 and the algorithm is meant to work on the previously created mode for a single url as stated on line 314: https://github.com/huggingface/datasets/blob/1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762/src/datasets/download/download_manager.py#L314\r\n\r\nIn addition, previously `map_nested` function was supported without batching and it is meant to be the default performance. \r\n\r\nOne of the shortest walk-arounds would be changing the part of the manager with the current setting:\r\n```\r\n if len(url_or_urls) >= 16:\r\n download_func = partial(self._download_batched, download_config=download_config)\r\n else:\r\n download_func = partial(self._download_single, download_config=download_config)\r\n\r\n start_time = datetime.now()\r\n with stack_multiprocessing_download_progress_bars():\r\n downloaded_path_or_paths = map_nested(\r\n download_func,\r\n url_or_urls,\r\n map_tuple=True,\r\n num_proc=download_config.num_proc,\r\n desc=\"Downloading data files\",\r\n batched=True if len(url_or_urls) >= 16 else False,\r\n batch_size=-1,\r\n )\r\n```\r\n\r\nI would suggest to consider other datasets for similar issues and make a pull-request. ",
"Thanks for reporting @Namangarg110 and thanks for the investigation @MilanaShhanukova.\r\n\r\nApparently, there is an issue with the download functionality.\r\nI am proposing a fix."
] | ### Describe the bug
```
Exception has occurred: FileNotFoundError
Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'}
```
The error is in the logic for link URL creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/en/asr_train.tsv
Basically there should be links directly under ```metadata["train"]```, not under ```metadata["train"][self.config.languages[0]]```
The same applies to the audio URLs.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli","en")
```
### Expected behavior
Dataset should be loaded successfully.
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.12.2 | 6,850 |
https://github.com/huggingface/datasets/issues/6848 | Can't Download Common Voice 17.0 hy-AM | [
"Same issue here."
] | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
/usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Reading metadata...: 6180it [00:00, 133224.37it/s]les/s]
Generating train split: 0 examples [00:00, ? examples/s]
HuggingFace datasets failed due to some reason (stack trace below).
For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`).
Once logged in, you need to set `use_auth_token=True` when calling this script.
Traceback error for reference :
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single
example = self.info.features.encode_example(record) if self.info.features is not None else record
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example
return encode_nested_example(self, example)
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example
{
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp>
{
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'sentence_id'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main
dataset = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM")
```
### Expected behavior
It works fine with common_voice_16_1
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | 6,848 |
https://github.com/huggingface/datasets/issues/6847 | [Streaming] Only load requested splits without resolving files for the other splits | [
"This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832",
"I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/80919eca-eb6c-407d-8070-52642fdcee54\">\r\n<img width=\"914\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/5219c201-e22e-4536-acc3-a922677785ff\">\r\n\r\n\r\nIt seems to be downloading, loading, and generating splits using the entire dataset."
] | e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split.
This is due to `load_dataset()` resolving the files of all the splits even if only one is needed.
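Until this is optimized, one user-side mitigation is to pass `data_files` for the wanted split only, so the other splits are never resolved. A sketch — the glob below assumes a `data/<split_name>/*` layout, which may not match the actual repo:
```python
from datasets import load_dataset

ds = load_dataset(
    "thangvip/cosmopedia_vi_math",
    data_files={"train": "data/train/*"},  # assumed layout; adjust to the real split's files
    split="train",
    streaming=True,
)
```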
In `dataset-viewer` the splits are loaded in different jobs so it results in 300 jobs that resolve 300 splits -> 90k calls to `/paths-info` | 6,847 |
https://github.com/huggingface/datasets/issues/6846 | Unimaginable super slow iteration | [
"In every iteration you load the full \"random_input\" column in memory, only then to access it's i-th element.\r\n\r\nYou can try using this instead\r\n\r\na,b=dataset[i]['random_input'],dataset[i]['random_output']"
] | ### Describe the bug
Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset… Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
num_cols = 500
random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
s=time.time()
d={'random_input':random_input,'random_output':random_output}
dataset=datasets.Dataset.from_dict(d)
print('from dict',time.time()-s)
print(dataset)
for i in range(len(dataset)):
aa=time.time()
a,b=dataset['random_input'][i],dataset['random_output'][i]
print(time.time()-aa)
```
corresponding output
```bash
from dict 9.215498685836792
Dataset({
features: ['random_input', 'random_output'],
num_rows: 52000
})
19.129778146743774
19.329464197158813
19.27668261528015
19.28557538986206
19.247620582580566
19.624247074127197
19.28673791885376
19.301053047180176
19.290496110916138
19.291821718215942
19.357765197753906
```
### Expected behavior
Under normal circumstances, iteration should be very fast, since it involves nothing more than fetching items.
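As the comment above points out, `dataset['random_input'][i]` materializes the entire column on every iteration; indexing the row first (or iterating directly) avoids that. A minimal sketch:
```python
for i in range(len(dataset)):
    row = dataset[i]  # fetches a single row instead of two full columns
    a, b = row["random_input"], row["random_output"]

# or simply iterate over the dataset:
for row in dataset:
    a, b = row["random_input"], row["random_output"]
```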
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 6,846 |
https://github.com/huggingface/datasets/issues/6845 | load_dataset doesn't support list column | [
"I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded as ```list<item: null>```, however in some other chunk it was ```list<item: string>```. This triggered a TypeError running the function ```table_cast()```.\r\n\r\nI temporarily fixed this by re-dumping the file into a regular JSON format instead of lines of JSON dict. I didn't dig deeper for the lack of knowledge and programming ability but I do hope some developer of this repo will find and fix it."
] | ### Describe the bug
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
got exception:
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature
casted_array_values = _c(array.values, feature[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string>
to
{'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/llm/train-2.py", line 150, in <module>
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
### Steps to reproduce the bug
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
### Expected behavior
no exception
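As a stopgap, the first comment describes re-dumping the JSON-lines file as a single JSON array so that one schema is inferred over all records (avoiding per-chunk `list<item: null>` vs `list<item: string>` mismatches). A sketch of that workaround — file names are placeholders:
```python
import json

records = []
with open("train.jsonl", encoding="utf-8") as f:  # placeholder input file
    for line in f:
        if line.strip():
            records.append(json.loads(line))

with open("train.json", "w", encoding="utf-8") as f:  # placeholder output file
    json.dump(records, f, ensure_ascii=False)
```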
### Environment info
python 3.11
datasets 2.19.0 | 6,845 |
https://github.com/huggingface/datasets/issues/6843 | IterableDataset raises exception instead of retrying | [
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:",
"Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.",
"@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days.",
"I also have my runs crash a surprising amount due to the dataloader crashing because of the hub, some way to address this would be nice."
] | ### Describe the bug
In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here:
https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19
If GitHub code snippets still aren't working, here's a copy:
```python
def read_with_retries(*args, **kwargs):
disconnect_err = None
for retry in range(1, max_retries + 1):
try:
out = read(*args, **kwargs)
break
except (ClientError, TimeoutError) as err:
disconnect_err = err
logger.warning(
f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]"
)
time.sleep(config.STREAMING_READ_RETRY_INTERVAL)
else:
raise ConnectionError("Server Disconnected") from disconnect_err
return out
```
With the latest outage, the end of my stack trace looked like this:
```
...
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries
out = read(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read
return self._buffer.read(size)
^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read
buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read
return self.file.read(size)
^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read
out = self.cache._fetch(self.loc, self.loc + length)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch
self.cache = self.fetcher(start, end) # new block replaces old
^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
hf_raise_for_status(r)
File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz
```
Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately.
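For illustration only — this is not the library's code, and which status codes should actually be retried is an open question — a self-contained sketch of a read wrapper that also treats the Hub's own HTTP errors as transient:
```python
import time

from huggingface_hub.utils import HfHubHTTPError


def with_retries(read, max_retries=5, retry_interval=5):
    """Wrap a read callable so Hub HTTP errors are retried, not just connection errors."""

    def read_with_retries(*args, **kwargs):
        last_err = None
        for retry in range(1, max_retries + 1):
            try:
                return read(*args, **kwargs)
            except (ConnectionError, TimeoutError, HfHubHTTPError) as err:
                last_err = err
                time.sleep(retry_interval)
        raise ConnectionError("Server Disconnected") from last_err

    return read_with_retries
```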
### Steps to reproduce the bug
Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace.
### Expected behavior
All HTTP errors while iterating a streamable dataset should cause retries.
### Environment info
Output from `datasets-cli env`:
- `datasets` version: 2.18.0
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | 6,843 |
https://github.com/huggingface/datasets/issues/6842 | Datasets with files with colon : in filenames cannot be used on Windows | [] | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons (":") in filenames. These should be converted into alternative strings.
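A minimal sketch of the kind of substitution being asked for — this is not what `datasets` currently does, and the underscore replacement is an arbitrary choice here:
```python
import re

def sanitize_for_windows(filename: str) -> str:
    # Replace characters NTFS forbids in file names with an underscore.
    return re.sub(r'[<>:"|?*]', "_", filename)

print(sanitize_for_windows("audio:0001.flac"))  # audio_0001.flac
```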
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCommons/peoples_speech
### Expected behavior
Does not crash during extraction
### Environment info
Windows 11, NTFS filesystem, Python 3.12
| 6,842 |
https://github.com/huggingface/datasets/issues/6841 | Unable to load wiki_auto_asset_turk from GEM | [
"Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`",
"Thanks Mario. Still getting the same issue though with the suggested fix\r\n\r\n#cat gem_sari.py\r\nimport datasets\r\nprint (datasets.__version__)\r\ndataset =datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")\r\n\r\nEnd up with \r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1767, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1565, in _prepare_split\r\n split_info = self.info.splits[split_generator.name]\r\n ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py\", line 532, in __getitem__\r\n instructions = make_file_instructions(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py\", line 121, in make_file_instructions\r\n info.name: filenames_for_dataset_split(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py\", line 72, in filenames_for_dataset_split\r\n prefix = os.path.join(path, prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<frozen posixpath>\", line 76, in join\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType",
"Hmm, that's weird. Maybe try deleting the cache with `!rm -rf ~/.cache/huggingface/datasets` and then re-download.",
"Tried that a couple of time. It does download the data fresh but end up with same error. Is there a way to see if its using the right version ?",
"You can check the version with `python -c \"import datasets; print(datasets.__version__)\"`",
"the datasets version is 2.18. \r\n\r\nI wanted to see if the command datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\") is using the right revision (refs/pr/5). \r\n\r\n\r\n\r\n\r\n\r\n ",
"Still have this problem",
"The issue is fixed once the fixing PR has been merged and the dataset has been converted to Parquet.\r\n\r\nIf the problem persists on your side, you should update your `datasets` library:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd if you have already the latest version of `datasets`, then you need to delete the old version of this dataset in your cache:\r\n```shell\r\nrm -fr ~/.cache/huggingface/datasets/GEM___wiki_auto_asset_turk\r\nrm -fr ~/.cache/huggingface/modules/datasets_modules/datasets/GEM--wiki_auto_asset_turk\r\n```"
] | ### Describe the bug
I am unable to load the wiki_auto_asset_turk dataset: I get a fatal error while trying to access it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) comes from filenames_for_dataset_split, in an os.path.join call.
>>import datasets
>>print (datasets.__version__)
>>dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
System output:
Generating train split: 100%|█| 483801/483801 [00:03<00:00, 127164.26 examples/s
Generating validation split: 100%|█| 20000/20000 [00:00<00:00, 116052.94 example
Generating test_asset split: 100%|██| 359/359 [00:00<00:00, 76155.93 examples/s]
Generating test_turk split: 100%|███| 359/359 [00:00<00:00, 87691.76 examples/s]
Traceback (most recent call last):
File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module>
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split
split_info = self.info.splits[split_generator.name]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__
instructions = make_file_instructions(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions
info.name: filenames_for_dataset_split(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split
prefix = os.path.join(path, prefix)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
### Steps to reproduce the bug
import datasets
print (datasets.__version__)
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
### Expected behavior
Should be able to load the dataset without any issues
### Environment info
datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also)
Python 3.12.0 | 6,841 |
https://github.com/huggingface/datasets/issues/6840 | Delete uploaded files from the UI | [] | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | 6,840 |
https://github.com/huggingface/datasets/issues/6838 | Remove token arg from CLI examples | [] | As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603
> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) | 6,838 |
https://github.com/huggingface/datasets/issues/6837 | Cannot use cached dataset without Internet connection (or when servers are down) | [
"There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n \"hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00000-of-01024.json.gz\"\r\n ],\r\n [(\"allenai/c4\", \"1588ec454efa1a09f29cd18ddd04fe05fc8653a2\")],\r\n)\r\ndata_files = DataFilesDict({\"train\": data_files_list})\r\nc4_dataset = datasets.load_dataset(\r\n path=\"allenai/c4\",\r\n data_files=data_files,\r\n split=\"train\",\r\n cache_dir=\"/datesets/cache\",\r\n download_mode=\"reuse_cache_if_exists\",\r\n token=False,\r\n)\r\n```\r\nSecond solution also shows where to find the bug. I suggest that the hashing functions should always use only original parameter `data_files`, and not the one they get after connecting to the server and creating `DataFilesDict`",
"Hi! You need to set the `HF_DATASETS_OFFLINE` env variable to `1` to load cached datasets offline, as explained in the docs [here](https://huggingface.co/docs/datasets/v2.19.0/en/loading#offline).",
"Just tested. It doesn't work, because of the exact problem I described above: hash of dataset config is different.\r\nThe only error difference is the reason why it cannot connect to HuggingFace (now it's 'offline mode is enabled')\r\n![image](https://github.com/huggingface/datasets/assets/112088378/1a7e1720-d711-46e3-9c90-53d52c441e68)\r\n",
"Met a pretty similar issue here, as I manually load the dataset into ~/.cache and try to let `load_dataset` detect it automatically, but it will always try reach hub even I set `HF_DATASETS_OFFLINE` to 1. Have you solved it? "
] | ### Describe the bug
I want to be able to use a cached dataset from Hugging Face even when I have no Internet connection (or when the Hugging Face servers are down, or my company has network issues).
The reason why I can't use it:
The `data_files` argument of `datasets.load_dataset()` gets updated from the server before the hash used for caching is computed. As a result, when I run the same code with and without Internet I get different dataset configuration directory names.
### Steps to reproduce the bug
```
import datasets
c4_dataset = datasets.load_dataset(
path="allenai/c4",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train",
cache_dir="/datesets/cache",
download_mode="reuse_cache_if_exists",
token=False,
)
```
1. Run this code with the Internet.
2. Run the same code without the Internet.
### Expected behavior
When running without an Internet connection, the loader should be able to get the dataset from the cache
### Environment info
- `datasets` version: 2.19.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.13
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | 6,837 |
https://github.com/huggingface/datasets/issues/6836 | ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0 | [
"Get same error on same datasets too.",
"+1",
"same error"
] | ### Describe the bug
Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.
Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.
### Steps to reproduce the bug
On 2.18.0, things work fine:
```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.18.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```
On 2.19.0, they do not:
```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.19.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```
The stack trace I see from the 2.19.0 version of load_dataset can be seen [here](https://gist.github.com/ebsmothers/f9b1f1949bee7030a8d7bb8a491550d2).
(Maybe unsurprising but) notably if I do not delete the cache first I am able to load the dataset successfully. So based on this I suspect the cause is somewhere in the download logic.
### Expected behavior
Download the dataset successfully :)
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
- Python version: 3.11.9
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | 6,836 |
https://github.com/huggingface/datasets/issues/6834 | largelisttype not supported (.from_polars()) | [] | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for `.from_polars()`, since Polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_polars(df)
```
### Expected behavior
Convert LargeListType to list.
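Until that happens, a possible workaround (a sketch that assumes flat, non-nested list columns) is to downcast `large_list` columns in the Arrow table before building the `Dataset`:
```python
import polars as pl
import pyarrow as pa
from datasets import Dataset

df = pl.DataFrame({"list": [[1, 2], [3]]})
table = df.to_arrow()

# Rewrite any large_list fields as regular list fields, then cast the table.
fields = [
    pa.field(f.name, pa.list_(f.type.value_type)) if pa.types.is_large_list(f.type) else f
    for f in table.schema
]
ds = Dataset(table.cast(pa.schema(fields)))
```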
### Environment info
- `datasets` version: 2.19.1.dev0
- Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.3.1 | 6,834 |
https://github.com/huggingface/datasets/issues/6833 | Super slow iteration with trivial custom transform | [
"Similar issue in text process \r\n\r\n```python\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(model_dir[args.model])\r\ntrain_dataset=datasets.load_from_disk(dataset_dir[args.dataset],keep_in_memory=True)['train']\r\ntrain_dataset=train_dataset.map(partial(dname2func[args.dataset],tokenizer=tokenizer),batched=True,num_proc =50,remove_columns=train_dataset.features.keys(),desc='tokenize',keep_in_memory=True)\r\n\r\n```\r\nAfter this train_dataset will be like\r\n```python\r\nDataset({\r\n features: ['input_ids', 'labels'],\r\n num_rows: 51760\r\n})\r\n```\r\nIn which input_ids and labels are both List[int]\r\nHowever, per iter on dataset cost 7.412479639053345s β¦β¦οΌ\r\n```python\r\nfor j in tqdm(range(len(train_dataset)),desc='first stage'):\r\n input_id,label=train_dataset['input_ids'][j],train_dataset['labels'][j]\r\n\r\n``` ",
"The transform currently replaces the numpy formatting.\r\n\r\nSo you're back to copying data to long python lists which is super slow.\r\n\r\nIt would be cool for the transform to not remove the formatting in this case, but this requires a few changes in the lib"
] | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy')
def transform(batch):
return batch
ds2 = ds1.with_transform(transform)
%time sum(1 for _ in ds1)
%time sum(1 for _ in ds2)
```
```
CPU times: user 472 ms, sys: 319 ms, total: 791 ms
Wall time: 794 ms
CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s
Wall time: 9.78 s
```
In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial.
Related issue: https://github.com/huggingface/datasets/issues/5841
### Steps to reproduce the bug
Use code in the description to reproduce.
### Expected behavior
A trivial custom transform, as in the example, should not slow down dataset iteration.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | 6,833 |
https://github.com/huggingface/datasets/issues/6830 | Add a doc page for the convert_to_parquet CLI | [] | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | 6,830 |
https://github.com/huggingface/datasets/issues/6829 | Load and save from/to disk no longer accept pathlib.Path | [] | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str` so it is not a problem
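Until the regression is fixed, a minimal workaround sketch is to cast the path to `str` before calling these methods (the path below is hypothetical):
```python
from pathlib import Path
from datasets import load_from_disk

path = Path("some/dataset/dir")   # hypothetical
ds = load_from_disk(str(path))    # str(...) sidesteps the url_to_fs limitation
# likewise: ds.save_to_disk(str(path))
```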
This change was introduced in:
- #6704 | 6,829 |
https://github.com/huggingface/datasets/issues/6827 | Loading a remote dataset fails in the last release (v2.19.0) | [] | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token)
```
I get the following error
![Screenshot 2024-04-19 at 11 03 07 PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc)
Now you can see that the URL it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue.
I did not have this issue with the previous version of `datasets`. Everything was fine for me yesterday, and after the release 12 hours ago this seems to have broken. Also, the dataset in question runs custom code, and I checked that there have been no commits to the dataset on Hugging Face in the last 6 months.
### Steps to reproduce the bug
Since this happened with one particular dataset for me, I am listing steps to use that dataset.
1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access.
2. Create a token on your huggingface account with read access.
3. Run the following line, substituting `<your_token_here>` with your token.
```
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>")
```
### Expected behavior
Be able to load the dataset in question.
### Environment info
datasets == 2.19.0
python == 3.10
kernel == Linux 6.1.58+ | 6,827 |
https://github.com/huggingface/datasets/issues/6824 | Winogrande does not seem to be compatible with datasets version of 1.18.0 | [
"Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```",
"Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!"
] | ### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I do not have such an issue in the 1.17.0 version.
```Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/winogrande@ebf71e3c7b5880d019ecf6099c0b09311b1084f5/winogrande_xl/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']```
### Steps to reproduce the bug
from datasets import load_dataset
datasets = load_dataset('winogrande','winogrande_xl')
### Expected behavior
```Downloading data: 100%|████████████████████████████████████████| 2.06M/2.06M [00:00<00:00, 5.16MB/s]
Downloading data: 100%|████████████████████████████████████████| 118k/118k [00:00<00:00, 360kB/s]
Downloading data: 100%|████████████████████████████████████████| 85.9k/85.9k [00:00<00:00, 242kB/s]
Generating train split: 100%|████████████████████████████████████████| 40398/40398 [00:00<00:00, 845491.12 examples/s]
Generating test split: 100%|████████████████████████████████████████| 1767/1767 [00:00<00:00, 362501.11 examples/s]
Generating validation split: 100%|████████████████████████████████████████| 1267/1267 [00:00<00:00, 318768.11 examples/s]```
### Environment info
datasets version: 1.18.0
| 6,824 |
https://github.com/huggingface/datasets/issues/6823 | Loading problems of Datasets with a single shard | [] | ### Describe the bug
When a dataset saved to disk consists of a single shard, it is not loaded back the same way as one saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. Everything works well when the range of the loop is 10000, but it fails when it is 1000.
```
from PIL import Image
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset
def load_image():
# Generate random noise image
noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
return Image.fromarray(noise)
def create_dataset():
input_images = []
output_images = []
text_prompts = []
for _ in range(10000): # this is the problematic parameter
input_images.append(load_image())
output_images.append(load_image())
text_prompts.append('test prompt')
data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts}
dataset = Dataset.from_dict(data)
return DatasetDict({'train': dataset})
dataset = create_dataset()
print('dataset before saving')
print(dataset)
print(dataset['train'].column_names)
dataset.save_to_disk('test_ds')
print('dataset after loading')
dataset_loaded = load_dataset('test_ds')
print(dataset_loaded)
print(dataset_loaded['train'].column_names)
```
The output for 1000 iterations is:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 1000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (1/1 shards): 100%|█| 1000/1000 [00:00<00:00, 5156.00 example
dataset after loading
Generating train split: 1 examples [00:00, 230.52 examples/s]
DatasetDict({
train: Dataset({
features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
num_rows: 1
})
})
['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split']
```
For 10000 iterations (8 shards) it is correct:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (8/8 shards): 100%|█| 10000/10000 [00:01<00:00, 6237.68 examp
dataset after loading
Generating train split: 10000 examples [00:00, 10773.16 examples/s]
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
```
### Expected behavior
The procedure should work for a dataset with one shard the same as for one with multiple shards
### Environment info
- `datasets` version: 2.18.0
- Platform: macOS-14.1-arm64-arm-64bit
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
Edit: I looked in the source code of load.py in datasets. I should have used "load_from_disk", and it indeed works that way. But ideally load_dataset would have raised an error, the same way it does when I call it with a path:
```
if Path(path, config.DATASET_STATE_JSON_FILENAME).exists():
raise ValueError(
"You are trying to load a dataset that was saved using `save_to_disk`. "
"Please use `load_from_disk` instead."
)
```
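For reference, a minimal sketch of the `load_from_disk` call that works here for both the single-shard and multi-shard saves:
```python
from datasets import load_from_disk

dataset_loaded = load_from_disk("test_ds")  # correct loader for save_to_disk output
```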
Nevertheless, I find it interesting that it works just fine, and without a warning, if there are multiple shards. | 6,823
https://github.com/huggingface/datasets/issues/6819 | Give more details in `DataFilesNotFoundError` when getting the config names | [] | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (supported) data files found in cis-lmu/Glot500",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n",
"datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n"
]
}
```
because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4
Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would say that configuration `aze_Ethi` has no supported data files, instead of saying that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).
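A sketch of the more informative error being proposed — not the current implementation; the `config_name` would have to be threaded down from the config-resolution loop:
```python
from typing import Optional

from datasets.exceptions import DataFilesNotFoundError


def raise_missing_data_files(path: str, config_name: Optional[str] = None) -> None:
    msg = "No (supported) data files found"
    if config_name:
        msg += f" for config '{config_name}'"
    if path:
        msg += f" in {path}"
    raise DataFilesNotFoundError(msg)
```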
### Motivation
Giving more detail in the error would help the Datasets Hub users to debug why the dataset viewer does not work.
### Your contribution
Not sure how to best fix this, as there are a lot of loops over the dataset configs in the methods shown in the traceback. Maybe it would be easier to handle if the code completely isolated each config. | 6,819
https://github.com/huggingface/datasets/issues/6814 | `map` with `num_proc` > 1 leads to OOM | [
"Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk"
] | ### Describe the bug
When running `map` on a Parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps in order to prevent this?
### Steps to reproduce the bug
```
ds = load_dataset("parquet", data_files=dataset_path, split="train")
ds = ds.shard(num_shards=4, index=0)
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
ds = ds.map(prepare_dataset,
num_proc=32,
writer_batch_size=1000,
keep_in_memory=False,
desc="preprocess dataset")
```
```
def prepare_dataset(batch):
# load audio
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=16000)
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(sample["array"].squeeze())
return batch
```
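Following the suggestion in the comment, the buffered memory scales roughly with `num_proc × writer_batch_size × sample size`, so a sketch of a more conservative call (reusing the names from the snippet above) would be:
```python
ds = ds.map(
    prepare_dataset,
    num_proc=8,              # fewer workers -> fewer in-flight buffers
    writer_batch_size=100,   # flush to the on-disk cache more often
    keep_in_memory=False,
    desc="preprocess dataset",
)
```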
### Expected behavior
It shouldn't run into an OOM problem.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0 | 6,814 |
https://github.com/huggingface/datasets/issues/6810 | Allow deleting a subset/config from a no-script dataset | [
"Probably best to implement this as a CLI command?",
"Thanks for your comment, @mariosasko. Or maybe both (in Python and as CLI command)? The Python command would be just the reverse of `push_to_hub`...\r\n\r\nI am working on a draft implementation, so we can discuss about the API and UX."
] | As proposed by @BramVanroy, it would be neat to have this functionality through the API. | 6,810 |
https://github.com/huggingface/datasets/issues/6808 | Make convert_to_parquet CLI command create script branch | [] | As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168
> When providing support, we sometimes suggest that users store their script in a script branch. What do you think of this alternative to deleting the files? | 6,808 |
https://github.com/huggingface/datasets/issues/6805 | Batched mapping of existing string column casts boolean to string | [
"This seems to be hardcoded behavior in table.py `array_cast`.\r\n```python\r\nif (\r\n not allow_number_to_str\r\n and pa.types.is_string(pa_type)\r\n and (pa.types.is_floating(array.type) or pa.types.is_integer(array.type))\r\n ):\r\n raise TypeError(\r\n f\"Couldn't cast array of type {array.type} to {pa_type} since allow_number_to_str is set to {allow_number_to_str}\"\r\n )\r\n if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\r\n raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\n return array.cast(pa_type)\r\n```\r\nwhere floats and integers are not cast to string but booleans are.\r\nMaybe this should be extended to booleans?",
"Thanks for reporting! @Modexus Do you want to open a PR with the suggested fix?",
"I'll gladly create a PR but not sure what the behavior should be.\r\n\r\nShould a value returned from map be cast to the current feature?\r\nAt the moment this seems very inconsistent since `datetime `is also cast (this would only fix `boolean`) but nested structures are not.\r\n\r\n```python\r\ndset = Dataset.from_dict({\"a\": [\"Hello world!\"]})\r\ndset = dset.map(lambda x: {\"a\": date(2021, 1, 1)})\r\n# dset[0][\"a\"] == '2021-01-01'\r\n```\r\n```python\r\ndset = Dataset.from_dict({\"a\": [\"Hello world!\"]})\r\ndset = dset.map(lambda x: {\"a\": [True]})\r\n# dset[0][\"a\"] == [True]\r\n```\r\n\r\nIs there are reason to cast the value if the user doesn't specify it explicitly?\r\nSeems tricky that some things are cast and some are not.",
"Indeed, it also makes sense to raise a `TypeError` for temporal and decimal types.\r\n\r\n> Is there are reason to cast the value if the user doesn't specify it explicitly?\r\n\r\nThis is how PyArrow's built-in `cast` behaves - it allows casting from primitive types to strings. Hence, we need `allow_number_to_str` to disallow such casts (e.g., in the [scenario](https://github.com/huggingface/datasets/blob/a3bc89d8bfd47c2a175c3ce16d92b7307cdeafd6/src/datasets/arrow_writer.py#L208) when we are \"trying a type\" to preserve the original type if there is a column in the output dataset with the same name as in the input one).\r\n\r\nPS: In the PR, we can introduce `allow_numeric_to_str` (for floats, integers, decimals, booleans) and `allow_temporal_to_str` (for dates, timestamps, ...) and deprecate `allow_number_to_str` to make it clear what each parameter does.",
"Would just `allow_primitive_to_str` work?\r\nThis should include all `numeric`, `boolean `and `temporal`formats.\r\n\r\nNote that at least in the [ C++ implementation](https://arrow.apache.org/docs/cpp/api/utilities.html#_CPPv410is_numericRK8DataType) `numeric `seems to exclude `boolean`.\r\n[](https://arrow.apache.org/docs/cpp/api/utilities.html#_CPPv410is_numericRK8DataType)",
"Indeed, `allow_primitive_to_str` sounds better.\r\n\r\nPS: PyArrow's `pa.types.is_primitive` returns `False` for decimal types, but I think is okay for us to treat decimals as primitive types (or we can have `allow_decimal_to_str` to be fully consistent with PyArrow)",
"Fixed by:\r\n- #6811"
] | ### Describe the bug
Let the dataset contain a column named 'a', which is of the string type.
If 'a' is converted to a boolean using batched mapping, the mapper automatically casts the boolean to a string (e.g., True -> 'true').
This only happens when the original column and the mapped column have the same name.
Thank you!
### Steps to reproduce the bug
```python
from datasets import Dataset
dset = Dataset.from_dict({'a': ['11', '22']})
dset = dset.map(lambda x: {'a': [True for _ in x['a']]}, batched=True)
print(dset['a'])
```
```
> ['true', 'true']
```
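One hedged workaround while this is open: declare the output feature type explicitly so the writer does not try to fit the new booleans into the old string column type (a sketch; behaviour may differ across versions):
```python
from datasets import Dataset, Features, Value

dset = Dataset.from_dict({"a": ["11", "22"]})
dset = dset.map(
    lambda x: {"a": [True for _ in x["a"]]},
    batched=True,
    features=Features({"a": Value("bool")}),  # pin the new schema explicitly
)
print(dset["a"])  # expected: [True, True]
```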
### Expected behavior
[True, True]
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | 6,805 |
https://github.com/huggingface/datasets/issues/6801 | got fileNotFound | [
"Hi! I'll open a PR on the Hub to fix this, but please use the Hub's [Community tab](https://huggingface.co/datasets/nyanko7/danbooru2023/discussions) to report such issues in the future.",
"I've opened a [PR](https://huggingface.co/datasets/nyanko7/danbooru2023/discussions/8) in the repo, so let's continue the discussion there"
] | ### Describe the bug
When I use load_dataset to load the nyanko7/danbooru2023 dataset, the cache is read through a symlink. There may be a problem with the arrow_dataset initialization process, and I get FileNotFoundError: [Errno 2] No such file or directory: '2945000.jpg'
### Steps to reproduce the bug
# Code is shown below
from datasets import load_dataset
data = load_dataset("nyanko7/danbooru2023",cache_dir=<symlink>)
data["train"][0]
### Expected behavior
I should get this result:
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=365x256 at 0x7FB730CB4070>, 'label': 0}
### Environment info
datasets==2.12.0
python==3.10.14
| 6,801 |
https://github.com/huggingface/datasets/issues/6800 | High overhead when loading lots of subsets from the same dataset | [
"Hi !\r\n\r\nIt's possible to multiple files at once:\r\n\r\n```python\r\ndata_files = \"data/*.jsonl\"\r\n# Or pass a list of files\r\nlangs = ['ka-ml', 'br-sr', 'ka-pt', 'id-ko', ..., 'fi-ze_zh', 'he-kk', 'ka-tr']\r\ndata_files = [f\"data/{lang}.jsonl\" for lang in langs]\r\nds = load_dataset(\"loicmagne/open-subtitles-250-bitext-mining\", data_files=data_files, split=\"train\")\r\n```\r\n\r\nAlso maybe you can add a subset called \"all\" for people that want to load all the data without having to list all the languages ?\r\n\r\n```yaml\r\n - config_name: all\r\n data_files: data/*.jsonl\r\n```\r\n",
"Thanks for your reply, it is indeed much faster, however the result is a dataset where all the subsets are \"merged\" together, the language pair is lost:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2'],\r\n num_rows: 247809\r\n })\r\n})\r\n```\r\nI guess I could add a 'lang' feature for each row in the dataset, is there a better way to do it ?",
"Hi @lhoestq over at https://github.com/embeddings-benchmark/mteb/issues/530 we have started examining these issues and would love to make a PR for datasets if we believe there is a way to improve the speed. As I assume you have a better overview than me @lhoestq, would you be interested in a PR, and might you have an idea about where we would start working on it?\r\n\r\nWe see a speed comparison of \r\n1. 15 minutes (for ~20% of the languages) when loaded using a for loop\r\n2. 17 minutes using the your suggestion\r\n3. ~30 seconds when using @loicmagne \"merged\" method.\r\n\r\nWorth mentioning is that solution 2 looses the language information.",
"Can you retry using `datasets` 2.19 ? We improved a lot the speed of downloading datasets with tons of small files.\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nNow this takes 17sec on my side instead of the 17min minutes @loicmagne mentioned :)\r\n\r\n```python\r\n>>> %time ds = load_dataset(\"loicmagne/open-subtitles-250-bitext-mining\", data_files=\"data/*.jsonl\")\r\nDownloading readme: 100%|βββββββββββββββββββββββββββββββββ| 13.7k/13.7k [00:00<00:00, 5.47MB/s]\r\nResolving data files: 100%|βββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 612.51it/s]\r\nDownloading data: 100%|ββββββββββββββββββββββββββββββββββ| 250/250 [00:12<00:00, 19.68files/s]\r\nGenerating train split: 247809 examples [00:00, 1057071.08 examples/s]\r\nCPU times: user 4.95 s, sys: 3.1 s, total: 8.05 s\r\nWall time: 17.4 s\r\n```",
"> Can you retry using `datasets` 2.19 ? We improved a lot the speed of downloading datasets with tons of small files.\r\n> \r\n> ```\r\n> pip install -U datasets\r\n> ```\r\n> \r\n> Now this takes 17sec on my side instead of the 17min minutes @loicmagne mentioned :)\r\n> \r\n> ```python\r\n> >>> %time ds = load_dataset(\"loicmagne/open-subtitles-250-bitext-mining\", data_files=\"data/*.jsonl\")\r\n> Downloading readme: 100%|βββββββββββββββββββββββββββββββββ| 13.7k/13.7k [00:00<00:00, 5.47MB/s]\r\n> Resolving data files: 100%|βββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 612.51it/s]\r\n> Downloading data: 100%|ββββββββββββββββββββββββββββββββββ| 250/250 [00:12<00:00, 19.68files/s]\r\n> Generating train split: 247809 examples [00:00, 1057071.08 examples/s]\r\n> CPU times: user 4.95 s, sys: 3.1 s, total: 8.05 s\r\n> Wall time: 17.4 s\r\n> ```\r\n\r\nI was actually just noticing that, I bumped from 2.18 to 2.19 and got a massive speedup, amazing!\r\n\r\nAbout the fact that subset names are lost when loading all files at once, currently my solution is to add a 'lang' feature to each rows, convert to polars and use:\r\n\r\n```python\r\nds_split = ds.to_polars().group_by('lang')\r\n```\r\n\r\nIt's fast so I think it's an acceptable solution, but is there a better way to do it ?",
"It's the fastest way I think :)\r\n\r\nAlternatively you can download the dataset repository locally using [huggingface_hub](https://huggingface.co/docs/huggingface_hub/guides/download) (either via CLI or in python) and load the subsets one by one locally using a for loop as you were doing before (just pass the directory path to load_dataset instead of the dataset_id). "
] | ### Describe the bug
I have a multilingual dataset that contains a lot of subsets. Each subset corresponds to a pair of languages; you can see an example with 250 subsets here: https://hf.co/datasets/loicmagne/open-subtitles-250-bitext-mining. As part of the MTEB benchmark, we may need to load all the subsets of the dataset. The dataset is relatively small and contains only ~45MB of data, but when I try to load every subset, it takes 15 minutes from the HF hub and 13 minutes from the cache.
This issue https://github.com/huggingface/datasets/issues/5499 also referenced this overhead, but I'm wondering if there is anything I can do to speed up loading different subsets of the same dataset, both when loading from disk and from the HF hub. Currently each subset is stored in a jsonl file.
### Steps to reproduce the bug
```
from datasets import load_dataset
for subset in ['ka-ml', 'br-sr', 'bg-br', 'kk-lv', 'br-sk', 'br-fi', 'eu-ze_zh', 'kk-nl', 'kk-vi', 'ja-kk', 'br-sv', 'kk-zh_cn', 'kk-ms', 'br-et', 'br-hu', 'eo-kk', 'br-tr', 'ko-tl', 'te-zh_tw', 'br-hr', 'br-nl', 'ka-si', 'br-cs', 'br-is', 'br-ro', 'br-de', 'et-kk', 'fr-hy', 'br-no', 'is-ko', 'br-da', 'br-en', 'eo-lt', 'is-ze_zh', 'eu-ko', 'br-it', 'br-id', 'eu-zh_cn', 'is-ja', 'br-sl', 'br-gl', 'br-pt_br', 'br-es', 'br-pt', 'is-th', 'fa-is', 'br-ca', 'eu-ka', 'is-zh_cn', 'eu-ur', 'id-kk', 'br-sq', 'eu-ja', 'uk-ur', 'is-zh_tw', 'ka-ko', 'eu-zh_tw', 'eu-th', 'eu-is', 'is-tl', 'br-eo', 'eo-ze_zh', 'eu-te', 'ar-kk', 'eo-lv', 'ko-ze_zh', 'ml-ze_zh', 'is-lt', 'br-fr', 'ko-te', 'kk-sl', 'eu-fa', 'eo-ko', 'ka-ze_en', 'eo-eu', 'ta-zh_tw', 'eu-lv', 'ko-lv', 'lt-tl', 'eu-si', 'hy-ru', 'ar-is', 'eu-lt', 'eu-tl', 'eu-uk', 'ka-ze_zh', 'si-ze_zh', 'el-is', 'bn-is', 'ko-ze_en', 'eo-si', 'cs-kk', 'is-uk', 'eu-ze_en', 'ta-ze_zh', 'is-pl', 'is-mk', 'eu-ta', 'ko-lt', 'is-lv', 'fa-ko', 'bn-ko', 'hi-is', 'bn-ze_zh', 'bn-eu', 'bn-ja', 'is-ml', 'eu-ru', 'ko-ta', 'is-vi', 'ja-tl', 'eu-mk', 'eu-he', 'ka-zh_tw', 'ka-zh_cn', 'si-tl', 'is-kk', 'eu-fi', 'fi-ko', 'is-ur', 'ka-th', 'ko-ur', 'eo-ja', 'he-is', 'is-tr', 'ka-ur', 'et-ko', 'eu-vi', 'is-sk', 'gl-is', 'fr-is', 'is-sq', 'hu-is', 'fr-kk', 'eu-sq', 'is-ru', 'ja-ka', 'fi-tl', 'ka-lv', 'fi-is', 'is-si', 'ar-ko', 'ko-sl', 'ar-eu', 'ko-si', 'bg-is', 'eu-hu', 'ko-sv', 'bn-hu', 'kk-ro', 'eu-hi', 'ka-ms', 'ko-th', 'ko-sr', 'ko-mk', 'fi-kk', 'ka-vi', 'eu-ml', 'ko-ml', 'de-ko', 'fa-ze_zh', 'eu-sk', 'is-sl', 'et-is', 'eo-is', 'is-sr', 'is-ze_en', 'kk-pt_br', 'hr-hy', 'kk-pl', 'ja-ta', 'is-ms', 'hi-ze_en', 'is-ro', 'ko-zh_cn', 'el-eu', 'ka-pl', 'ka-sq', 'eu-sl', 'fa-ka', 'ko-no', 'si-ze_en', 'ko-uk', 'ja-ze_zh', 'hu-ko', 'kk-no', 'eu-pl', 'is-pt_br', 'bn-lv', 'tl-zh_cn', 'is-nl', 'he-ko', 'ko-sq', 'ta-th', 'lt-ta', 'da-ko', 'ca-is', 'is-ta', 'bn-fi', 'ja-ml', 'lv-si', 'eu-sv', 'ja-te', 'bn-ur', 'bn-ca', 'bs-ko', 'bs-is', 'eu-sr', 'ko-vi', 'ko-zh_tw', 'et-tl', 'kk-tr', 'eo-vi', 'is-it', 'ja-ko', 'eo-et', 'id-is', 'bn-et', 'bs-eu', 'bn-lt', 'tl-uk', 'bn-zh_tw', 'da-eu', 'el-ko', 'no-tl', 'ko-sk', 'is-pt', 'hu-kk', 'si-zh_tw', 'si-te', 'ka-ru', 'lt-ml', 'af-ja', 'bg-eu', 'eo-th', 'cs-is', 'pl-ze_zh', 'el-kk', 'kk-sv', 'ka-nl', 'ko-pl', 'bg-ko', 'ka-pt_br', 'et-eu', 'tl-zh_tw', 'ka-pt', 'id-ko', 'fi-ze_zh', 'he-kk', 'ka-tr']:
load_dataset('loicmagne/open-subtitles-250-bitext-mining', subset)
```
### Expected behavior
Faster loading?
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2023.5.0
| 6,800 |
https://github.com/huggingface/datasets/issues/6798 | `DatasetBuilder._split_generators` incomplete type annotation | [
"Good catch! Feel free to open a PR with the suggested fix :).",
"There is also the [`MockDownloadManager`](https://github.com/JonasLoos/datasets/blob/main/src/datasets/download/mock_download_manager.py#L33), which seems like it might get passed here too. However, to me, it doesn't really seem relevant to the users of the datasets library, so I would just ignore it. What do you think, @mariosasko?",
"The API (`dummy_data` CLI command ) that uses the `MockDownloadManager` has been deprecated, so ignoring it sounds good!"
] | ### Describe the bug
The [`DatasetBuilder._split_generators`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/builder.py#L1449) function has currently the following signature:
```python
class DatasetBuilder:
def _split_generators(self, dl_manager: DownloadManager):
...
```
However, the `dl_manager` argument can also be of type [`StreamingDownloadManager`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/download/streaming_download_manager.py#L962), which has different functionality. For example, the `download` function doesn't download, but rather just returns the given url(s).
I suggest changing the function signature to:
```python
class DatasetBuilder:
def _split_generators(self, dl_manager: Union[DownloadManager, StreamingDownloadManager]):
...
```
and also adjust the docstring accordingly.
I would like to create a Pull Request to fix this, and have the following questions:
* Are there also other options than `DownloadManager`, and `StreamingDownloadManager`?
* Should this also be changed in other functions?
### Steps to reproduce the bug
Minimal example to print the different class names:
```python
import tempfile
from datasets import load_dataset
example = b'''
from datasets import GeneratorBasedBuilder, DatasetInfo, Features, Value, SplitGenerator
class Test(GeneratorBasedBuilder):
def _info(self):
return DatasetInfo(features=Features({"x": Value("int64")}))
def _split_generators(self, dl_manager):
print(type(dl_manager))
return [SplitGenerator('test')]
def _generate_examples(self):
yield 0, {'x': 42}
'''
with tempfile.NamedTemporaryFile(suffix='.py') as f:
f.write(example)
f.flush()
load_dataset(f.name, streaming=False)
load_dataset(f.name, streaming=True)
```
### Expected behavior
complete type annotations
### Environment info
/ | 6,798 |
https://github.com/huggingface/datasets/issues/6796 | CI is broken due to hf-internal-testing/dataset_with_script | [
"Finally:\r\n- the initial issue seems it was temporary\r\n- there is a different issue now: https://github.com/huggingface/datasets/actions/runs/8627153993/job/23646584590?pr=6797\r\n```\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport_errors_on_wrong_sha - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_with_script - AssertionError: assert 'dataset_with_script' == 'parquet'\r\n \r\n - parquet\r\n + dataset_with_script\r\n```\r\n\r\nMaybe related to `hf-internal-testing/dataset_with_script` dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script",
"This URL: https://datasets-server.huggingface.co/parquet?dataset=hf-internal-testing/dataset_with_script\r\nraises:\r\n> {\"error\":\"The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.\"}\r\n\r\nWas there a recent change on the Hub enforcing this behavior?",
"OK, I just saw this PR:\r\n- https://github.com/huggingface/dataset-viewer/pull/2689\r\n\r\nOnce merged and deployed, it should fix the issue.",
"Once the script-dataset has been allowed in the dataset-viewer, we should fix our test to make the CI pass.\r\n\r\nI am addressing this."
] | CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127
```
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0c741de3b0>)
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[force_redownload] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0be45f6ea0>)
``` | 6,796 |
https://github.com/huggingface/datasets/issues/6793 | Loading just one particular split is not possible for imagenet-1k | [] | ### Describe the bug
I'd expect the following code to download just the validation split, but instead I get all the data on my disk (train, test and validation splits):
```python
from datasets import load_dataset

dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work like that?
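If the goal is to avoid materializing the other splits on disk, one possible workaround (a sketch; behaviour can differ per dataset script) is streaming, which skips the upfront download of the archives:
```python
from datasets import load_dataset

# Streaming iterates over the requested split without downloading all archives first
dataset = load_dataset("imagenet-1k", split="validation", streaming=True, trust_remote_code=True)
print(next(iter(dataset)))
```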
### Steps to reproduce the bug
1. Install the required libraries (python, datasets, huggingface_hub)
2. Log in using the Hugging Face CLI
3. Run the code in the description
### Expected behavior
Just a single (validation) split should be downloaded.
### Environment info
python: 3.12.2
datasets: 2.18.0
huggingface_hub: 0.22.2 | 6,793 |
https://github.com/huggingface/datasets/issues/6791 | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | [
"I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?",
"Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embeddings as FAISS doesn't perform the embeddings itself.\r\n\r\nI can propose a PR sometime this week.",
"@Dref360 thanks for the initiative!"
] | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
```
### Steps to reproduce the bug
1. Load any dataset like `ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]`
2. Add an FAISS index on any column `ds.add_faiss_index('title')`
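For context, as noted in the comments above, FAISS needs a column of numeric embeddings rather than raw strings, so a working call looks more like this sketch (the model name and column names are illustrative, and it assumes `sentence-transformers` is installed):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
# Encode the text column into float vectors first, then index the embedding column
ds = ds.map(lambda batch: {"embeddings": model.encode(batch["title"])}, batched=True)
ds.add_faiss_index(column="embeddings")
```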
### Expected behavior
The index should be created
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.9.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
- `faiss-cpu` version: 1.8.0 | 6,791 |
https://github.com/huggingface/datasets/issues/6790 | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | [] | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggingface/datasets/issues/6176).
In my case, I was trying to load ~70k dataset files from disk using `datasets.load_from_disk(data_path)` (meaning 70k repeated calls to load_from_disk). This triggered an (uninformative) exception around 64k loaded files:
```
File "pyarrow/io.pxi", line 1053, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 1000, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
This happened despite system RAM usage being very low. After a lot of digging around, I discovered that my Ubuntu machine had a limit on the maximum number of memory-mapped files in `/proc/sys/vm/max_map_count`, set to 65530, which was causing my data loader to crash. Increasing the limit in that file (`echo <new_mmap_size> | sudo tee /proc/sys/vm/max_map_count`) made the issue go away.
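As a quick sanity check before raising the limit, the current value can be read directly from Python (Linux only; a minimal sketch):
```python
# Roughly one mapping slot is consumed per memory-mapped file, so the crash appears once this limit is reached
with open("/proc/sys/vm/max_map_count") as f:
    print("max_map_count:", int(f.read()))
```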
While this isn't a bug as such in either Datasets or PyArrow, this behavior can be very confusing to users. Maybe this should be mentioned in the documentation? I suspect the other issues raised here about memory-mapping OOM errors could actually be a consequence of system configuration.
Br,
Lauri
### Steps to reproduce the bug
```
import numpy as np
import pyarrow as pa
import tqdm
# Write some data to disk
arr = pa.array(np.arange(100))
schema = pa.schema([
pa.field('nums', arr.type)
])
with pa.OSFile('arraydata.arrow', 'wb') as sink:
with pa.ipc.new_file(sink, schema=schema) as writer:
batch = pa.record_batch([arr], schema=schema)
writer.write(batch)
# Number of times to open the memory map
nums = 70000
# Read the data back
arrays = [pa.memory_map('arraydata.arrow', 'r') for _ in tqdm.tqdm(range(nums))]
```
### Expected behavior
No errors.
### Environment info
datasets: 2.18.0
pyarrow: 15.0.0 | 6,790 |
https://github.com/huggingface/datasets/issues/6789 | Issue with map | [
"Default `writer_batch_size `is set to 1000 (see [map](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.map)).\r\nThe \"tmp1335llua\" is probably the temp file it creates while writing to disk.\r\nMaybe try lowering the `writer_batch_size`.\r\n\r\nFor multi-processing you should probably pass the `processor `as an argument (with e.g. partial) to the function or create it inside so that the sub-processes have access to it and maybe add `if __name__ == \"__main__\"` (not sure that's necessary?).\r\n",
"Hi @Modexus,\r\n\r\nThank you very much for the help! Yep after playing around with map, I managed to get the parallel processing to work by implementing it like you suggested.\r\n\r\nRegarding the temp files, it seems like the temp files just keep growing in size as the map continues. Eventually, once map finishes, the temp files are deleted, but they are instead saved as cache .arrow files. These cache files are absolutely gigantic (~ 30-50x the size of the initial dataset!).\r\n\r\nAfter playing around with the `prepare_dataset()` function above, it seems this issue is caused by the following line in the function, where the log-Mel spectrogram of the audio is calculated:\r\n\r\n`# compute log-Mel input features from input audio array\r\n batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], \r\n sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n`\r\n\r\nWhen I remove this line, the final cache files are approximately the same size as the initial dataset.\r\n\r\nCan I check whether this is expected behavior with the whisper feature extractor? I cant imagine the spectrograms are that large!\r\n\r\nThank you so much for the help!",
"I'm having a similar issue with the spectrographs taking up an incredibly large amount of space. (i.e. 100GB for 3GB of audio). Is this really normal behavior?",
"Upon taking a look at the hex contents of the mapped dataset files I found that the overwhelming majority of the data contained within them was duplicated junk similar to this. I'm not very familiar with the inner workings of AI but I have to assume this is an inefficient way of storing data at best and a bug at worst.\r\n![image](https://github.com/huggingface/datasets/assets/157770431/70bcbf59-d9ac-4fbf-9b8c-c9e3acc1b539)\r\n",
"Same problem, dataset.map takes long time to process 12GB raw audio data and create 200GB cache file. Is there any method can run process(map) during train, instead current run \r\nonce and save cache file ? ",
"Same issue here. Just trying to normalise image data for a 300MB dataset, ends up with an 11GB cache. The initial .map() call takes 80s over the 15000 images, but then simply iterating over the dataset takes almost 2 minutes. It should be doing no processing here! Something seems wrong.\r\nkeep_in_memory=True also offers no speedup.\r\nEDIT: Running the normalisation with set_transform (i.e. on the fly) iterates through the dataset in 18s. With no normalisation it takes around 14s. No reason for .map() to take 5 mins!",
"@eufrizz How you handle this using set_transform?\r\nI have a really big dataset of size 1.2TB and i am going to use it for fine-tunning whisper model. if i use map for dataset_preparing function it will take over 20 days!!!",
"> @eufrizz How you handle this using set_transform?\n> I have a really big dataset of size 1.2TB and i am going to use it for fine-tunning whisper model. if i use map for dataset_preparing function it will take over 20 days!!!\n\nJust give the preprocessing function you were using for map to set_transform. Just look at the set_transform documentation. If you're going to do lots of epochs you might be better off just saving the preprocessed data into a new dataset. "
] | ### Describe the bug
Map has been taking extremely long to preprocess my data.
It seems to process 1000 examples (which it does really fast in about 10 seconds), then it hangs for a good 1-2 minutes, before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space for some reason by creating a file named tmp1335llua that is over 300GB.
Trying to set num_proc to be >1 also gives me the following error: NameError: name 'processor' is not defined
Please advise on how I could optimise this?
### Steps to reproduce the bug
In general, I have been using map as per normal. Here is a snippet of my code:
````
########################### DATASET LOADING AND PREP #########################
def load_custom_dataset(split):
ds = []
if split == 'train':
for dset in args.train_datasets:
ds.append(load_from_disk(dset))
if split == 'test':
for dset in args.test_datasets:
ds.append(load_from_disk(dset))
ds_to_return = concatenate_datasets(ds)
ds_to_return = ds_to_return.shuffle(seed=22)
return ds_to_return
def prepare_dataset(batch):
# load and (possibly) resample audio data to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = processor.feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# compute input length of audio sample in seconds
batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]
# optional pre-processing steps
transcription = batch["sentence"]
if do_lower_case:
transcription = transcription.lower()
if do_remove_punctuation:
transcription = normalizer(transcription).strip()
# encode target text to label ids
batch["labels"] = processor.tokenizer(transcription).input_ids
return batch
print('DATASET PREPARATION IN PROGRESS...')
# case 3: combine_and_shuffle is true, only train provided
# load train datasets
train_set = load_custom_dataset('train')
# split dataset
raw_dataset = DatasetDict()
raw_dataset = train_set.train_test_split(test_size = args.test_size, shuffle=True, seed=42)
raw_dataset = raw_dataset.cast_column("audio", Audio(sampling_rate=args.sampling_rate))
print("Before Map:")
print(raw_dataset)
raw_dataset = raw_dataset.map(prepare_dataset, num_proc=1)
print("After Map:")
print(raw_dataset)
````
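Regarding the NameError with num_proc > 1, a possible fix (a sketch reusing the names from the snippet above, and omitting the optional text-normalisation steps) is to pass the processor to the mapped function explicitly so every worker process receives it:
```python
from functools import partial

def prepare_dataset(batch, processor):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

raw_dataset = raw_dataset.map(partial(prepare_dataset, processor=processor), num_proc=4)
```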
### Expected behavior
Based on the speed at which map is processing examples, I would expect a 5-6 hour completion for all the mapping.
However, because it hangs every 1000 examples, I instead roughly estimate it would take about 40 hours!
Moreover, I can't even finish the map because it keeps exponentially eating up my hard drive space.
### Environment info
- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.14
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 6,789 |
https://github.com/huggingface/datasets/issues/6788 | A Question About the Map Function | [
"All data is saved in the arrow format on disk.\r\nIf you return a tensor it gets converted to arrow before saving to disk when using map.\r\n\r\nTo get a tensor when you access data elements you can use `dataset.set_format(\"pt\")`.\r\nNote that this just changes how the data is loaded, not how it is stored.",
"> All data is saved in the arrow format on disk. If you return a tensor it gets converted to arrow before saving to disk when using map.\r\n> \r\n> To get a tensor when you access data elements you can use `dataset.set_format(\"pt\")`. Note that this just changes how the data is loaded, not how it is stored.\r\n\r\nThank you very much for your explanation, I understand what you mean now. So you're saying that when streaming=True, there's no need to convert it to the arrow format and save it to disk. But if we directly load all formats and then convert them into the arrow format after passing through the map function, it will convert torch.Tensor into a List. I see."
] | ### Describe the bug
Hello,
I have a question regarding the map function in the Hugging Face datasets.
The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False) and then use the map function to process it, I specify that the returned example should be of type torch.Tensor. However, I noticed that after applying the map function, the datatype automatically changes to List, which leads to errors in my program.
I attempted to use load_dataset(..., streaming=True), and this issue no longer occurs. I'm not entirely clear on why this happens. Could you please provide some insights into this?
### Steps to reproduce the bug
1. `dataset = load_dataset(xxx, streaming=False)`
2. `dataset.map(function)`, where `function` returns a `torch.Tensor`.
3. You will find that the data in the dataset is stored as `List` instead of `torch.Tensor`.
### Expected behavior
I expected the returned data to be of type torch.Tensor.
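For reference, here is a minimal sketch of the behaviour described in the comments above: tensors are serialized to Arrow (i.e. lists) when `map` writes to disk, and `set_format` converts them back to tensors on access (the column name is illustrative):
```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})
ds = ds.map(lambda ex: {"x": torch.tensor(ex["x"]) * 2})  # stored as lists in the Arrow cache
ds.set_format("torch")                                    # converted back to tensors when read
print(type(ds[0]["x"]))  # <class 'torch.Tensor'>
```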
### Environment info
2.18.0 | 6,788 |
https://github.com/huggingface/datasets/issues/6787 | TimeoutError in map | [
"From my current understanding, this timeout is only used when we need to get the results.\r\n\r\nOne of:\r\n1. All tasks are done\r\n2. One worker died\r\n\r\nYour function should work fine and it's definitely a bug if it doesn't.",
"When one of the `map`'s worker processes crashes, the linked code re-raises an error from the crash and returns it to the caller.\r\n\r\nIf your question is how to limit the time of long-running tasks/worker processes, such functionality doesn't exist in `datasets` (yet), which means you need to implement it yourself.\r\n\r\nE.g., you can implement it using the built-in `signal` module like this:\r\n```python\r\nimport time\r\nimport signal\r\nfrom contextlib import contextmanager\r\n\r\nfrom datasets import Dataset\r\n\r\n\r\n@contextmanager\r\ndef max_exec_time(t):\r\n def raise_timeout_handler(signum, frame):\r\n raise TimeoutError\r\n \r\n orig_handler = signal.getsignal(signal.SIGALRM)\r\n signal.signal(signal.SIGALRM, raise_timeout_handler)\r\n try:\r\n signal.alarm(t)\r\n yield\r\n finally:\r\n signal.alarm(0)\r\n signal.signal(signal.SIGALRM, orig_handler)\r\n\r\n\r\ndef worker(example, rank):\r\n try:\r\n with max_exec_time(20): # 20 sec execution limit\r\n if rank % 2 == 0:\r\n time.sleep(50) # simulate a long-running task\r\n example[\"a\"] = 100\r\n except TimeoutError:\r\n example[\"a\"] = None # Or return empty batches here in the \"batched\" mode\r\n return example\r\n\r\ndata = Dataset.from_list([{\"a\": 1}, {\"a\": 2}])\r\ndata = data.map(worker, num_proc=2, with_rank=True)\r\nprint(data[0])\r\n```",
"> From my current understanding, this timeout is only used when we need to get the results.\r\n> \r\n> One of:\r\n> \r\n> 1. All tasks are done\r\n> 2. One worker died\r\n> \r\n> Your function should work fine and it's definitely a bug if it doesn't.\r\n\r\nthanks for responding! can you reproduce the stuck with the above example code?",
"> When one of the `map`'s worker processes crashes, the linked code re-raises an error from the crash and returns it to the caller.\r\n> \r\n> If your question is how to limit the time of long-running tasks/worker processes, such functionality doesn't exist in `datasets` (yet), which means you need to implement it yourself.\r\n> \r\n> E.g., you can implement it using the built-in `signal` module like this:\r\n> \r\n> ```python\r\n> import time\r\n> import signal\r\n> from contextlib import contextmanager\r\n> \r\n> from datasets import Dataset\r\n> \r\n> \r\n> @contextmanager\r\n> def max_exec_time(t):\r\n> def raise_timeout_handler(signum, frame):\r\n> raise TimeoutError\r\n> \r\n> orig_handler = signal.getsignal(signal.SIGALRM)\r\n> signal.signal(signal.SIGALRM, raise_timeout_handler)\r\n> try:\r\n> signal.alarm(t)\r\n> yield\r\n> finally:\r\n> signal.alarm(0)\r\n> signal.signal(signal.SIGALRM, orig_handler)\r\n> \r\n> \r\n> def worker(example, rank):\r\n> try:\r\n> with max_exec_time(20): # 20 sec execution limit\r\n> if rank % 2 == 0:\r\n> time.sleep(50) # simulate a long-running task\r\n> example[\"a\"] = 100\r\n> except TimeoutError:\r\n> example[\"a\"] = None # Or return empty batches here in the \"batched\" mode\r\n> return example\r\n> \r\n> data = Dataset.from_list([{\"a\": 1}, {\"a\": 2}])\r\n> data = data.map(worker, num_proc=2, with_rank=True)\r\n> print(data[0])\r\n> ```\r\n\r\nthanks for responding! However, I don't think we should use `signal` in the context of multiprocessing since sometimes it will crash one process and raise the following error\r\nhttps://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L664",
"> thanks for responding! However, I don't think we should use signal in the context of multiprocessing since sometimes it will crash one process and raise the following error\r\n\r\nThe above code has `try/except` to catch the error from the handler. Or do you get an error other than `TimeoutError`?",
"> > thanks for responding! However, I don't think we should use signal in the context of multiprocessing since sometimes it will crash one process and raise the following error\r\n> \r\n> The above code has `try/except` to catch the error from the handler. Or do you get an error other than `TimeoutError`?\r\n\r\nyup, it will raise the RuntimeError: https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L667C19-L670C22\r\n\r\n```\r\n raise RuntimeError(\r\n \"One of the subprocesses has abruptly died during map operation.\"\r\n \"To debug the error, disable multiprocessing.\"\r\n )\r\n```"
] | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
while True:
continue
example['a'] = 100
return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime will depend on specific examples (e.g., while most examples take 0.01s in worker, several examples may take 50s).
Therefore, I would like to know how the current implementation handles subprocesses that require a long time (e.g., >= 5 min) or even run forever.
I notice that the current implementation sets a timeout of 0.05 seconds:
https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L674
However, this example code still gets stuck.
### Steps to reproduce the bug
run the example above
### Expected behavior
I want to be able to set a default result for examples whose worker times out, instead of the whole map getting stuck.
### Environment info
main branch version | 6,787 |
https://github.com/huggingface/datasets/issues/6783 | AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook | [
"Hi! You can fix this by updating the `datasets` package with `pip install -U datasets` and restarting the notebook.\r\n",
"Kaggle removed the problematic `datasets==2.1.0` pin last week, so I'm closing this issue (now it pre-installs the latest version)."
] | ### Describe the bug
# problem
I can't resample an audio dataset in a Kaggle Notebook. It looks like some code in the `datasets` library uses aliases that were deprecated in NumPy 1.20.
## code for resampling
```
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor
from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
)
return inputs
dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
```
## the error I got
<details>
<summary>Click to expand</summary>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[20], line 1
----> 1 dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
2 dataset
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1955, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1952 disable_tqdm = not logging.is_progress_bar_enabled()
1954 if num_proc is None or num_proc == 1:
-> 1955 return self._map_single(
1956 function=function,
1957 with_indices=with_indices,
1958 with_rank=with_rank,
1959 input_columns=input_columns,
1960 batched=batched,
1961 batch_size=batch_size,
1962 drop_last_batch=drop_last_batch,
1963 remove_columns=remove_columns,
1964 keep_in_memory=keep_in_memory,
1965 load_from_cache_file=load_from_cache_file,
1966 cache_file_name=cache_file_name,
1967 writer_batch_size=writer_batch_size,
1968 features=features,
1969 disable_nullable=disable_nullable,
1970 fn_kwargs=fn_kwargs,
1971 new_fingerprint=new_fingerprint,
1972 disable_tqdm=disable_tqdm,
1973 desc=desc,
1974 )
1975 else:
1977 def format_cache_file_name(cache_file_name, rank):
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:520, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
518 self: "Dataset" = kwargs.pop("self")
519 # apply actual function
--> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
522 for dataset in datasets:
523 # Remove task templates if a column mapping of the template is no longer valid
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:487, in transmit_format.<locals>.wrapper(*args, **kwargs)
480 self_format = {
481 "type": self._format_type,
482 "format_kwargs": self._format_kwargs,
483 "columns": self._format_columns,
484 "output_all_columns": self._output_all_columns,
485 }
486 # apply actual function
--> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
489 # re-apply format to the output
File /opt/conda/lib/python3.10/site-packages/datasets/fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
452 kwargs[fingerprint_name] = update_fingerprint(
453 self._fingerprint, transform, kwargs_for_fingerprint
454 )
456 # Call actual function
--> 458 out = func(self, *args, **kwargs)
460 # Update fingerprint of in-place transforms + update in-place history of transforms
462 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:2356, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2354 writer.write_table(batch)
2355 else:
-> 2356 writer.write_batch(batch)
2357 if update_data and writer is not None:
2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:507, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)
505 col_try_type = try_features[col] if try_features is not None and col in try_features else None
506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 507 arrays.append(pa.array(typed_sequence))
508 inferred_features[col] = typed_sequence.get_inferred_type()
509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:184, in TypedSequence.__arrow_array__(self, type)
182 out = numpy_to_pyarrow_listarray(data)
183 elif isinstance(data, list) and data and isinstance(first_non_null_value(data)[1], np.ndarray):
--> 184 out = list_of_np_array_to_pyarrow_listarray(data)
185 else:
186 trying_cast_to_python_objects = True
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1174, in list_of_np_array_to_pyarrow_listarray(l_arr, type)
1172 """Build a PyArrow ListArray from a possibly nested list of NumPy arrays"""
1173 if len(l_arr) > 0:
-> 1174 return list_of_pa_arrays_to_pyarrow_listarray(
1175 [numpy_to_pyarrow_listarray(arr, type=type) if arr is not None else None for arr in l_arr]
1176 )
1177 else:
1178 return pa.array([], type=type)
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1163, in list_of_pa_arrays_to_pyarrow_listarray(l_arr)
1160 null_indices = [i for i, arr in enumerate(l_arr) if arr is None]
1161 l_arr = [arr for arr in l_arr if arr is not None]
1162 offsets = np.cumsum(
-> 1163 [0] + [len(arr) for arr in l_arr], dtype=np.object
1164 ) # convert to dtype object to allow None insertion
1165 offsets = np.insert(offsets, null_indices, None)
1166 offsets = pa.array(offsets, type=pa.int32())
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
```
</details>
### Steps to reproduce the bug
Run above code in Kaggle Notebook.
### Expected behavior
I can resample audio data without fail.
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyArrow version: 11.0.0
- Pandas version: 2.2.1 | 6,783 |
https://github.com/huggingface/datasets/issues/6782 | Image cast_storage very slow for arrays (e.g. numpy, tensors) | [
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n shape = ()\r\n while isinstance(arr, pa.ListArray):\r\n len_curr = len(arr)\r\n arr = arr.flatten()\r\n len_new = len(arr)\r\n shape = shape + (len_new // len_curr,)\r\n return shape\r\n\r\n def get_dtypes(arr):\r\n dtype = storage.type\r\n while hasattr(dtype, \"value_type\"):\r\n dtype = dtype.value_type\r\n return dtype\r\n\r\n arrays = []\r\n for i, is_null in enumerate(storage.is_null()):\r\n if not is_null.as_py():\r\n storage_part = storage.take([i])\r\n shape = get_shapes(storage_part)\r\n dtype = get_dtypes(storage_part)\r\n\r\n extension_type = Array3DExtensionType(shape=shape, dtype=str(dtype))\r\n array = pa.ExtensionArray.from_storage(extension_type, storage_part)\r\n arrays.append(array.to_numpy().squeeze(0))\r\n else:\r\n arrays.append(None)\r\n\r\n bytes_array = pa.array(\r\n [encode_np_array(arr)[\"bytes\"] if arr is not None else None for arr in arrays],\r\n type=pa.binary(),\r\n )\r\n path_array = pa.array([None] * len(storage), type=pa.string())\r\n storage = pa.StructArray.from_arrays(\r\n [bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null()\r\n )\r\n```\r\n(Edited): to handle nulls\r\n\r\nNotably this doesn't change anything about the passing through of data or other things, just in the `Image` class.\r\nSeems quite fast:\r\n```bash\r\nFri Apr 5 17:55:51 2024 restats\r\n\r\n 63818 function calls (61995 primitive calls) in 0.812 seconds\r\n\r\n Ordered by: cumulative time\r\n List reduced from 1051 to 20 due to restriction <20>\r\n\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n 47/1 0.000 0.000 0.810 0.810 {built-in method builtins.exec}\r\n 2/1 0.000 0.000 0.810 0.810 <string>:1(<module>)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:594(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:551(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:2916(map)\r\n 3 0.000 0.000 0.807 0.269 arrow_dataset.py:3277(_map_single)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:589(finalize)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:423(write_examples_on_file)\r\n 1 0.000 0.000 0.759 0.759 arrow_writer.py:527(write_batch)\r\n 1 0.001 0.001 0.754 0.754 arrow_writer.py:161(__arrow_array__)\r\n 2/1 0.000 0.000 0.719 0.719 table.py:1800(wrapper)\r\n 1 0.000 0.000 0.719 0.719 table.py:1950(cast_array_to_feature)\r\n 1 0.006 0.006 0.718 0.718 image.py:209(cast_storage)\r\n 1 0.000 0.000 0.451 0.451 image.py:361(encode_np_array)\r\n 1 0.000 0.000 0.444 0.444 image.py:343(image_to_bytes)\r\n 1 0.000 0.000 0.413 0.413 Image.py:2376(save)\r\n 1 0.000 0.000 0.413 0.413 PngImagePlugin.py:1233(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:517(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:545(_encode_tile)\r\n 397 0.409 0.001 0.409 0.001 {method 'encode' of 'ImagingEncoder' objects}\r\n```",
"Also encounter this problem. Has been strugging with it for a long time...",
"This actually applies to all arrays (numpy or tensors like in torch), not only from external files.\r\n```python\r\nimport numpy as np\r\nimport datasets\r\n\r\nds = datasets.Dataset.from_dict(\r\n {\"image\": [np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)]},\r\n features=datasets.Features({\"image\": datasets.Image(decode=True)}),\r\n)\r\nds.set_format(\"numpy\")\r\n\r\nds = ds.map(load_from_cache_file=False)\r\n```"
] | Update: see comments below
### Describe the bug
Operations that save an image from a path are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to python using `.pylist()` before being converted to a numpy array again.
`pylist` is already slow but used on a multi-dimensional numpy array such as an image it takes a very long time.
From the trace below we can see that `__arrow_array__` takes a long time.
It is currently also called in `get_inferred_type`, this should be removable #6781 but doesn't change the underyling issue.
The conversion to `pyarrow` and back also leads to the `numpy` array having type `int64`, which causes a warning message because the image type expects `uint8`.
However, originally the `numpy` image array was in `uint8`.
### Steps to reproduce the bug
```python
from PIL import Image
import numpy as np
import datasets
import cProfile
image = Image.fromarray(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8))
image.save("test_image.jpg")
ds = datasets.Dataset.from_dict(
{"image": ["test_image.jpg"]},
features=datasets.Features({"image": datasets.Image(decode=True)}),
)
# load as numpy array, e.g. for further processing with map
# same result as map returning numpy arrays
ds.set_format("numpy")
cProfile.run("ds.map(writer_batch_size=1, load_from_cache_file=False)", "restats")
```
```bash
Fri Apr 5 14:56:17 2024 restats
66817 function calls (64992 primitive calls) in 33.382 seconds
Ordered by: cumulative time
List reduced from 1073 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
46/1 0.000 0.000 33.382 33.382 {built-in method builtins.exec}
1 0.000 0.000 33.382 33.382 <string>:1(<module>)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:594(wrapper)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:551(wrapper)
1 0.000 0.000 33.379 33.379 arrow_dataset.py:2916(map)
4 0.000 0.000 33.327 8.332 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 33.311 33.311 arrow_writer.py:465(write)
2 0.000 0.000 33.311 16.656 arrow_writer.py:423(write_examples_on_file)
1 0.000 0.000 33.311 33.311 arrow_writer.py:527(write_batch)
2 14.484 7.242 33.260 16.630 arrow_writer.py:161(__arrow_array__)
1 0.001 0.001 16.438 16.438 arrow_writer.py:121(get_inferred_type)
1 0.000 0.000 14.398 14.398 threading.py:637(wait)
1 0.000 0.000 14.398 14.398 threading.py:323(wait)
8 14.398 1.800 14.398 1.800 {method 'acquire' of '_thread.lock' objects}
4/2 0.000 0.000 4.337 2.169 table.py:1800(wrapper)
2 0.000 0.000 4.337 2.169 table.py:1950(cast_array_to_feature)
2 0.475 0.238 4.337 2.169 image.py:209(cast_storage)
9 2.583 0.287 2.583 0.287 {built-in method numpy.array}
2 0.000 0.000 1.284 0.642 image.py:319(encode_np_array)
2 0.000 0.000 1.246 0.623 image.py:301(image_to_bytes)
```
### Expected behavior
The `numpy` image data should be passed through as it will be directly consumed by `pillow` to convert it to bytes.
As an example one can replace `list_of_np_array_to_pyarrow_listarray(data)` in `__arrow_array__` with just `out = data` as a test.
We have to change `cast_storage` of the `Image` feature so it handles the passed-through data (and decide whether to check the type beforehand):
```python
bytes_array = pa.array(
[encode_np_array(arr)["bytes"] if arr is not None else None for arr in storage],
type=pa.binary(),
)
```
Leading to the following:
```bash
Fri Apr 5 15:44:27 2024 restats
66419 function calls (64595 primitive calls) in 0.937 seconds
Ordered by: cumulative time
List reduced from 1023 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
47/1 0.000 0.000 0.935 0.935 {built-in method builtins.exec}
2/1 0.000 0.000 0.935 0.935 <string>:1(<module>)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:594(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:551(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:2916(map)
4 0.000 0.000 0.933 0.233 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 0.883 0.883 arrow_writer.py:466(write)
2 0.000 0.000 0.883 0.441 arrow_writer.py:424(write_examples_on_file)
1 0.000 0.000 0.882 0.882 arrow_writer.py:528(write_batch)
2 0.000 0.000 0.877 0.439 arrow_writer.py:161(__arrow_array__)
4/2 0.000 0.000 0.877 0.439 table.py:1800(wrapper)
2 0.000 0.000 0.877 0.439 table.py:1950(cast_array_to_feature)
2 0.009 0.005 0.877 0.439 image.py:209(cast_storage)
2 0.000 0.000 0.868 0.434 image.py:335(encode_np_array)
2 0.000 0.000 0.856 0.428 image.py:317(image_to_bytes)
2 0.000 0.000 0.822 0.411 Image.py:2376(save)
2 0.000 0.000 0.822 0.411 PngImagePlugin.py:1233(_save)
2 0.000 0.000 0.822 0.411 ImageFile.py:517(_save)
2 0.000 0.000 0.821 0.411 ImageFile.py:545(_encode_tile)
589 0.803 0.001 0.803 0.001 {method 'encode' of 'ImagingEncoder' objects}
```
This is of course only a test, as it passes through all `numpy` arrays irrespective of whether they should be an image.
Also, I guess `cast_storage` is meant for casting `pyarrow` storage exclusively.
Converting to a `pyarrow` array seems like a good solution as it also handles `pytorch` tensors etc.; maybe there is a more efficient way to create a PIL image from a `pyarrow` array?
Not sure how this should be handled but I would be happy to help if there is a good solution.
### Environment info
- `datasets` version: 2.18.1.dev0
- Platform: Linux-6.7.11-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.3.1 | 6,782 |
https://github.com/huggingface/datasets/issues/6778 | Dataset.to_csv() missing commas in columns with lists | [
"Hello!\r\n\r\nThis is due to how pandas write numpy arrays to csv. [Source](https://stackoverflow.com/questions/54753179/to-csv-saves-np-array-as-string-instead-of-as-a-list)\r\nTo fix this, you can convert them to list yourselves.\r\n\r\n```python\r\ndf = ds.to_pandas()\r\ndf['int'] = df['int'].apply(lambda arr: list(arr))\r\ndf.to_csv(index=False, '../output/temp.csv')\r\n```\r\n\r\nI think it would be good if `datasets` would do the conversion itself, but it's a breaking change and I would wait for the greenlight from someone from HF."
] | ### Describe the bug
The `to_csv()` method does not write commas between list elements, so when the Dataset is loaded back, the structure of the list column is not correct.
Here's an example (see the "Steps to reproduce" section below).
Obviously, it's not as trivial as just inserting commas in the list, since it's a comma-separated file. But hopefully there's a way to export the list so that it gets imported by `load_dataset()` correctly.
### Steps to reproduce the bug
Here's some code to reproduce the bug:
```python
from datasets import Dataset
ds = Dataset.from_dict(
{
"pokemon": ["bulbasaur", "squirtle"],
"type": ["grass", "water"]
}
)
def ascii_to_hex(text):
return [ord(c) for c in text]
ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```
temp.csv then contains the rows shown under "ACTUAL OUTPUT" in the "Expected behavior" section below.
### Expected behavior
ACTUAL OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```
EXPECTED OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```
or probably something more like this since it's a CSV file:
```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```
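As an interim workaround (a sketch, reusing `ds` from the snippet above), writing the dataset to JSON Lines instead of CSV keeps the list column intact on a round trip:
```python
from datasets import load_dataset

# to_json writes JSON Lines by default, which preserves nested list values
ds.to_json('../output/temp.jsonl')
reloaded = load_dataset('json', data_files='../output/temp.jsonl')
print(reloaded['train'][0]['int'])  # [98, 117, 108, 98, 97, 115, 97, 117, 114]
```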
### Environment info
### Package Version
Name: datasets
Version: 2.16.1
### Python
version: 3.10.12
### OS Info
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy | 6,778 |
https://github.com/huggingface/datasets/issues/6777 | .Jsonl metadata not detected | [
"Hi! `metadata.jsonl` (or `metadata.csv`) is the only allowed name for the `imagefolder`'s metadata files.",
"@mariosasko hey i tried with metadata.jsonl also and it still doesn't get the right columns",
"@mariosasko it says metadata.csv not found\r\n<img width=\"1150\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/81643693/3754980c-6185-4413-88fa-b499bcdd4195\">\r\n\r\ndataset = load_dataset('/dataset',metadata.csv) \r\n\r\n| workspace\r\n|| source code\r\n| dataset\r\n| |-- images\r\n| |-- metadata.csv\r\n| |-- metadata.jsonl\r\n| |-- padded_images\r\n\r\nExample of metadata.jsonl file\r\n{\"caption\": \"a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle\", \"image\": \"images/212734.png\", \"gaussian_padded_image\": \"padded_images/p_212734.png\"}\r\n{\"caption\": \"an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes\", \"image\": \"images/212735.png\", \"gaussian_padded_image\": \"padded_images/p_212735.png\"}\r\n",
"Loading more than one image per row with `imagefolder` is not supported currently. You can subscribe to https://github.com/huggingface/datasets/issues/5760 to see when it will be.\r\n\r\nInstead, you can load the dataset with `Dataset.from_generator`:\r\n```python\r\nimport json\r\nfrom datasets import Dataset, Value, Image, Features\r\n\r\ndef gen():\r\n with open(\"./dataset/metadata.jsonl\") as f:\r\n for line in f:\r\n line = json.loads(line)\r\n yield {\"caption\": line[\"caption\"], \"image\": os.path.join(\"./dataset\", line[\"image\"], \"gaussian_padded_image\": os.path.join(\"./dataset\", line[\"gaussian_padded_image\"]))}\r\n\r\nfeatures = Features({\"caption\": Value(\"string\"), \"image\": Image(), \"gaussian_padded_image\": Image()})\r\ndataset = Dataset.from_generator(gen, features=features)\r\n```\r\n(E.g., if you want to share this dataset on the Hub, you can call `dataset.push_to_hub(...)` afterward)",
"hi Thanks for sharing this, Actually I was trying with a webdataset format of the data as well and it did'nt work. Could you share how i can create Dataset object from webdataset format of this data?"
] | ### Describe the bug
Hi I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
.
.
.
I'm trying to use `dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')` to load the dataset, however it is not able to load it according to the fields in metadata1000.jsonl.
Please assist with loading the data properly.
I also get:
```
File "/workspace/train_trans_vae.py", line 1089, in <module>
print(get_metadata_patterns('/dataset/'))
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```
when trying
```
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```
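As noted in the comments above, `imagefolder` only detects metadata files named exactly `metadata.jsonl` (or `metadata.csv`), and it currently cannot load two image columns per row. A minimal sketch of the renaming step (paths are illustrative):
```python
import os
from datasets import load_dataset

# imagefolder ignores metadata1000.jsonl; the file must be called metadata.jsonl
os.rename("/dataset/metadata1000.jsonl", "/dataset/metadata.jsonl")
dataset = load_dataset("imagefolder", data_dir="/dataset/", split="train")
```
For the second image column (`gaussian_padded_image`), see the `Dataset.from_generator` sketch in the comments.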
### Steps to reproduce the bug
dataset Version: 2.18.0
make a similar jsonl and similar directory format
### Expected behavior
Creates a dataset object with the columns: caption, image, gaussian_padded_image.
### Environment info
dataset Version: 2.18.0 | 6,777 |