id | title | state | body | created_at | updated_at | closed_at | html_url | pull_request | number | is_pull_request | comments |
---|---|---|---|---|---|---|---|---|---|---|---|
int64 (1.9B to 3.25B) | string (lengths 2 to 244) | string (2 classes) | string (lengths 3 to 58.6k, ⌀) | timestamp[s] (2023-09-15 14:23:33 to 2025-07-22 09:33:54) | timestamp[s] (2023-09-18 16:20:09 to 2025-07-22 10:44:03) | timestamp[s] (2023-09-18 16:20:09 to 2025-07-19 22:45:08, ⌀) | string (lengths 49 to 51) | dict | int64 (6.24k to 7.7k) | bool (2 classes) | list (lengths 0 to 24) |
3,251,904,843 |
Support downloading specific splits in load_dataset
|
open
|
This PR builds on #6832 by @mariosasko.
May close - #4101, #2538
Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
---
### Note - This PR is a work in progress and frequent changes will be pushed.
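For illustration, a minimal usage sketch of the behaviour this PR aims to enable (an assumption based on the PR title, not the final API): passing `split=` to `load_dataset` would limit what actually gets downloaded instead of fetching every split.

```python
from datasets import load_dataset

# With partial split download support, only the files backing the "train"
# split would be fetched and prepared; other splits would be left untouched.
ds = load_dataset("rajpurkar/squad", split="train")
```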
| 2025-07-22T09:33:54 | 2025-07-22T09:45:18 | null |
https://github.com/huggingface/datasets/pull/7695
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7695",
"html_url": "https://github.com/huggingface/datasets/pull/7695",
"diff_url": "https://github.com/huggingface/datasets/pull/7695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7695.patch",
"merged_at": null
}
| 7,695 | true |
[
"Hi @lhoestq 👋\r\n\r\nI’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI did changes on top of what has been done by mario. Here are some of those changes: \r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now correctly replace JJJJJ and SSSSS placeholders in the fpath for job/shard IDs before creating the writer.\r\n\r\n- Added os.makedirs(os.path.dirname(path), exist_ok=True) after placeholder substitution to prevent FileNotFoundError.\r\n\r\n- Applied the fix to both split writers:\r\n\r\n 1] self._generate_examples version (used by most modules).\r\n\r\n 2] self._generate_tables version (used by IterableDatasetBuilder).\r\n\r\n- Confirmed 109/113 tests passing, meaning the general logic is working across the board.\r\n\r\nWhat’s still failing\r\n4 integration tests fail:\r\n\r\n`test_load_hub_dataset_with_single_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_two_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_metadata_config_in_parallel`\r\n\r\n`test_reload_old_cache_from_2_15`\r\n\r\nAll are due to FileNotFoundError from uncreated output paths, which I'm currently finalizing by ensuring os.makedirs() is correctly applied before every writer instantiation.\r\n\r\nI will update about these fixes after running tests!"
] |
3,247,600,408 |
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
|
open
|
### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than staying low and constant as it would for a streaming write.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```python
import os
from datasets import load_dataset, Dataset
from loguru import logger

# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10  # Use a reasonably large number to see the memory spike

def run_test():
    """Loads data into memory and then saves it, triggering the memory issue."""
    logger.info("Step 1: Loading data into an in-memory Dataset object...")
    # Create an in-memory Dataset object from a stream
    # This simulates having a processed dataset ready to be saved
    iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
    limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
    in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
    logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")

    output_path = "./test_output.jsonl"
    logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
    logger.info("Please monitor memory usage during this step.")

    # This is the step that causes the massive memory allocation
    in_memory_dataset.to_json(output_path, force_ascii=False)

    logger.info("Save operation complete.")
    os.remove(output_path)

if __name__ == "__main__":
    # To see the memory usage clearly, run this script with a memory profiler:
    # python -m memray run your_script_name.py
    # python -m memray tree xxx.bin
    run_test()
```
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
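As a point of comparison, a workaround sketch (under the assumption that batched iteration keeps memory bounded; this is not the library's fix) that streams the rows out to JSONL manually:

```python
import json

def to_jsonl_streaming(dataset, path, batch_size=1000):
    """Write a datasets.Dataset to JSON Lines in small batches."""
    with open(path, "w", encoding="utf-8") as f:
        for batch in dataset.iter(batch_size=batch_size):
            # `batch` is a dict of column name -> list of values
            columns = list(batch.keys())
            for values in zip(*(batch[c] for c in columns)):
                f.write(json.dumps(dict(zip(columns, values)), ensure_ascii=False) + "\n")

# e.g. with the repro script above:
# to_jsonl_streaming(in_memory_dataset, "./test_output.jsonl")
```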
### Environment info
datasets version: 3.6.0
Python version: 3.9.18
OS: macOS 15.3.1 (arm64)
| 2025-07-21T07:51:25 | 2025-07-21T07:51:25 | null |
https://github.com/huggingface/datasets/issues/7694
| null | 7,694 | false |
[] |
3,246,369,678 |
Dataset scripts are no longer supported, but found superb.py
|
open
|
### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines), but the tutorial only seems to work with older versions of datasets.
I then get the following error:
```
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[65], line 1
----> 1 dataset = datasets.load_dataset("superb", name="asr", split="test")
3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> 1031 raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...) 987 proxies=download_config.proxies,
988 )
--> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried replacing "superb" with "anton-l/superb_demo", but then I get a 'torchcodec' import error. Maybe I misunderstood something.
### Steps to reproduce the bug
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....
```
### Expected behavior
Get the results expected by the tutorial.
### Environment info
--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1
| 2025-07-20T13:48:06 | 2025-07-22T10:44:03 | null |
https://github.com/huggingface/datasets/issues/7693
| null | 7,693 | false |
[
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)\n\nRuntimeError: Dataset scripts are no longer supported, but found librispeech_asr.py\n\nWhat am I supposed to do at this point?\n\nThanks",
"hey I got the same error and I have tried to downgrade version to 3.6.0 and it works.\n`pip install datasets==3.6.0`",
"Thank you very much @Tin-viAct . That indeed did the trick for me :) \nNow the code continue its normal flow "
] |
3,246,268,635 |
xopen: invalid start byte for streaming dataset with trust_remote_code=True
|
open
|
### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0:
```python
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`
The cause of the error is the following:
```
from datasets.utils.file_utils import xopen
filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
And the cause of this is the following:
```
import fsspec
fsspec.open(
'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
mode='r',
hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
Is it true that streaming=True loading is not supported anymore for trust_remote_code=True, even with datasets==3.6.0? This breaks backward compatibility.
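As a debugging sketch (assuming the remote file may be compressed or otherwise not plain UTF-8), reading the first bytes in binary mode shows what `xopen` actually receives before text decoding kicks in:

```python
from datasets.utils.file_utils import xopen

filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
with xopen(filepath, 'rb') as f:
    head = f.read(4)
print(head)  # b'\x1f\x8b...' would indicate gzip content rather than plain JSON
```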
### Steps to reproduce the bug
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True)))
```
### Expected behavior
No errors expected
### Environment info
datasets==3.6.0, ubuntu 24.04
| 2025-07-20T11:08:20 | 2025-07-20T11:08:20 | null |
https://github.com/huggingface/datasets/issues/7692
| null | 7,692 | false |
[] |
3,245,547,170 |
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
|
open
|
### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2 GB. The instant I hit one of the shards containing one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that includes just one problem shard, and the error triggers as soon as load_dataset() runs, even with streaming=True:
```python
ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train")
```
This gives:
```
File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators
pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist
File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist
File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays
File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays
File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992
```
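The limit in the error message matches pyarrow's plain `binary` type, which uses 32-bit offsets and therefore caps a single array at 2,147,483,646 bytes; `large_binary` uses 64-bit offsets. A minimal illustration (my assumption about the cause, not a confirmed fix):

```python
import pyarrow as pa

payload = [b"tiny video placeholder"]  # the real failure needs ~4 GB of bytes in one column

# Fine below the 2 GiB cap; with ~4 GB of data this raises ArrowCapacityError.
print(pa.array(payload, type=pa.binary()))

# 64-bit offsets, no 2 GiB per-array cap.
print(pa.array(payload, type=pa.large_binary()))
```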
### Steps to reproduce the bug
```python
#!/usr/bin/env python
import argparse

from datasets import get_dataset_config_names, load_dataset
from tqdm import tqdm
from pyarrow.lib import ArrowCapacityError, ArrowInvalid


def iterate_keys(language_subset: str) -> None:
    """Iterate over all samples in the Sign Bibles dataset and print idx and sample key."""
    # https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset
    ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
    print(f"\n==> Loaded dataset config '{language_subset}'")
    idx = 0
    estimated_shard_index = 0
    samples_per_shard = 5
    with tqdm(desc=f"{language_subset} samples") as pbar:
        iterator = iter(ds)
        while True:
            try:
                if idx % samples_per_shard == 0 and idx > 0:  # 5 samples per shard: 0, 1, 2, 3, 4
                    print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}")
                    estimated_shard_index += 1
                sample = next(iterator)
                sample_key = sample.get("__key__", "missing-key")
                print(f"[{language_subset}] idx={idx}, key={sample_key}")
                idx += 1
                pbar.update(1)
            except StopIteration:
                print(f"Finished iterating through {idx} samples of {language_subset}")
                break
            except (ArrowCapacityError, ArrowInvalid) as e:
                print(f"PyArrow error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue
            except KeyError as e:
                print(f"Missing key error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue


def main():
    configs = get_dataset_config_names("bible-nlp/sign-bibles")
    print(f"Available configs: {configs}")
    configs = [
        "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard"
    ]
    for language_subset in configs:
        print(f"TESTING CONFIG {language_subset}")
        iterate_keys(language_subset)
        # try:
        # except (ArrowCapacityError, ArrowInvalid) as e:
        #     print(f"PyArrow error at config level for {language_subset}: {e}")
        #     continue
        # except RuntimeError as e:
        #     print(f"RuntimeError at config level for {language_subset}: {e}")
        #     continue


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.")
    args = parser.parse_args()
    main()
```
### Expected behavior
I expect that, when I load with streaming=True, no data is actually loaded at that point.
https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says that with streaming=True:
> In the streaming case:
> Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it.

I did expect to have some trouble with large files, but not that streaming mode would actually try to load them unless requested, e.g. with sample["mp4"].
### Environment info
Local setup: Conda environment on Ubuntu, pip list includes the following
datasets 4.0.0
pyarrow 20.0.0
Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
| 2025-07-19T18:40:27 | 2025-07-21T19:17:33 | null |
https://github.com/huggingface/datasets/issues/7691
| null | 7,691 | false |
[
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to just set it as \"binary\" type, perhaps?",
"I also tried creating a dataset_info.json but the webdataset builder didn't seem to look for it and load it",
"Workaround on my end, removed all videos larger than 2GB for now. The dataset no longer crashes."
] |
3,244,380,691 |
HDF5 support
|
open
|
This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes, including up to 5-dimensional arrays, as well as complex/compound types, which are split into several columns. All datasets within the HDF5 file should have rows on the first dimension (groups/subgroups are still allowed). Closes #3113.
Replaces #7625 which only supports a relatively small subset of HDF5.
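For illustration, a minimal sketch (not the PR's implementation) of the row convention described above, i.e. that the first dimension of every HDF5 dataset indexes the rows of the split; the file name and columns are hypothetical:

```python
import h5py
import numpy as np

with h5py.File("example.h5", "w") as f:
    f.create_dataset("ids", data=np.arange(4))
    f.create_dataset("embeddings", data=np.random.rand(4, 3))

with h5py.File("example.h5", "r") as f:
    num_rows = f["ids"].shape[0]                # first dimension == number of rows
    for i in range(num_rows):
        row = {name: f[name][i] for name in f}  # one example per row index
        print(row)
```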
| 2025-07-18T21:09:41 | 2025-07-19T06:09:00 | null |
https://github.com/huggingface/datasets/pull/7690
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7690",
"html_url": "https://github.com/huggingface/datasets/pull/7690",
"diff_url": "https://github.com/huggingface/datasets/pull/7690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7690.patch",
"merged_at": null
}
| 7,690 | true |
[
"@lhoestq This is ready for review now. Note that it doesn't support *all* HDF5 files (and I don't think that's worth attempting)... the biggest assumption is that the first dimension of each dataset corresponds to rows in the split."
] |
3,242,580,301 |
BadRequestError for loading dataset?
|
closed
|
### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand
✖ Invalid input: expected array, received string
→ at paths
✖ Invalid input: expected boolean, received string
→ at expand
```
I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.
What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"]
```
### Expected behavior
That the dataset loads as it did a couple days ago.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
| 2025-07-18T09:30:04 | 2025-07-18T11:59:51 | 2025-07-18T11:52:29 |
https://github.com/huggingface/datasets/issues/7689
| null | 7,689 | false |
[
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working successfully yesterday - I'm using `huggingface-hub==0.27.1`, `datasets==3.2.0`",
"Same, here with `datasets==3.6.0`",
"Same, with `datasets==4.0.0`.",
"Same here tried different versions of huggingface-hub and datasets but the error keeps occuring ",
"A temporary workaround is to first download your dataset with\n\nhuggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset\n\nThen find the local path of the dataset typically like ~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\n\nAnd then load like \n\nfrom datasets import load_dataset\ndataset = load_dataset(\"~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\")\n",
"I am also experiencing this issue. I was trying to load TinyStories\nds = datasets.load_dataset(\"roneneldan/TinyStories\", streaming=True, split=\"train\")\n\nresulting in the previously stated error:\nException has occurred: BadRequestError\n(Request ID: Root=1-687a1d09-66cceb496c9401b1084133d6;3550deed-c459-4799-bc74-97924742bd94)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\nFileNotFoundError: Dataset roneneldan/TinyStories is not cached in None\n\nThis very code worked fine yesterday, so it's a very recent issue.\n\nEnvironment info:\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"pyarrow version:\", pyarrow.__version__)\nprint(\"pandas version:\", pandas.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\nprint(\"Python version:\", sys.version)\nprint(\"Platform:\", platform.platform())\ndatasets version: 4.0.0\nhuggingface_hub version: 0.33.4\npyarrow version: 19.0.0\npandas version: 2.2.3\nfsspec version: 2024.9.0\nPython version: 3.12.11 (main, Jun 10 2025, 11:55:20) [GCC 15.1.1 20250425]\nPlatform: Linux-6.15.6-arch1-1-x86_64-with-glibc2.41",
"Same here with datasets==3.6.0\n```\nhuggingface_hub.errors.BadRequestError: (Request ID: Root=1-687a238d-27374f964534f79f702bc239;61f0669c-cb70-4aff-b57b-73a446f9c65e)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\n```",
"Same here, works perfectly yesterday\n\n```\nError code: ConfigNamesError\nException: BadRequestError\nMessage: (Request ID: Root=1-687a23a5-314b45b36ce962cf0e431b9a;b979ddb2-a80b-483c-8b1e-403e24e83127)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\n```",
"It was literally working for me and then suddenly it stopped working next time I run the command. Same issue but private repo so I can't share example. ",
"A bug from Hugging Face not us",
"Same here!",
"@LMSPaul thanks! The workaround seems to work (at least for the datasets I tested).\n\nOn the command line:\n```sh\nhuggingface-cli download <dataset-name> --repo-type dataset --local-dir <local-dir>\n```\n\nAnd then in Python:\n```python\nfrom datasets import load_dataset\n\n# The dataset-specific options seem to work with this as well, \n# except for a warning from \"trust_remote_code\"\nds = load_dataset(<local-dir>)\n```",
"Same for me.. I couldn't load ..\nIt was perfectly working yesterday..\n\n\nfrom datasets import load_dataset\nraw_datasets = load_dataset(\"glue\", \"mrpc\")\n\nThe error resulting is given below\n\n---------------------------------------------------------------------------\nBadRequestError Traceback (most recent call last)\n/tmp/ipykernel_60/772458687.py in <cell line: 0>()\n 1 from datasets import load_dataset\n----> 2 raw_datasets = load_dataset(\"glue\", \"mrpc\")\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\n 2060 \n 2061 # Create a dataset builder\n-> 2062 builder_instance = load_dataset_builder(\n 2063 path=path,\n 2064 name=name,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\n 1780 download_config = download_config.copy() if download_config else DownloadConfig()\n 1781 download_config.storage_options.update(storage_options)\n-> 1782 dataset_module = dataset_module_factory(\n 1783 path,\n 1784 revision=revision,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1662 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\n 1663 ) from None\n-> 1664 raise e1 from None\n 1665 elif trust_remote_code:\n 1666 raise FileNotFoundError(\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1627 download_mode=download_mode,\n 1628 use_exported_dataset_infos=use_exported_dataset_infos,\n-> 1629 ).get_module()\n 1630 except GatedRepoError as e:\n 1631 message = f\"Dataset '{path}' is a gated dataset on the Hub.\"\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in get_module(self)\n 1017 else:\n 1018 patterns = get_data_patterns(base_path, download_config=self.download_config)\n-> 1019 data_files = DataFilesDict.from_patterns(\n 1020 patterns,\n 1021 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 687 patterns_for_key\n 688 if isinstance(patterns_for_key, DataFilesList)\n--> 689 else DataFilesList.from_patterns(\n 690 patterns_for_key,\n 691 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 580 try:\n 581 data_files.extend(\n--> 582 resolve_pattern(\n 583 pattern,\n 584 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in resolve_pattern(pattern, base_path, allowed_extensions, download_config)\n 358 matched_paths = [\n 359 filepath if filepath.startswith(protocol_prefix) else protocol_prefix + filepath\n--> 360 for filepath, info in 
fs.glob(pattern, detail=True, **glob_kwargs).items()\n 361 if (info[\"type\"] == \"file\" or (info.get(\"islink\") and os.path.isfile(os.path.realpath(filepath))))\n 362 and (xbasename(filepath) not in files_to_ignore)\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in glob(self, path, **kwargs)\n 519 kwargs = {\"expand_info\": kwargs.get(\"detail\", False), **kwargs}\n 520 path = self.resolve_path(path, revision=kwargs.get(\"revision\")).unresolve()\n--> 521 return super().glob(path, **kwargs)\n 522 \n 523 def find(\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in glob(self, path, maxdepth, **kwargs)\n 635 # any exception allowed bar FileNotFoundError?\n 636 return False\n--> 637 \n 638 def lexists(self, path, **kwargs):\n 639 \"\"\"If there is a file at the given path (including\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in find(self, path, maxdepth, withdirs, detail, refresh, revision, **kwargs)\n 554 \"\"\"\n 555 if maxdepth:\n--> 556 return super().find(\n 557 path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, refresh=refresh, revision=revision, **kwargs\n 558 )\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in find(self, path, maxdepth, withdirs, detail, **kwargs)\n 498 # This is needed for posix glob compliance\n 499 if withdirs and path != \"\" and self.isdir(path):\n--> 500 out[path] = self.info(path)\n 501 \n 502 for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in info(self, path, refresh, revision, **kwargs)\n 717 out = out1[0]\n 718 if refresh or out is None or (expand_info and out and out[\"last_commit\"] is None):\n--> 719 paths_info = self._api.get_paths_info(\n 720 resolved_path.repo_id,\n 721 resolved_path.path_in_repo,\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\n 113 \n--> 114 return fn(*args, **kwargs)\n 115 \n 116 return _inner_fn # type: ignore\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_api.py in get_paths_info(self, repo_id, paths, expand, revision, repo_type, token)\n 3397 headers=headers,\n 3398 )\n-> 3399 hf_raise_for_status(response)\n 3400 paths_info = response.json()\n 3401 return [\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)\n 463 f\"\\n\\nBad request for {endpoint_name} endpoint:\" if endpoint_name is not None else \"\\n\\nBad request:\"\n 464 )\n--> 465 raise _format(BadRequestError, message, response) from e\n 466 \n 467 elif response.status_code == 403:\n\nBadRequestError: (Request ID: Root=1-687a3201-087954b9245ab59672e6068e;d5bb4dbe-03e1-4912-bcec-5964c017b920)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, re",
"Thanks for the report!\nThe issue has been fixed and should now work without any code changes 😄\nSorry for the inconvenience!\n\nClosing, please open again if needed.",
"Works for me. Thanks!\n",
"Yes Now it's works for me..Thanks\r\n\r\nOn Fri, 18 Jul 2025, 5:25 pm Karol Brejna, ***@***.***> wrote:\r\n\r\n> *karol-brejna-i* left a comment (huggingface/datasets#7689)\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>\r\n>\r\n> Works for me. Thanks!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AJRBXNEWBJ5UYVC2IRJM5DD3JDODZAVCNFSM6AAAAACB2FDG4GVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTAOBZGIZTQMZSGA>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] |
3,238,851,443 |
No module named "distributed"
|
open
|
### Describe the bug
Hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always get the error "No module named 'datasets.distributed'" in different versions such as 4.0.0, 2.21.0, and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.distributed import split_dataset_by_node
### Expected behavior
Expecting the command "from datasets.distributed import split_dataset_by_node" to run successfully.
### Environment info
python: 3.12
| 2025-07-17T09:32:35 | 2025-07-21T13:50:27 | null |
https://github.com/huggingface/datasets/issues/7688
| null | 7,688 | false |
[
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of Library you are using(in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to version 2.14.6 : ! pip install datasets==2.14.6\n"
] |
3,238,760,301 |
Datasets keeps rebuilding the dataset every time i call the python script
|
open
|
### Describe the bug
Every time the script runs, the number of samples somehow increases.
This can cause a 12 MB dataset to accumulate other built versions of 400 MB+.
<img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" />
### Steps to reproduce the bug
```python
from datasets import load_dataset

s = load_dataset('~/.cache/huggingface/datasets/databricks___databricks-dolly-15k')['train']
```
1. A dataset needs to be available in the .cache folder
2. Run the code multiple times, and every time it runs, more versions are created
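For comparison, a minimal sketch (an assumption, not a confirmed fix) of loading the same dataset by its Hub repo id, which lets `datasets` manage and reuse its own cache rather than re-reading the cache folder as raw data files:

```python
from datasets import load_dataset

s = load_dataset("databricks/databricks-dolly-15k", split="train")
```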
### Expected behavior
The number of samples increases every time the script runs
### Environment info
- `datasets` version: 3.6.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.13.3
- `huggingface_hub` version: 0.32.3
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| 2025-07-17T09:03:38 | 2025-07-17T09:03:38 | null |
https://github.com/huggingface/datasets/issues/7687
| null | 7,687 | false |
[] |
3,237,201,090 |
load_dataset does not check .no_exist files in the hub cache
|
open
|
### Describe the bug
I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack.
The fundamental issue is that the `load_dataset` API doesn't use the `.no_exist` files in the hub cache, unlike other wrapper APIs that do. This is because `utils.file_utils.cached_path` directly calls `hf_hub_download` instead of using `file_download.try_to_load_from_cache` from `huggingface_hub` (see the `transformers` library's `utils.hub.cached_files` for one alternate example).
This results in unnecessary metadata HTTP requests for files that don't exist on every call. It neither generates the `.no_exist` cache files nor uses them.
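For reference, a minimal sketch of the cache-aware lookup mentioned above, i.e. the `huggingface_hub` helper that does honour `.no_exist` entries (values taken from the repro below; this is not the `datasets` code path):

```python
from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST

result = try_to_load_from_cache(
    repo_id="Salesforce/wikitext",
    filename="wikitext.py",
    repo_type="dataset",
    revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
if result is _CACHED_NO_EXIST:
    print("non-existence is cached -> no HTTP metadata request needed")
elif result is None:
    print("not in cache -> a request would be made")
else:
    print("cached file at", result)
```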
### Steps to reproduce the bug
Run the following snippet as one example (setting cache dirs to clean paths for clarity)
`env HF_HOME=~/local_hf_hub python repro.py`
```python
from datasets import load_dataset
import huggingface_hub

# monkeypatch to print out metadata requests being made
original_get_hf_file_metadata = huggingface_hub.file_download.get_hf_file_metadata

def get_hf_file_metadata_wrapper(*args, **kwargs):
    print("File metadata request made (get_hf_file_metadata):", args, kwargs)
    return original_get_hf_file_metadata(*args, **kwargs)

# Apply the patch
huggingface_hub.file_download.get_hf_file_metadata = get_hf_file_metadata_wrapper

dataset = load_dataset(
    "Salesforce/wikitext",
    "wikitext-2-v1",
    split="test",
    trust_remote_code=True,
    cache_dir="~/local_datasets",
    revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
```
This may be called over and over again, and you will see the same calls for files that don't exist:
```
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/wikitext.py', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/.huggingface.yaml', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/dataset_infos.json', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
```
And you can see that the .no_exist folder is never created
```
$ ls ~/local_hf_hub/hub/datasets--Salesforce--wikitext/
blobs refs snapshots
```
### Expected behavior
The expected behavior is for the print "File metadata request made" to stop after the first call, and for .no_exist directory & files to be populated under ~/local_hf_hub/hub/datasets--Salesforce--wikitext/
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.5.13-65-650-4141-22041-coreweave-amd64-85c45edc-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2024.9.0
| 2025-07-16T20:04:00 | 2025-07-16T20:04:00 | null |
https://github.com/huggingface/datasets/issues/7686
| null | 7,686 | false |
[] |
3,236,979,340 |
Inconsistent range request behavior for parquet REST api
|
open
|
### Describe the bug
First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere.
The datasets REST API inconsistently returns `416 Range Not Satisfiable` when using a range request to fetch portions of the parquet files. More often than not I see 416, but other times an identical request returns the data with `206 Partial Content` as expected.
### Steps to reproduce the bug
Repeating this request multiple times returns either 416 or 206:
```sh
$ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
```
Note: this is not limited to just the above file; I tried many different datasets and am able to consistently reproduce the issue across them.
When the 416 is returned, I get the following headers:
```
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:43 GMT
< expires: Wed, 16 Jul 2025 14:58:43 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront)
<
```
This suggests to me that there is likely a CDN/caching/routing issue and that the request is not getting routed properly.
Full verbose output via curl.
<details>
❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:41 GMT
< expires: Wed, 16 Jul 2025 14:58:41 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 e2f1bed2f82641d6d5439eac20a790ba.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: Mo8hn-EZLJqE_hoBday8DdhmVXhV3v9-Wg-EEHI6gX_fNlkanVIUBA==
<
{ [49 bytes data]
100 49 100 49 0 0 2215 0 --:--:-- --:--:-- --:--:-- 2227
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:42 GMT
< expires: Wed, 16 Jul 2025 14:58:42 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 bb352451e1eacf85f8786ee3ecd07eca.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: 9xy-CX9KvlS8Ye4eFr8jXMDobZHFkvdyvkLJGmK_qiwZQywCCwfq7Q==
<
{ [49 bytes data]
100 49 100 49 0 0 2381 0 --:--:-- --:--:-- --:--:-- 2450
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:43 GMT
< expires: Wed, 16 Jul 2025 14:58:43 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: wtBgwY4u4YJ2pD1ovM8UV770UiJoqWfs7i7VzschDyoLv5g7swGGmw==
<
{ [49 bytes data]
100 49 100 49 0 0 2273 0 --:--:-- --:--:-- --:--:-- 2333
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 302
< content-type: text/plain; charset=utf-8
< content-length: 177
< location: https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet
< date: Wed, 16 Jul 2025 14:58:44 GMT
< x-powered-by: huggingface-moon
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< x-request-id: Root=1-6877be24-476860f03849cb1a1570c9d8
< access-control-allow-origin: https://huggingface.co
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None
< set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax
< x-cache: Miss from cloudfront
< via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: xuSi0X5RpH1OZqQOM8gGQLQLU8eOM6Gbkk-bgIX_qBnTTaa1VNkExA==
<
* Ignoring the response-body
100 177 100 177 0 0 2021 0 --:--:-- --:--:-- --:--:-- 2034
* Connection #0 to host huggingface.co left intact
* Issue another request to this URL: 'https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet'
* Found bundle for host: 0x600002d54570 [can multiplex]
* Re-using existing connection with host huggingface.co
* [HTTP/2] [3] OPENED stream for https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet
* [HTTP/2] [3] [:method: GET]
* [HTTP/2] [3] [:scheme: https]
* [HTTP/2] [3] [:authority: huggingface.co]
* [HTTP/2] [3] [:path: /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet]
* [HTTP/2] [3] [user-agent: curl/8.7.1]
* [HTTP/2] [3] [accept: */*]
* [HTTP/2] [3] [range: bytes=217875070-218006142]
> GET /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 302
< content-type: text/plain; charset=utf-8
< content-length: 1317
< location: https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC
< date: Wed, 16 Jul 2025 14:58:44 GMT
< x-powered-by: huggingface-moon
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< x-request-id: Root=1-6877be24-4f628b292dc8a7a5339c41d3
< access-control-allow-origin: https://huggingface.co
< vary: Origin, Accept
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None
< set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax
< x-repo-commit: 712df366ffbc959d9f4279bf2da579230b7ca5d8
< accept-ranges: bytes
< x-linked-size: 218006142
< x-linked-etag: "01736bf26d0046ddec4ab8900fba3f0dc6500b038314b44d0edb73a7c88dec07"
< x-xet-hash: cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9
< link: <https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/xet-read-token/712df366ffbc959d9f4279bf2da579230b7ca5d8>; rel="xet-auth", <https://cas-server.xethub.hf.co/reconstruction/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9>; rel="xet-reconstruction-info"
< x-cache: Miss from cloudfront
< via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: 0qXw2sJGrWCLVt7c-Vtn09uE3nu6CrJw9RmAKvNr_flG75muclvlIg==
<
* Ignoring the response-body
100 1317 100 1317 0 0 9268 0 --:--:-- --:--:-- --:--:-- 9268
* Connection #0 to host huggingface.co left intact
* Issue another request to this URL: 'https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC'
* Host cas-bridge.xethub.hf.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.181.55, 18.160.181.54, 18.160.181.52, 18.160.181.88
* Trying 18.160.181.55:443...
* Connected to cas-bridge.xethub.hf.co (18.160.181.55) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [328 bytes data]
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3818 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=cas-bridge.xethub.hf.co
* start date: Jun 4 00:00:00 2025 GMT
* expire date: Jul 3 23:59:59 2026 GMT
* subjectAltName: host "cas-bridge.xethub.hf.co" matched cert's "cas-bridge.xethub.hf.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M04
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: cas-bridge.xethub.hf.co]
* [HTTP/2] [1] [:path: /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC HTTP/2
> Host: cas-bridge.xethub.hf.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 206
< content-length: 131072
< date: Mon, 14 Jul 2025 08:40:28 GMT
< x-request-id: 01K041FDPVA03RR2PRXDZSN30G
< content-disposition: inline; filename*=UTF-8''0000.parquet; filename="0000.parquet";
< cache-control: public, max-age=31536000
< etag: "cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9"
< access-control-allow-origin: *
< access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag
< access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache
< x-cache: Hit from cloudfront
< via: 1.1 1c857e24a4dc84d2d9c78d5b3463bed6.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P2
< x-amz-cf-id: 3SxFmQa5wLeeXbNiwaAo0_RwoR_n7-SivjsLjDLG-Pwn5UhG2oiEQA==
< age: 195496
< content-security-policy: default-src 'none'; sandbox
< content-range: bytes 217875070-218006141/218006142
<
{ [8192 bytes data]
100 128k 100 128k 0 0 769k 0 --:--:-- --:--:-- --:--:-- 769k
* Connection #1 to host cas-bridge.xethub.hf.co left intact
</details>
### Expected behavior
always get back a `206`
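For reference, a minimal way to re-check the behavior outside the full trace above (a sketch; the URL is a placeholder, not the exact parquet file from the trace):
```python
import requests
# Placeholder URL: any public file served through the Hub "resolve" endpoint.
url = "https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/main/README.md"
resp = requests.get(url, headers={"Range": "bytes=0-1023"}, allow_redirects=True)
print(resp.status_code)  # expected: 206 Partial Content on the final hop
```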
### Environment info
n/a
| 2025-07-16T18:39:44 | 2025-07-16T18:41:53 | null |
https://github.com/huggingface/datasets/issues/7685
| null | 7,685 | false |
[] |
3,231,680,474 |
fix audio cast storage from array + sampling_rate
|
closed
|
fix https://github.com/huggingface/datasets/issues/7682
| 2025-07-15T10:13:42 | 2025-07-15T10:24:08 | 2025-07-15T10:24:07 |
https://github.com/huggingface/datasets/pull/7684
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7684",
"html_url": "https://github.com/huggingface/datasets/pull/7684",
"diff_url": "https://github.com/huggingface/datasets/pull/7684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7684.patch",
"merged_at": "2025-07-15T10:24:07"
}
| 7,684 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,231,553,161 |
Convert to string when needed + faster .zstd
|
closed
|
for https://huggingface.co/datasets/allenai/olmo-mix-1124
| 2025-07-15T09:37:44 | 2025-07-15T10:13:58 | 2025-07-15T10:13:56 |
https://github.com/huggingface/datasets/pull/7683
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7683",
"html_url": "https://github.com/huggingface/datasets/pull/7683",
"diff_url": "https://github.com/huggingface/datasets/pull/7683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7683.patch",
"merged_at": "2025-07-15T10:13:56"
}
| 7,683 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,229,687,253 |
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
|
closed
|
### Describe the bug
Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails
in version 4.0.0 but not in version 3.6.0
### Steps to reproduce the bug
The following `uv script` should be able to reproduce the bug in version 4.0.0
and pass in version 3.6.0 on a macOS Sequoia 15.5
```python
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "datasets[audio]==4.0.0",
# "librosa>=0.11.0",
# ]
# ///
# NAME
# create_audio_dataset.py - create an audio dataset of sine waves
#
# SYNOPSIS
# uv run create_audio_dataset.py
#
# DESCRIPTION
# Create an audio dataset using the Hugging Face [datasets] library.
# Illustrates how to create synthetic audio datasets using the [map]
# datasets function.
#
# The strategy is to first create a dataset with the input to the
# generation function, then execute the map function that generates
# the result, and finally cast the final features.
#
# BUG
# Casting features with Audio for numpy arrays -
# done here with `ds.map(gen_sine, features=features)` fails
# in version 4.0.0 but not in version 3.6.0
#
# This happens both when the --extra audio is provided and when it is not.
# When audio is not provided, I've installed the latest compatible version
# of soundfile.
#
# The error when soundfile is installed but the audio --extra is not
# indicates that the array values do not have the `.T` property,
# whilst also indicating that the value is a list instead of a numpy array.
#
# Last lines of the error report for the datasets + soundfile case
# ...
#
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 239, in cast_storage
# storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
# ~~~~~~~~~~~~~~~~~~~~~~^^^
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 122, in encode_example
# sf.write(buffer, value["array"].T, value["sampling_rate"], format="wav")
# ^^^^^^^^^^^^^^^^
# AttributeError: 'list' object has no attribute 'T'
# ...
#
# For the case of datasets[audio] without explicitly adding soundfile, I get an FFmpeg
# error.
#
# Last lines of error report:
#
# ...
# RuntimeError: Could not load libtorchcodec. Likely causes:
# 1. FFmpeg is not properly installed in your environment. We support
# versions 4, 5, 6 and 7.
# 2. The PyTorch version (2.7.1) is not compatible with
# this version of TorchCodec. Refer to the version compatibility
# table:
# https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
# 3. Another runtime dependency; see exceptions below.
# The following exceptions were raised as we tried to load libtorchcodec:
#
# [start of libtorchcodec loading traceback]
# FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib
# Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib
# Referenced from: <BD3B44FC-E14B-3ABF-800F-BB54B6CCA3B1> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib
# Referenced from: <F06EBF8A-238C-3A96-BFBB-B34E0BBDABF0> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib
# Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib
# Reason: no LC_RPATH's found
# ...
#
# This is strange because the same error does not happen when using version 3.6.0 with datasets[audio].
#
# The same error appears in python3.12
## Imports
import numpy as np
from datasets import Dataset, Features, Audio, Value
## Parameters
NUM_WAVES = 128
SAMPLE_RATE = 16_000
DURATION = 1.0
## Input dataset arguments
freqs = np.linspace(100, 2000, NUM_WAVES).tolist()
ds = Dataset.from_dict({"frequency": freqs})
## Features for the final dataset
features = Features(
{"frequency": Value("float32"), "audio": Audio(sampling_rate=SAMPLE_RATE)}
)
## Generate audio sine waves and cast features
def gen_sine(example):
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
wav = np.sin(2 * np.pi * example["frequency"] * t)
return {
"frequency": example["frequency"],
"audio": {"array": wav, "sampling_rate": SAMPLE_RATE},
}
ds = ds.map(gen_sine, features=features)
print(ds)
print(ds.features)
```
### Expected behavior
I expect the result with version `4.0.0` to be the same as with version `3.6.0`. Switching the pinned version
in the script above to `3.6.0`, I get the following, expected, result:
```
$ uv run bug_report.py
Map: 100%|███████████████████████████████████████████████████████| 128/128 [00:00<00:00, 204.58 examples/s]
Dataset({
features: ['frequency', 'audio'],
num_rows: 128
})
{'frequency': Value(dtype='float32', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}
```
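A possible workaround sketch for `4.0.0` until the fix lands (an assumption on my side, not verified): encode the waveform to WAV bytes yourself, so the `Audio` cast receives `bytes` instead of an `array` + `sampling_rate` dict.
```python
import io
import numpy as np
import soundfile as sf
def encode_wav(wav: np.ndarray, sampling_rate: int) -> dict:
    # Hypothetical helper: serialize the waveform so the Audio feature is fed bytes.
    buffer = io.BytesIO()
    sf.write(buffer, wav, sampling_rate, format="WAV")
    return {"bytes": buffer.getvalue(), "path": None}
```
In `gen_sine`, the `"audio"` value would then be `encode_wav(wav, SAMPLE_RATE)` instead of the `{"array", "sampling_rate"}` dict.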
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-15.5-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- `huggingface_hub` version: 0.33.4
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
| 2025-07-14T18:41:02 | 2025-07-15T12:10:39 | 2025-07-15T10:24:08 |
https://github.com/huggingface/datasets/issues/7682
| null | 7,682 | false |
[
"thanks for reporting, I opened a PR and I'll make a patch release soon ",
"> thanks for reporting, I opened a PR and I'll make a patch release soon\n\nThank you very much @lhoestq!"
] |
3,227,112,736 |
Probabilistic High Memory Usage and Freeze on Python 3.10
|
open
|
### Describe the bug
A probabilistic issue is encountered when processing datasets containing PIL.Image columns using the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization and leading to a complete freeze. During this freeze, the process becomes unresponsive, cannot be forcefully terminated, and does not throw any exceptions.
I have attempted to mitigate this issue by setting `datasets.config.IN_MEMORY_MAX_SIZE`, but it had no effect. In fact, based on the documentation of `load_dataset`, I suspect that setting `IN_MEMORY_MAX_SIZE` might even have a counterproductive effect.
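For reference, a minimal sketch of that setting (the value is illustrative):
```python
import datasets
datasets.config.IN_MEMORY_MAX_SIZE = 8 * 1024**3  # cap in-memory datasets at 8 GiB; had no visible effect
```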
This bug is not consistently reproducible, but its occurrence rate significantly decreases or disappears entirely when upgrading Python to version 3.11 or higher. Therefore, this issue also serves to share a potential solution for others who might encounter similar problems.
### Steps to reproduce the bug
Due to the probabilistic nature of this bug, consistent reproduction cannot be guaranteed for every run. However, in my environment, processing large datasets like timm/imagenet-1k-wds (whether reading, casting, or mapping operations) almost certainly triggers the issue at some point.
The probability of the issue occurring drastically increases when num_proc is set to a value greater than 1 during operations.
When the issue occurs, my system logs repeatedly show the following warnings:
```
WARN: very high memory utilization: 57.74GiB / 57.74GiB (100 %)
WARN: container is unhealthy: triggered memory limits (OOM)
WARN: container is unhealthy: triggered memory limits (OOM)
WARN: container is unhealthy: triggered memory limits (OOM)
```
### Expected behavior
The dataset should be read and processed normally without memory exhaustion or freezing. If an unrecoverable error occurs, an appropriate exception should be raised.
I have found that upgrading Python to version 3.11 or above completely resolves this issue. On Python 3.11, when memory usage approaches 100%, it suddenly drops before slowly increasing again. I suspect this behavior is due to an expected memory management action, possibly involving writing to disk cache, which prevents the complete freeze observed in Python 3.10.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.4
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
| 2025-07-14T01:57:16 | 2025-07-14T01:57:16 | null |
https://github.com/huggingface/datasets/issues/7681
| null | 7,681 | false |
[] |
3,224,824,151 |
Question about iterable dataset and streaming
|
open
|
In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused,
1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style dataset?
2. `load_dataset(streaming=True)` is useful for huge dataset, but the speed is slow. How to make it comparable to `to_iterable_dataset` without loading the whole dataset into RAM?
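For context, a minimal sketch of the pattern in question (names and numbers are illustrative; single-node case):
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset("parquet", data_files="data/*.parquet", split="train")
iterable_ds = ds.to_iterable_dataset(num_shards=64)  # shards let DataLoader workers read in parallel
loader = DataLoader(iterable_ds, batch_size=32, num_workers=4, prefetch_factor=2)
for batch in loader:
    pass  # training step here
```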
| 2025-07-12T04:48:30 | 2025-07-15T13:39:38 | null |
https://github.com/huggingface/datasets/issues/7680
| null | 7,680 | false |
[
"> If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n\nyes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\n> load_dataset(streaming=True) is useful for huge dataset, but the speed is slow. How to make it comparable to to_iterable_dataset without loading the whole dataset into RAM?\n\nYou can aim for saturating your bandwidth using a DataLoader with num_workers and prefetch_factor. The maximum speed will be your internet bandwidth (unless your CPU is a bottlenbeck for CPU operations like image decoding).",
"> > If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n> \n> yes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\nOkay, but `__getitem__` seems suitable for distributed settings. A distributed sampler would dispatch distinct indexes to each rank (rank0 got 0,1,2,3, rank1 got 4,5,6,7), however, if we make it `to_iterable_dataset`, then each rank needs to iterate all the samples, making it slower (i,e, rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nWhat's your opinion here?",
"> however, if we make it to_iterable_dataset, then each rank needs to iterate all the samples, making it slower (i,e, rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nActually if you specify `to_iterable_dataset(num_shards=world_size)` (or a factor of world_size) and use a `torch.utils.data.DataLoader` then each rank will get a subset of the data thanks to the sharding. E.g. rank0 gets 0,1,2,3 and rank1 gets 4,5,6,7.\n\nThis is because `datasets.IterableDataset` subclasses `torch.utils.data.IterableDataset` and is aware of the current rank.",
"Got it, very nice features `num_shards` 👍🏻 \n\nI would benchmark `to_iterable_dataset(num_shards=world_size)` against traditional map-style one in distributed settings in the near future."
] |
3,220,787,371 |
metric glue breaks with 4.0.0
|
closed
|
### Describe the bug
This worked fine with 3.6.0; with 4.0.0, `eval_metric = metric.compute()` in HF Accelerate breaks.
The code that fails is:
https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84
```
def simple_accuracy(preds, labels):
print(preds, labels)
print(f"{preds==labels}")
return float((preds == labels).mean())
```
data:
```
Column([1, 0, 0, 1, 1]) Column([1, 0, 0, 1, 0])
False
```
```
[rank0]: return float((preds == labels).mean())
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'bool' object has no attribute 'mean'
```
Some behavior has changed in this new major release of `datasets` and requires updating HF Accelerate and perhaps the GLUE metric code, all of which belong to HF.
### Environment info
datasets=4.0.0
| 2025-07-10T21:39:50 | 2025-07-11T17:42:01 | 2025-07-11T17:42:01 |
https://github.com/huggingface/datasets/issues/7679
| null | 7,679 | false |
[
"I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:\n\n```\npip install -U evaluate\n```",
"Thanks so much, @lhoestq!"
] |
3,218,625,544 |
To support decoding audio data, please install 'torchcodec'.
|
closed
|
In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook, but it works with version 3.6.0.
```python
!pip install -q -U datasets huggingface_hub fsspec
from datasets import load_dataset
downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_dataset["audio"][0])
```
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/tmp/ipython-input-4-90623240.py in <cell line: 0>()
----> 1 downloaded_dataset["audio"][0]
10 frames
/usr/local/lib/python3.11/dist-packages/datasets/features/audio.py in decode_example(self, value, token_per_repo_id)
    170 from ._torchcodec import AudioDecoder
    171 else:
--> 172 raise ImportError("To support decoding audio data, please install 'torchcodec'.")
    173
    174 if not self.decode:
ImportError: To support decoding audio data, please install 'torchcodec'.
```
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
| 2025-07-10T09:43:13 | 2025-07-22T03:46:52 | 2025-07-11T05:05:42 |
https://github.com/huggingface/datasets/issues/7678
| null | 7,678 | false |
[
"Hi ! yes you should `!pip install -U datasets[audio]` to have the required dependencies.\n\n`datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` AudioDecoder enables streaming from HF and also allows to decode ranges of audio",
"Same issues on Colab.\n\n> !pip install -U datasets[audio] \n\nThis works for me. Thanks."
] |
3,218,044,656 |
Toxicity fails with datasets 4.0.0
|
closed
|
### Describe the bug
With the latest 4.0.0 release, the Hugging Face toxicity evaluation module fails with the error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).`
### Steps to reproduce the bug
Repro:
```
>>> toxicity.compute(predictions=["This is a response"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/evaluate/module.py", line 467, in compute
output = self._compute(**inputs, **compute_kwargs)
File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 135, in _compute
scores = toxicity(predictions, self.toxic_classifier, toxic_label)
File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 103, in toxicity
for pred_toxic in toxic_classifier(preds):
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 159, in __call__
result = super().__call__(*inputs, **kwargs)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1431, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1437, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 183, in preprocess
return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2867, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2927, in _call_one
raise ValueError(
ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
### Expected behavior
This works before 4.0.0 release
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.10.16
- `huggingface_hub` version: 0.33.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| 2025-07-10T06:15:22 | 2025-07-11T04:40:59 | 2025-07-11T04:40:59 |
https://github.com/huggingface/datasets/issues/7677
| null | 7,677 | false |
[
"Hi ! You can fix this by upgrading `evaluate`:\n\n```\npip install -U evaluate\n```",
"Thanks, verified evaluate 0.4.5 works!"
] |
3,216,857,559 |
Many things broken since the new 4.0.0 release
|
open
|
### Describe the bug
The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness.
I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting:
``` Python
File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in generate_from_dict(obj)
1471 class_type = _FEATURE_TYPES.get(_type, None) or globals().get(_type, None)
1473 if class_type is None:
-> 1474 raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
1476 if class_type == LargeList:
1477 feature = obj.pop("feature")
ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
### Steps to reproduce the bug
``` Python
import lm_eval
model_eval = lm_eval.models.huggingface.HFLM(pretrained=model, tokenizer=tokenizer)
lm_eval.evaluator.simple_evaluate(model_eval, tasks=["winogrande"], num_fewshot=5, batch_size=1)
```
### Expected behavior
Older `datasets` versions should work just fine as before
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
| 2025-07-09T18:59:50 | 2025-07-21T10:38:01 | null |
https://github.com/huggingface/datasets/issues/7676
| null | 7,676 | false |
[
"Happy to take a look, do you have a list of impacted datasets ?",
"Thanks @lhoestq , related to lm-eval, at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like <a href=\"https://huggingface.co/datasets/lukaemon/bbh\">bbh</a>, most probably others too. ",
"Hi @mobicham ,\n\nI was having the same issue `ValueError: Feature type 'List' not found` yesterday, when I tried to load my dataset using the `load_dataset()` function.\nBy updating to `4.0.0`, I don't see this error anymore.\n\np.s. I used `Sequence` in replace of list when building my dataset (see below)\n```\nfeatures = Features({\n ...\n \"objects\": Sequence({\n \"id\": Value(\"int64\"),\n \"bbox\": Sequence(Value(\"float32\"), length=4),\n \"category\": Value(\"string\")\n }),\n ...\n})\ndataset = Dataset.from_dict(data_dict)\ndataset = dataset.cast(features)\n\n``` \n",
"The issue comes from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train), [allenai/winogrande](https://huggingface.co/datasets/allenai/winogrande), [lukaemon/bbh](https://huggingface.co/datasets/lukaemon/bbh) and [Rowan/hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) which are all unsupported in `datasets` 4.0 since they are based on python scripts. Fortunately there are PRs to fix those datasets (I did some of them a year ago but dataset authors haven't merged yet... will have to ping people again about it and update here):\n\n- https://huggingface.co/datasets/hails/mmlu_no_train/discussions/2 merged ! ✅ \n- https://huggingface.co/datasets/allenai/winogrande/discussions/6 merged ! ✅ \n- https://huggingface.co/datasets/Rowan/hellaswag/discussions/7 merged ! ✅ \n- https://huggingface.co/datasets/lukaemon/bbh/discussions/2 merged ! ✅ ",
"Thank you very much @lhoestq , I will try next week 👍 ",
"I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both dataset saving code and the loading code are <4.0.0 or >=4.0.0.",
"This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?",
"> I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both dataset saving code and the loading code are <4.0.0 or >=4.0.0.\n\n`datasets` 4.0 can load datasets saved using any older version. But the other way around is not always true: if you save a dataset with `datasets` 4.0 it may use the new `List` type that requires 4.0 and raise `ValueError: Feature type 'List' not found.`\n\nHowever issues with lm eval harness seem to come from another issue: unsupported dataset scripts (see https://github.com/huggingface/datasets/issues/7676#issuecomment-3057550659)\n\n> This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?\n\nwhen reverting to an old `datasets` version I'd encourage you to clear your cache (by default it is located at `~/.cache/huggingface/datasets`) otherwise it might try to load a `List` type that didn't exist in old versions",
"All the impacted datasets in lm eval harness have been fixed thanks to the reactivity of dataset authors ! let me know if you encounter issues with other datasets :)",
"Hello folks, I have found `patrickvonplaten/librispeech_asr_dummy` to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?",
"https://huggingface.co/datasets/microsoft/prototypical-hai-collaborations seems to be impacted as well.\n\n```\n_temp = load_dataset(\"microsoft/prototypical-hai-collaborations\", \"wildchat1m_en3u-task_anns\")\n``` \nleads to \n`ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']`",
"`microsoft/prototypical-hai-collaborations` is not impacted, you can load it using both `datasets` 3.6 and 4.0. I also tried on colab to confirm.\n\nOne thing that could explain `ValueError: Feature type 'List' not found.` is maybe if you have loaded and cached this dataset with `datasets` 4.0 and then tried to reload it from cache using 3.6.0.\n\nEDIT: actually I tried and 3.6 can reload datasets cached with 4.0 so I'm not sure why you have this error. Which version of `datasets` are you using ?",
"> Hello folks, I have found patrickvonplaten/librispeech_asr_dummy to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?\n\nI guess you can use [hf-internal-testing/librispeech_asr_dummy](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy) instead of `patrickvonplaten/librispeech_asr_dummy`, or ask the dataset author to convert their dataset to Parquet"
] |
3,216,699,094 |
common_voice_11_0.py failure in dataset library
|
open
|
### Describe the bug
I tried to download the dataset but got this error:
```python
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[8], line 4
1 from datasets import load_dataset
----> 4 load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> 1031 raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...)
987 proxies=download_config.proxies,
988 )
--> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
```
### Expected behavior
It's supposed to download this dataset.
### Environment info
Python 3.12 , Windows 11
| 2025-07-09T17:47:59 | 2025-07-22T09:35:42 | null |
https://github.com/huggingface/datasets/issues/7675
| null | 7,675 | false |
[
"Hi ! This dataset is not in a supported format and `datasets` 4 doesn't support datasets that based on python scripts which are often source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussions, e.g. parquet.\n\nIn the meantime you can pin old versions of `datasets` like `datasets==3.6.0`",
"Thanks @lhoestq! I encountered the same issue and switching to an older version of `datasets` worked.",
">which version of datasets worked for you, I tried switching to 4.6.0 and also moved back for fsspec, but still facing issues for this.\n\n",
"Try datasets<=3.6.0",
"same issue "
] |
3,216,251,069 |
set dev version
|
closed
| null | 2025-07-09T15:01:25 | 2025-07-09T15:04:01 | 2025-07-09T15:01:33 |
https://github.com/huggingface/datasets/pull/7674
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7674",
"html_url": "https://github.com/huggingface/datasets/pull/7674",
"diff_url": "https://github.com/huggingface/datasets/pull/7674.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7674.patch",
"merged_at": "2025-07-09T15:01:33"
}
| 7,674 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7674). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,216,075,633 |
Release: 4.0.0
|
closed
| null | 2025-07-09T14:03:16 | 2025-07-09T14:36:19 | 2025-07-09T14:36:18 |
https://github.com/huggingface/datasets/pull/7673
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7673",
"html_url": "https://github.com/huggingface/datasets/pull/7673",
"diff_url": "https://github.com/huggingface/datasets/pull/7673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7673.patch",
"merged_at": "2025-07-09T14:36:18"
}
| 7,673 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,215,287,164 |
Fix double sequence
|
closed
|
```python
>>> Features({"a": Sequence(Sequence({"c": Value("int64")}))})
{'a': List({'c': List(Value('int64'))})}
```
instead of `{'a': {'c': List(List(Value('int64')))}}`
| 2025-07-09T09:53:39 | 2025-07-09T09:56:29 | 2025-07-09T09:56:28 |
https://github.com/huggingface/datasets/pull/7672
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7672",
"html_url": "https://github.com/huggingface/datasets/pull/7672",
"diff_url": "https://github.com/huggingface/datasets/pull/7672.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7672.patch",
"merged_at": "2025-07-09T09:56:27"
}
| 7,672 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,213,223,886 |
Mapping function not working if the first example is returned as None
|
closed
|
### Describe the bug
https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
Here we can see the writer is initialized on `i == 0`. However, there can be cases where, in the user mapping function, the first example is filtered out (length constraints, etc.).
In this case, the writer would still be `None` and the code will report `NoneType has no write function`.
A simple fix is available: change line 3652 from `if i == 0:` to `if writer is None:`
### Steps to reproduce the bug
Prepare a dataset and use a mapping function like this:
```
import datasets
def make_map_fn(split, max_prompt_tokens=3):
def process_fn(example, idx):
question = example['question']
reasoning_steps = example['reasoning_steps']
label = example['label']
answer_format = ""
for i in range(len(reasoning_steps)):
system_message = "Dummy"
all_steps_formatted = []
content = f"""Dummy"""
prompt = [
{"role": "system", "content": system_message},
{"role": "user", "content": content},
]
tokenized = tokenizer.apply_chat_template(prompt, return_tensors="pt", truncation=False)
if tokenized.shape[1] > max_prompt_tokens:
return None # skip overly long examples
data = {
"dummy": "dummy"
}
return data
return process_fn
...
# load your dataset
...
train = train.map(function=make_map_fn('train'), with_indices=True)
```
### Expected behavior
The dataset mapping should still work even when the first example is filtered out.
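For reference, a sketch of a map-then-filter pattern that keeps `map()` total instead of returning `None` (illustrative names, not the real tokenizer logic):
```python
MAX_PROMPT_TOKENS = 3  # mirrors max_prompt_tokens above, illustrative
def add_prompt_length(example):
    # placeholder length proxy instead of tokenizer.apply_chat_template
    example["n_prompt_tokens"] = len(example["question"].split())
    return example
train = train.map(add_prompt_length)
train = train.filter(lambda example: example["n_prompt_tokens"] <= MAX_PROMPT_TOKENS)
```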
### Environment info
I am using `datasets==3.6.0` but I have observed this issue in the github repo too: https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
| 2025-07-08T17:07:47 | 2025-07-09T12:30:32 | 2025-07-09T12:30:32 |
https://github.com/huggingface/datasets/issues/7671
| null | 7,671 | false |
[
"Hi, map() always expect an output.\n\nIf you wish to filter examples, you should use filter(), in your case it could be something like this:\n\n```python\nds = ds.map(my_processing_function).filter(ignore_long_prompts)\n```",
"Realized this! Thanks a lot, I will close this issue then."
] |
3,208,962,372 |
Fix audio bytes
|
closed
| null | 2025-07-07T13:05:15 | 2025-07-07T13:07:47 | 2025-07-07T13:05:33 |
https://github.com/huggingface/datasets/pull/7670
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7670",
"html_url": "https://github.com/huggingface/datasets/pull/7670",
"diff_url": "https://github.com/huggingface/datasets/pull/7670.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7670.patch",
"merged_at": "2025-07-07T13:05:33"
}
| 7,670 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7670). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,203,541,091 |
How can I add my custom data to huggingface datasets
|
open
|
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
| 2025-07-04T19:19:54 | 2025-07-05T18:19:37 | null |
https://github.com/huggingface/datasets/issues/7669
| null | 7,669 | false |
[
"Hey @xiagod \n\nThe easiest way to add your custom data to Hugging Face Datasets is to use the built-in load_dataset function with your local files. Some examples include:\n\nCSV files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\")\n\nJSON or JSONL files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"json\", data_files=\"my_file.json\")\n\n\nImages stored in folders (e.g. data/train/cat/, data/train/dog/):\nfrom datasets import load_dataset\ndataset = load_dataset(\"imagefolder\", data_dir=\"/path/to/pokemon\")\n\n\nThese methods let you quickly create a custom dataset without needing to write a full script.\n\nMore information can be found in Hugging Face's tutorial \"Create a dataset\" or \"Load\" documentation here: \n\nhttps://huggingface.co/docs/datasets/create_dataset \n\nhttps://huggingface.co/docs/datasets/loading#local-and-remote-files\n\n\n\nIf you want to submit your dataset to the Hugging Face Datasets GitHub repo so others can load it follow this guide: \n\nhttps://huggingface.co/docs/datasets/upload_dataset \n\n\n"
] |
3,199,039,322 |
Broken EXIF crash the whole program
|
open
|
### Describe the bug
When parsing this image in the ImageNet1K dataset, `datasets` crashes the whole training process just because it is unable to parse an invalid EXIF tag.

### Steps to reproduce the bug
Using the `datasets.Image.decode_example` method to decode the aforementioned image reproduces the bug.
The decoding function throws an unhandled exception at the `image.getexif()` method call due to an invalid UTF-8 stream in the EXIF tags.
```
File lib/python3.12/site-packages/datasets/features/image.py:188, in Image.decode_example(self, value, token_per_repo_id)
186 image = PIL.Image.open(BytesIO(bytes_))
187 image.load() # to avoid "Too many open files" errors
--> 188 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
189 image = PIL.ImageOps.exif_transpose(image)
190 if self.mode and self.mode != image.mode:
File lib/python3.12/site-packages/PIL/Image.py:1542, in Image.getexif(self)
1540 xmp_tags = self.info.get("XML:com.adobe.xmp")
1541 if not xmp_tags and (xmp_tags := self.info.get("xmp")):
-> 1542 xmp_tags = xmp_tags.decode("utf-8")
1543 if xmp_tags:
1544 match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 4312: invalid start byte
```
### Expected behavior
The invalid EXIF tag should simply be ignored or trigger a warning message, instead of crashing the whole program.
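A minimal defensive sketch of what that could look like (an assumption on my side, not the library's actual code beyond what the traceback shows):
```python
from io import BytesIO
from PIL import ExifTags, Image, ImageOps
def decode_with_broken_exif(bytes_: bytes):
    image = Image.open(BytesIO(bytes_))
    image.load()
    try:
        if image.getexif().get(ExifTags.Base.Orientation) is not None:
            image = ImageOps.exif_transpose(image)
    except (UnicodeDecodeError, SyntaxError):
        pass  # broken EXIF/XMP metadata: keep the image as-is instead of crashing
    return image
```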
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.0
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2025.3.0
| 2025-07-03T11:24:15 | 2025-07-03T12:27:16 | null |
https://github.com/huggingface/datasets/issues/7668
| null | 7,668 | false |
[
"There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)"
] |
3,196,251,707 |
Fix infer list of images
|
closed
|
cc @kashif
| 2025-07-02T15:07:58 | 2025-07-02T15:10:28 | 2025-07-02T15:08:03 |
https://github.com/huggingface/datasets/pull/7667
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7667",
"html_url": "https://github.com/huggingface/datasets/pull/7667",
"diff_url": "https://github.com/huggingface/datasets/pull/7667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7667.patch",
"merged_at": "2025-07-02T15:08:03"
}
| 7,667 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7667). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,196,220,722 |
Backward compat list feature
|
closed
|
cc @kashif
| 2025-07-02T14:58:00 | 2025-07-02T15:00:37 | 2025-07-02T14:59:40 |
https://github.com/huggingface/datasets/pull/7666
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7666",
"html_url": "https://github.com/huggingface/datasets/pull/7666",
"diff_url": "https://github.com/huggingface/datasets/pull/7666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7666.patch",
"merged_at": "2025-07-02T14:59:40"
}
| 7,666 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,193,239,955 |
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
|
closed
|
### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4"
```
As a result, I got an exception
```
"TypeError: Couldn't cast array of type timestamp[s] to null".
```
Full stack-trace in the attached file below.
I also attach a minimized dataset (data.json, a single entry) that reproduces the error.
**Observations**(on the minimal example):
- if I remove _all fields before_ `body`, a different error appears,
- if I remove _all fields after_ `body`, yet another error appears,
- if `body` is _the only field_, the error disappears.
So this might be one complex bug or several edge cases interacting. I haven’t dug deeper.
Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified correctness of that workaround yet.
Anyway my understanding is that `load_dataset` with first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong.
[stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt)
[data.json](https://github.com/user-attachments/files/21004164/data.json)
P.S.
I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt).I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts.
### Steps to reproduce the bug
1. Download attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file.
2. Run the following code which should work correctly:
```
from datasets import load_dataset
load_dataset("json", data_files="data.json", split="train")
```
3. Change extension of the `data` file to `.jsonl` and run:
```
from datasets import load_dataset
load_dataset("json", data_files="data.jsonl", split="train")
```
This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt).
One can also try removing fields before the `body` field and after it. These actions give different errors.
### Expected behavior
Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema.
### Environment info
datasets version: _3.6.0_
pyarrow version: _20.0.0_
Python version: _3.11.9_
platform version: _macOS-15.5-arm64-arm-64bit_
| 2025-07-01T17:14:53 | 2025-07-01T17:17:48 | 2025-07-01T17:17:48 |
https://github.com/huggingface/datasets/issues/7665
| null | 7,665 | false |
[
"Somehow I created the issue twice🙈 This one is an exact duplicate of #7664."
] |
3,193,239,035 |
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
|
open
|
### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4"
```
As a result, I got an exception
```
"TypeError: Couldn't cast array of type timestamp[s] to null".
```
Full stack-trace in the attached file below.
I also attach a minimized dataset (data.json, a single entry) that reproduces the error.
**Observations**(on the minimal example):
- if I remove _all fields before_ `body`, a different error appears,
- if I remove _all fields after_ `body`, yet another error appears,
- if `body` is _the only field_, the error disappears.
So this might be one complex bug or several edge cases interacting. I haven’t dug deeper.
Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified correctness of that workaround yet.
Anyway my understanding is that `load_dataset` with first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong.
[stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt)
[data.json](https://github.com/user-attachments/files/21004164/data.json)
P.S.
I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts.
### Steps to reproduce the bug
1. Download attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file.
2. Run the following code which should work correctly:
```
from datasets import load_dataset
load_dataset("json", data_files="data.json", split="train")
```
3. Change extension of the `data` file to `.jsonl` and run:
```
from datasets import load_dataset
load_dataset("json", data_files="data.jsonl", split="train")
```
This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt).
One can also try removing fields before the `body` field and after it. These actions give different errors.
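A small helper sketch to locate the records whose `body` contains the schema-like text (purely for triage; field name as described above):
```python
import json
with open("data.jsonl") as f:
    for i, line in enumerate(f):
        if "timestamp[s]" in line:
            print(i, json.loads(line).get("body", "")[:80])
```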
### Expected behavior
Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema.
### Environment info
datasets version: _3.6.0_
pyarrow version: _20.0.0_
Python version: _3.11.9_
platform version: _macOS-15.5-arm64-arm-64bit_
| 2025-07-01T17:14:32 | 2025-07-09T13:14:11 | null |
https://github.com/huggingface/datasets/issues/7664
| null | 7,664 | false |
[
"Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stacktrace? (I noticed in the stacktrace you had a bigger program, perhaps there are some side effects)",
"Hi @zdzichukowalski, thanks for reporting this!\n\nTo help investigate this further, could you please share the following:\n\nExact contents of the data.jsonl file you're using — especially the first few lines that trigger the error.\n\nThe full code snippet you used to run load_dataset(), along with any environment setup (if not already shared).\n\nCan you confirm whether the issue persists when running in a clean virtual environment (e.g., with only datasets, pyarrow, and their dependencies)?\n\nIf possible, could you try running the same with an explicit features schema, like:\n\n```\nfrom datasets import load_dataset, Features, Value\nfeatures = Features({\"body\": Value(\"string\")})\nds = load_dataset(\"json\", data_files=\"data.jsonl\", split=\"train\", features=features)\n```\nAlso, just to clarify — does the \"body\" field contain plain string content, or is it sometimes being parsed from multi-line or structured inputs (like embedded JSON or CSV-like text)?\n\nOnce we have this info, we can check whether this is a schema inference issue, a PyArrow type coercion bug, or something else.",
"Ok I can confirm that I also cannot reproduce the error in a clean environment with the minimized version of the dataset that I provided. Same story for the old environment. Nonetheless the bug still happens in the new environment with the full version of the dataset, which I am providing now. Please let me know if now you can reproduce the problem.\n\nAdditionally I'm attaching result of the `pip freeze` command.\n\n[datasets-issues.jsonl.zip](https://github.com/user-attachments/files/21081755/datasets-issues.jsonl.zip)\n[requirements.txt](https://github.com/user-attachments/files/21081776/requirements.txt)\n\n@ArjunJagdale running with explicit script gives the following stack:\n[stack_features_version.txt](https://github.com/user-attachments/files/21082056/stack_features_version.txt)\n\nThe problematic `body` field seems to be e.g. content of [this comment](https://github.com/huggingface/datasets/issues/5596#issue-1604919993) from Github in which someone provided a stack trace containing json structure ;) I would say that it is intended to be a plain string. \n\nTo find a part that triggers an error, simply search for the \"timestamp[s]\" in the dataset. There are few such entries.\n\nI think I provided all the information you asked. \n\nOh, and workaround I suggested, that is convert `.jsonl` to `.json` worked for me.\n\nP.S\n1. @itsmejul the stack trace I provided is coming from running the two-liner script that I attached. There is no bigger program, although there were some jupiter files alongside the script, which were run in the same env. I am not sure what part of the stack trace suggests that there is something more ;) \n\n2. Is it possible that on some layer in the python/env/jupiter there is some caching mechanism for files that would give false results for my minimized version of the dataset file? There is of course possibility that I made a mistake and run the script with the wrong file, but I double and triple checked things before creating an issue. Earlier I wrote that \"(...) changing the file extension to `.json` or `.txt` avoids the problem\". But with the full version this is not true(when I change to `txt`), and minimized version always works. So it looks like that when I changed the extension to e.g. `txt` then a minimized file loaded from the disk and it was parsed correctly, but every time when I changed back to `jsonl` my script must have used an original content of the file - the one before I made a minimization. But this is still all strange because I even removed the fields before and after the body from my minimized `jsonl` and there were some different errors(I mention it in my original post), so I do not get why today I cannot reproduce it in the original env... \n\n",
"Hi @zdzichukowalski, thanks again for the detailed info and files!\n\nI’ve reviewed the `datasets-issues.jsonl` you shared, and I can now confirm the issue with full clarity:\n\nSome entries in the `\"body\"` field contain string content that resembles schema definitions — for example:\n\n```\nstruct<type: string, action: string, datetime: timestamp[s], ...>\n```\n\nThese strings appear to be copied from GitHub comments or stack traces (e.g., from #5596)\n\nWhen using the `.jsonl` format, `load_dataset()` relies on row-wise schema inference via PyArrow. If some rows contain real structured fields like `pull_request.merged_at` (a valid timestamp), and others contain schema-like text inside string fields, PyArrow can get confused while unifying the schema — leading to cast errors.\n\nThat’s why:\n\n* Using a reduced schema like `features={\"body\": Value(\"string\")}` fails — because the full table has many more fields.\n* Converting the file to `.json` (a list of objects) works — because global schema inference kicks in.\n* Filtering the dataset to only the `body` field avoids the issue entirely.\n\n### Suggested Workarounds\n\n* Convert the `.jsonl` file to `.json` to enable global schema inference.\n* Or, preprocess the `.jsonl` file to extract only the `\"body\"` field if that’s all you need.",
"So in summary should we treat it as a low severity bug in `PyArrow`, in `Datasets` library, or as a proper behavior and do nothing with it?",
"You are right actually! I’d also categorize this as a low-severity schema inference edge case, mainly stemming from PyArrow, but exposed by how datasets handles .jsonl inputs.\n\nIt's not a bug in datasets per se, but confusing when string fields (like body) contain text that resembles schema — e.g., \"timestamp[s]\".\n\nMaybe @lhoestq — could this be considered as a small feature/improvement?"
] |
3,192,582,371 |
Custom metadata filenames
|
closed
|
example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main
To make multiple subsets for an imagefolder (one metadata file per subset), e.g.
```yaml
configs:
- config_name: default
metadata_filenames:
- metadata.csv
- config_name: other
metadata_filenames:
- metadata2.csv
```
| 2025-07-01T13:50:36 | 2025-07-01T13:58:41 | 2025-07-01T13:58:39 |
https://github.com/huggingface/datasets/pull/7663
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7663",
"html_url": "https://github.com/huggingface/datasets/pull/7663",
"diff_url": "https://github.com/huggingface/datasets/pull/7663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7663.patch",
"merged_at": "2025-07-01T13:58:39"
}
| 7,663 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7663). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,190,805,531 |
Applying map after transform with multiprocessing will cause OOM
|
open
|
### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I found that the OOM is caused at this point, and I suspect it’s because the add_column and cast_column operations are not cached, which causes the entire dataset to be loaded in each subprocess, leading to the OOM. The critical line of code is: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/py_utils.py#L607
Note that num_proc=1 does not cause OOM, which is confusing.
### Steps to reproduce the bug
To reproduce, load the amphion/Emilia-Dataset dataset with a cache_dir set (for caching); it is a very large dataset that does not fit in RAM.
Then apply map with multiprocessing after a transform operation (e.g. add_column, cast_column).
As long as num_proc > 1, it causes OOM.
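A minimal sketch of these steps (the split, column names and added values are assumptions for illustration, not from the original report):
```python
from datasets import load_dataset, Audio

ds = load_dataset("amphion/Emilia-Dataset", split="train", cache_dir="/path/to/cache")

# transform operations applied before the multiprocessing map
ds = ds.add_column("sample_id", list(range(len(ds))))      # add_column materializes the whole column
ds = ds.cast_column("audio", Audio(sampling_rate=24000))

# OOM is reported here as soon as num_proc > 1
ds = ds.map(lambda batch: batch, batched=True, num_proc=8)
```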
### Expected behavior
It should not cause OOM.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.10.134-16.101.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2024.6.1
| 2025-07-01T05:45:57 | 2025-07-10T06:17:40 | null |
https://github.com/huggingface/datasets/issues/7662
| null | 7,662 | false |
[
"Hi ! `add_column` loads the full column data in memory:\n\nhttps://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021\n\na workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time",
"> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n> column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nHow about cast_column,since map cannot apply type transformation, e.g. Audio(16000) to Audio(24000)",
"cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n\ncasting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same",
"> cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n> \n> casting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same\n\nThanks for replying. So the OOM is caused by add_column operation. When I skip the operation, low memory will be achieved. Right?",
"> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n> column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nNote num_process=1 would not cause OOM. I'm confused.\n\n"
] |
3,190,408,237 |
fix del tqdm lock error
|
open
|
fixes https://github.com/huggingface/datasets/issues/7660
| 2025-07-01T02:04:02 | 2025-07-08T01:38:46 | null |
https://github.com/huggingface/datasets/pull/7661
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7661",
"html_url": "https://github.com/huggingface/datasets/pull/7661",
"diff_url": "https://github.com/huggingface/datasets/pull/7661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7661.patch",
"merged_at": null
}
| 7,661 | true |
[] |
3,189,028,251 |
AttributeError: type object 'tqdm' has no attribute '_lock'
|
open
|
### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in a thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to fix this.
### Steps to reproduce the bug
You may have to try several times to reproduce the error because it depends on thread timing.
1. Save some datasets for testing
```python
from datasets import Dataset, DatasetDict
import os
os.makedirs("test_dataset_shards", exist_ok=True)
for i in range(10):
data = Dataset.from_dict({"text": [f"example {j}" for j in range(1000000)]})
data = DatasetDict({'train': data})
data.save_to_disk(f"test_dataset_shards/shard_{i}")
```
2. load them in a thread pool
```python
from datasets import load_from_disk
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import glob
datas = glob.glob('test_dataset_shards/shard_*')
with ThreadPoolExecutor(max_workers=10) as pool:
futures = [pool.submit(load_from_disk, it) for it in datas]
datas = []
for future in tqdm(as_completed(futures), total=len(futures)):
datas.append(future.result())
```
### Expected behavior
no exception raised
### Environment info
datasets==2.19.0
python==3.10
| 2025-06-30T15:57:16 | 2025-07-03T15:14:27 | null |
https://github.com/huggingface/datasets/issues/7660
| null | 7,660 | false |
[
"Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n if attr != '_lock':\n print(attr)\n raise\n\nclass Meta(type):\n def __delattr__(cls, name):\n if name == \"_lock\":\n return \n return super().__delattr__(name)\n \nclass tqdm2(old_tqdm, metaclass=Meta):\n pass\n\ndel tqdm2._lock\ndel tqdm1._lock # error\n```\n\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/tqdm.py#L104-L122",
"A cheaper option (seems to work in my case): \n```python\nfrom datasets import tqdm as hf_tqdm\nhf_tqdm.set_lock(hf_tqdm.get_lock())\n```"
] |
3,187,882,217 |
Update the beans dataset link in Preprocess
|
closed
|
In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed.
| 2025-06-30T09:58:44 | 2025-07-07T08:38:19 | 2025-07-01T14:01:42 |
https://github.com/huggingface/datasets/pull/7659
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7659",
"html_url": "https://github.com/huggingface/datasets/pull/7659",
"diff_url": "https://github.com/huggingface/datasets/pull/7659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7659.patch",
"merged_at": "2025-07-01T14:01:42"
}
| 7,659 | true |
[] |
3,187,800,504 |
Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None
|
closed
|
This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_names`.
Why
Previously, the code would always set `info.features = features`, even if `features` was `None`. When mapping with removal of columns or other transformations, this led to the destruction of the schema and caused failures in code that relied on the dataset schema being present.
How
We now update `info.features` only if `features` is not `None`. This preserves the original schema unless the user explicitly provides a new one.
Reference
Fixes #7568
| 2025-06-30T09:31:12 | 2025-07-01T16:26:30 | 2025-07-01T16:26:12 |
https://github.com/huggingface/datasets/pull/7658
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7658",
"html_url": "https://github.com/huggingface/datasets/pull/7658",
"diff_url": "https://github.com/huggingface/datasets/pull/7658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7658.patch",
"merged_at": null
}
| 7,658 | true |
[
"Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!",
"we can't know in advance the `features` after map() (it transforms the data !), so you can reuse the `features` from `info.features`",
"I'll the patch as suggested — `info.features = features` or `self.info.features` — to ensure schema preservation while keeping the logic simple and explicit. WDYT?\r\n",
"info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n\r\nhttps://github.com/huggingface/datasets/issues/7568 is not an issue we can fix",
"> info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n> \r\n> #7568 is not an issue we can fix\r\n\r\nThanks for the clarification! Totally makes sense now — I understand that features=None is the expected behavior post-map() unless explicitly passed, and that preserving old schema by default could lead to incorrect assumptions.\r\nClosing this one — appreciate the feedback as always"
] |
3,186,036,016 |
feat: add subset_name as alias for name in load_dataset
|
open
|
fixes #7637
This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users.
Supports `subset_name` in `load_dataset()`
Adds `.subset_name` property to DatasetBuilder
Maintains full backward compatibility
Raises clear error if name and `subset_name` conflict
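A usage sketch of the proposed alias (dataset and subset names are just examples):
```python
from datasets import load_dataset

# existing behavior, unchanged
ds = load_dataset("nyu-mll/glue", name="mrpc", split="train")

# new alias introduced by this PR; mirrors the "Subset" wording on the Hub UI
ds = load_dataset("nyu-mll/glue", subset_name="mrpc", split="train")

# passing both `name` and `subset_name` with different values raises a clear error
```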
| 2025-06-29T10:39:00 | 2025-07-18T17:45:41 | null |
https://github.com/huggingface/datasets/pull/7657
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7657",
"html_url": "https://github.com/huggingface/datasets/pull/7657",
"diff_url": "https://github.com/huggingface/datasets/pull/7657.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7657.patch",
"merged_at": null
}
| 7,657 | true |
[] |
3,185,865,686 |
fix(iterable): ensure MappedExamplesIterable supports state_dict for resume
|
open
|
Fixes #7630
### Problem
When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable.
### What This PR Does
This patch adds:
```python
def state_dict(self):
return self.ex_iterable.state_dict()
def load_state_dict(self, state):
self.ex_iterable.load_state_dict(state)
```
to MappedExamplesIterable, so the wrapped base iterable's state can be saved and restored as expected.
### Result
Using .map() no longer causes sample skipping after checkpoint resume.
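A small resume sketch of the workflow this patch fixes (toy data; the map function is illustrative only):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(100))}).to_iterable_dataset(num_shards=4)
ds = ds.map(lambda x: {"a2": x["a"] * 2})

state = None
for idx, example in enumerate(ds):
    if idx == 9:
        state = ds.state_dict()  # checkpoint after consuming 10 examples
        break

ds.load_state_dict(state)        # with this patch, resuming no longer skips samples
```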
Let me know if a dedicated test case is required — happy to add one!
| 2025-06-29T07:50:13 | 2025-06-29T07:50:13 | null |
https://github.com/huggingface/datasets/pull/7656
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7656",
"html_url": "https://github.com/huggingface/datasets/pull/7656",
"diff_url": "https://github.com/huggingface/datasets/pull/7656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7656.patch",
"merged_at": null
}
| 7,656 | true |
[] |
3,185,382,105 |
Added specific use cases in Improve Performace
|
open
|
Fixes #2494
| 2025-06-28T19:00:32 | 2025-06-28T19:00:32 | null |
https://github.com/huggingface/datasets/pull/7655
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7655",
"html_url": "https://github.com/huggingface/datasets/pull/7655",
"diff_url": "https://github.com/huggingface/datasets/pull/7655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7655.patch",
"merged_at": null
}
| 7,655 | true |
[] |
3,184,770,992 |
fix(load): strip deprecated use_auth_token from config_kwargs
|
open
|
Fixes #7504
This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`.
**What was happening:**
Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
**Why:**
`use_auth_token` has been deprecated and removed from config definitions (replaced by `token`), but the `load_dataset()` function still forwarded it via `**config_kwargs` to BuilderConfigs, leading to unrecognized key errors.
**Fix:**
We now intercept and strip `use_auth_token` from `config_kwargs` inside `load_dataset`, replacing it with a warning:
```python
if "use_auth_token" in config_kwargs:
logger.warning("The 'use_auth_token' argument is deprecated. Please use 'token' instead.")
config_kwargs.pop("use_auth_token")
```
This ensures legacy compatibility while guiding users to switch to the token argument.
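With this change, a legacy call like the following (the repo id is a placeholder) only emits the warning instead of failing:
```python
from datasets import load_dataset

# previously raised: ValueError: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key
ds = load_dataset("username/private-parquet-dataset", use_auth_token=True)

# preferred, non-deprecated form
ds = load_dataset("username/private-parquet-dataset", token=True)
```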
Let me know if you'd prefer a deprecation error instead of a warning. Thanks!
| 2025-06-28T09:20:21 | 2025-06-28T09:20:21 | null |
https://github.com/huggingface/datasets/pull/7654
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7654",
"html_url": "https://github.com/huggingface/datasets/pull/7654",
"diff_url": "https://github.com/huggingface/datasets/pull/7654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7654.patch",
"merged_at": null
}
| 7,654 | true |
[] |
3,184,746,093 |
feat(load): fallback to `load_from_disk()` when loading a saved dataset directory
|
open
|
### Related Issue
Fixes #7503
Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets.
---
### What does this PR do?
This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `path` points to a dataset saved using `save_to_disk()`, and automatically redirects to `load_from_disk()`.
#### 🐛 Before (unexpected metadata-only rows):
```python
ds = load_dataset("/path/to/saved_dataset")
# → returns rows with only internal metadata (_data_files, _fingerprint, etc.)
```
#### ✅ After (graceful fallback):
```python
ds = load_dataset("/path/to/saved_dataset")
# → logs a warning and internally switches to load_from_disk()
```
---
### Why is this useful?
* Prevents confusion when reloading local datasets saved via `save_to_disk()`.
* Enables smoother compatibility with frameworks (e.g., TRL, `lighteval`) that rely on `load_dataset()` calls.
* Fully backward-compatible — hub-based loading, custom builders, and streaming remain untouched.
| 2025-06-28T08:47:36 | 2025-06-28T08:47:36 | null |
https://github.com/huggingface/datasets/pull/7653
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7653",
"html_url": "https://github.com/huggingface/datasets/pull/7653",
"diff_url": "https://github.com/huggingface/datasets/pull/7653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7653.patch",
"merged_at": null
}
| 7,653 | true |
[] |
3,183,372,055 |
Add columns support to JSON loader for selective key filtering
|
open
|
Fixes #7594
This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files — similar to how the columns=... argument works for Parquet.
As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.
### Example:
```python
from datasets import load_dataset
dataset = load_dataset("json", data_files="your_data.jsonl", columns=["id", "title"])
print(dataset["train"].column_names)
# Output: ['id', 'title']
```
### Summary of changes:
* Added `columns: Optional[List[str]]` to `JsonConfig`
* Updated `_generate_tables()` to filter selected columns
* Forwarded `columns` argument from `load_dataset()` to the config
* Added test for validation(should be fine!)
Let me know if you'd like the same to be added for CSV or others as a follow-up — happy to help.
| 2025-06-27T16:18:42 | 2025-07-14T10:41:53 | null |
https://github.com/huggingface/datasets/pull/7652
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7652",
"html_url": "https://github.com/huggingface/datasets/pull/7652",
"diff_url": "https://github.com/huggingface/datasets/pull/7652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7652.patch",
"merged_at": null
}
| 7,652 | true |
[
"I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.",
"> I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.\r\n\r\nHi @aihao2000, Just to confirm — I have done the changes you asked for!\r\nIf you pass columns=[\"key1\", \"key2\", \"optional_key\"] to load_dataset(..., columns=...), and any of those keys are missing from the input JSON objects, the loader will automatically fill those columns with None values, instead of raising an error.",
"Hi! any update on this PR?"
] |
3,182,792,775 |
fix: Extended metadata file names for folder_based_builder
|
open
|
Fixes #7650.
The metadata files generated by the `DatasetDict.save_to_disk` function are not included in the folder_based_builder's metadata list, causing issues when only 1 actual data file is present, as described in issue #7650.
This PR adds these filenames to the builder, allowing correct loading.
| 2025-06-27T13:12:11 | 2025-06-30T08:19:37 | null |
https://github.com/huggingface/datasets/pull/7651
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7651",
"html_url": "https://github.com/huggingface/datasets/pull/7651",
"diff_url": "https://github.com/huggingface/datasets/pull/7651.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7651.patch",
"merged_at": null
}
| 7,651 | true |
[] |
3,182,745,315 |
`load_dataset` defaults to json file format for datasets with 1 shard
|
open
|
### Describe the bug
I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation split is small enough to fit into a single shard, and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for streaming, and then loaded each dataset. I have no problem loading any of the other datasets with more than 1 arrow file/shard.
The error indicates the training set got loaded in arrow format (correct) and the validation set in json (incorrect). This seems to be because some of the metadata files are treated as dataset files.
```
Error loading /nfs/dataset_pt-uk: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('validation'): ('json', {})}
```

Concretely, there is a mismatch between the metadata created by `DatasetDict.save_to_disk` and the builder used by `datasets.load_dataset`:
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/data_files.py#L107
The `folder_based_builder` lists all files and with 1 arrow file the json files (that are actually metadata) are in the majority.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58
### Steps to reproduce the bug
Create a dataset with metadata and 1 arrow file in validation and multiple arrow files in the training set, following the above description. In my case, I saved the files via:
```python
dataset = DatasetDict({
'train': train_dataset,
'validation': val_dataset
})
dataset.save_to_disk(output_path, max_shard_size="50MB")
```
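As a workaround until this is fixed, data saved with `save_to_disk()` can be reloaded directly with `load_from_disk()`, which bypasses the file-format inference entirely (the sketch below reuses `output_path` from the snippet above):
```python
from datasets import load_from_disk

dataset = load_from_disk(output_path)
print(dataset["train"].num_rows, dataset["validation"].num_rows)
```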
### Expected behavior
The dataset would get loaded.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.14.0-22-generic-x86_64-with-glibc2.41
- Python version: 3.12.7
- `huggingface_hub` version: 0.31.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
| 2025-06-27T12:54:25 | 2025-06-27T12:54:25 | null |
https://github.com/huggingface/datasets/issues/7650
| null | 7,650 | false |
[] |
3,181,481,444 |
Enable parallel shard upload in push_to_hub() using num_proc
|
closed
|
Fixes #7591
### Add num_proc support to `push_to_hub()` for parallel shard upload
This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`.
📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_push_parquet_shards_to_hub()`, it was not being used to parallelize the upload.
🔧 This PR updates the internal `_push_parquet_shards_to_hub()` function to:
- Use `multiprocessing.Pool` and `iflatmap_unordered()` for concurrent shard upload when `num_proc > 1`
- Preserve original serial upload behavior if `num_proc` is `None` or ≤ 1
- Keep tqdm progress and commit behavior unchanged
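Usage sketch (the repo id is a placeholder):
```python
from datasets import load_dataset

ds = load_dataset("stanfordnlp/imdb", split="train")
ds.push_to_hub("username/my-dataset", num_proc=4)  # shards are uploaded by 4 worker processes
```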
Let me know if any test coverage or further changes are needed!
| 2025-06-27T05:59:03 | 2025-07-07T18:13:53 | 2025-07-07T18:13:52 |
https://github.com/huggingface/datasets/pull/7649
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7649",
"html_url": "https://github.com/huggingface/datasets/pull/7649",
"diff_url": "https://github.com/huggingface/datasets/pull/7649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7649.patch",
"merged_at": null
}
| 7,649 | true |
[
"it was already added in https://github.com/huggingface/datasets/pull/7606 actually ^^'",
"Oh sure sure, Closing this one as redundant."
] |
3,181,409,736 |
Fix misleading add_column() usage example in docstring
|
closed
|
Fixes #7611
This PR fixes the usage example in the Dataset.add_column() docstring, which previously implied that add_column() modifies the dataset in-place.
Why:
The method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change.
This should make the behavior clearer for users.
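For illustration, the intended usage pattern (column name and values are placeholders):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# add_column() returns a new dataset; the result must be reassigned
ds = ds.add_column("label", [0, 1, 0])
print(ds.column_names)  # ['text', 'label']
```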
@lhoestq @davanstrien
| 2025-06-27T05:27:04 | 2025-07-20T16:07:49 | 2025-07-17T13:14:17 |
https://github.com/huggingface/datasets/pull/7648
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7648",
"html_url": "https://github.com/huggingface/datasets/pull/7648",
"diff_url": "https://github.com/huggingface/datasets/pull/7648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7648.patch",
"merged_at": "2025-07-17T13:14:17"
}
| 7,648 | true |
[
"I believe there are other occurences of cases like this, like select_columns, select, filter, shard and flatten, could you also fix the docstring for them as well before we merge ?",
"Done! @lhoestq! I've updated the docstring examples for the following methods to clarify that they return new datasets instead of modifying in-place:\r\n\r\n- `select_columns`\r\n- `select`\r\n- `filter`\r\n- `shard`\r\n- `flatten`\r\n",
"Also, any suggestions on what kind of issues I should work on next? I tried looking on my own, but I’d be happy if you could assign me something — I’ll do my best!\r\n",
"Hi! any update on this PR?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Also, any suggestions on what kind of issues I should work on next? I tried looking on my own, but I’d be happy if you could assign me something — I’ll do my best!\r\n\r\nHmm. One long lasting issue is the one about being able to download only one split of a dataset (currently `load_dataset()` downloads all the splits, even when only one of train/test/validation is passed with `load_dataset(..., split=split)`)\r\n\r\nThis makes some downloads pretty long, I remember Mario started to work on this in this PR but couldn't finish it: https://github.com/huggingface/datasets/pull/6832\r\n\r\nI think it would be a challenging but pretty impactful addition, and feel free to ping me if you have questions or if I can help. You can also take a look at Mario's first PR which was already in an advanced state. \r\n\r\nLet me know if it sounds like the kind of contribution you're looking for :)",
"Hi @lhoestq, thanks for the thoughtful suggestion!\r\n\r\nThe issue you mentioned sounds like a meaningful problem to tackle, and I’d love to take a closer look at it. I’ll start by reviewing Mario’s PR (#6832), understand what was implemented so far, and what remains to be done.\r\n\r\nIf I have any questions or run into anything unclear, I’ll be sure to reach out. \r\n\r\nI plan to give this a solid try. Thanks again — contributing to Hugging Face is something I truly hope to grow into.\r\n\r\n---\r\nOnce again the the main Issue is to - \r\n\r\n>Allow users to download only the requested split(s) in load_dataset(...), avoiding unnecessary processing/downloading of the full dataset (especially important for large datasets like svhn, squad, glue).\r\n\r\nright?\r\n\r\nAlso I have gone through some related / mentioned issues and PRs - \r\n\r\n- PR #6832 | Mario's main implementation for per-split download logic. Introduces splits param, _available_splits, and conditional logic in download_and_prepare()\r\n\r\n- PR #6639 | Your earlier PR to trigger download_and_prepare() only when splits are missing from disk\r\n\r\n- Issue #4101 / #2538 / #6529 | Real-world user complaints about load_dataset(..., split=...) still downloading everything. Confirm the need for this fix\r\n\r\n- #2249 | Referenced by albertvillanova — old idea of caching only specific splits\r\n\r\n---\r\nIF I am not wrong, #2249 had some limitations - \r\n- Only worked for some dataset scripts where the download dict had split names as keys (like natural_questions).\r\n\r\n- Would fail or cause confusing behavior on datasets with: \r\n1] Custom download keys (TRAIN_DOWNLOAD_URL, val_nyt, metadata)\r\n2] Files passed one by one to dl_manager.download(), not as a dict\r\n\r\n- Reused DownloadConfig, which led to blurry separation between cached_path, DownloadManager, and dataset logic.\r\n\r\n- Needed to modify each dataset's _split_generators() to fully support split filtering.\r\n\r\n- Risked partial or inconsistent caching if logic wasn’t tight.\r\n"
] |
3,178,952,517 |
loading mozilla-foundation--common_voice_11_0 fails
|
open
|
### Describe the bug
Hello everyone,
I am trying to load `mozilla-foundation/common_voice_11_0` and it fails. Reproducer:
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/python3.10/site-packages/datasets/utils/file_utils.py:827, in _add_retries_to_file_obj_read_method.<locals>.read_with_retries(*args, **kwargs)
825 for retry in range(1, max_retries + 1):
826 try:
--> 827 out = read(*args, **kwargs)
828 break
829 except (
830 _AiohttpClientError,
831 asyncio.TimeoutError,
832 requests.exceptions.ConnectionError,
833 requests.exceptions.Timeout,
834 ) as err:
File /usr/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
319 def decode(self, input, final=False):
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
When I remove streaming everything works, but I need `streaming=True`.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
### Expected behavior
Expected the dataset to download successfully.
### Environment info
datasets==3.6.0
python3.10
on all platforms linux/win/mac
| 2025-06-26T12:23:48 | 2025-07-10T14:49:30 | null |
https://github.com/huggingface/datasets/issues/7647
| null | 7,647 | false |
[
"@claude Could you please address this issue",
"kinda related: https://github.com/huggingface/datasets/issues/7675"
] |
3,178,036,854 |
Introduces automatic subset-level grouping for folder-based dataset builders #7066
|
open
|
Fixes #7066
This PR introduces automatic **subset-level grouping** for folder-based dataset builders by:
1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes).
2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one split per subset.
3. Adding unit tests for the grouping function.
4. Updating the documentation to describe this new behavior under `docs/source/repository_structure.mdx`.
---
### Motivation
Datasets with files like:
```
train0.jsonl
train1.jsonl
animals.jsonl
metadata.jsonl
```
will now be **automatically grouped** as:
- `"train"` subset → `train0.jsonl`, `train1.jsonl`
- `"animals"` subset → `animals.jsonl`
- `"metadata"` subset → `metadata.jsonl`
This enables structured multi-subset loading even when the dataset doesn't follow traditional `train/validation/test` split conventions.
---
### Files Changed
- `src/datasets/data_files.py`: added `group_files_by_subset()` utility
- `src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py`: grouped files before yielding splits
- `tests/test_data_files.py`: added unit test `test_group_files_by_subset`
- `docs/source/repository_structure.mdx`: documented subset grouping for maintainers and users
---
### Benefits
- More flexible and robust dataset split logic
- Enables logical grouping of user-uploaded files without nested folder structure
- Backward-compatible with all existing folder-based configs
---
Ready for review ✅
| 2025-06-26T07:01:37 | 2025-07-14T10:42:56 | null |
https://github.com/huggingface/datasets/pull/7646
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7646",
"html_url": "https://github.com/huggingface/datasets/pull/7646",
"diff_url": "https://github.com/huggingface/datasets/pull/7646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7646.patch",
"merged_at": null
}
| 7,646 | true |
[
"It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!",
"Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n\r\nhttps://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n\r\nAlso the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?",
"> Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n> \r\n> https://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n> \r\n> Also the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?\r\n\r\nThanks a lot for the review!\r\n\r\nYou're absolutely right — treating subsets as separate configs instead of overloaded splits makes much more sense. If that approach sounds good to you, I can move the grouping logic to `load.py`, where configs are instantiated, and revise the PR to emit one `BuilderConfig` per grouped subset.\r\n\r\nAlso totally agree on limiting grouping to structured file types — I’d scope this to `.json`, `.jsonl`, `.csv`, and `.parquet`.\r\n\r\nLet me know if this direction sounds good, and I’ll get started on the changes right away!\r\n",
"Hi! @lhoestq!"
] |
3,176,810,164 |
`ClassLabel` docs: Correct value for unknown labels
|
open
|
This small change fixes the documentation to be consistent with what happens in `encode_example`.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129
| 2025-06-25T20:01:35 | 2025-06-25T20:01:35 | null |
https://github.com/huggingface/datasets/pull/7645
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7645",
"html_url": "https://github.com/huggingface/datasets/pull/7645",
"diff_url": "https://github.com/huggingface/datasets/pull/7645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7645.patch",
"merged_at": null
}
| 7,645 | true |
[] |
3,176,363,492 |
fix sequence ci
|
closed
|
fix error from https://github.com/huggingface/datasets/pull/7643
| 2025-06-25T17:07:55 | 2025-06-25T17:10:30 | 2025-06-25T17:08:01 |
https://github.com/huggingface/datasets/pull/7644
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7644",
"html_url": "https://github.com/huggingface/datasets/pull/7644",
"diff_url": "https://github.com/huggingface/datasets/pull/7644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7644.patch",
"merged_at": "2025-06-25T17:08:01"
}
| 7,644 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,176,354,431 |
Backward compat sequence instance
|
closed
|
useful to still get `isinstance(Sequence(Value("int64")), Sequence)` for downstream libs like evaluate
| 2025-06-25T17:05:09 | 2025-06-25T17:07:40 | 2025-06-25T17:05:44 |
https://github.com/huggingface/datasets/pull/7643
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7643",
"html_url": "https://github.com/huggingface/datasets/pull/7643",
"diff_url": "https://github.com/huggingface/datasets/pull/7643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7643.patch",
"merged_at": "2025-06-25T17:05:43"
}
| 7,643 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,176,025,890 |
fix length for ci
|
closed
| null | 2025-06-25T15:10:38 | 2025-06-25T15:11:53 | 2025-06-25T15:11:51 |
https://github.com/huggingface/datasets/pull/7642
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7642",
"html_url": "https://github.com/huggingface/datasets/pull/7642",
"diff_url": "https://github.com/huggingface/datasets/pull/7642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7642.patch",
"merged_at": "2025-06-25T15:11:51"
}
| 7,642 | true |
[] |
3,175,953,405 |
update docs and docstrings
|
closed
| null | 2025-06-25T14:48:58 | 2025-06-25T14:51:46 | 2025-06-25T14:49:33 |
https://github.com/huggingface/datasets/pull/7641
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7641",
"html_url": "https://github.com/huggingface/datasets/pull/7641",
"diff_url": "https://github.com/huggingface/datasets/pull/7641.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7641.patch",
"merged_at": "2025-06-25T14:49:33"
}
| 7,641 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,175,914,924 |
better features repr
|
closed
|
following the addition of List in #7634
before:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value(dtype='string', id=None),
'metadata:transcript': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'transcript': Value(dtype='string', id=None),
'words': [{'end': Value(dtype='float64', id=None),
'score': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'word': Value(dtype='string', id=None)}]}],
'metadata:vad': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None)}]},
'mp4': Value(dtype='binary', id=None),
'npz': {'boxes_and_keypoints:box': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'boxes_and_keypoints:is_valid_box': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
'boxes_and_keypoints:keypoints': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'movement:EmotionArousalToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:EmotionValenceToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:FAUToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:FAUValue': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:alignment_head_rotation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:alignment_translation': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_arousal': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_scores': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_valence': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:expression': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:frame_latent': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:gaze_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:head_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:hypernet_features': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:is_valid': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'smplh:body_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:global_orient': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'smplh:is_valid': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
'smplh:left_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:right_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:translation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None)},
'wav': Audio(sampling_rate=None, mono=True, decode=True, id=None),
'__key__': Value(dtype='string', id=None),
'__url__': Value(dtype='string', id=None)}
```
after:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value('string'),
'metadata:transcript': List({'end': Value('float64'), 'start': Value('float64'), 'transcript': Value('string'), 'words': List({'end': Value('float64'), 'score': Value('float64'), 'start': Value('float64'), 'word': Value('string')})}),
'metadata:vad': List({'end': Value('float64'), 'start': Value('float64')})},
'mp4': Value('binary'),
'npz': {'boxes_and_keypoints:box': List(List(Value('float32'))),
'boxes_and_keypoints:is_valid_box': List(Value('bool')),
'boxes_and_keypoints:keypoints': List(List(List(Value('float32')))),
'movement:EmotionArousalToken': List(List(Value('float32'))),
'movement:EmotionValenceToken': List(List(Value('float32'))),
'movement:FAUToken': List(List(Value('float32'))),
'movement:FAUValue': List(List(Value('float32'))),
'movement:alignment_head_rotation': List(List(Value('float32'))),
'movement:alignment_translation': List(List(List(Value('float32')))),
'movement:emotion_arousal': List(List(Value('float32'))),
'movement:emotion_scores': List(List(Value('float32'))),
'movement:emotion_valence': List(List(Value('float32'))),
'movement:expression': List(List(Value('float32'))),
'movement:frame_latent': List(List(Value('float32'))),
'movement:gaze_encodings': List(List(Value('float32'))),
'movement:head_encodings': List(List(Value('float32'))),
'movement:hypernet_features': List(List(Value('float32'))),
'movement:is_valid': List(List(Value('float32'))),
'smplh:body_pose': List(List(List(Value('float32')))),
'smplh:global_orient': List(List(Value('float32'))),
'smplh:is_valid': List(Value('bool')),
'smplh:left_hand_pose': List(List(List(Value('float32')))),
'smplh:right_hand_pose': List(List(List(Value('float32')))),
'smplh:translation': List(List(Value('float32')))},
'wav': Audio(sampling_rate=None, decode=True, stream_index=None),
'__key__': Value('string'),
'__url__': Value('string')}
```
| 2025-06-25T14:37:32 | 2025-06-25T14:46:47 | 2025-06-25T14:46:45 |
https://github.com/huggingface/datasets/pull/7640
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7640",
"html_url": "https://github.com/huggingface/datasets/pull/7640",
"diff_url": "https://github.com/huggingface/datasets/pull/7640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7640.patch",
"merged_at": "2025-06-25T14:46:45"
}
| 7,640 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,175,616,169 |
fix save_infos
|
closed
| null | 2025-06-25T13:16:26 | 2025-06-25T13:19:33 | 2025-06-25T13:16:33 |
https://github.com/huggingface/datasets/pull/7639
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7639",
"html_url": "https://github.com/huggingface/datasets/pull/7639",
"diff_url": "https://github.com/huggingface/datasets/pull/7639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7639.patch",
"merged_at": "2025-06-25T13:16:33"
}
| 7,639 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,172,645,391 |
Add ignore_decode_errors option to Image feature for robust decoding #7612
|
open
|
This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612.
## 🔧 What was added
- A new boolean field: `ignore_decode_errors` (default: `False`)
- If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error
```python
features = Features({
"image": Image(decode=True, ignore_decode_errors=True),
})
```
This enables robust iteration over potentially corrupted datasets — especially useful when streaming datasets like WebDataset or image-heavy public sets where sample corruption is common.
## 🧪 Behavior
* If `ignore_decode_errors=False` (default), decoding behaves exactly as before
* If `True`, decoding errors are caught, and a warning is emitted:
```
[Image.decode_example] Skipped corrupted image: ...
```
## 🧵 Linked issue
Closes #7612
Let me know if you'd like a follow-up test PR. Happy to write one!
| 2025-06-24T16:47:51 | 2025-07-04T07:07:30 | null |
https://github.com/huggingface/datasets/pull/7638
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7638",
"html_url": "https://github.com/huggingface/datasets/pull/7638",
"diff_url": "https://github.com/huggingface/datasets/pull/7638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7638.patch",
"merged_at": null
}
| 7,638 | true |
[
"cc @lhoestq",
"I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n\r\nThe [`decode_image`](https://docs.pytorch.org/vision/main/generated/torchvision.io.decode_image.html) function in `torchvision` handles similar cases by using the `apply_exif_orientation` flag to turn off the exif metadata processing entirely.",
"> I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n> The [`decode_image`](https://docs.pytorch.org/vision/main/generated/torchvision.io.decode_image.html) function in `torchvision` handles similar cases by using the `apply_exif_orientation` flag to turn off the exif metadata processing entirely.\r\n \r\n @lhoestq & @Seas0 — that makes total sense.\r\n \r\nCurrently, if EXIF metadata like `.getexif()` fails (due to malformed tags), the whole image gets dropped even if it renders correctly — not ideal.\r\n \r\nTo address this, I'm planning to split the EXIF handling into a separate `try/except` block, like:\r\n```python\r\ntry:\r\n exif = image.getexif()\r\n if exif.get(PIL.Image.ExifTags.Base.Orientation) is not None:\r\n image = PIL.ImageOps.exif_transpose(image)\r\nexcept Exception as exif_err:\r\n if self.ignore_decode_errors:\r\n warnings.warn(f\"[Image.decode_example] Skipped EXIF metadata: {exif_err}\")\r\n else:\r\n raise\r\n```\r\n\r\nSo that, Valid but EXIF-broken images will still be returned & EXIF failures will be skipped only if ignore_decode_errors=True. \r\n\r\nSounds good??",
"With the recent EXIF decoding isolation logic added, this PR now fully addresses:\r\n\r\n- ✅ #7612 – Robust iteration over corrupt samples (especially useful in `.streaming=True`)\r\n- ✅ #7632 – Graceful handling of invalid image files when using `cast_column(..., Image(...))`\r\n- ✅ #7668 – Broken EXIF metadata no longer crashes decoding; images are returned if usable\r\n\r\nAll decoding errors (including `.getexif()` and image file loading) are now skipped with a warning when `ignore_decode_errors=True`. This enables safe, scalable image preprocessing pipelines."
] |
3,171,883,522 |
Introduce subset_name as an alias of config_name
|
open
|
### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called config_name in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology.
I have repeatedly received questions from users trying to understand what "config" means, and why it doesn’t match what they see as "subset" on the Hub. Renaming everything to `subset_name` might be too disruptive, but introducing subset_name as a clear alias for config_name could significantly improve user experience without breaking backward compatibility.
This change would:
- Align terminology across the Hub UI and datasets codebase
- Reduce user confusion, especially for newcomers
- Make documentation and examples more intuitive
| 2025-06-24T12:49:01 | 2025-07-01T16:08:33 | null |
https://github.com/huggingface/datasets/issues/7637
| null | 7,637 | false |
[
"I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly.",
"I've submitted PR [#7657](https://github.com/huggingface/datasets/pull/7657) to introduce subset_name as a user-facing alias for name in load_dataset, keeping terminology consistent with the Hub UI (“Subset”). It’s fully backward-compatible and includes a conflict check.\n\nLet me know if you'd like me to include tests as part of the PR — happy to add them if needed!",
"The main usage is as a positional argument anyway, so I wouldn't necessarily agree that we need an alias (with the risk of confusing users). But happy to have more mentions in the docs of syntaxes like `load_dataset(\"dataset_name\", \"subset_name\")`",
"> The main usage is as a positional argument anyway, so I wouldn't necessarily agree that we need an alias (with the risk of confusing users). But happy to have more mentions in the docs of syntaxes like `load_dataset(\"dataset_name\", \"subset_name\")`\n\nThanks @lhoestq, totally fair point — especially with positional usage being the norm. I’m happy to align with the team’s direction here. If you'd prefer, I can update this PR to shift the focus to documentation/examples (e.g., showing \"subset_name\" as the second arg)."
] |
3,170,878,167 |
"open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable"
|
open
|
When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"
```python
print("open" in globals()["__builtins__"])
```
```
Traceback (most recent call last):
  File "./main.py", line 2, in <module>
    print("open" in globals()["__builtins__"])
                    ^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'module' is not iterable
```
But this same check runs fine inside the datasets codebase, and I don't understand why:
[src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96)
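For illustration, a small self-contained snippet showing why the two lookups behave differently (in `__main__`, `__builtins__` is the `builtins` module; in imported modules it is that module's dict):
```python
import builtins

# Always a dict, so membership tests are safe everywhere:
print("open" in builtins.__dict__)  # True

# May be either the module or its dict depending on where the code runs,
# so normalize before testing membership:
b = globals()["__builtins__"]
namespace = b if isinstance(b, dict) else vars(b)
print("open" in namespace)  # True in both cases
```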
| 2025-06-24T08:09:39 | 2025-07-10T04:13:16 | null |
https://github.com/huggingface/datasets/issues/7636
| null | 7,636 | false |
[
"@kuanyan9527 Your query is indeed valid. Following could be its reasoning:\n\nQuoting from https://stackoverflow.com/a/11181607:\n\"By default, when in the `__main__` module,` __builtins__` is the built-in module `__builtin__` (note: no 's'); when in any other module, `__builtins__` is an alias for the dictionary of the `__builtin__` module itself.\"\n\nCan you confirm if you are running the snippet `print(\"open\" in globals()[\"__builtins__\"])` in the default? In that case, as expected, `__builtins__` is a module which is causing the error. But in the codebase, the class `patch_submodule`, is primarily used in the second circumstance, where it acts as a dictionary. Hence causing the code to function successfully.\n\nHope this helps.",
"@kuanyan9527 Are there any more queries in this regards, else please feel free to close the issue.\nThank you.",
"Your answer is very important to me,thanks.",
"I encountered this error when running datasets with pypy,\n`TypeError: argument of type 'module' is not iterable` in [src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96)\nby modifying `globals()[\"__builtins__\"]` to `builtins.__dict__`, importing via `import builtins`.\nCan this be applied to the community?"
] |
3,170,486,408 |
Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0)
|
open
|
This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference.
This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` instead of `"float"`.
### 🔍 What was happening:
When the JSON loader falls back to `pandas_read_json()` (after `pa.read_json()` fails), pandas/Arrow can coerce float values to integers if all values are integer-like (e.g., `0.0 == 0`).
### ✅ What this PR does:
- Adds a check in the fallback path of `_generate_tables()`
- Ensures that columns made entirely of floats are preserved as `"float64"` even if they are integer-like (e.g. `0.0`, `1.0`)
- This prevents loss of float semantics when creating the Arrow table
### 🧪 Reproducible Example:
```json
[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]
````
Previously loaded as:
* `int`
Now correctly loaded as:
* `float`
Fixes #6937
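For clarity, a minimal self-contained illustration of the kind of dtype-preserving check described above (the helper logic here is an assumption for demonstration, not the exact code added to `_generate_tables()`):
```python
import json

import pandas as pd
import pyarrow as pa

records = json.loads('[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]')
df = pd.DataFrame(records)

# If type inference produced an integer dtype but the raw JSON values were floats,
# cast back to float64 so the Arrow table keeps the float semantics.
for name in df.columns:
    raw_values = [row[name] for row in records]
    if pd.api.types.is_integer_dtype(df[name]) and all(isinstance(v, float) for v in raw_values):
        df[name] = df[name].astype("float64")

table = pa.Table.from_pandas(df)
print(table.schema)  # col: double
```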
| 2025-06-24T06:16:48 | 2025-06-24T06:16:48 | null |
https://github.com/huggingface/datasets/pull/7635
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7635",
"html_url": "https://github.com/huggingface/datasets/pull/7635",
"diff_url": "https://github.com/huggingface/datasets/pull/7635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7635.patch",
"merged_at": null
}
| 7,635 | true |
[] |
3,169,389,653 |
Replace Sequence by List
|
closed
|
Sequence is just a utility that we need to keep for backward compatibility. And `[ ]` was used instead but doesn't allow passing the length of the list.
This PR removes most mentions of Sequence and usage of `[ ]` and defines a proper List type instead.
before: `Sequence(Value("int64"))` or `[Value("int64")]`
now: `List(Value("int64"))`
This PR preserves full backward compatibility, and the 4.0.0 release is a good occasion for the change.
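For example (a sketch assuming the new `List` type is importable from `datasets` and accepts an optional `length`, as described above):
```python
from datasets import Features, List, Sequence, Value

# before: two equivalent spellings, neither carrying an explicit length
old_features = Features({"scores": Sequence(Value("int64"))})
old_features_bracket = Features({"scores": [Value("int64")]})

# now: an explicit List type, optionally with a fixed length
new_features = Features({"scores": List(Value("int64"))})
fixed_length = Features({"embedding": List(Value("float32"), length=128)})
```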
| 2025-06-23T20:35:48 | 2025-06-25T13:59:13 | 2025-06-25T13:59:11 |
https://github.com/huggingface/datasets/pull/7634
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7634",
"html_url": "https://github.com/huggingface/datasets/pull/7634",
"diff_url": "https://github.com/huggingface/datasets/pull/7634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7634.patch",
"merged_at": "2025-06-25T13:59:11"
}
| 7,634 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,168,399,637 |
Proposal: Small Tamil Discourse Coherence Dataset.
|
open
|
I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.
- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence
I’ll use GitHub’s web editor and Google Colab. Please confirm if this fits.
| 2025-06-23T14:24:40 | 2025-06-23T14:24:40 | null |
https://github.com/huggingface/datasets/issues/7633
| null | 7,633 | false |
[] |
3,168,283,589 |
Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets
|
open
|
### Feature request
Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples are common.
reference : https://discuss.huggingface.co/t/handle-errors-when-loading-images-404-corrupted-etc/50318/5
https://discuss.huggingface.co/t/handling-non-existing-url-in-image-dataset-while-cast-column/69185
Proposed Feature
Introduce a mechanism (e.g., a continue_on_error=True flag or global error handling mode) in Image(decode=True) that:
Skips invalid images and sets them as None, or
Logs the error but allows the rest of the dataset to be processed without interruption.
Example Usage
```python
from datasets import load_dataset, Image

dataset = load_dataset("my_dataset")
dataset = dataset.cast_column("image", Image(decode=True, continue_on_error=True))
```
Benefits
Ensures robust large-scale image dataset processing.
Improves developer productivity by avoiding custom retry/error-handling code.
Aligns with best practices in dataset preprocessing pipelines that tolerate minor data corruption.
Potential Implementation Options
Internally wrap the decoding in a try/except block.
Return None or a placeholder on failure.
Optionally allow custom error callbacks or logging.
### Motivation
Robustness: Large-scale image datasets often contain a small fraction of corrupt files or unreachable URLs. Halting on the first error forces users to write custom workarounds or preprocess externally.
Simplicity: A built-in flag removes boilerplate try/except logic around every decode step.
Performance: Skipping invalid samples inline is more efficient than a two-pass approach (filter then decode).
### Your contribution
1. API Change
Extend datasets.features.Image(decode=True) to accept continue_on_error: bool = False.
2. Behavior
If continue_on_error=False (default), maintain current behavior: any decode error raises an exception.
If continue_on_error=True, wrap decode logic in try/except:
On success: store the decoded image.
On failure: log a warning (e.g., via logging.warning) and set the field to None (or a sentinel value).
3. Optional Enhancements
Allow a callback hook:
```python
Image(decode=True, continue_on_error=True, on_error=lambda idx, url, exc: ...)
```
Emit metrics or counts of skipped images.
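As a rough, standalone sketch of the wrapped decode step described above (the `continue_on_error` flag itself is the proposed API and does not exist yet):
```python
import io
import logging

from PIL import Image as PILImage


def safe_decode(image_bytes: bytes, continue_on_error: bool = False):
    """Decode image bytes; on failure either re-raise or skip with a warning."""
    try:
        image = PILImage.open(io.BytesIO(image_bytes))
        image.load()  # force full decoding so truncated files fail here
        return image
    except Exception as err:
        if not continue_on_error:
            raise
        logging.warning("Skipping undecodable image: %s", err)
        return None
```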
| 2025-06-23T13:49:24 | 2025-07-08T06:52:53 | null |
https://github.com/huggingface/datasets/issues/7632
| null | 7,632 | false |
[
"Hi! This is now handled in PR #7638",
"Thank you for implementing the suggestion it would be great help in our use case. "
] |
3,165,127,657 |
Pass user-agent from DownloadConfig into fsspec storage_options
|
open
|
Fixes part of issue #6046
### Problem
The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests.
### Solution
Added support for injecting the `user-agent` into `storage_options["headers"]` within `_prepare_single_hop_path_and_storage_options()` based on the `protocol`.
Now, when using `hf://`, `http://`, or `https://`, the custom user-agent is passed automatically.
### Code Location
Modified:
- `src/datasets/utils/file_utils.py`
Used `get_datasets_user_agent(...)` to ensure proper formatting and fallback logic.
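Roughly, the injection works like this (simplified sketch, not the exact code in `file_utils.py`):
```python
from datasets.utils.file_utils import get_datasets_user_agent


def with_user_agent(protocol: str, storage_options: dict, user_agent=None) -> dict:
    # Merge the datasets user-agent into the request headers for hf/http(s) protocols.
    if protocol in ("hf", "http", "https"):
        headers = dict(storage_options.get("headers", {}))
        headers.setdefault("user-agent", get_datasets_user_agent(user_agent))
        storage_options = {**storage_options, "headers": headers}
    return storage_options
```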
| 2025-06-21T14:22:25 | 2025-06-21T14:25:28 | null |
https://github.com/huggingface/datasets/pull/7631
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7631",
"html_url": "https://github.com/huggingface/datasets/pull/7631",
"diff_url": "https://github.com/huggingface/datasets/pull/7631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7631.patch",
"merged_at": null
}
| 7,631 | true |
[
"- This PR assumes that `HfFileSystem` in `huggingface_hub` supports receiving `headers` in `storage_options`. If not, a follow-up PR can be opened to add this support to `HfFileSystem.__init__`.\r\n- No test was added for this since it’s a config passthrough. If needed, I’d be happy to add one."
] |
3,164,650,900 |
[bug] resume from ckpt skips samples if .map is applied
|
open
|
### Describe the bug
resume from ckpt skips samples if .map is applied
Maybe related: https://github.com/huggingface/datasets/issues/7538
### Steps to reproduce the bug
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

# Create dataset with map transformation
def create_dataset():
    ds = Dataset.from_dict({"id": list(range(100))})
    ds = ds.to_iterable_dataset(num_shards=4)
    ds = ds.map(lambda x: x)  # comment it out to get desired behavior
    ds = split_dataset_by_node(ds, rank=0, world_size=2)
    return ds

ds = create_dataset()

# Iterate and save checkpoint after 10 samples
it = iter(ds)
for idx, sample in enumerate(it):
    if idx == 9:  # Checkpoint after 10 samples
        checkpoint = ds.state_dict()
        print(f"Checkpoint saved at sample: {sample['id']}")
        break

# Continue with original iterator
original_next_samples = []
for idx, sample in enumerate(it):
    original_next_samples.append(sample["id"])
    if idx >= 4:
        break

# Resume from checkpoint
ds_new = create_dataset()
ds_new.load_state_dict(checkpoint)

# Get samples from resumed iterator
it_new = iter(ds_new)
resumed_next_samples = []
for idx, sample in enumerate(it_new):
    resumed_next_samples.append(sample["id"])
    if idx >= 4:
        break

print(f"\nExpected next samples: {original_next_samples}")
print(f"Actual next samples: {resumed_next_samples}")
print(
    f"\n❌ BUG: {resumed_next_samples[0] - original_next_samples[0]} samples were skipped!"
)
```
With map
```
Checkpoint saved at sample: 9
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [50, 51, 52, 53, 54]
❌ BUG: 40 samples were skipped!
```
### Expected behavior
without map
```
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [10, 11, 12, 13, 14]
❌ BUG: 0 samples were skipped!
```
### Environment info
datasets == 3.6.0
| 2025-06-21T01:50:03 | 2025-06-29T07:51:32 | null |
https://github.com/huggingface/datasets/issues/7630
| null | 7,630 | false |
[
"Thanks for reporting this — it looks like a separate but related bug to #7538, which involved sample loss when resuming an `IterableDataset` wrapped in `FormattedExamplesIterable`. That was resolved in #7553 by re-batching the iterable to track offset correctly.\n\nIn this case, the issue seems to arise specifically from applying `.map()` before sharding and checkpointing. That wraps the iterable in `MappedExamplesIterable`, which may not preserve or propagate `shard_example_idx` correctly across `.state_dict()` and `.load_state_dict()` calls.\n\nYou can see that without `.map()`, resume works fine — but with `.map()`, it jumps from sample 9 to 50, skipping the rest of the shard.\n\nI'll dig deeper into how `MappedExamplesIterable` manages offsets and whether it supports proper checkpoint resumption. If not, we might need a fix similar to the one in #7553, or a wrapper to preserve resume metadata.\n\nHappy to help fix it!\n",
"Let me know if a dedicated test case is required — happy to add one!"
] |
3,161,169,782 |
Add test for `as_iterable_dataset()` method in DatasetBuilder
|
open
|
This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628.
The test:
- Loads a builder using `load_dataset_builder("c4", "en")`
- Runs `download_and_prepare()`
- Streams examples using `builder.as_iterable_dataset(split="train[:100]")`
- Verifies streamed examples contain the "text" field
This ensures that the builder correctly streams data from cached Arrow files.
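For reference, a condensed sketch of what such a test could look like (assumes the `as_iterable_dataset()` API from #7628; not the exact test added in this PR):
```python
from itertools import islice

from datasets import load_dataset_builder


def test_as_iterable_dataset_streams_from_cache(tmp_path):
    builder = load_dataset_builder("c4", "en", cache_dir=str(tmp_path))
    builder.download_and_prepare()
    iterable_ds = builder.as_iterable_dataset(split="train[:100]")
    for example in islice(iterable_ds, 10):
        assert "text" in example
```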
| 2025-06-19T19:23:55 | 2025-06-19T19:23:55 | null |
https://github.com/huggingface/datasets/pull/7629
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7629",
"html_url": "https://github.com/huggingface/datasets/pull/7629",
"diff_url": "https://github.com/huggingface/datasets/pull/7629.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7629.patch",
"merged_at": null
}
| 7,629 | true |
[] |
3,161,156,461 |
Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files
|
open
|
This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481.
It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory.
This is useful for large-scale training scenarios where memory is constrained. A test has also been added in `test_builder.py`.
Related to: #5481
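Example usage (a sketch of the API described above):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("c4", "en")
builder.download_and_prepare()

# Stream examples from the cached Arrow files without loading everything into memory
for example in builder.as_iterable_dataset(split="train"):
    print(example["text"])
    break
```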
| 2025-06-19T19:15:41 | 2025-06-19T19:15:41 | null |
https://github.com/huggingface/datasets/pull/7628
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7628",
"html_url": "https://github.com/huggingface/datasets/pull/7628",
"diff_url": "https://github.com/huggingface/datasets/pull/7628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7628.patch",
"merged_at": null
}
| 7,628 | true |
[] |
3,160,544,390 |
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
|
closed
|
Hi,
I’m new to HF dataset and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_
Here I’m using ±30,000 PIL images from the MNIST data; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into the cache and then building the dataset.
– Please find below the execution screenshot –
Is there a way to optimize this or am I doing something wrong?
Thanks!

| 2025-06-19T14:28:41 | 2025-06-23T12:39:10 | 2025-06-23T12:39:10 |
https://github.com/huggingface/datasets/issues/7627
| null | 7,627 | false |
[
"### > Update\n\nThe bottleneck, from what I understand, was making one network request per file\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance\n\nUsing webDataset to transform the large number of files to few .tar files and passing “webdataset” instead of “imagefolder” to the load_dataset function worked perfectly (took only ~11s)"
] |
3,159,322,138 |
feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)
|
open
|
## Summary
This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified.
## What’s Implemented
- Injected logic at the end of `Dataset.map()` to:
- Identify untouched columns not in `input_columns` or `remove_columns`
- Select those columns from the original dataset
- Concatenate them with the transformed result using `pyarrow.concat_tables`
## Example Behavior
```python
ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
ds2 = ds.map(lambda x: {"c": x["a"] + 10}, input_columns=["a"], remove_columns=["a"])
print(ds2.column_names) # Output: ['b', 'c']
````
Column `b` is reused from the original dataset.
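At the Arrow level, the column reuse boils down to something like the following (illustrative sketch using column appending; the PR itself mentions `pyarrow.concat_tables`):
```python
import pyarrow as pa

original = pa.table({"a": [1, 2], "b": [3, 4]})
mapped = pa.table({"c": [11, 12]})  # result of the map function on column "a"

# Re-attach every column that was neither an input column nor removed
merged = mapped
for name in original.column_names:
    if name not in ("a",):
        merged = merged.append_column(name, original.column(name))

print(merged.column_names)  # ['c', 'b']
```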
## Notes
* This keeps disk usage and caching minimal by avoiding full dataset duplication.
* Only triggered when `input_columns` is set.
---
cc @lhoestq @mariosasko for review 🙂
| 2025-06-19T07:41:45 | 2025-07-18T17:36:35 | null |
https://github.com/huggingface/datasets/pull/7626
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7626",
"html_url": "https://github.com/huggingface/datasets/pull/7626",
"diff_url": "https://github.com/huggingface/datasets/pull/7626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7626.patch",
"merged_at": null
}
| 7,626 | true |
[] |
3,159,016,001 |
feat: Add h5folder dataset loader for HDF5 support
|
open
|
### Related Issue
Closes #3113
### What does this PR do?
This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format.
It allows users to do:
```python
from datasets import load_dataset
dataset = load_dataset("h5folder", data_dir="path/to/")
````
### 🧩 Design Overview
* Implemented inside `datasets/packaged_modules/h5folder/h5folder.py`
* Based on the `GeneratorBasedBuilder` API
* Uses `h5py` to read HDF5 files and yield examples
* Expects datasets such as `id`, `data`, and `label` inside `data.h5`
* Converts numpy arrays to Python types before yielding
### 🧪 Example `.h5` Structure (for local testing)
```python
import h5py
import numpy as np
with h5py.File("data.h5", "w") as f:
    f.create_dataset("id", data=np.arange(100))
    f.create_dataset("data", data=np.random.randn(100, 10))
    f.create_dataset("label", data=np.random.randint(0, 2, size=100))
```
### ✅ Testing
- The loader logic follows the structure of existing modules like `imagefolder`
- Will rely on Hugging Face CI to validate integration
- Manually testing planned once merged or during feedback
### 📁 Files Added
* `datasets/src/datasets/packaged_modules/h5folder/h5folder.py`
### 📌 Component(s) Affected
* `area/datasets`
* `area/load`
### 📦 Release Note Classification
* `rn/feature` – Adds support for loading `.h5` datasets via `load_dataset("h5folder", ...)`
---
Let me know if any changes or improvements are needed — happy to iterate. Thanks for reviewing!
| 2025-06-19T05:39:10 | 2025-06-26T05:44:26 | null |
https://github.com/huggingface/datasets/pull/7625
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7625",
"html_url": "https://github.com/huggingface/datasets/pull/7625",
"diff_url": "https://github.com/huggingface/datasets/pull/7625.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7625.patch",
"merged_at": null
}
| 7,625 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7625). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I guess test failed cause import os, import h5py, and import datasets lines are not alphabetically sorted, or not grouped properly.\r\n\r\n\r\n",
"This commit was accidental - `[Merge branch 'main' into patch-4]`. The \r\n`[chore: fix import order in h5folder.py to satisfy linter]` should solve the import order issue. \r\n\r\n\r\n"
] |
3,156,136,624 |
#Dataset Make "image" column appear first in dataset preview UI
|
closed
|
Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.
I have a couple of questions:
Is there a way to force the dataset card to display the `"image"` column first?
Is there currently any way to control or influence the column order in the dataset preview UI?
Does the order of keys in the .jsonl file or the features argument affect the display order?
Thanks again for your time and help! :blush:
| 2025-06-18T09:25:19 | 2025-06-20T07:46:43 | 2025-06-20T07:46:43 |
https://github.com/huggingface/datasets/issues/7624
| null | 7,624 | false |
[
"Hi ! It should follow the same order as the order of the keys in the metadata file",
"Hi! Thank you for your answer. \n\nAs you said it, I I forced every key in every JSON to have an order using `collections. OrderedDict` in Python. Now, it works!\n\nTY"
] |
3,154,519,684 |
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
|
closed
|
### Related Issues/PRs
Fixes #6152
---
### What changes are proposed in this pull request?
This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.).
---
### Why this change?
Previously, when calling:
```python
load_dataset("audiofolder")
````
without specifying `data_dir` or `data_files`, the loader would silently fallback to the **current working directory**, leading to:
* Long loading times
* Unexpected behavior (e.g., scanning unrelated files)
This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method — keeping the logic localized to the specific builder instead of a generic loader function.
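A simplified sketch of where the check lives (illustrative subclass; the actual change is inside `FolderBasedBuilder._info()` and its error message may differ):
```python
import datasets


class IllustrativeFolderBuilder(datasets.GeneratorBasedBuilder):
    def _info(self):
        # Fail fast instead of silently falling back to the current working directory.
        if not self.config.data_dir and not self.config.data_files:
            raise ValueError(
                "At least one of `data_dir` or `data_files` must be specified "
                "when loading a folder-based dataset."
            )
        return datasets.DatasetInfo()
```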
---
### How is this PR tested?
* ✅ Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` → a `ValueError` is now raised early.
* ✅ Existing functionality (with valid input) remains unaffected.
---
### Does this PR require documentation update?
* [x] No
---
### Release Notes
#### Is this a user-facing change?
* [x] Yes
> Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory.
---
#### What component(s) does this PR affect?
* [x] `area/datasets`
* [x] `area/load`
---
<a name="release-note-category"></a>
#### How should the PR be classified?
* [x] `rn/bug-fix` - A user-facing bug fix
---
#### Should this be included in the next patch release?
* [x] Yes
| 2025-06-17T19:16:34 | 2025-06-18T14:18:41 | 2025-06-18T14:18:41 |
https://github.com/huggingface/datasets/pull/7623
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7623",
"html_url": "https://github.com/huggingface/datasets/pull/7623",
"diff_url": "https://github.com/huggingface/datasets/pull/7623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7623.patch",
"merged_at": "2025-06-18T14:18:41"
}
| 7,623 | true |
[
"@lhoestq Moved the logic to FolderBasedBuilder._info() as discussed in previous PR (#7618). Let me know if anything else is needed — happy to update!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7623). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,154,398,557 |
Guard against duplicate builder_kwargs/config_kwargs in load_dataset_…
|
open
|
…builder (#4910 )
### What does this PR do?
Fixes edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`.
### Implementation details
- Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` and `config_kwargs`
- Wrote a unit test in `tests/test_load_duplicate_keys.py` to verify the exception is raised correctly
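Illustratively, the guard boils down to something like this (not the exact code in `load_dataset_builder`):
```python
def check_no_duplicate_keys(builder_kwargs: dict, config_kwargs: dict) -> None:
    # Reject keys passed through both dicts instead of silently overriding one of them.
    duplicates = set(builder_kwargs) & set(config_kwargs)
    if duplicates:
        raise TypeError(
            f"Got duplicate keys in builder_kwargs and config_kwargs: {sorted(duplicates)}"
        )
```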
### Fixes
Closes #4910
### Reviewers
@zach-huggingface
@SunMarc
Would appreciate your review if you have time - thanks!
| 2025-06-17T18:28:35 | 2025-07-02T12:39:20 | null |
https://github.com/huggingface/datasets/pull/7622
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7622",
"html_url": "https://github.com/huggingface/datasets/pull/7622",
"diff_url": "https://github.com/huggingface/datasets/pull/7622.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7622.patch",
"merged_at": null
}
| 7,622 | true |
[
"Hi folks, this PR fixes the duplicate-kwargs edge case and includes a unit test. Would love a review when you have a moment!\r\n\r\n@zach-huggingface\r\n@SunMarc "
] |
3,153,780,963 |
minor docs data aug
|
closed
| null | 2025-06-17T14:46:57 | 2025-06-17T14:50:28 | 2025-06-17T14:47:11 |
https://github.com/huggingface/datasets/pull/7621
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7621",
"html_url": "https://github.com/huggingface/datasets/pull/7621",
"diff_url": "https://github.com/huggingface/datasets/pull/7621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7621.patch",
"merged_at": "2025-06-17T14:47:11"
}
| 7,621 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,153,565,183 |
Fixes in docs
|
closed
|
before release 4.0
(I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`)
| 2025-06-17T13:41:54 | 2025-06-17T13:58:26 | 2025-06-17T13:58:24 |
https://github.com/huggingface/datasets/pull/7620
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7620",
"html_url": "https://github.com/huggingface/datasets/pull/7620",
"diff_url": "https://github.com/huggingface/datasets/pull/7620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7620.patch",
"merged_at": "2025-06-17T13:58:24"
}
| 7,620 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7620). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,153,058,517 |
`from_list` fails while `from_generator` works for large datasets
|
open
|
### Describe the bug
I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.
### Steps to reproduce the bug
#### Snippet A (crashes)
```py
from tqdm.auto import tqdm
import numpy as np
import datasets
def data_generator():
    for i in tqdm(range(10_000_000)):
        length = np.random.randint(2048)
        series = np.random.rand(length)
        yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}
data_list = list(data_generator())
ds = datasets.Dataset.from_list(data_list)
```
The last line crashes with
```
ArrowInvalid: Value 2147483761 too large to fit in C integer type
```
#### Snippet B (works)
```py
from tqdm.auto import tqdm
import numpy as np
import datasets
def data_generator():
    for i in tqdm(range(10_000_000)):
        length = np.random.randint(2048)
        series = np.random.rand(length)
        yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}
ds = datasets.Dataset.from_generator(data_generator)
```
### Expected behavior
I expected both the approaches to work or to fail similarly.
### Environment info
```
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.32.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
| 2025-06-17T10:58:55 | 2025-06-29T16:34:44 | null |
https://github.com/huggingface/datasets/issues/7619
| null | 7,619 | false |
[
"@lhoestq any thoughts on this? ",
"Thanks for the report! This behavior is expected due to how `from_list()` and `from_generator()` differ internally.\n\n- `from_list()` builds the entire dataset in memory at once, which can easily exceed limits (especially with variable-length arrays or millions of rows). The Arrow error you're seeing (`Value too large to fit in C integer type`) is related to that memory overload.\n- `from_generator()` avoids this issue by batching and streaming data incrementally, which is much more memory-efficient.\n\nSo for large datasets like time series or NLP data with large arrays, `from_generator()` (or `datasets.IterableDataset`) is the recommended approach.\n\nHope this helps clarify the behavior — let me know if you'd like me to point to prior issues/discussions where similar tradeoffs came up!\n",
"@ArjunJagdale Yes, it is related to using large dataset but not in the way that you have described. As I understand, the problem here is that `datasets` does not use `LargeList` with 64-bit offsets from PyArrow when using `from_list`. However, with `from_generator` this seems to work okay, likely due to batching. As such, this is more like a bug than an expected outcome. If this is indeed \"expected\", `datasets` should fail more gracefully in these cases with a recommendation to use `from_generator`. ",
"Thanks for the clarification — you're absolutely right, this seems tied to the use of 32-bit list offsets in from_list() under the hood. That distinction between List and LargeList in PyArrow is a crucial one, and definitely worth highlighting in the docs or error message. Happy to help if a check or fallback to LargeList makes sense here."
] |
3,148,912,897 |
fix: raise error when folder-based datasets are loaded without data_dir or data_files
|
open
|
### Related Issues/PRs
<!-- Uncomment 'Resolve' if this PR can close the linked items. -->
<!-- Resolve --> #6152
---
### What changes are proposed in this pull request?
This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior.
**Before this fix**:
- When `data_dir` or `data_files` were not provided, the loader defaulted to the current working directory.
- This caused unexpected behavior like:
- Long loading times
- Scanning unintended local files
**Now**:
- If both `data_dir` and `data_files` are missing, a `ValueError` is raised early with a helpful message.
---
### How is this PR tested?
- [x] Manual test via `load_dataset("audiofolder")` with missing `data_dir`
- [ ] Existing unit tests (should not break any)
- [ ] New tests (if needed, maintainers can guide)
---
### Does this PR require documentation update?
- [x] No. You can skip the rest of this section.
---
### Release Notes
#### Is this a user-facing change?
- [x] Yes. Give a description of this change to be included in the release notes for users.
> Adds early error handling for folder-based datasets when neither `data_dir` nor `data_files` is specified, avoiding unintended resolution to the current directory.
#### What component(s), interfaces, languages, and integrations does this PR affect?
Components:
- [x] `area/datasets`
- [x] `area/load`
---
<a name="release-note-category"></a>
#### How should the PR be classified in the release notes? Choose one:
- [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
---
#### Should this PR be included in the next patch release?
- [x] Yes (this PR will be cherry-picked and included in the next patch release)
| 2025-06-16T07:43:59 | 2025-06-16T12:13:26 | null |
https://github.com/huggingface/datasets/pull/7618
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7618",
"html_url": "https://github.com/huggingface/datasets/pull/7618",
"diff_url": "https://github.com/huggingface/datasets/pull/7618.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7618.patch",
"merged_at": null
}
| 7,618 | true |
[
"Great ! Since this logic is specific to one builder class maybe this check can be in the class definition ? I think you can put it in FolderBasedBuilder's `_info()` method."
] |
3,148,102,085 |
Unwanted column padding in nested lists of dicts
|
closed
|
```python
from datasets import Dataset
dataset = Dataset.from_dict({
    "messages": [
        [
            {"a": "...",},
            {"b": "...",},
        ],
    ]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '...'}, {'b': '...'}]}
```
Is there an easy way to automatically remove these auto-filled null/none values?
If not, I probably need a recursive none exclusion function, don't I?
Datasets 3.6.0
| 2025-06-15T22:06:17 | 2025-06-16T13:43:31 | 2025-06-16T13:43:31 |
https://github.com/huggingface/datasets/issues/7617
| null | 7,617 | false |
[
"Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(example):\n if isinstance(example, list):\n return [remove_padding(value) if isinstance(value, (dict, list)) else value for value in example]\n elif isinstance(example, Mapping):\n return {\n key: remove_padding(value) if isinstance(value, (dict, list)) else value\n for key, value in example.items()\n if value is not None\n }\n else:\n raise TypeError(\"Input must be a list or a dictionary.\")\n\n# Example:\nexample = next(iter(dataset))\nexample = remove_padding(example)\n```"
] |
3,144,506,665 |
Torchcodec decoding
|
closed
|
Closes #7607
## New signatures
### Audio
```python
Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None)
Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict
Audio.decode_example(self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None) -> "AudioDecoder":
```
### Video
```python
Video(decode: bool = True, stream_index: Optional[int] = None, dimension_order: Literal['NCHW', 'NHWC'] = 'NCHW', num_ffmpeg_threads: int = 1, device: Optional[Union[str, "torch.device"]] = 'cpu', seek_mode: Literal['exact', 'approximate'] = 'exact')
Video.encode_example(self, value: Union[str, bytes, bytearray, Example, np.ndarray, "VideoDecoder"]) -> Example:
Video.decode_example(self, value: Union[str, Example], token_per_repo_id: Optional[dict[str, Union[bool, str]]] = None, ) -> "VideoDecoder":
```
## Notes
Audio features constructor takes in 1 new optional param stream_index which is passed to the AudioDecoder constructor to select the stream index of a file.
Audio feature can now take in torchcodec.decoders.AudioDecoder as input to encode_example()
Audio feature decode_example() returns torchcodec.decoders.AudioDecoder
Video feature constructor takes in 5 new optional params stream_index, dimension_order, num_ffmpeg_threads, device, seek_mode all of which are passed to VideoDecoder constructor
Video feature decode_example() returns torchcodec.decoders.VideoDecoder
Video feature can now take in torchcodec.decoders.VideoDecoder as input to encode_example()
All test cases have been updated to reflect these changes
All documentation has also been updated to reflect these changes.
Both VideoDecoder and AudioDecoder when formatted with (np_formatter, tf_formatter, etc) will ignore the type and return themselves. Formatting test cases were updated accordingly to reflect this. (Pretty simple to make this not the case if we want though)
## Errors
This test case from `tests/packaged_modules/test_audiofolder.py`
```python
@require_librosa
@require_sndfile
@pytest.mark.parametrize("streaming", [False, True])
def test_data_files_with_metadata_and_archives(streaming, cache_dir, data_files_with_zip_archives):
    audiofolder = AudioFolder(data_files=data_files_with_zip_archives, cache_dir=cache_dir)
    audiofolder.download_and_prepare()
    datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset()
    for split, data_files in data_files_with_zip_archives.items():
        num_of_archives = len(data_files)  # the metadata file is inside the archive
        expected_num_of_audios = 2 * num_of_archives
        assert split in datasets
        dataset = list(datasets[split])
        assert len(dataset) == expected_num_of_audios
        # make sure each sample has its own audio (all arrays are different) and metadata
        assert (
            sum(np.array_equal(dataset[0]["audio"].get_all_samples().data.numpy(), example["audio"].get_all_samples().data.numpy()) for example in dataset[1:])
            == 0
        )
        assert len({example["text"] for example in dataset}) == expected_num_of_audios
        assert all(example["text"] is not None for example in dataset)
```
This now fails because AudioDecoder needs to access the files after the lines below are run, but there seems to be a context issue: the file the decoder is trying to read is closed before the decoder gets the chance to decode it.
```python
audiofolder.download_and_prepare()
datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset()
```
| 2025-06-13T19:06:07 | 2025-06-19T18:25:49 | 2025-06-19T18:25:49 |
https://github.com/huggingface/datasets/pull/7616
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7616",
"html_url": "https://github.com/huggingface/datasets/pull/7616",
"diff_url": "https://github.com/huggingface/datasets/pull/7616.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7616.patch",
"merged_at": "2025-06-19T18:25:48"
}
| 7,616 | true |
[
"@lhoestq any updates on when this will be merged? Let me know if theres anything you need from my end.",
"Btw I plan to release `datasets` 4.0 after your PR, this will be a major milestone :)",
"@lhoestq just pushed the new changes.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Great ! I took the liberty to move the AudioDecoder to its own file and make small edits in the docs and docstrings\r\n\r\nIf it looks good to you I think we can merge :)"
] |
3,143,443,498 |
remove unused code
|
closed
| null | 2025-06-13T12:37:30 | 2025-06-13T12:39:59 | 2025-06-13T12:37:40 |
https://github.com/huggingface/datasets/pull/7615
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7615",
"html_url": "https://github.com/huggingface/datasets/pull/7615",
"diff_url": "https://github.com/huggingface/datasets/pull/7615.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7615.patch",
"merged_at": "2025-06-13T12:37:40"
}
| 7,615 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7615). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,143,381,638 |
Lazy column
|
closed
|
Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset`, cc @TopCoder2K FYI
e.g. `ds[col]` now returns a lazy Column instead of a list
This way calling `ds[col][idx]` only loads the required data in memory
(bonus: also supports subfields access with `ds[col][subcol][idx]`)
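Example of the new access pattern (sketch based on the description above):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "meta": [{"id": 1}, {"id": 2}, {"id": 3}]})

col = ds["text"]            # lazy Column instead of a plain list
print(col[1])               # only row 1 is loaded -> "b"
print(ds["meta"]["id"][2])  # subfield access -> 3
```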
the breaking change will be for the next major release, which also includes removal of dataset scripts support
close https://github.com/huggingface/datasets/issues/4180
| 2025-06-13T12:12:57 | 2025-06-17T13:08:51 | 2025-06-17T13:08:49 |
https://github.com/huggingface/datasets/pull/7614
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7614",
"html_url": "https://github.com/huggingface/datasets/pull/7614",
"diff_url": "https://github.com/huggingface/datasets/pull/7614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7614.patch",
"merged_at": "2025-06-17T13:08:49"
}
| 7,614 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7614). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,142,819,991 |
fix parallel push_to_hub in dataset_dict
|
closed
| null | 2025-06-13T09:02:24 | 2025-06-13T12:30:23 | 2025-06-13T12:30:22 |
https://github.com/huggingface/datasets/pull/7613
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7613",
"html_url": "https://github.com/huggingface/datasets/pull/7613",
"diff_url": "https://github.com/huggingface/datasets/pull/7613.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7613.patch",
"merged_at": "2025-06-13T12:30:22"
}
| 7,613 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7613). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,141,905,049 |
Provide an option of robust dataset iterator with error handling
|
open
|
### Feature request
Adding an option to skip corrupted data samples. Currently, datasets throws an error if a data sample is corrupted, so that the user is made aware of and can handle the corruption. When I tried to try-catch the error at the user level, the iterator raised StopIteration when I called next() again.
The way I try to do error handling is: (This doesn't work, unfortunately)
```python
import numpy as np
from PIL import Image

from datasets import load_dataset

errors, successful = 0, 0

# Load the dataset with streaming enabled
dataset = load_dataset(
    "pixparse/cc12m-wds", split="train", streaming=True
)
# Get an iterator from the dataset
iterator = iter(dataset)
while True:
    try:
        # Try to get the next example
        example = next(iterator)
        # Try to access and process the image
        image = example["jpg"]
        pil_image = Image.fromarray(np.array(image))
        pil_image.verify()  # Verify it's a valid image file
    except StopIteration:  # Code path 1
        print("\nStopIteration was raised! Reach the end of dataset")
        raise StopIteration
    except Exception as e:  # Code path 2
        errors += 1
        print("Error! Skip this sample")
        continue
    else:
        successful += 1
```
This is because the `IterableDataset` already throws an error (reaching Code path 2). And if I keep calling next(), it will hit Code path 1. This is because the inner iterator of `IterableDataset` ([code](https://github.com/huggingface/datasets/blob/89bd1f971402acb62805ef110bc1059c38b1c8c6/src/datasets/iterable_dataset.py#L2242)) has been stopped, so calling next() on it will raise StopIteration.
So I can not skip the corrupted data sample in this way. Would also love to hear any suggestions about creating a robust dataloader.
Thanks for your help in advance!
### Motivation
## Public dataset corruption might be common
A lot of users rely on public datasets, and a public dataset might contain some corrupted data, especially datasets with images / videos etc. I totally understand it's the dataset owner's and user's responsibility to ensure data integrity and run data cleaning or preprocessing, but a robust iterator would make life easier for developers who use the dataset.
## Use cases
For example, a robust dataloader would help users who want to run quick tests on different datasets and choose the one that fits their needs. Users could then rely on an `IterableDataset` with `streaming=True` to use the dataset easily, without first downloading it and removing corrupted data samples from it.
### Your contribution
The error handling might not be trivial and might need more careful design.
| 2025-06-13T00:40:48 | 2025-06-24T16:52:30 | null |
https://github.com/huggingface/datasets/issues/7612
| null | 7,612 | false |
[
"Hi ! Maybe we can add a parameter to the Image() type to make it to return `None` instead of raising an error in case of corruption ? Would that help ?",
"Hi! 👋🏼 I just opened PR [#7638](https://github.com/huggingface/datasets/pull/7638) to address this issue.\n\n### 🔧 What it does:\nIt adds an `ignore_decode_errors` flag to the `Image` feature. When set to `True`, corrupted image samples will be skipped (with a warning), and `None` will be returned instead of raising an exception.\n\nThis allows users to stream datasets that may contain some invalid images without breaking the iteration loop:\n\n```python\nfeatures = Features({\n \"image\": Image(decode=True, ignore_decode_errors=True)\n})\n````\n\n### 🧩 Why this helps:\n\n* Prevents full iteration breakdown during `.streaming=True` usage\n* Enables downstream tooling like Flux (see [[Flux#1290](https://github.com/pytorch/torchtitan/pull/1290)](https://github.com/pytorch/torchtitan/pull/1290)) to implement robust loaders now that `datasets` supports graceful handling\n* Keeps current behavior unchanged unless explicitly opted-in\n\nLet me know if you'd like me to follow up with test coverage or additional enhancements!\n\ncc @lhoestq "
] |
3,141,383,940 |
Code example for dataset.add_column() does not reflect correct way to use function
|
closed
|
https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10
The example seems to suggest that dataset.add_column() can add a column in place; however, this is wrong -- it cannot. It returns a new dataset with the column added to it.
| 2025-06-12T19:42:29 | 2025-07-17T13:14:18 | 2025-07-17T13:14:18 |
https://github.com/huggingface/datasets/issues/7611
| null | 7,611 | false |
[
"Hi @shaily99 \n\nThanks for pointing this out — you're absolutely right!\n\nThe current example in the docstring for add_column() implies in-place modification, which is misleading since add_column() actually returns a new dataset.",
"#self-assign\n"
] |
3,141,281,560 |
i cant confirm email
|
open
|
### Describe the bug
This is difficult: I can't confirm my email because I'm not getting any email!
I can't post on the forum because I can't confirm my email!
I can't contact the help desk because... it doesn't exist on the web page.
paragraph 44
### Steps to reproduce the bug
rthjrtrt
### Expected behavior
ewtgfwetgf
### Environment info
sdgfswdegfwe
| 2025-06-12T18:58:49 | 2025-06-27T14:36:47 | null |
https://github.com/huggingface/datasets/issues/7610
| null | 7,610 | false |
[
"Will you please clarify the issue by some screenshots or more in-depth explanation?",
"\nThis is clarify answer. I have not received a letter.\n\n**The graphic at the top shows how I don't get any letter. Can you show in a clear way how you don't get a letter from me?**"
] |
3,140,373,128 |
Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab`
|
closed
|
Not 100% sure about this one, but it seems to be recommended.
```
/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
```
Tests pass locally. And the warning is gone with this change.
https://peps.python.org/pep-0626/#backwards-compatibility
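For context, a minimal sketch of the kind of version-dependent attribute access involved (illustrative only; the actual change is in `src/datasets/utils/_dill.py`):
```python
import sys
import types


def code_line_info(code: types.CodeType) -> bytes:
    # Python 3.10+ exposes co_linetable; older versions only provide co_lnotab.
    if sys.version_info >= (3, 10):
        return code.co_linetable
    return code.co_lnotab


print(len(code_line_info(code_line_info.__code__)))
```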
| 2025-06-12T13:47:01 | 2025-06-16T12:14:10 | 2025-06-16T12:14:08 |
https://github.com/huggingface/datasets/pull/7609
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7609",
"html_url": "https://github.com/huggingface/datasets/pull/7609",
"diff_url": "https://github.com/huggingface/datasets/pull/7609.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7609.patch",
"merged_at": "2025-06-16T12:14:08"
}
| 7,609 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7609). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"not 100% sure either, I tried removing unnecessary checks - let me know if they sound good to you otherwise I'll revert",
"I can't reproduce the warning anymore... 🤦🏻♂️\r\n",
"Ah now I can reproduce!, and I can confirm that the warning is gone when you apply the change in this PR"
] |
3,137,564,259 |
Tests typing and fixes for push_to_hub
|
closed
|
todo:
- [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc
| 2025-06-11T17:13:52 | 2025-06-12T21:15:23 | 2025-06-12T21:15:21 |
https://github.com/huggingface/datasets/pull/7608
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7608",
"html_url": "https://github.com/huggingface/datasets/pull/7608",
"diff_url": "https://github.com/huggingface/datasets/pull/7608.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7608.patch",
"merged_at": "2025-06-12T21:15:21"
}
| 7,608 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,135,722,560 |
Video and audio decoding with torchcodec
|
closed
|
### Feature request
Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video.
### Motivation
My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extract the audio tensors directly from MP4 files. Also, I can easily resample video data to whatever fps I like on the fly. I haven't found an easy/efficient way to do this with torchvision.
### Your contribution
I’m modifying the Video dataclass to use torchcodec in place of the current backend, starting from a stable commit for a project I’m working on. If it ends up working well, I’m happy to open a PR on main.
| 2025-06-11T07:02:30 | 2025-06-19T18:25:49 | 2025-06-19T18:25:49 |
https://github.com/huggingface/datasets/issues/7607
| null | 7,607 | false |
[
"Good idea ! let me know if you have any question or if I can help",
"@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a VideoReader. However, according to the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.with_format) its supposed to be the type passed into `with_format` (numpy in this case). My implementation with VideoDecoder currently does the latter, is that correct, or should it be a VideoDecoder object instead?\n```\n@require_torchvision\ndef test_dataset_with_video_map_and_formatted(shared_datadir):\n from torchvision.io import VideoReader\n\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path]}\n features = Features({\"video\": Video()})\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n # from bytes\n with open(video_path, \"rb\") as f:\n data = {\"video\": [f.read()]}\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n```",
"Hi ! It's maybe more convenient for users to always have a VideoDecoder, since they might only access a few frames and not the full video. So IMO it's fine to always return a VideoDecoder (maybe later we can extend the VideoDecoder to return other types of tensors than numpy arrays though ? 👀 it's not crucial for now though)",
"@lhoestq ya that makes sense, looks like this functionality lives in `src/datasets/formatting`, where an exception is made for VideoReader objects to remain as themselves when being formatted. I'll make the necessary changes. ",
"@lhoestq I'm assuming this was also the case for torchaudio objects?",
"We're not using torchaudio but soundfile. But anyway we unfortunately decode full audio files instead of returning a Reader and it can be interesting to fix this. Currently it always returns a dict {\"array\": np.array(...), \"sampling_rate\": int(...)}, while it would be cool to return a reader with seek() and read() - like methods as for videos.\n\n(there is a way to make the audio change backward compatible anyway by allowing `reader[\"array\"]` to return the full array)",
"@lhoestq (sorry for the spam btw)\nLooks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\nThis is from `/src/datasets/formatting/np_formatter.py` line 70\n```\nif config.TORCHVISION_AVAILABLE and \"torchvision\" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to np arrays ?\n```",
"Oh cool ya this is something that I could implement with torchcodec. I can add that to the PR as well.",
"> Looks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\n\nyea that was me, I focused on a simple logic to start with, since I knew there was torchcodec coming and maybe wasn't worth it at the time ^^\n\nbut anyway it's fine to start with a logic without formatting to start with and then iterate",
"Hey @lhoestq I ran into an error with this test case for the Audio feature\n\n```\n@require_sndfile\n@require_torchcodec\ndef test_dataset_with_audio_feature_map_is_decoded(shared_datadir):\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data = {\"audio\": [audio_path], \"text\": [\"Hello\"]}\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n sample_rate = example[\"audio\"].get_all_samples().sample_rate\n example[\"double_sampling_rate\"] = 2 * sample_rate\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n\n def process_audio_sampling_rate_by_batch(batch):\n double_sampling_rates = []\n for audio in batch[\"audio\"]:\n double_sampling_rates.append(2 * audio.get_all_samples().sample_rate)\n batch[\"double_sampling_rate\"] = double_sampling_rates\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n```\n\nthis is the error below\n```\nsrc/datasets/arrow_writer.py:626: in write_batch\n arrays.append(pa.array(typed_sequence))\n.....\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded - pyarrow.lib.ArrowInvalid: Could not convert <torchcodec.decoders._audio_decoder.AudioDecoder object at 0x138cdd810> with type AudioDecoder: did not recognize Python value type when inferring an Arrow data type\n```\n\nBy the way I copied the test case and ran it on the original implementation of the Video feature, which uses the torchvision backend and I got a similar error.\n```\ndef test_dataset_with_video_feature_map_is_decoded(shared_datadir):\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path], \"text\": [\"Hello\"]}\n features = Features({\"video\": Video(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n metadata = example[\"video\"].get_metadata()\n example[\"double_fps\"] = 2 * metadata[\"video\"][\"fps\"][0]\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past 2*10 is made up!! shouldn't pass\n\n def process_audio_sampling_rate_by_batch(batch):\n double_fps = []\n for video in batch[\"video\"]:\n double_fps.append(2 * video.metadata.begin_stream_seconds)\n batch[\"double_fps\"] = double_fps\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past this no reason it should\n```\n\nI was wondering if these error's are expected. 
They seem to be coming from the fact that the function `_cast_to_python_objects` in `src/datasets/features/features.py` doesn't handle VideoDecoders or AudioDecoders. I was able to fix it and get rid of the error by adding this to the bottom of the function\n```\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, VideoDecoder):\n v = Video()\n return v.encode_example(obj), True\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, AudioDecoder):\n a = Audio()\n return a.encode_example(obj), True\n```\nThis fixed it, but I just want to make sure I'm not adding things that are messing up the intended functionality.",
"This is the right fix ! :)",
"Btw I just remembered that we were using soundfile because it can support a wide range of audio formats, is it also the case for torchcodec ? including ogg, opus for example",
"Yes from what I understand torchcodec supports everything ffmpeg supports.",
"Okay just finished. However, I wasn't able to pass this test case:\n```python\n@require_torchcodec\n@require_sndfile\[email protected](\"streaming\", [False, True])\ndef test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n item = dset[0] if not streaming else next(iter(dset))\n assert item.keys() == {\"audio\", \"text\"}\n assert isinstance(item[\"audio\"], AudioDecoder)\n samples = item[\"audio\"].get_all_samples()\n assert samples.sample_rate == 44100\n assert samples.data.shape == (1, 202311)\n```\n\nIt returned this error\n```\nstreaming = False, jsonl_audio_dataset_path = '/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/data2/audio_dataset.jsonl'\nshared_datadir = PosixPath('/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/test_load_dataset_with_audio_f0/data')\n\n @require_torchcodec\n @require_sndfile\n @pytest.mark.parametrize(\"streaming\", [False, True])\n def test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n> dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n\ntests/features/test_audio.py:686: \n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\nsrc/datasets/load.py:1418: in load_dataset\n builder_instance.download_and_prepare(\nsrc/datasets/builder.py:925: in download_and_prepare\n self._download_and_prepare(\nsrc/datasets/builder.py:1019: in _download_and_prepare\n verify_splits(self.info.splits, split_dict)\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nexpected_splits = {'train': SplitInfo(name='train', num_bytes=2351563, num_examples=10000, shard_lengths=None, dataset_name=None), 'validation': SplitInfo(name='validation', num_bytes=238418, num_examples=1000, shard_lengths=None, dataset_name=None)}\nrecorded_splits = {'train': SplitInfo(name='train', num_bytes=167, num_examples=1, shard_lengths=None, dataset_name='json')}\n\n def verify_splits(expected_splits: Optional[dict], recorded_splits: dict):\n if expected_splits is None:\n logger.info(\"Unable to verify splits sizes.\")\n return\n if len(set(expected_splits) - set(recorded_splits)) > 0:\n> raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))\nE datasets.exceptions.ExpectedMoreSplitsError: {'validation'}\n\nsrc/datasets/utils/info_utils.py:68: ExpectedMoreSplitsError\n```\n\nIt looks like this test case wasn't passing when I forked the repo, so I assume I didn't do anything to break it. I also added this case to `test_video.py`, and it fails there as well. If this looks good, I'll go ahead and submit the PR.",
"Awesome ! yes feel free to submit the PR, I can see what I can do for the remaining tests",
"@lhoestq just submitted it #7616 "
] |
3,133,848,546 |
Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset)
|
closed
| null | 2025-06-10T14:35:10 | 2025-06-11T16:47:28 | 2025-06-11T16:47:25 |
https://github.com/huggingface/datasets/pull/7606
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7606",
"html_url": "https://github.com/huggingface/datasets/pull/7606",
"diff_url": "https://github.com/huggingface/datasets/pull/7606.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7606.patch",
"merged_at": "2025-06-11T16:47:25"
}
| 7,606 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,131,636,882 |
Make `push_to_hub` atomic (#7600)
|
closed
| null | 2025-06-09T22:29:38 | 2025-06-23T19:32:08 | 2025-06-23T19:32:08 |
https://github.com/huggingface/datasets/pull/7605
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7605",
"html_url": "https://github.com/huggingface/datasets/pull/7605",
"diff_url": "https://github.com/huggingface/datasets/pull/7605.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7605.patch",
"merged_at": null
}
| 7,605 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7605). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! unfortunately we can't allow atomic commits for commits with hundreds of files additions (HF would time out)\r\n\r\nMaybe an alternative would be to retry if there was a commit in between ? this could be the default behavior as well",
"Thanks for taking a look – much appreciated!\r\n\r\nI've verified that commits with up to 20,000 files don't time out and the commit time scales linearly with the number of operations enqueued. It took just under 2 minutes to complete (successfully) the 20k file commit.\r\n\r\nThe fundamental issue I'm trying to tackle here is dataset corruption: getting into a state where a dataset on the hub cannot be used when downloaded. Non-atomic commits won't get us there, I think. If, for example, 3 of 5 commits complete and the machine/process calling `push_to_hub` has a network, hardware, or other failure that prevents it from completing the rest of the commits (even with retries) we'll now have some pointer files pointing to the new data and others pointing to the old data => corrupted. While this may seem like an unlikely scenario, it's a regular occurrence at scale.\r\n\r\nIf you still feel strongly that atomic commits are not the right way to go, I can either set it to not be the default or remove it entirely from this PR.\r\n\r\nAs for retries, it's a good idea. In a non-atomic world, the logic gets more complicated:\r\n- keep an explicit queue of pending add/delete operations\r\n- chunkwise pop from queue and commit with `parent_commit` set to previous chunked commit hash\r\n- if `create_commit` fails:\r\n - re-fetch README and set `parent_commit` to latest hash for `revision`\r\n - re-generate dataset card content\r\n - swap old `CommitOperationAdd` with new one for README in the pending queue\r\n- resume chunkwise committing from the queue as above\r\n\r\nEntirely doable, but more involved than I signed up for with this PR.",
"Just to clarify – setting the `parent_commit` can be separated from making the commit atomic (which is what I'm suggesting by either atomic commits not the default or removing it from this PR). It's crucial to set the parent commit to avoid the read-modify-write race condition on the README schema."
] |
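A rough sketch of the chunked, retry-on-conflict commit loop outlined in the last comment above, assuming `huggingface_hub`'s `HfApi`; the repo name, chunk size, and retry policy are illustrative, not the actual `datasets` or PR code.

```python
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
repo_id = "my-org/my-dataset"  # hypothetical
# Pending operations built elsewhere (data shards first, README last).
queue = [CommitOperationAdd(path_in_repo="data/train-00000.parquet",
                            path_or_fileobj="train-00000.parquet")]

parent = api.repo_info(repo_id, repo_type="dataset").sha
chunk_size = 100
while queue:
    chunk, queue = queue[:chunk_size], queue[chunk_size:]
    for attempt in range(5):  # bounded retries per chunk
        try:
            info = api.create_commit(
                repo_id=repo_id,
                repo_type="dataset",
                operations=chunk,
                commit_message="push_to_hub (chunked)",
                parent_commit=parent,  # never build on a base we haven't seen
            )
            parent = info.oid  # chain the next chunk onto the commit we just made
            break
        except Exception:
            # Another writer pushed in between: refresh the base and retry this chunk.
            # (A real implementation would also regenerate the README/dataset card here.)
            parent = api.repo_info(repo_id, repo_type="dataset").sha
```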
3,130,837,169 |
Docs and more methods for IterableDataset: push_to_hub, to_parquet...
|
closed
|
to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list
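A hedged usage sketch of the streaming export helpers listed above (method names taken from this PR's description; exact signatures may differ slightly from the merged docs):

```python
from datasets import load_dataset

ids = load_dataset("rajpurkar/squad", split="train", streaming=True)
sample = ids.take(100)                 # keep the example cheap: 100 streamed rows
df = sample.to_pandas()                # materialize the slice as a pandas DataFrame
sample.to_json("squad_sample.jsonl")   # or write it out as JSON Lines
```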
| 2025-06-09T16:44:40 | 2025-06-10T13:15:23 | 2025-06-10T13:15:21 |
https://github.com/huggingface/datasets/pull/7604
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7604",
"html_url": "https://github.com/huggingface/datasets/pull/7604",
"diff_url": "https://github.com/huggingface/datasets/pull/7604.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7604.patch",
"merged_at": "2025-06-10T13:15:21"
}
| 7,604 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7604). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,130,394,563 |
No TF in win tests
|
closed
| null | 2025-06-09T13:56:34 | 2025-06-09T15:33:31 | 2025-06-09T15:33:30 |
https://github.com/huggingface/datasets/pull/7603
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7603",
"html_url": "https://github.com/huggingface/datasets/pull/7603",
"diff_url": "https://github.com/huggingface/datasets/pull/7603.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7603.patch",
"merged_at": "2025-06-09T15:33:30"
}
| 7,603 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7603). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,128,758,924 |
Enhance error handling and input validation across multiple modules
|
open
|
This PR improves the robustness and user experience by:
1. **Audio Module**:
- Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding
2. **DatasetDict**:
- Enhanced key access error messages to show available splits when an invalid key is accessed
3. **NonMutableDict**:
- Added input validation for the update() method to ensure proper mapping types
4. **Arrow Reader**:
- Improved error messages for small dataset percentage splits with suggestions for alternatives
5. **FaissIndex**:
- Strengthened input validation with descriptive error messages
- Added proper type checking and shape validation for search queries
These changes make the code more maintainable and user-friendly by providing actionable feedback when issues arise.
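A hypothetical illustration of the friendlier key-access error described in point 2; the class below is a minimal stand-in, not this PR's actual diff.

```python
class SplitDict(dict):
    """Minimal stand-in for DatasetDict showing an error that lists the available splits."""

    def __getitem__(self, key):
        try:
            return super().__getitem__(key)
        except KeyError:
            raise KeyError(
                f"Split '{key}' not found. Available splits: {sorted(self.keys())}"
            ) from None


ds = SplitDict(train=[0, 1, 2])
try:
    ds["test"]
except KeyError as err:
    print(err)  # Split 'test' not found. Available splits: ['train']
```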
| 2025-06-08T23:01:06 | 2025-06-08T23:01:06 | null |
https://github.com/huggingface/datasets/pull/7602
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7602",
"html_url": "https://github.com/huggingface/datasets/pull/7602",
"diff_url": "https://github.com/huggingface/datasets/pull/7602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7602.patch",
"merged_at": null
}
| 7,602 | true |
[] |
3,127,296,182 |
`push_to_hub` is not concurrency safe (dataset schema corruption)
|
closed
|
### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`)
- each process calls `push_to_hub` on their particular config when they're done processing
- all calls to `push_to_hub` succeed
- the `README.md` now has some configs with `new_col` added and some with `new_col` missing
Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising).
We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand.
Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded.
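A minimal sketch of the non-forced push described above, assuming `huggingface_hub`'s `HfApi`; the repo name is hypothetical and this is not the actual `datasets` implementation.

```python
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
repo_id = "my-org/my-dataset"  # hypothetical

# Remember which commit the dataset card / schema was read from...
base_sha = api.repo_info(repo_id, repo_type="dataset").sha

# ... modify README.md locally, then push it pinned to that base commit.
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="README.md")],
    commit_message="Update schema for one config",
    parent_commit=base_sha,  # rejected instead of force-overwritten if someone pushed in between
)
```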
### Steps to reproduce the bug
See above.
### Expected behavior
Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.2
- `fsspec` version: 2023.9.0
| 2025-06-07T17:28:56 | 2025-06-23T19:36:37 | 2025-06-23T19:36:37 |
https://github.com/huggingface/datasets/issues/7600
| null | 7,600 | false |
[
"@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.",
"Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)",
"Dropping this due to inactivity; we've implemented push_to_hub outside of HF datasets that's concurrency safe. Feel free to use the code I provided as a starting point if there's still interest in addressing this issue."
] |
3,125,620,119 |
My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl
|
closed
|
### Describe the bug
Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Although nothing in the dataset repository has been modified, the Dataset viewer no longer renders the metadata.jsonl annotations, nor is the metadata downloaded when using load_dataset. Can you please help? Thank you in advance.
### Steps to reproduce the bug
from datasets import load_dataset
ds = load_dataset("PRAIG/SMB")
ds = ds["train"]
### Expected behavior
It is expected to have all the metadata available in the jsonl file. Fields like: "score_id", "original_width", "original_height", "regions"... among others.
### Environment info
datasets==3.6.0, python 3.13.3 (but the problem is already present on the Hugging Face dataset page)
| 2025-06-06T18:59:00 | 2025-06-16T15:18:00 | 2025-06-16T15:18:00 |
https://github.com/huggingface/datasets/issues/7599
| null | 7,599 | false |
[
"Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This makes my dataset viewer to only load the images without the labeling of metadata.jsonl.\n\nThanks",
"Hi ! this is because we now expect the metadata file to be inside the directory named after the split \"train\" (this way each split can have its own metadata and can be loaded independently)\n\nYou can fix that by configuring it explicitly in the dataset's README.md header:\n\n```yaml\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path:\n - \"train/**/*.png\"\n - \"metadata.jsonl\"\n```\n\n(or by moving the metadata.jsonl in train/ but in this case you also have to modify the content of the JSONL to fix the relative paths to the images)",
"Thank you very much, dataset viewer is already working as expected!!"
] |
3,125,184,457 |
fix string_to_dict usage for windows
|
closed
| null | 2025-06-06T15:54:29 | 2025-06-06T16:12:22 | 2025-06-06T16:12:21 |
https://github.com/huggingface/datasets/pull/7598
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7598",
"html_url": "https://github.com/huggingface/datasets/pull/7598",
"diff_url": "https://github.com/huggingface/datasets/pull/7598.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7598.patch",
"merged_at": "2025-06-06T16:12:21"
}
| 7,598 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7598). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,123,962,709 |
Download datasets from a private hub in 2025
|
closed
|
### Feature request
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted.
This issue was raised before here: https://github.com/huggingface/datasets/issues/3679
@juliensimon
### Motivation
none
### Your contribution
none
| 2025-06-06T07:55:19 | 2025-06-13T13:46:00 | 2025-06-13T13:46:00 |
https://github.com/huggingface/datasets/issues/7597
| null | 7,597 | false |
[
"Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github.com/huggingface/transformers/issues/38634)",
"Thank you @lhoestq. Works as described!"
] |
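A small sketch of the `HF_ENDPOINT` approach mentioned in the thread above, with a hypothetical private hub URL; the variable has to be set before `datasets`/`huggingface_hub` are imported (or exported in the shell).

```python
import os

# Point the Hugging Face libraries at a private hub deployment (hypothetical URL).
os.environ["HF_ENDPOINT"] = "https://hub.example-corp.internal"

from datasets import load_dataset  # imported after setting the endpoint on purpose

ds = load_dataset("team/private-dataset", split="train")  # hypothetical repo on the private hub
```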
3,122,595,042 |
Add albumentations to use dataset
|
closed
|
1. Fixed a broken link to the list of transforms in torchvision.
2. Extended the section about video/image augmentations with an example from Albumentations.
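A short, hedged sketch of wiring an Albumentations pipeline into a `datasets` image dataset via `with_transform`, in the spirit of the docs example this PR adds (the exact snippet in the docs may differ):

```python
import albumentations as A
import numpy as np
from datasets import load_dataset

ds = load_dataset("uoft-cs/cifar10", split="train")

pipeline = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

def augment(batch):
    # Albumentations takes numpy arrays and returns a dict holding the augmented "image".
    batch["img"] = [pipeline(image=np.array(img))["image"] for img in batch["img"]]
    return batch

ds = ds.with_transform(augment)  # applied on the fly whenever rows are accessed
```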
| 2025-06-05T20:39:46 | 2025-06-17T18:38:08 | 2025-06-17T14:44:30 |
https://github.com/huggingface/datasets/pull/7596
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7596",
"html_url": "https://github.com/huggingface/datasets/pull/7596",
"diff_url": "https://github.com/huggingface/datasets/pull/7596.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7596.patch",
"merged_at": "2025-06-17T14:44:30"
}
| 7,596 | true |
[
"@lhoestq ping",
"@lhoestq ping",
"@lhoestq Thanks. Cleaned up torchvision."
] |
3,121,689,436 |
Add `IterableDataset.push_to_hub()`
|
closed
|
Basic implementation, which writes one shard per input dataset shard.
This is to be improved later.
Close https://github.com/huggingface/datasets/issues/5665
PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_hub(...)`
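A hedged usage sketch of the new method, with hypothetical repo names; as described above, each input shard becomes one uploaded shard.

```python
from datasets import load_dataset

# Stream, transform lazily, and upload without materializing the whole dataset locally.
ids = load_dataset("my-org/source-dataset", split="train", streaming=True)  # hypothetical repo
ids = ids.map(lambda ex: {"text": ex["text"].lower()})
ids.push_to_hub("my-org/processed-dataset")  # hypothetical target repo
```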
| 2025-06-05T15:29:32 | 2025-06-06T16:12:37 | 2025-06-06T16:12:36 |
https://github.com/huggingface/datasets/pull/7595
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7595",
"html_url": "https://github.com/huggingface/datasets/pull/7595",
"diff_url": "https://github.com/huggingface/datasets/pull/7595.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7595.patch",
"merged_at": "2025-06-06T16:12:36"
}
| 7,595 | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |