Dataset Viewer
url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | type (null) | active_lock_reason (null) | sub_issues_summary (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7662/comments | https://api.github.com/repos/huggingface/datasets/issues/7662/events | https://github.com/huggingface/datasets/issues/7662 | 3,190,805,531 | I_kwDODunzps6-L9Qb | 7,662 | Applying map after transform with multiprocessing will cause OOM | {
"login": "JunjieLl",
"id": 26482910,
"node_id": "MDQ6VXNlcjI2NDgyOTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/26482910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunjieLl",
"html_url": "https://github.com/JunjieLl",
"followers_url": "https://api.github.com/users/JunjieLl/followers",
"following_url": "https://api.github.com/users/JunjieLl/following{/other_user}",
"gists_url": "https://api.github.com/users/JunjieLl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunjieLl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunjieLl/subscriptions",
"organizations_url": "https://api.github.com/users/JunjieLl/orgs",
"repos_url": "https://api.github.com/users/JunjieLl/repos",
"events_url": "https://api.github.com/users/JunjieLl/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunjieLl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-07-01T05:45:57 | 2025-07-01T05:45:57 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I found that the OOM is caused at this point, and I suspect it’s because the add_column and cast_column operations are not cached, which causes the entire dataset to be loaded in each subprocess, leading to the OOM. The critical line of code is: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/py_utils.py#L607
Note that num_process=1 does not cause OOM. I'm confused.
### Steps to reproduce the bug
To reproduce, load the amphion/Emilia-Dataset dataset with cache_dir set (for caching); it is a very large dataset that does not fit in RAM.
Then apply map with multiprocessing after a transform operation (e.g. add_column, cast_column).
As long as num_process>1, it causes OOM.
### Expected behavior
It should not cause OOM.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.10.134-16.101.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7662/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7661/comments | https://api.github.com/repos/huggingface/datasets/issues/7661/events | https://github.com/huggingface/datasets/pull/7661 | 3,190,408,237 | PR_kwDODunzps6czBDi | 7,661 | fix del tqdm lock error | {
"login": "Hypothesis-Z",
"id": 44766273,
"node_id": "MDQ6VXNlcjQ0NzY2Mjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hypothesis-Z",
"html_url": "https://github.com/Hypothesis-Z",
"followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
"following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
"gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions",
"organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs",
"repos_url": "https://api.github.com/users/Hypothesis-Z/repos",
"events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-07-01T02:04:02 | 2025-07-01T02:33:04 | null | NONE | null | null | null | for issue https://github.com/huggingface/datasets/issues/7660 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7661/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7661",
"html_url": "https://github.com/huggingface/datasets/pull/7661",
"diff_url": "https://github.com/huggingface/datasets/pull/7661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7661.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7660/comments | https://api.github.com/repos/huggingface/datasets/issues/7660/events | https://github.com/huggingface/datasets/issues/7660 | 3,189,028,251 | I_kwDODunzps6-FLWb | 7,660 | AttributeError: '_tqdm_cls' object has no attribute '_lock' | {
"login": "Hypothesis-Z",
"id": 44766273,
"node_id": "MDQ6VXNlcjQ0NzY2Mjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hypothesis-Z",
"html_url": "https://github.com/Hypothesis-Z",
"followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
"following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
"gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions",
"organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs",
"repos_url": "https://api.github.com/users/Hypothesis-Z/repos",
"events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n if attr != '_lock':\n print(attr)\n raise\n\nclass Meta(type):\n def __delattr__(cls, name):\n if name == \"_lock\":\n return \n return super().__delattr__(name)\n \nclass tqdm2(old_tqdm, metaclass=Meta):\n pass\n\ndel tqdm2._lock\ndel tqdm1._lock # error\n```\n\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/tqdm.py#L104-L122",
"It seems to work in my case. \n```python\nfrom datasets import tqdm as hf_tqdm\nhf_tqdm.set_lock(hf_tqdm.get_lock())\n```"
]
| 2025-06-30T15:57:16 | 2025-07-01T03:39:01 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to fix this.
### Steps to reproduce the bug
You may have to try several times to reproduce the error because it involves threads.
1. Save some datasets for testing:
```python
from datasets import Dataset, DatasetDict
import os
os.makedirs("test_dataset_shards", exist_ok=True)
for i in range(10):
data = Dataset.from_dict({"text": [f"example {j}" for j in range(1000000)]})
data = DatasetDict({'train': data})
data.save_to_disk(f"test_dataset_shards/shard_{i}")
```
2. load them in a thread pool
```python
from datasets import load_from_disk
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import glob
datas = glob.glob('test_dataset_shards/shard_*')
with ThreadPoolExecutor(max_workers=10) as pool:
futures = [pool.submit(load_from_disk, it) for it in datas]
datas = []
for future in tqdm(as_completed(futures), total=len(futures)):
datas.append(future.result())
```
### Expected behavior
no exception raised
### Environment info
datasets==2.19.0
python==3.10 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7660/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7659/comments | https://api.github.com/repos/huggingface/datasets/issues/7659/events | https://github.com/huggingface/datasets/pull/7659 | 3,187,882,217 | PR_kwDODunzps6cqkou | 7,659 | Update the beans dataset link in Preprocess | {
"login": "HJassar",
"id": 5434867,
"node_id": "MDQ6VXNlcjU0MzQ4Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5434867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HJassar",
"html_url": "https://github.com/HJassar",
"followers_url": "https://api.github.com/users/HJassar/followers",
"following_url": "https://api.github.com/users/HJassar/following{/other_user}",
"gists_url": "https://api.github.com/users/HJassar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HJassar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HJassar/subscriptions",
"organizations_url": "https://api.github.com/users/HJassar/orgs",
"repos_url": "https://api.github.com/users/HJassar/repos",
"events_url": "https://api.github.com/users/HJassar/events{/privacy}",
"received_events_url": "https://api.github.com/users/HJassar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-30T09:58:44 | 2025-06-30T09:59:08 | null | NONE | null | null | null | In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7659/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7659",
"html_url": "https://github.com/huggingface/datasets/pull/7659",
"diff_url": "https://github.com/huggingface/datasets/pull/7659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7659.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7658/comments | https://api.github.com/repos/huggingface/datasets/issues/7658/events | https://github.com/huggingface/datasets/pull/7658 | 3,187,800,504 | PR_kwDODunzps6cqTMs | 7,658 | Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!"
]
| 2025-06-30T09:31:12 | 2025-06-30T09:31:34 | null | CONTRIBUTOR | null | null | null | This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_names`.
Why
Previously, the code would always set `info.features = features`, even if `features` was `None`. When mapping with removal of columns or other transformations, this led to the destruction of the schema and caused failures in code that relied on the dataset schema being present.
How
We now update `info.features` only if `features` is not `None`. This preserves the original schema unless the user explicitly provides a new one.
Reference
Fixes #7568 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7658/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7658",
"html_url": "https://github.com/huggingface/datasets/pull/7658",
"diff_url": "https://github.com/huggingface/datasets/pull/7658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7658.patch",
"merged_at": null
} | true |
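The guard described in this PR can be sketched as a tiny standalone helper (the names are illustrative; the actual patch lives inside the `map` implementation):

```python
def merged_features(existing_features, new_features):
    # Preserve the existing schema unless the caller explicitly
    # provides a replacement; passing None must not erase it.
    if new_features is not None:
        return new_features
    return existing_features
```

With this guard, `map(features=None)` keeps the dataset's schema (and hence `column_names`), while an explicit `features=...` still replaces it.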
https://api.github.com/repos/huggingface/datasets/issues/7657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7657/comments | https://api.github.com/repos/huggingface/datasets/issues/7657/events | https://github.com/huggingface/datasets/pull/7657 | 3,186,036,016 | PR_kwDODunzps6cks2E | 7,657 | feat: add subset_name as alias for name in load_dataset | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-29T10:39:00 | 2025-06-29T10:55:11 | null | CONTRIBUTOR | null | null | null | fixes #7637
This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users.
Supports `subset_name` in `load_dataset()`
Adds `.subset_name` property to DatasetBuilder
Maintains full backward compatibility
Raises clear error if name and `subset_name` conflict | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7657/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7657",
"html_url": "https://github.com/huggingface/datasets/pull/7657",
"diff_url": "https://github.com/huggingface/datasets/pull/7657.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7657.patch",
"merged_at": null
} | true |
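A minimal sketch of the alias resolution this PR describes (hypothetical helper name; the real logic lives inside `load_dataset`):

```python
def resolve_config_name(name=None, subset_name=None):
    # subset_name is a pure alias for name; supplying both with
    # different values is ambiguous and must fail loudly.
    if name is not None and subset_name is not None and name != subset_name:
        raise ValueError(
            f"Conflicting values: name={name!r} vs subset_name={subset_name!r}"
        )
    return name if name is not None else subset_name
```

Backward compatibility follows directly: existing calls that pass only `name` are untouched, and `subset_name` alone resolves to the same value.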
https://api.github.com/repos/huggingface/datasets/issues/7656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7656/comments | https://api.github.com/repos/huggingface/datasets/issues/7656/events | https://github.com/huggingface/datasets/pull/7656 | 3,185,865,686 | PR_kwDODunzps6ckPHc | 7,656 | fix(iterable): ensure MappedExamplesIterable supports state_dict for resume | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-29T07:50:13 | 2025-06-29T07:50:13 | null | CONTRIBUTOR | null | null | null | Fixes #7630
### Problem
When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable.
### What This PR Does
This patch adds:
```python
def state_dict(self):
return self.ex_iterable.state_dict()
def load_state_dict(self, state):
self.ex_iterable.load_state_dict(state)
```
to MappedExamplesIterable, so the wrapped base iterable's state can be saved and restored as expected.
Result
Using .map() no longer causes sample skipping after checkpoint resume.
Let me know if a dedicated test case is required — happy to add one! | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7656/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7656",
"html_url": "https://github.com/huggingface/datasets/pull/7656",
"diff_url": "https://github.com/huggingface/datasets/pull/7656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7656.patch",
"merged_at": null
} | true |
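The delegation pattern from this PR can be illustrated with a self-contained sketch (simplified stand-ins, not the actual `MappedExamplesIterable`):

```python
class CountingIterable:
    """Minimal stand-in for a checkpointable examples iterable."""

    def __init__(self, n):
        self.n = n
        self.pos = 0

    def __iter__(self):
        while self.pos < self.n:
            self.pos += 1
            yield self.pos - 1

    def state_dict(self):
        return {"pos": self.pos}

    def load_state_dict(self, state):
        self.pos = state["pos"]


class MappedIterable:
    """Wraps an iterable and applies fn; checkpoint state is
    delegated to the wrapped iterable, as in the patch above."""

    def __init__(self, ex_iterable, fn):
        self.ex_iterable = ex_iterable
        self.fn = fn

    def __iter__(self):
        for x in self.ex_iterable:
            yield self.fn(x)

    def state_dict(self):
        return self.ex_iterable.state_dict()

    def load_state_dict(self, state):
        self.ex_iterable.load_state_dict(state)
```

Without the two delegating methods, saving state through the wrapper would miss the base iterable's position, which is exactly the sample-skipping bug on resume.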
https://api.github.com/repos/huggingface/datasets/issues/7655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7655/comments | https://api.github.com/repos/huggingface/datasets/issues/7655/events | https://github.com/huggingface/datasets/pull/7655 | 3,185,382,105 | PR_kwDODunzps6ci9oi | 7,655 | Added specific use cases in Improve Performance | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-28T19:00:32 | 2025-06-28T19:00:32 | null | CONTRIBUTOR | null | null | null | Fixes #2494 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7655/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7655",
"html_url": "https://github.com/huggingface/datasets/pull/7655",
"diff_url": "https://github.com/huggingface/datasets/pull/7655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7655.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7654/comments | https://api.github.com/repos/huggingface/datasets/issues/7654/events | https://github.com/huggingface/datasets/pull/7654 | 3,184,770,992 | PR_kwDODunzps6chPmz | 7,654 | fix(load): strip deprecated use_auth_token from config_kwargs | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-28T09:20:21 | 2025-06-28T09:20:21 | null | CONTRIBUTOR | null | null | null | Fixes #7504
This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`.
**What was happening:**
Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
**Why:**
`use_auth_token` has been deprecated and removed from config definitions (replaced by `token`), but the `load_dataset()` function still forwarded it via `**config_kwargs` to BuilderConfigs, leading to unrecognized key errors.
**Fix:**
We now intercept and strip `use_auth_token` from `config_kwargs` inside `load_dataset`, replacing it with a warning:
```python
if "use_auth_token" in config_kwargs:
logger.warning("The 'use_auth_token' argument is deprecated. Please use 'token' instead.")
config_kwargs.pop("use_auth_token")
```
This ensures legacy compatibility while guiding users to switch to the token argument.
Let me know if you'd prefer a deprecation error instead of a warning. Thanks! | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7654/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7654",
"html_url": "https://github.com/huggingface/datasets/pull/7654",
"diff_url": "https://github.com/huggingface/datasets/pull/7654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7654.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7653/comments | https://api.github.com/repos/huggingface/datasets/issues/7653/events | https://github.com/huggingface/datasets/pull/7653 | 3,184,746,093 | PR_kwDODunzps6chLmb | 7,653 | feat(load): fallback to `load_from_disk()` when loading a saved dataset directory | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-28T08:47:36 | 2025-06-28T08:47:36 | null | CONTRIBUTOR | null | null | null | ### Related Issue
Fixes #7503
Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets.
---
### What does this PR do?
This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `path` points to a dataset saved using `save_to_disk()`, and automatically redirects to `load_from_disk()`.
#### 🐛 Before (unexpected metadata-only rows):
```python
ds = load_dataset("/path/to/saved_dataset")
# → returns rows with only internal metadata (_data_files, _fingerprint, etc.)
```
#### ✅ After (graceful fallback):
```python
ds = load_dataset("/path/to/saved_dataset")
# → logs a warning and internally switches to load_from_disk()
```
---
### Why is this useful?
* Prevents confusion when reloading local datasets saved via `save_to_disk()`.
* Enables smoother compatibility with frameworks (e.g., TRL, `lighteval`) that rely on `load_dataset()` calls.
* Fully backward-compatible — hub-based loading, custom builders, and streaming remain untouched.
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7653/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7653",
"html_url": "https://github.com/huggingface/datasets/pull/7653",
"diff_url": "https://github.com/huggingface/datasets/pull/7653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7653.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7652/comments | https://api.github.com/repos/huggingface/datasets/issues/7652/events | https://github.com/huggingface/datasets/pull/7652 | 3,183,372,055 | PR_kwDODunzps6cdCnv | 7,652 | Add columns support to JSON loader for selective key filtering | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-27T16:18:42 | 2025-06-27T17:37:16 | null | CONTRIBUTOR | null | null | null | Fixes #7594
This PR adds support for filtering specific columns when loading datasets from `.json` or `.jsonl` files, similar to how the `columns=...` argument works for Parquet.
As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.
### Example:
```python
from datasets import load_dataset
dataset = load_dataset("json", data_files="your_data.jsonl", columns=["id", "title"])
print(dataset["train"].column_names)
# Output: ['id', 'title']
```
### Summary of changes:
* Added `columns: Optional[List[str]]` to `JsonConfig`
* Updated `_generate_tables()` to filter selected columns
* Forwarded `columns` argument from `load_dataset()` to the config
* Added test case to validate behavior
Let me know if you'd like the same to be added for CSV or others as a follow-up — happy to help. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7652/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7652",
"html_url": "https://github.com/huggingface/datasets/pull/7652",
"diff_url": "https://github.com/huggingface/datasets/pull/7652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7652.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7651/comments | https://api.github.com/repos/huggingface/datasets/issues/7651/events | https://github.com/huggingface/datasets/pull/7651 | 3,182,792,775 | PR_kwDODunzps6cbMmg | 7,651 | fix: Extended metadata file names for folder_based_builder | {
"login": "iPieter",
"id": 6965756,
"node_id": "MDQ6VXNlcjY5NjU3NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iPieter",
"html_url": "https://github.com/iPieter",
"followers_url": "https://api.github.com/users/iPieter/followers",
"following_url": "https://api.github.com/users/iPieter/following{/other_user}",
"gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iPieter/subscriptions",
"organizations_url": "https://api.github.com/users/iPieter/orgs",
"repos_url": "https://api.github.com/users/iPieter/repos",
"events_url": "https://api.github.com/users/iPieter/events{/privacy}",
"received_events_url": "https://api.github.com/users/iPieter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-27T13:12:11 | 2025-06-30T08:19:37 | null | NONE | null | null | null | Fixes #7650.
The metadata files generated by the `DatasetDict.save_to_disk` function are not included in the folder_based_builder's metadata list, causing issues when only 1 actual data file is present, as described in issue #7650.
This PR adds these filenames to the builder, allowing correct loading. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7651/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7651",
"html_url": "https://github.com/huggingface/datasets/pull/7651",
"diff_url": "https://github.com/huggingface/datasets/pull/7651.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7651.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7650/comments | https://api.github.com/repos/huggingface/datasets/issues/7650/events | https://github.com/huggingface/datasets/issues/7650 | 3,182,745,315 | I_kwDODunzps69tNbj | 7,650 | `load_dataset` defaults to json file format for datasets with 1 shard | {
"login": "iPieter",
"id": 6965756,
"node_id": "MDQ6VXNlcjY5NjU3NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iPieter",
"html_url": "https://github.com/iPieter",
"followers_url": "https://api.github.com/users/iPieter/followers",
"following_url": "https://api.github.com/users/iPieter/following{/other_user}",
"gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iPieter/subscriptions",
"organizations_url": "https://api.github.com/users/iPieter/orgs",
"repos_url": "https://api.github.com/users/iPieter/repos",
"events_url": "https://api.github.com/users/iPieter/events{/privacy}",
"received_events_url": "https://api.github.com/users/iPieter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-27T12:54:25 | 2025-06-27T12:54:25 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I currently have multiple datasets (train + validation) saved as 50MB shards. For one dataset the validation split is small enough to fit into a single shard, and this apparently causes problems when loading the dataset. I created the datasets using a `DatasetDict`, saved them as 50MB arrow files for streaming, and then loaded each dataset. I have no problem loading any of the other datasets with more than 1 arrow file/shard.
The error indicates that the training set was loaded in arrow format (correct) and the validation set in json (incorrect). This seems to be because some of the metadata files are treated as dataset files.
```
Error loading /nfs/dataset_pt-uk: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('validation'): ('json', {})}
```

Concretely, there is a mismatch between the metadata created by `DatasetDict.save_to_disk` and the builder for `datasets.load_dataset`:
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/data_files.py#L107
The `folder_based_builder` lists all files, and with only 1 arrow file the json files (which are actually metadata) are in the majority.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58
### Steps to reproduce the bug
Create a dataset with metadata and 1 arrow file in validation and multiple arrow files in the training set, following the above description. In my case, I saved the files via:
```python
dataset = DatasetDict({
'train': train_dataset,
'validation': val_dataset
})
dataset.save_to_disk(output_path, max_shard_size="50MB")
```
### Expected behavior
The dataset should load correctly.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.14.0-22-generic-x86_64-with-glibc2.41
- Python version: 3.12.7
- `huggingface_hub` version: 0.31.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7650/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7649/comments | https://api.github.com/repos/huggingface/datasets/issues/7649/events | https://github.com/huggingface/datasets/pull/7649 | 3,181,481,444 | PR_kwDODunzps6cW0sQ | 7,649 | Enable parallel shard upload in push_to_hub() using num_proc | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-27T05:59:03 | 2025-06-27T06:03:46 | null | CONTRIBUTOR | null | null | null | Fixes #7591
### Add num_proc support to `push_to_hub()` for parallel shard upload
This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`.
📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_push_parquet_shards_to_hub()`, it was not being used to parallelize the upload.
🔧 This PR updates the internal `_push_parquet_shards_to_hub()` function to:
- Use `multiprocessing.Pool` and `iflatmap_unordered()` for concurrent shard upload when `num_proc > 1`
- Preserve original serial upload behavior if `num_proc` is `None` or ≤ 1
- Keep tqdm progress and commit behavior unchanged
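The dispatch logic described above can be sketched roughly as follows. This is a simplified stand-alone illustration using a thread pool and a hypothetical `upload_shard` helper, not the actual `datasets` internals (the PR itself uses `multiprocessing.Pool` with `iflatmap_unordered`):

```python
from multiprocessing.pool import ThreadPool  # same Pool API; threads avoid pickling


def upload_shard(shard_id):
    # Stand-in for the real per-shard upload call (hypothetical helper).
    return f"shard-{shard_id:05d}.parquet"


def push_shards(num_shards, num_proc=None):
    shard_ids = range(num_shards)
    if num_proc is None or num_proc <= 1:
        # Original serial behavior is preserved.
        return [upload_shard(i) for i in shard_ids]
    with ThreadPool(num_proc) as pool:
        # imap_unordered lets fast shards finish first, like iflatmap_unordered.
        return sorted(pool.imap_unordered(upload_shard, shard_ids))


print(push_shards(4, num_proc=2))
```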
Let me know if any test coverage or further changes are needed!
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7649/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7649",
"html_url": "https://github.com/huggingface/datasets/pull/7649",
"diff_url": "https://github.com/huggingface/datasets/pull/7649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7649.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7648/comments | https://api.github.com/repos/huggingface/datasets/issues/7648/events | https://github.com/huggingface/datasets/pull/7648 | 3,181,409,736 | PR_kwDODunzps6cWmSn | 7,648 | Fix misleading add_column() usage example in docstring | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-27T05:27:04 | 2025-06-27T05:27:54 | null | CONTRIBUTOR | null | null | null | Fixes #7611
This PR fixes the usage example in the `Dataset.add_column()` docstring, which previously implied that `add_column()` modifies the dataset in place.
Why:
The method returns a new dataset with the additional column, so users must assign the result to a variable to preserve the change.
This should make the behavior clearer for users.
@lhoestq @davanstrien | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7648/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7648",
"html_url": "https://github.com/huggingface/datasets/pull/7648",
"diff_url": "https://github.com/huggingface/datasets/pull/7648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7648.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7647/comments | https://api.github.com/repos/huggingface/datasets/issues/7647/events | https://github.com/huggingface/datasets/issues/7647 | 3,178,952,517 | I_kwDODunzps69evdF | 7,647 | loading mozilla-foundation--common_voice_11_0 fails | {
"login": "pavel-esir",
"id": 5703039,
"node_id": "MDQ6VXNlcjU3MDMwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5703039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavel-esir",
"html_url": "https://github.com/pavel-esir",
"followers_url": "https://api.github.com/users/pavel-esir/followers",
"following_url": "https://api.github.com/users/pavel-esir/following{/other_user}",
"gists_url": "https://api.github.com/users/pavel-esir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavel-esir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavel-esir/subscriptions",
"organizations_url": "https://api.github.com/users/pavel-esir/orgs",
"repos_url": "https://api.github.com/users/pavel-esir/repos",
"events_url": "https://api.github.com/users/pavel-esir/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavel-esir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"@claude Could you please address this issue"
]
| 2025-06-26T12:23:48 | 2025-06-27T12:29:03 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Hello everyone,
I am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer:
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/python3.10/site-packages/datasets/utils/file_utils.py:827, in _add_retries_to_file_obj_read_method.<locals>.read_with_retries(*args, **kwargs)
825 for retry in range(1, max_retries + 1):
826 try:
--> 827 out = read(*args, **kwargs)
828 break
829 except (
830 _AiohttpClientError,
831 asyncio.TimeoutError,
832 requests.exceptions.ConnectionError,
833 requests.exceptions.Timeout,
834 ) as err:
File /usr/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
319 def decode(self, input, final=False):
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
When I remove streaming everything works, but I need `streaming=True`.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
### Expected behavior
Expected the dataset to load and stream successfully.
### Environment info
- `datasets` version: 3.6.0
- Python version: 3.10
- Platform: all platforms (Linux/Windows/macOS)
"url": "https://api.github.com/repos/huggingface/datasets/issues/7647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7647/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7646/comments | https://api.github.com/repos/huggingface/datasets/issues/7646/events | https://github.com/huggingface/datasets/pull/7646 | 3,178,036,854 | PR_kwDODunzps6cLhrM | 7,646 | Introduces automatic subset-level grouping for folder-based dataset builders #7066 | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!",
"Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n\r\nhttps://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n\r\nAlso the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?",
"> Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n> \r\n> https://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n> \r\n> Also the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?\r\n\r\nThanks a lot for the review!\r\n\r\nYou're absolutely right — treating subsets as separate configs instead of overloaded splits makes much more sense. If that approach sounds good to you, I can move the grouping logic to `load.py`, where configs are instantiated, and revise the PR to emit one `BuilderConfig` per grouped subset.\r\n\r\nAlso totally agree on limiting grouping to structured file types — I’d scope this to `.json`, `.jsonl`, `.csv`, and `.parquet`.\r\n\r\nLet me know if this direction sounds good, and I’ll get started on the changes right away!\r\n"
]
| 2025-06-26T07:01:37 | 2025-06-27T18:04:04 | null | CONTRIBUTOR | null | null | null | Fixes #7066
This PR introduces automatic **subset-level grouping** for folder-based dataset builders by:
1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes).
2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one split per subset.
3. Adding unit tests for the grouping function.
4. Updating the documentation to describe this new behavior under `docs/source/repository_structure.mdx`.
---
### Motivation
Datasets with files like:
```
train0.jsonl
train1.jsonl
animals.jsonl
metadata.jsonl
```
will now be **automatically grouped** as:
- `"train"` subset → `train0.jsonl`, `train1.jsonl`
- `"animals"` subset → `animals.jsonl`
- `"metadata"` subset → `metadata.jsonl`
This enables structured multi-subset loading even when the dataset doesn't follow traditional `train/validation/test` split conventions.
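The grouping rule can be sketched roughly like this. This is a simplified stand-alone version for illustration only; the real `group_files_by_subset()` lives in `src/datasets/data_files.py` and may differ in detail:

```python
import re
from collections import defaultdict


def group_files_by_subset(filenames):
    # Strip the extension, then any trailing digits / shard suffix,
    # and use what remains as the subset name.
    groups = defaultdict(list)
    for name in filenames:
        stem = name.rsplit(".", 1)[0]
        root = re.sub(r"[-_]?\d+$", "", stem) or stem
        groups[root].append(name)
    return dict(groups)


files = ["train0.jsonl", "train1.jsonl", "animals.jsonl", "metadata.jsonl"]
print(group_files_by_subset(files))
# {'train': ['train0.jsonl', 'train1.jsonl'], 'animals': ['animals.jsonl'], 'metadata': ['metadata.jsonl']}
```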
---
### Files Changed
- `src/datasets/data_files.py`: added `group_files_by_subset()` utility
- `src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py`: grouped files before yielding splits
- `tests/test_data_files.py`: added unit test `test_group_files_by_subset`
- `docs/source/repository_structure.mdx`: documented subset grouping for maintainers and users
---
### Benefits
- More flexible and robust dataset split logic
- Enables logical grouping of user-uploaded files without nested folder structure
- Backward-compatible with all existing folder-based configs
---
Ready for review ✅ | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7646/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7646",
"html_url": "https://github.com/huggingface/datasets/pull/7646",
"diff_url": "https://github.com/huggingface/datasets/pull/7646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7646.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7645/comments | https://api.github.com/repos/huggingface/datasets/issues/7645/events | https://github.com/huggingface/datasets/pull/7645 | 3,176,810,164 | PR_kwDODunzps6cHkp- | 7,645 | `ClassLabel` docs: Correct value for unknown labels | {
"login": "l-uuz",
"id": 56924246,
"node_id": "MDQ6VXNlcjU2OTI0MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/56924246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/l-uuz",
"html_url": "https://github.com/l-uuz",
"followers_url": "https://api.github.com/users/l-uuz/followers",
"following_url": "https://api.github.com/users/l-uuz/following{/other_user}",
"gists_url": "https://api.github.com/users/l-uuz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/l-uuz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/l-uuz/subscriptions",
"organizations_url": "https://api.github.com/users/l-uuz/orgs",
"repos_url": "https://api.github.com/users/l-uuz/repos",
"events_url": "https://api.github.com/users/l-uuz/events{/privacy}",
"received_events_url": "https://api.github.com/users/l-uuz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-25T20:01:35 | 2025-06-25T20:01:35 | null | NONE | null | null | null | This small change fixes the documentation to to be compliant with what happens in `encode_example`.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7645/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7645",
"html_url": "https://github.com/huggingface/datasets/pull/7645",
"diff_url": "https://github.com/huggingface/datasets/pull/7645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7645.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7644/comments | https://api.github.com/repos/huggingface/datasets/issues/7644/events | https://github.com/huggingface/datasets/pull/7644 | 3,176,363,492 | PR_kwDODunzps6cGGfW | 7,644 | fix sequence ci | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-25T17:07:55 | 2025-06-25T17:10:30 | 2025-06-25T17:08:01 | MEMBER | null | null | null | fix error from https://github.com/huggingface/datasets/pull/7643 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7644/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7644",
"html_url": "https://github.com/huggingface/datasets/pull/7644",
"diff_url": "https://github.com/huggingface/datasets/pull/7644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7644.patch",
"merged_at": "2025-06-25T17:08:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7643/comments | https://api.github.com/repos/huggingface/datasets/issues/7643/events | https://github.com/huggingface/datasets/pull/7643 | 3,176,354,431 | PR_kwDODunzps6cGEeK | 7,643 | Backward compat sequence instance | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-25T17:05:09 | 2025-06-25T17:07:40 | 2025-06-25T17:05:44 | MEMBER | null | null | null | useful to still get `isinstance(Sequence(Value("int64")), Sequence)`for downstream libs like evaluate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7643/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7643",
"html_url": "https://github.com/huggingface/datasets/pull/7643",
"diff_url": "https://github.com/huggingface/datasets/pull/7643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7643.patch",
"merged_at": "2025-06-25T17:05:43"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7642/comments | https://api.github.com/repos/huggingface/datasets/issues/7642/events | https://github.com/huggingface/datasets/pull/7642 | 3,176,025,890 | PR_kwDODunzps6cE_Wr | 7,642 | fix length for ci | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | []
| 2025-06-25T15:10:38 | 2025-06-25T15:11:53 | 2025-06-25T15:11:51 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7642/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7642",
"html_url": "https://github.com/huggingface/datasets/pull/7642",
"diff_url": "https://github.com/huggingface/datasets/pull/7642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7642.patch",
"merged_at": "2025-06-25T15:11:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7641/comments | https://api.github.com/repos/huggingface/datasets/issues/7641/events | https://github.com/huggingface/datasets/pull/7641 | 3,175,953,405 | PR_kwDODunzps6cEwUl | 7,641 | update docs and docstrings | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-25T14:48:58 | 2025-06-25T14:51:46 | 2025-06-25T14:49:33 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7641/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7641",
"html_url": "https://github.com/huggingface/datasets/pull/7641",
"diff_url": "https://github.com/huggingface/datasets/pull/7641.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7641.patch",
"merged_at": "2025-06-25T14:49:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7640/comments | https://api.github.com/repos/huggingface/datasets/issues/7640/events | https://github.com/huggingface/datasets/pull/7640 | 3,175,914,924 | PR_kwDODunzps6cEofU | 7,640 | better features repr | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-25T14:37:32 | 2025-06-25T14:46:47 | 2025-06-25T14:46:45 | MEMBER | null | null | null | Following the addition of `List` in #7634.
before:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value(dtype='string', id=None),
'metadata:transcript': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'transcript': Value(dtype='string', id=None),
'words': [{'end': Value(dtype='float64', id=None),
'score': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'word': Value(dtype='string', id=None)}]}],
'metadata:vad': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None)}]},
'mp4': Value(dtype='binary', id=None),
'npz': {'boxes_and_keypoints:box': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'boxes_and_keypoints:is_valid_box': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
'boxes_and_keypoints:keypoints': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'movement:EmotionArousalToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:EmotionValenceToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:FAUToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:FAUValue': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:alignment_head_rotation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:alignment_translation': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_arousal': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_scores': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_valence': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:expression': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:frame_latent': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:gaze_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:head_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:hypernet_features': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:is_valid': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'smplh:body_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:global_orient': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'smplh:is_valid': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
'smplh:left_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:right_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:translation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None)},
'wav': Audio(sampling_rate=None, mono=True, decode=True, id=None),
'__key__': Value(dtype='string', id=None),
'__url__': Value(dtype='string', id=None)}
```
after:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value('string'),
'metadata:transcript': List({'end': Value('float64'), 'start': Value('float64'), 'transcript': Value('string'), 'words': List({'end': Value('float64'), 'score': Value('float64'), 'start': Value('float64'), 'word': Value('string')})}),
'metadata:vad': List({'end': Value('float64'), 'start': Value('float64')})},
'mp4': Value('binary'),
'npz': {'boxes_and_keypoints:box': List(List(Value('float32'))),
'boxes_and_keypoints:is_valid_box': List(Value('bool')),
'boxes_and_keypoints:keypoints': List(List(List(Value('float32')))),
'movement:EmotionArousalToken': List(List(Value('float32'))),
'movement:EmotionValenceToken': List(List(Value('float32'))),
'movement:FAUToken': List(List(Value('float32'))),
'movement:FAUValue': List(List(Value('float32'))),
'movement:alignment_head_rotation': List(List(Value('float32'))),
'movement:alignment_translation': List(List(List(Value('float32')))),
'movement:emotion_arousal': List(List(Value('float32'))),
'movement:emotion_scores': List(List(Value('float32'))),
'movement:emotion_valence': List(List(Value('float32'))),
'movement:expression': List(List(Value('float32'))),
'movement:frame_latent': List(List(Value('float32'))),
'movement:gaze_encodings': List(List(Value('float32'))),
'movement:head_encodings': List(List(Value('float32'))),
'movement:hypernet_features': List(List(Value('float32'))),
'movement:is_valid': List(List(Value('float32'))),
'smplh:body_pose': List(List(List(Value('float32')))),
'smplh:global_orient': List(List(Value('float32'))),
'smplh:is_valid': List(Value('bool')),
'smplh:left_hand_pose': List(List(List(Value('float32')))),
'smplh:right_hand_pose': List(List(List(Value('float32')))),
'smplh:translation': List(List(Value('float32')))},
'wav': Audio(sampling_rate=None, decode=True, stream_index=None),
'__key__': Value('string'),
'__url__': Value('string')}
``` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7640/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7640",
"html_url": "https://github.com/huggingface/datasets/pull/7640",
"diff_url": "https://github.com/huggingface/datasets/pull/7640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7640.patch",
"merged_at": "2025-06-25T14:46:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7639/comments | https://api.github.com/repos/huggingface/datasets/issues/7639/events | https://github.com/huggingface/datasets/pull/7639 | 3,175,616,169 | PR_kwDODunzps6cDoAf | 7,639 | fix save_infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-25T13:16:26 | 2025-06-25T13:19:33 | 2025-06-25T13:16:33 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7639/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7639",
"html_url": "https://github.com/huggingface/datasets/pull/7639",
"diff_url": "https://github.com/huggingface/datasets/pull/7639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7639.patch",
"merged_at": "2025-06-25T13:16:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7638/comments | https://api.github.com/repos/huggingface/datasets/issues/7638/events | https://github.com/huggingface/datasets/pull/7638 | 3,172,645,391 | PR_kwDODunzps6b5vpZ | 7,638 | Add ignore_decode_errors option to Image feature for robust decoding #7612 | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"cc @lhoestq"
]
| 2025-06-24T16:47:51 | 2025-06-24T16:48:03 | null | CONTRIBUTOR | null | null | null | This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612.
## 🔧 What was added
- A new boolean field: `ignore_decode_errors` (default: `False`)
- If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error
```python
features = Features({
"image": Image(decode=True, ignore_decode_errors=True),
})
```
This enables robust iteration over potentially corrupted datasets — especially useful when streaming datasets like WebDataset or image-heavy public sets where sample corruption is common.
## 🧪 Behavior
* If `ignore_decode_errors=False` (default), decoding behaves exactly as before
* If `True`, decoding errors are caught, and a warning is emitted:
```
[Image.decode_example] Skipped corrupted image: ...
```
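A minimal self-contained sketch of the behavior described above (the `_pil_open` helper below is a hypothetical stand-in for the real PIL decode call, not the PR's actual code):

```python
import warnings

def _pil_open(data: bytes):
    # Hypothetical stand-in for PIL.Image.open: fails on non-PNG bytes.
    if not data.startswith(b"\x89PNG"):
        raise ValueError("cannot identify image data")
    return {"format": "PNG"}  # placeholder for a decoded image object

def decode_example(data: bytes, ignore_decode_errors: bool = False):
    try:
        return _pil_open(data)
    except Exception as err:
        if ignore_decode_errors:
            warnings.warn(f"[Image.decode_example] Skipped corrupted image: {err}")
            return None
        raise  # default: behave exactly as before

print(decode_example(b"garbage", ignore_decode_errors=True))  # None
```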
## 🧵 Linked issue
Closes #7612
Let me know if you'd like a follow-up test PR. Happy to write one! | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7638/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7638/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7638",
"html_url": "https://github.com/huggingface/datasets/pull/7638",
"diff_url": "https://github.com/huggingface/datasets/pull/7638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7638.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7637/comments | https://api.github.com/repos/huggingface/datasets/issues/7637/events | https://github.com/huggingface/datasets/issues/7637 | 3,171,883,522 | I_kwDODunzps69DxoC | 7,637 | Introduce subset_name as an alias of config_name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
]
| open | false | null | []
| null | [
"I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly.",
"I've submitted PR [#7657](https://github.com/huggingface/datasets/pull/7657) to introduce subset_name as a user-facing alias for name in load_dataset, keeping terminology consistent with the Hub UI (“Subset”). It’s fully backward-compatible and includes a conflict check.\n\nLet me know if you'd like me to include tests as part of the PR — happy to add them if needed!"
]
| 2025-06-24T12:49:01 | 2025-06-29T10:55:54 | null | MEMBER | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called `config_name` in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology.
I have repeatedly received questions from users trying to understand what "config" means, and why it doesn’t match what they see as "subset" on the Hub. Renaming everything to `subset_name` might be too disruptive, but introducing `subset_name` as a clear alias for `config_name` could significantly improve the user experience without breaking backward compatibility.
This change would:
- Align terminology across the Hub UI and datasets codebase
- Reduce user confusion, especially for newcomers
- Make documentation and examples more intuitive
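One possible shape for the alias, sketched with a hypothetical signature (placeholder return value, not actual library code):

```python
def load_dataset(path, name=None, *, subset_name=None, **kwargs):
    # Hypothetical sketch: accept `subset_name` as an alias of the existing
    # `name` (config_name) argument, with a conflict check for safety.
    if subset_name is not None:
        if name is not None and name != subset_name:
            raise ValueError("Pass either `name` or `subset_name`, not both.")
        name = subset_name
    return (path, name)  # placeholder for the real loading logic

print(load_dataset("glue", subset_name="mrpc"))  # ('glue', 'mrpc')
```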
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7637/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7637/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7636/comments | https://api.github.com/repos/huggingface/datasets/issues/7636/events | https://github.com/huggingface/datasets/issues/7636 | 3,170,878,167 | I_kwDODunzps68_8LX | 7,636 | "open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable" | {
"login": "kuanyan9527",
"id": 51187979,
"node_id": "MDQ6VXNlcjUxMTg3OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuanyan9527",
"html_url": "https://github.com/kuanyan9527",
"followers_url": "https://api.github.com/users/kuanyan9527/followers",
"following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}",
"gists_url": "https://api.github.com/users/kuanyan9527/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuanyan9527/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuanyan9527/subscriptions",
"organizations_url": "https://api.github.com/users/kuanyan9527/orgs",
"repos_url": "https://api.github.com/users/kuanyan9527/repos",
"events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuanyan9527/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"@kuanyan9527 Your query is indeed valid. Following could be its reasoning:\n\nQuoting from https://stackoverflow.com/a/11181607:\n\"By default, when in the `__main__` module,` __builtins__` is the built-in module `__builtin__` (note: no 's'); when in any other module, `__builtins__` is an alias for the dictionary of the `__builtin__` module itself.\"\n\nCan you confirm if you are running the snippet `print(\"open\" in globals()[\"__builtins__\"])` in the default? In that case, as expected, `__builtins__` is a module which is causing the error. But in the codebase, the class `patch_submodule`, is primarily used in the second circumstance, where it acts as a dictionary. Hence causing the code to function successfully.\n\nHope this helps.",
"@kuanyan9527 Are there any more queries in this regards, else please feel free to close the issue.\nThank you.",
"Your answer is very important to me,thanks."
]
| 2025-06-24T08:09:39 | 2025-07-01T01:54:08 | 2025-07-01T01:54:08 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"
```python
print("open" in globals()["__builtins__"])
```
```
Traceback (most recent call last):
  File "./main.py", line 2, in <module>
    print("open" in globals()["__builtins__"])
                    ^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'module' is not iterable
```
But this code runs fine in datasets; I don't understand why.
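A small sketch reproducing both behaviors (the `exec` call stands in for code running in a non-`__main__` namespace):

```python
import builtins

# Case 1: when Python populates a fresh globals dict (as for exec'd or
# imported code), it injects __builtins__ as the builtins module's *dict*,
# so the membership test works.
g = {}
exec('result = "open" in globals()["__builtins__"]', g)
print(g["result"], type(g["__builtins__"]))  # True <class 'dict'>

# Case 2: in the __main__ module, __builtins__ is the module object itself,
# which is not iterable; testing against vars(builtins) is safe in both cases.
print("open" in vars(builtins))  # True
```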
[src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96) | {
"login": "kuanyan9527",
"id": 51187979,
"node_id": "MDQ6VXNlcjUxMTg3OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuanyan9527",
"html_url": "https://github.com/kuanyan9527",
"followers_url": "https://api.github.com/users/kuanyan9527/followers",
"following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}",
"gists_url": "https://api.github.com/users/kuanyan9527/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuanyan9527/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuanyan9527/subscriptions",
"organizations_url": "https://api.github.com/users/kuanyan9527/orgs",
"repos_url": "https://api.github.com/users/kuanyan9527/repos",
"events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuanyan9527/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7636/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7635/comments | https://api.github.com/repos/huggingface/datasets/issues/7635/events | https://github.com/huggingface/datasets/pull/7635 | 3,170,486,408 | PR_kwDODunzps6bybOe | 7,635 | Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0) | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-24T06:16:48 | 2025-06-24T06:16:48 | null | CONTRIBUTOR | null | null | null | This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference.
This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` instead of `"float"`.
### 🔍 What was happening:
When the JSON loader falls back to `pandas_read_json()` (after `pa.read_json()` fails), pandas/Arrow can coerce float values to integers if all values are integer-like (e.g., `0.0 == 0`).
### ✅ What this PR does:
- Adds a check in the fallback path of `_generate_tables()`
- Ensures that columns made entirely of floats are preserved as `"float64"` even if they are integer-like (e.g. `0.0`, `1.0`)
- This prevents loss of float semantics when creating the Arrow table
### 🧪 Reproducible Example:
```json
[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]
```
Previously loaded as:
* `int`
Now correctly loaded as:
* `float`
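The guard can be sketched in plain Python (`preserve_float` is a hypothetical helper; the actual fix operates on the pandas/Arrow types inside `_generate_tables()`):

```python
import json

# Parse the reproducible example above; Python's json module keeps
# 0.0/1.0/2.0 as floats -- the downcast happens later, during type inference.
rows = json.loads('[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]')
values = [r["col"] for r in rows]

def preserve_float(values):
    """Hypothetical guard: if a column is made entirely of Python floats,
    keep float semantics even when every value is integer-like (v == int(v))."""
    if values and all(isinstance(v, float) for v in values):
        return "float64"
    return None  # otherwise, let normal type inference decide

print(preserve_float(values))  # "float64", even though 0.0 == 0
```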
Fixes #6937
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7635/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7635",
"html_url": "https://github.com/huggingface/datasets/pull/7635",
"diff_url": "https://github.com/huggingface/datasets/pull/7635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7635.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7634/comments | https://api.github.com/repos/huggingface/datasets/issues/7634/events | https://github.com/huggingface/datasets/pull/7634 | 3,169,389,653 | PR_kwDODunzps6buyij | 7,634 | Replace Sequence by List | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-23T20:35:48 | 2025-06-25T13:59:13 | 2025-06-25T13:59:11 | MEMBER | null | null | null | Sequence is just a utility that we need to keep for backward compatibility. And `[ ]` was used instead but doesn't allow passing the length of the list.
This PR removes most mentions of Sequence and usage of `[ ]` and defines a proper List type instead.
before: `Sequence(Value("int64"))` or `[Value("int64")]`
now: `List(Value("int64"))`
This PR preserves full backward compatibility, and the 4.0.0 release is a good occasion for it. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7634/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7634",
"html_url": "https://github.com/huggingface/datasets/pull/7634",
"diff_url": "https://github.com/huggingface/datasets/pull/7634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7634.patch",
"merged_at": "2025-06-25T13:59:11"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7633/comments | https://api.github.com/repos/huggingface/datasets/issues/7633/events | https://github.com/huggingface/datasets/issues/7633 | 3,168,399,637 | I_kwDODunzps682fEV | 7,633 | Proposal: Small Tamil Discourse Coherence Dataset. | {
"login": "bikkiNitSrinagar",
"id": 66418501,
"node_id": "MDQ6VXNlcjY2NDE4NTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/66418501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bikkiNitSrinagar",
"html_url": "https://github.com/bikkiNitSrinagar",
"followers_url": "https://api.github.com/users/bikkiNitSrinagar/followers",
"following_url": "https://api.github.com/users/bikkiNitSrinagar/following{/other_user}",
"gists_url": "https://api.github.com/users/bikkiNitSrinagar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bikkiNitSrinagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bikkiNitSrinagar/subscriptions",
"organizations_url": "https://api.github.com/users/bikkiNitSrinagar/orgs",
"repos_url": "https://api.github.com/users/bikkiNitSrinagar/repos",
"events_url": "https://api.github.com/users/bikkiNitSrinagar/events{/privacy}",
"received_events_url": "https://api.github.com/users/bikkiNitSrinagar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-23T14:24:40 | 2025-06-23T14:24:40 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.
- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence
I’ll use GitHub’s web editor and Google Colab. Please confirm if this fits. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7633/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7632/comments | https://api.github.com/repos/huggingface/datasets/issues/7632/events | https://github.com/huggingface/datasets/issues/7632 | 3,168,283,589 | I_kwDODunzps682CvF | 7,632 | Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets | {
"login": "ganiket19",
"id": 37377515,
"node_id": "MDQ6VXNlcjM3Mzc3NTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/37377515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ganiket19",
"html_url": "https://github.com/ganiket19",
"followers_url": "https://api.github.com/users/ganiket19/followers",
"following_url": "https://api.github.com/users/ganiket19/following{/other_user}",
"gists_url": "https://api.github.com/users/ganiket19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ganiket19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganiket19/subscriptions",
"organizations_url": "https://api.github.com/users/ganiket19/orgs",
"repos_url": "https://api.github.com/users/ganiket19/repos",
"events_url": "https://api.github.com/users/ganiket19/events{/privacy}",
"received_events_url": "https://api.github.com/users/ganiket19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
]
| open | false | null | []
| null | []
| 2025-06-23T13:49:24 | 2025-06-23T16:26:53 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples are common.
reference : https://discuss.huggingface.co/t/handle-errors-when-loading-images-404-corrupted-etc/50318/5
https://discuss.huggingface.co/t/handling-non-existing-url-in-image-dataset-while-cast-column/69185
Proposed Feature
Introduce a mechanism (e.g., a continue_on_error=True flag or global error handling mode) in Image(decode=True) that:
- Skips invalid images and sets them to None, or
- Logs the error but allows the rest of the dataset to be processed without interruption.
Example Usage
```python
from datasets import load_dataset, Image

dataset = load_dataset("my_dataset")
dataset = dataset.cast_column("image", Image(decode=True, continue_on_error=True))
```
Benefits
- Ensures robust large-scale image dataset processing.
- Improves developer productivity by avoiding custom retry/error-handling code.
- Aligns with best practices in dataset preprocessing pipelines that tolerate minor data corruption.
Potential Implementation Options
- Internally wrap the decoding in a try/except block.
- Return None or a placeholder on failure.
- Optionally allow custom error callbacks or logging.
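A minimal sketch of the proposed semantics (`safe_decode` and `decoder` are illustrative stand-ins, not the actual `datasets` API):

```python
import logging

def safe_decode(raw, decoder, continue_on_error=False, on_error=None):
    """Hypothetical wrapper: decode one sample, or skip it gracefully."""
    try:
        return decoder(raw)
    except Exception as exc:
        if not continue_on_error:
            raise  # current behavior: fail fast on the first bad sample
        logging.warning("Skipping undecodable sample: %s", exc)
        if on_error is not None:
            on_error(raw, exc)  # optional user hook
        return None
```

With `continue_on_error=True`, a corrupt sample yields `None` instead of halting the whole pipeline.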
### Motivation
- Robustness: Large-scale image datasets often contain a small fraction of corrupt files or unreachable URLs. Halting on the first error forces users to write custom workarounds or preprocess externally.
- Simplicity: A built-in flag removes boilerplate try/except logic around every decode step.
- Performance: Skipping invalid samples inline is more efficient than a two-pass approach (filter then decode).
### Your contribution
1. API Change
Extend datasets.features.Image(decode=True) to accept continue_on_error: bool = False.
2. Behavior
If continue_on_error=False (default), maintain current behavior: any decode error raises an exception.
If continue_on_error=True, wrap decode logic in try/except:
On success: store the decoded image.
On failure: log a warning (e.g., via logging.warning) and set the field to None (or a sentinel value).
3. Optional Enhancements
Allow a callback hook:
`Image(decode=True, continue_on_error=True, on_error=lambda idx, url, exc: ...)`
Emit metrics or counts of skipped images. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7632/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7631/comments | https://api.github.com/repos/huggingface/datasets/issues/7631/events | https://github.com/huggingface/datasets/pull/7631 | 3,165,127,657 | PR_kwDODunzps6bgwOB | 7,631 | Pass user-agent from DownloadConfig into fsspec storage_options | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"- This PR assumes that `HfFileSystem` in `huggingface_hub` supports receiving `headers` in `storage_options`. If not, a follow-up PR can be opened to add this support to `HfFileSystem.__init__`.\r\n- No test was added for this since it’s a config passthrough. If needed, I’d be happy to add one."
]
| 2025-06-21T14:22:25 | 2025-06-21T14:25:28 | null | CONTRIBUTOR | null | null | null | Fixes part of issue #6046
### Problem
The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests.
### Solution
Added support for injecting the `user-agent` into `storage_options["headers"]` within `_prepare_single_hop_path_and_storage_options()` based on the `protocol`.
Now, when using `hf://`, `http://`, or `https://`, the custom user-agent is passed automatically.
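The injection can be sketched as follows (illustrative helper; the real change lives in `_prepare_single_hop_path_and_storage_options()`):

```python
def inject_user_agent(protocol, storage_options, user_agent):
    """Hypothetical sketch: merge the DownloadConfig user-agent into the
    fsspec storage_options headers for HTTP-speaking protocols."""
    if protocol in ("hf", "http", "https"):
        headers = dict(storage_options.get("headers", {}))
        headers.setdefault("user-agent", user_agent)  # don't clobber a custom one
        return {**storage_options, "headers": headers}
    return storage_options  # other protocols are left untouched
```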
### Code Location
Modified:
- `src/datasets/utils/file_utils.py`
Used `get_datasets_user_agent(...)` to ensure proper formatting and fallback logic. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7631/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7631",
"html_url": "https://github.com/huggingface/datasets/pull/7631",
"diff_url": "https://github.com/huggingface/datasets/pull/7631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7631.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7630/comments | https://api.github.com/repos/huggingface/datasets/issues/7630/events | https://github.com/huggingface/datasets/issues/7630 | 3,164,650,900 | I_kwDODunzps68oL2U | 7,630 | [bug] resume from ckpt skips samples if .map is applied | {
"login": "felipemello1",
"id": 23004953,
"node_id": "MDQ6VXNlcjIzMDA0OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/23004953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felipemello1",
"html_url": "https://github.com/felipemello1",
"followers_url": "https://api.github.com/users/felipemello1/followers",
"following_url": "https://api.github.com/users/felipemello1/following{/other_user}",
"gists_url": "https://api.github.com/users/felipemello1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felipemello1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felipemello1/subscriptions",
"organizations_url": "https://api.github.com/users/felipemello1/orgs",
"repos_url": "https://api.github.com/users/felipemello1/repos",
"events_url": "https://api.github.com/users/felipemello1/events{/privacy}",
"received_events_url": "https://api.github.com/users/felipemello1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"Thanks for reporting this — it looks like a separate but related bug to #7538, which involved sample loss when resuming an `IterableDataset` wrapped in `FormattedExamplesIterable`. That was resolved in #7553 by re-batching the iterable to track offset correctly.\n\nIn this case, the issue seems to arise specifically from applying `.map()` before sharding and checkpointing. That wraps the iterable in `MappedExamplesIterable`, which may not preserve or propagate `shard_example_idx` correctly across `.state_dict()` and `.load_state_dict()` calls.\n\nYou can see that without `.map()`, resume works fine — but with `.map()`, it jumps from sample 9 to 50, skipping the rest of the shard.\n\nI'll dig deeper into how `MappedExamplesIterable` manages offsets and whether it supports proper checkpoint resumption. If not, we might need a fix similar to the one in #7553, or a wrapper to preserve resume metadata.\n\nHappy to help fix it!\n",
"Let me know if a dedicated test case is required — happy to add one!"
]
| 2025-06-21T01:50:03 | 2025-06-29T07:51:32 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
resume from ckpt skips samples if .map is applied
Maybe related: https://github.com/huggingface/datasets/issues/7538
### Steps to reproduce the bug
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
# Create dataset with map transformation
def create_dataset():
ds = Dataset.from_dict({"id": list(range(100))})
ds = ds.to_iterable_dataset(num_shards=4)
ds = ds.map(lambda x: x) #comment it out to get desired behavior
ds = split_dataset_by_node(ds, rank=0, world_size=2)
return ds
ds = create_dataset()
# Iterate and save checkpoint after 10 samples
it = iter(ds)
for idx, sample in enumerate(it):
if idx == 9: # Checkpoint after 10 samples
checkpoint = ds.state_dict()
print(f"Checkpoint saved at sample: {sample['id']}")
break
# Continue with original iterator
original_next_samples = []
for idx, sample in enumerate(it):
original_next_samples.append(sample["id"])
if idx >= 4:
break
# Resume from checkpoint
ds_new = create_dataset()
ds_new.load_state_dict(checkpoint)
# Get samples from resumed iterator
it_new = iter(ds_new)
resumed_next_samples = []
for idx, sample in enumerate(it_new):
resumed_next_samples.append(sample["id"])
if idx >= 4:
break
print(f"\nExpected next samples: {original_next_samples}")
print(f"Actual next samples: {resumed_next_samples}")
print(
f"\n❌ BUG: {resumed_next_samples[0] - original_next_samples[0]} samples were skipped!"
)
```
With map
```
Checkpoint saved at sample: 9
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [50, 51, 52, 53, 54]
❌ BUG: 40 samples were skipped!
```
### Expected behavior
without map
```
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [10, 11, 12, 13, 14]
❌ BUG: 0 samples were skipped!
```
### Environment info
datasets == 3.6.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7630/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7629/comments | https://api.github.com/repos/huggingface/datasets/issues/7629/events | https://github.com/huggingface/datasets/pull/7629 | 3,161,169,782 | PR_kwDODunzps6bTc7b | 7,629 | Add test for `as_iterable_dataset()` method in DatasetBuilder | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-19T19:23:55 | 2025-06-19T19:23:55 | null | CONTRIBUTOR | null | null | null | This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628.
The test:
- Loads a builder using `load_dataset_builder("c4", "en")`
- Runs `download_and_prepare()`
- Streams examples using `builder.as_iterable_dataset(split="train[:100]")`
- Verifies streamed examples contain the "text" field
This ensures that the builder correctly streams data from cached Arrow files.
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7629/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7629",
"html_url": "https://github.com/huggingface/datasets/pull/7629",
"diff_url": "https://github.com/huggingface/datasets/pull/7629.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7629.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7628/comments | https://api.github.com/repos/huggingface/datasets/issues/7628/events | https://github.com/huggingface/datasets/pull/7628 | 3,161,156,461 | PR_kwDODunzps6bTaGk | 7,628 | Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-19T19:15:41 | 2025-06-19T19:15:41 | null | CONTRIBUTOR | null | null | null | This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481.
It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory.
This is useful for large-scale training scenarios where memory is constrained. A test has also been added in `test_builder.py`.
Related to: #5481
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7628/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7628",
"html_url": "https://github.com/huggingface/datasets/pull/7628",
"diff_url": "https://github.com/huggingface/datasets/pull/7628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7628.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7627/comments | https://api.github.com/repos/huggingface/datasets/issues/7627/events | https://github.com/huggingface/datasets/issues/7627 | 3,160,544,390 | I_kwDODunzps68YhSG | 7,627 | Creating a HF Dataset from lakeFS with S3 storage takes too much time! | {
"login": "Thunderhead-exe",
"id": 118734142,
"node_id": "U_kgDOBxO9Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Thunderhead-exe",
"html_url": "https://github.com/Thunderhead-exe",
"followers_url": "https://api.github.com/users/Thunderhead-exe/followers",
"following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}",
"gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions",
"organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs",
"repos_url": "https://api.github.com/users/Thunderhead-exe/repos",
"events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"### > Update\n\nThe bottleneck, from what I understand, was making one network request per file\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance\n\nUsing webDataset to transform the large number of files to few .tar files and passing “webdataset” instead of “imagefolder” to the load_dataset function worked perfectly (took only ~11s)"
]
| 2025-06-19T14:28:41 | 2025-06-23T12:39:10 | 2025-06-23T12:39:10 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | Hi,
I’m new to HF datasets and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_
Here I’m using ±30000 PIL images from the MNIST data; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into cache then building the dataset.
– Please find below the execution screenshot –
Is there a way to optimize this or am I doing something wrong?
Thanks!
 | {
"login": "Thunderhead-exe",
"id": 118734142,
"node_id": "U_kgDOBxO9Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Thunderhead-exe",
"html_url": "https://github.com/Thunderhead-exe",
"followers_url": "https://api.github.com/users/Thunderhead-exe/followers",
"following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}",
"gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions",
"organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs",
"repos_url": "https://api.github.com/users/Thunderhead-exe/repos",
"events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7627/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7626/comments | https://api.github.com/repos/huggingface/datasets/issues/7626/events | https://github.com/huggingface/datasets/pull/7626 | 3,159,322,138 | PR_kwDODunzps6bNMuF | 7,626 | feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013) | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-19T07:41:45 | 2025-06-26T06:43:16 | null | CONTRIBUTOR | null | null | null | ## Summary
This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified.
## What’s Implemented
- Injected logic at the end of `Dataset.map()` to:
- Identify untouched columns not in `input_columns` or `remove_columns`
- Select those columns from the original dataset
- Concatenate them with the transformed result using `pyarrow.concat_tables`
## Example Behavior
```python
ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
ds2 = ds.map(lambda x: {"c": x["a"] + 10}, input_columns=["a"], remove_columns=["a"])
print(ds2.column_names) # Output: ['b', 'c']
````
Column `b` is reused from the original dataset.
## Notes
* This keeps disk usage and caching minimal by avoiding full dataset duplication.
* Only triggered when `input_columns` is set.
---
cc @lhoestq @mariosasko for review 🙂
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7626/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7626",
"html_url": "https://github.com/huggingface/datasets/pull/7626",
"diff_url": "https://github.com/huggingface/datasets/pull/7626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7626.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7625/comments | https://api.github.com/repos/huggingface/datasets/issues/7625/events | https://github.com/huggingface/datasets/pull/7625 | 3,159,016,001 | PR_kwDODunzps6bMKof | 7,625 | feat: Add h5folder dataset loader for HDF5 support | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7625). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
 "I guess the test failed because the import os, import h5py, and import datasets lines are not alphabetically sorted or not grouped properly.",
"This commit was accidental - `[Merge branch 'main' into patch-4]`. The \r\n`[chore: fix import order in h5folder.py to satisfy linter]` should solve the import order issue. \r\n\r\n\r\n"
]
| 2025-06-19T05:39:10 | 2025-06-26T05:44:26 | null | CONTRIBUTOR | null | null | null | ### Related Issue
Closes #3113
### What does this PR do?
This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format.
It allows users to do:
```python
from datasets import load_dataset
dataset = load_dataset("h5folder", data_dir="path/to/")
````
### 🧩 Design Overview
* Implemented inside `datasets/packaged_modules/h5folder/h5folder.py`
* Based on the `GeneratorBasedBuilder` API
* Uses `h5py` to read HDF5 files and yield examples
* Expects datasets such as `id`, `data`, and `label` inside `data.h5`
* Converts numpy arrays to Python types before yielding
### 🧪 Example `.h5` Structure (for local testing)
```python
import h5py
import numpy as np
with h5py.File("data.h5", "w") as f:
f.create_dataset("id", data=np.arange(100))
f.create_dataset("data", data=np.random.randn(100, 10))
f.create_dataset("label", data=np.random.randint(0, 2, size=100))
```
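For context, the reading side of such a builder could look roughly like the sketch below. `generate_examples` is a hypothetical stand-in for the module's actual `_generate_examples` method, and the exact field handling may differ:

```python
import h5py
import numpy as np

def generate_examples(path):
    # Hypothetical sketch: iterate row-wise over the HDF5 datasets and
    # convert numpy values to plain Python types before yielding.
    with h5py.File(path, "r") as f:
        for i in range(len(f["id"])):
            yield i, {
                "id": int(f["id"][i]),
                "data": np.asarray(f["data"][i]).tolist(),
                "label": int(f["label"][i]),
            }
```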
### ✅ Testing
- The loader logic follows the structure of existing modules like `imagefolder`
- Will rely on Hugging Face CI to validate integration
- Manually testing planned once merged or during feedback
### 📁 Files Added
* `datasets/src/datasets/packaged_modules/h5folder/h5folder.py`
### 📌 Component(s) Affected
* `area/datasets`
* `area/load`
### 📦 Release Note Classification
* `rn/feature` – Adds support for loading `.h5` datasets via `load_dataset("h5folder", ...)`
---
Let me know if any changes or improvements are needed — happy to iterate. Thanks for reviewing!
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7625/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7625/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7625",
"html_url": "https://github.com/huggingface/datasets/pull/7625",
"diff_url": "https://github.com/huggingface/datasets/pull/7625.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7625.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7624/comments | https://api.github.com/repos/huggingface/datasets/issues/7624/events | https://github.com/huggingface/datasets/issues/7624 | 3,156,136,624 | I_kwDODunzps68HtKw | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | {
"login": "jcerveto",
"id": 98875217,
"node_id": "U_kgDOBeS3UQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcerveto",
"html_url": "https://github.com/jcerveto",
"followers_url": "https://api.github.com/users/jcerveto/followers",
"following_url": "https://api.github.com/users/jcerveto/following{/other_user}",
"gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions",
"organizations_url": "https://api.github.com/users/jcerveto/orgs",
"repos_url": "https://api.github.com/users/jcerveto/repos",
"events_url": "https://api.github.com/users/jcerveto/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcerveto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"Hi ! It should follow the same order as the order of the keys in the metadata file",
  "Hi! Thank you for your answer.\n\nAs you said, I forced every key in every JSON to have an order using `collections.OrderedDict` in Python. Now it works!\n\nTY"
]
| 2025-06-18T09:25:19 | 2025-06-20T07:46:43 | 2025-06-20T07:46:43 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.
I have a couple of questions:
Is there a way to force the dataset card to display the `"image"` column first?
Is there currently any way to control or influence the column order in the dataset preview UI?
Does the order of keys in the .jsonl file or the features argument affect the display order?
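Per the maintainer's reply in the comments above, the preview follows the key order in the metadata file. Since Python dicts preserve insertion order (3.7+), writing the metadata with `"image"` first is enough; the file name and fields below are illustrative assumptions:

```python
import json

rows = [
    {"image": "images/0001.png", "caption": "example row"},  # "image" key first
]

# Keys are serialized in insertion order, so "image" stays first in the file.
with open("metadata.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```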
Thanks again for your time and help! :blush: | {
"login": "jcerveto",
"id": 98875217,
"node_id": "U_kgDOBeS3UQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcerveto",
"html_url": "https://github.com/jcerveto",
"followers_url": "https://api.github.com/users/jcerveto/followers",
"following_url": "https://api.github.com/users/jcerveto/following{/other_user}",
"gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions",
"organizations_url": "https://api.github.com/users/jcerveto/orgs",
"repos_url": "https://api.github.com/users/jcerveto/repos",
"events_url": "https://api.github.com/users/jcerveto/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcerveto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7624/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7623/comments | https://api.github.com/repos/huggingface/datasets/issues/7623/events | https://github.com/huggingface/datasets/pull/7623 | 3,154,519,684 | PR_kwDODunzps6a9Jk5 | 7,623 | fix: raise error in FolderBasedBuilder when data_dir and data_files are missing | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"@lhoestq Moved the logic to FolderBasedBuilder._info() as discussed in previous PR (#7618). Let me know if anything else is needed — happy to update!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7623). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-17T19:16:34 | 2025-06-18T14:18:41 | 2025-06-18T14:18:41 | CONTRIBUTOR | null | null | null | ### Related Issues/PRs
Fixes #6152
---
### What changes are proposed in this pull request?
This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.).
---
### Why this change?
Previously, when calling:
```python
load_dataset("audiofolder")
````
without specifying `data_dir` or `data_files`, the loader would silently fallback to the **current working directory**, leading to:
* Long loading times
* Unexpected behavior (e.g., scanning unrelated files)
This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method — keeping the logic localized to the specific builder instead of a generic loader function.
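The guard itself is simple; the following is an illustrative sketch only (the function name and message wording are assumptions, not the PR's exact code):

```python
def check_data_source(data_dir, data_files):
    # Hypothetical sketch of the validation added in FolderBasedBuilder._info()
    if data_dir is None and not data_files:
        raise ValueError(
            "At least one of data_dir or data_files must be specified, "
            "but got data_dir=None and data_files=None."
        )

check_data_source(data_dir="path/to/audio", data_files=None)  # passes silently
```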
---
### How is this PR tested?
* ✅ Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` → a `ValueError` is now raised early.
* ✅ Existing functionality (with valid input) remains unaffected.
---
### Does this PR require documentation update?
* [x] No
---
### Release Notes
#### Is this a user-facing change?
* [x] Yes
> Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory.
---
#### What component(s) does this PR affect?
* [x] `area/datasets`
* [x] `area/load`
---
<a name="release-note-category"></a>
#### How should the PR be classified?
* [x] `rn/bug-fix` - A user-facing bug fix
---
#### Should this be included in the next patch release?
* [x] Yes | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7623/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7623",
"html_url": "https://github.com/huggingface/datasets/pull/7623",
"diff_url": "https://github.com/huggingface/datasets/pull/7623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7623.patch",
"merged_at": "2025-06-18T14:18:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7622/comments | https://api.github.com/repos/huggingface/datasets/issues/7622/events | https://github.com/huggingface/datasets/pull/7622 | 3,154,398,557 | PR_kwDODunzps6a8v6J | 7,622 | Guard against duplicate builder_kwargs/config_kwargs in load_dataset_… | {
"login": "Shohail-Ismail",
"id": 149825575,
"node_id": "U_kgDOCO4oJw",
"avatar_url": "https://avatars.githubusercontent.com/u/149825575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shohail-Ismail",
"html_url": "https://github.com/Shohail-Ismail",
"followers_url": "https://api.github.com/users/Shohail-Ismail/followers",
"following_url": "https://api.github.com/users/Shohail-Ismail/following{/other_user}",
"gists_url": "https://api.github.com/users/Shohail-Ismail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shohail-Ismail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shohail-Ismail/subscriptions",
"organizations_url": "https://api.github.com/users/Shohail-Ismail/orgs",
"repos_url": "https://api.github.com/users/Shohail-Ismail/repos",
"events_url": "https://api.github.com/users/Shohail-Ismail/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shohail-Ismail/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | []
| 2025-06-17T18:28:35 | 2025-06-17T18:38:56 | null | NONE | null | null | null | …builder (#4910)
### What does this PR do?
Fixes edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`.
### Implementation details
- Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` and `config_kwargs`
- Wrote a unit test in `tests/test_load_duplicate_keys.py` to verify the exception is raised correctly
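The guard amounts to a set intersection over the two kwarg dicts; a minimal sketch (names here are illustrative, not the PR's exact code):

```python
def check_no_duplicate_keys(builder_kwargs, config_kwargs):
    # Hypothetical sketch of the duplicate-key guard in load_dataset_builder
    duplicates = sorted(set(builder_kwargs) & set(config_kwargs))
    if duplicates:
        raise TypeError(
            f"Keys {duplicates} appear in both builder_kwargs and "
            "config_kwargs; pass each key only once."
        )

check_no_duplicate_keys({"name": "x"}, {"split": "train"})  # passes silently
```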
### Fixes
Closes #4910
### Reviewers
@zach-huggingface
@SunMarc
Would appreciate your review if you have time - thanks! | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7622/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7622",
"html_url": "https://github.com/huggingface/datasets/pull/7622",
"diff_url": "https://github.com/huggingface/datasets/pull/7622.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7622.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7621/comments | https://api.github.com/repos/huggingface/datasets/issues/7621/events | https://github.com/huggingface/datasets/pull/7621 | 3,153,780,963 | PR_kwDODunzps6a6rAu | 7,621 | minor docs data aug | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-17T14:46:57 | 2025-06-17T14:50:28 | 2025-06-17T14:47:11 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7621/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7621",
"html_url": "https://github.com/huggingface/datasets/pull/7621",
"diff_url": "https://github.com/huggingface/datasets/pull/7621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7621.patch",
"merged_at": "2025-06-17T14:47:11"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7620/comments | https://api.github.com/repos/huggingface/datasets/issues/7620/events | https://github.com/huggingface/datasets/pull/7620 | 3,153,565,183 | PR_kwDODunzps6a58TP | 7,620 | Fixes in docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7620). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
| 2025-06-17T13:41:54 | 2025-06-17T13:58:26 | 2025-06-17T13:58:24 | MEMBER | null | null | null | before release 4.0
(I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7620/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7620",
"html_url": "https://github.com/huggingface/datasets/pull/7620",
"diff_url": "https://github.com/huggingface/datasets/pull/7620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7620.patch",
"merged_at": "2025-06-17T13:58:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7619/comments | https://api.github.com/repos/huggingface/datasets/issues/7619/events | https://github.com/huggingface/datasets/issues/7619 | 3,153,058,517 | I_kwDODunzps6779rV | 7,619 | `from_list` fails while `from_generator` works for large datasets | {
"login": "abdulfatir",
"id": 4028948,
"node_id": "MDQ6VXNlcjQwMjg5NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4028948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdulfatir",
"html_url": "https://github.com/abdulfatir",
"followers_url": "https://api.github.com/users/abdulfatir/followers",
"following_url": "https://api.github.com/users/abdulfatir/following{/other_user}",
"gists_url": "https://api.github.com/users/abdulfatir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdulfatir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdulfatir/subscriptions",
"organizations_url": "https://api.github.com/users/abdulfatir/orgs",
"repos_url": "https://api.github.com/users/abdulfatir/repos",
"events_url": "https://api.github.com/users/abdulfatir/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdulfatir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"@lhoestq any thoughts on this? ",
"Thanks for the report! This behavior is expected due to how `from_list()` and `from_generator()` differ internally.\n\n- `from_list()` builds the entire dataset in memory at once, which can easily exceed limits (especially with variable-length arrays or millions of rows). The Arrow error you're seeing (`Value too large to fit in C integer type`) is related to that memory overload.\n- `from_generator()` avoids this issue by batching and streaming data incrementally, which is much more memory-efficient.\n\nSo for large datasets like time series or NLP data with large arrays, `from_generator()` (or `datasets.IterableDataset`) is the recommended approach.\n\nHope this helps clarify the behavior — let me know if you'd like me to point to prior issues/discussions where similar tradeoffs came up!\n",
 "@ArjunJagdale Yes, it is related to using a large dataset but not in the way that you have described. As I understand, the problem here is that `datasets` does not use `LargeList` with 64-bit offsets from PyArrow when using `from_list`. However, with `from_generator` this seems to work okay, likely due to batching. As such, this is more like a bug than an expected outcome. If this is indeed \"expected\", `datasets` should fail more gracefully in these cases with a recommendation to use `from_generator`.",
"Thanks for the clarification — you're absolutely right, this seems tied to the use of 32-bit list offsets in from_list() under the hood. That distinction between List and LargeList in PyArrow is a crucial one, and definitely worth highlighting in the docs or error message. Happy to help if a check or fallback to LargeList makes sense here."
]
| 2025-06-17T10:58:55 | 2025-06-29T16:34:44 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.
### Steps to reproduce the bug
#### Snippet A (crashes)
```py
from tqdm.auto import tqdm
import numpy as np
import datasets
def data_generator():
for i in tqdm(range(10_000_000)):
length = np.random.randint(2048)
series = np.random.rand(length)
yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}
data_list = list(data_generator())
ds = datasets.Dataset.from_list(data_list)
```
The last line crashes with
```
ArrowInvalid: Value 2147483761 too large to fit in C integer type
```
#### Snippet B (works)
```py
from tqdm.auto import tqdm
import numpy as np
import datasets
def data_generator():
for i in tqdm(range(10_000_000)):
length = np.random.randint(2048)
series = np.random.rand(length)
yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}
ds = datasets.Dataset.from_generator(data_generator)
```
### Expected behavior
I expected both the approaches to work or to fail similarly.
### Environment info
```
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.32.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7619/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7618/comments | https://api.github.com/repos/huggingface/datasets/issues/7618/events | https://github.com/huggingface/datasets/pull/7618 | 3,148,912,897 | PR_kwDODunzps6aqOnm | 7,618 | fix: raise error when folder-based datasets are loaded without data_dir or data_files | {
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| open | false | null | []
| null | [
"Great ! Since this logic is specific to one builder class maybe this check can be in the class definition ? I think you can put it in FolderBasedBuilder's `_info()` method."
]
| 2025-06-16T07:43:59 | 2025-06-16T12:13:26 | null | CONTRIBUTOR | null | null | null |
### Related Issues/PRs
<!-- Uncomment 'Resolve' if this PR can close the linked items. -->
<!-- Resolve --> #6152
---
### What changes are proposed in this pull request?
This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior.
**Before this fix**:
- When `data_dir` or `data_files` were not provided, the loader defaulted to the current working directory.
- This caused unexpected behavior like:
- Long loading times
- Scanning unintended local files
**Now**:
- If both `data_dir` and `data_files` are missing, a `ValueError` is raised early with a helpful message.
---
### How is this PR tested?
- [x] Manual test via `load_dataset("audiofolder")` with missing `data_dir`
- [ ] Existing unit tests (should not break any)
- [ ] New tests (if needed, maintainers can guide)
---
### Does this PR require documentation update?
- [x] No. You can skip the rest of this section.
---
### Release Notes
#### Is this a user-facing change?
- [x] Yes. Give a description of this change to be included in the release notes for users.
> Adds early error handling for folder-based datasets when neither `data_dir` nor `data_files` is specified, avoiding unintended resolution to the current directory.
#### What component(s), interfaces, languages, and integrations does this PR affect?
Components:
- [x] `area/datasets`
- [x] `area/load`
---
<a name="release-note-category"></a>
#### How should the PR be classified in the release notes? Choose one:
- [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
---
#### Should this PR be included in the next patch release?
- [x] Yes (this PR will be cherry-picked and included in the next patch release)
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7618/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7618",
"html_url": "https://github.com/huggingface/datasets/pull/7618",
"diff_url": "https://github.com/huggingface/datasets/pull/7618.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7618.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7617 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7617/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7617/comments | https://api.github.com/repos/huggingface/datasets/issues/7617/events | https://github.com/huggingface/datasets/issues/7617 | 3,148,102,085 | I_kwDODunzps67pDnF | 7,617 | Unwanted column padding in nested lists of dicts | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | []
| closed | false | null | []
| null | [
"Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(example):\n if isinstance(example, list):\n return [remove_padding(value) if isinstance(value, (dict, list)) else value for value in example]\n elif isinstance(example, Mapping):\n return {\n key: remove_padding(value) if isinstance(value, (dict, list)) else value\n for key, value in example.items()\n if value is not None\n }\n else:\n raise TypeError(\"Input must be a list or a dictionary.\")\n\n# Example:\nexample = next(iter(dataset))\nexample = remove_padding(example)\n```"
]
| 2025-06-15T22:06:17 | 2025-06-16T13:43:31 | 2025-06-16T13:43:31 | MEMBER | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '...'}, {'b': '...'}]}
```
Is there an easy way to automatically remove these auto-filled null/none values?
If not, I probably need a recursive none exclusion function, don't I?
Datasets 3.6.0 | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7617/timeline | null | completed | null | null | false |
Downloads last month: 99