Dataset columns and value ranges:

url: string (length 58-61)
repository_url: string (1 class)
labels_url: string (length 72-75)
comments_url: string (length 67-70)
events_url: string (length 65-68)
html_url: string (length 46-51)
id: int64 (599M-3.31B)
node_id: string (length 18-32)
number: int64 (1-7.73k)
title: string (length 1-290)
user: dict
labels: list (length 0-4)
state: string (2 classes)
locked: bool (1 class)
assignee: dict
assignees: list (length 0-4)
milestone: dict
comments: list (length 0-30)
created_at: timestamp[ns] (2020-04-14 10:18:02 to 2025-08-09 15:52:54)
updated_at: timestamp[ns] (2020-04-27 16:04:17 to 2025-08-10 05:26:27)
closed_at: timestamp[ns] (2020-04-14 12:01:40 to 2025-08-07 08:27:18)
author_association: string (4 classes)
type: float64
active_lock_reason: float64
draft: float64 (0-1)
pull_request: dict
body: string (length 0-228k)
closed_by: dict
reactions: dict
timeline_url: string (length 67-70)
performed_via_github_app: float64
state_reason: string (4 classes)
sub_issues_summary: dict
https://api.github.com/repos/huggingface/datasets/issues/7633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7633/comments
https://api.github.com/repos/huggingface/datasets/issues/7633/events
https://github.com/huggingface/datasets/issues/7633
3,168,399,637
I_kwDODunzps682fEV
7,633
Proposal: Small Tamil Discourse Coherence Dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/66418501?v=4", "events_url": "https://api.github.com/users/bikkiNitSrinagar/events{/privacy}", "followers_url": "https://api.github.com/users/bikkiNitSrinagar/followers", "following_url": "https://api.github.com/users/bikkiNitSrinagar/following{/other_user}", "gists_url": "https://api.github.com/users/bikkiNitSrinagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bikkiNitSrinagar", "id": 66418501, "login": "bikkiNitSrinagar", "node_id": "MDQ6VXNlcjY2NDE4NTAx", "organizations_url": "https://api.github.com/users/bikkiNitSrinagar/orgs", "received_events_url": "https://api.github.com/users/bikkiNitSrinagar/received_events", "repos_url": "https://api.github.com/users/bikkiNitSrinagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bikkiNitSrinagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bikkiNitSrinagar/subscriptions", "type": "User", "url": "https://api.github.com/users/bikkiNitSrinagar", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-06-23T14:24:40
2025-06-23T14:24:40
null
NONE
null
null
null
null
I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.

- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence

I’ll use GitHub’s web editor and Google Colab. Please confirm if this fits.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7633/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7632/comments
https://api.github.com/repos/huggingface/datasets/issues/7632/events
https://github.com/huggingface/datasets/issues/7632
3,168,283,589
I_kwDODunzps682CvF
7,632
Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/37377515?v=4", "events_url": "https://api.github.com/users/ganiket19/events{/privacy}", "followers_url": "https://api.github.com/users/ganiket19/followers", "following_url": "https://api.github.com/users/ganiket19/following{/other_user}", "gists_url": "https://api.github.com/users/ganiket19/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ganiket19", "id": 37377515, "login": "ganiket19", "node_id": "MDQ6VXNlcjM3Mzc3NTE1", "organizations_url": "https://api.github.com/users/ganiket19/orgs", "received_events_url": "https://api.github.com/users/ganiket19/received_events", "repos_url": "https://api.github.com/users/ganiket19/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ganiket19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ganiket19/subscriptions", "type": "User", "url": "https://api.github.com/users/ganiket19", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi! This is now handled in PR #7638", "Thank you for implementing the suggestion it would be great help in our use case. " ]
2025-06-23T13:49:24
2025-07-08T06:52:53
null
NONE
null
null
null
null
### Feature request

Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples are common.

Reference:
https://discuss.huggingface.co/t/handle-errors-when-loading-images-404-corrupted-etc/50318/5
https://discuss.huggingface.co/t/handling-non-existing-url-in-image-dataset-while-cast-column/69185

Proposed Feature: introduce a mechanism (e.g., a continue_on_error=True flag or global error handling mode) in Image(decode=True) that:
- Skips invalid images and sets them as None, or
- Logs the error but allows the rest of the dataset to be processed without interruption.

Example Usage:

    from datasets import load_dataset, Image
    dataset = load_dataset("my_dataset")
    dataset = dataset.cast_column("image", Image(decode=True, continue_on_error=True))

Benefits:
- Ensures robust large-scale image dataset processing.
- Improves developer productivity by avoiding custom retry/error-handling code.
- Aligns with best practices in dataset preprocessing pipelines that tolerate minor data corruption.

Potential Implementation Options:
- Internally wrap the decoding in a try/except block.
- Return None or a placeholder on failure.
- Optionally allow custom error callbacks or logging.

### Motivation

- Robustness: Large-scale image datasets often contain a small fraction of corrupt files or unreachable URLs. Halting on the first error forces users to write custom workarounds or preprocess externally.
- Simplicity: A built-in flag removes boilerplate try/except logic around every decode step.
- Performance: Skipping invalid samples inline is more efficient than a two-pass approach (filter then decode).

### Your contribution

1. API Change: extend datasets.features.Image(decode=True) to accept continue_on_error: bool = False.
2. Behavior:
   - If continue_on_error=False (default), maintain current behavior: any decode error raises an exception.
   - If continue_on_error=True, wrap decode logic in try/except: on success, store the decoded image; on failure, log a warning (e.g., via logging.warning) and set the field to None (or a sentinel value).
3. Optional Enhancements:
   - Allow a callback hook: Image(decode=True, continue_on_error=True, on_error=lambda idx, url, exc: ...)
   - Emit metrics or counts of skipped images.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7632/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7631/comments
https://api.github.com/repos/huggingface/datasets/issues/7631/events
https://github.com/huggingface/datasets/pull/7631
3,165,127,657
PR_kwDODunzps6bgwOB
7,631
Pass user-agent from DownloadConfig into fsspec storage_options
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "- This PR assumes that `HfFileSystem` in `huggingface_hub` supports receiving `headers` in `storage_options`. If not, a follow-up PR can be opened to add this support to `HfFileSystem.__init__`.\r\n- No test was added for this since it’s a config passthrough. If needed, I’d be happy to add one." ]
2025-06-21T14:22:25
2025-06-21T14:25:28
null
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7631.diff", "html_url": "https://github.com/huggingface/datasets/pull/7631", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7631" }
Fixes part of issue #6046

### Problem

The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests.

### Solution

Added support for injecting the `user-agent` into `storage_options["headers"]` within `_prepare_single_hop_path_and_storage_options()` based on the `protocol`. Now, when using `hf://`, `http://`, or `https://`, the custom user-agent is passed automatically.

### Code Location

Modified:
- `src/datasets/utils/file_utils.py`

Used `get_datasets_user_agent(...)` to ensure proper formatting and fallback logic.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7631/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7630/comments
https://api.github.com/repos/huggingface/datasets/issues/7630/events
https://github.com/huggingface/datasets/issues/7630
3,164,650,900
I_kwDODunzps68oL2U
7,630
[bug] resume from ckpt skips samples if .map is applied
{ "avatar_url": "https://avatars.githubusercontent.com/u/23004953?v=4", "events_url": "https://api.github.com/users/felipemello1/events{/privacy}", "followers_url": "https://api.github.com/users/felipemello1/followers", "following_url": "https://api.github.com/users/felipemello1/following{/other_user}", "gists_url": "https://api.github.com/users/felipemello1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/felipemello1", "id": 23004953, "login": "felipemello1", "node_id": "MDQ6VXNlcjIzMDA0OTUz", "organizations_url": "https://api.github.com/users/felipemello1/orgs", "received_events_url": "https://api.github.com/users/felipemello1/received_events", "repos_url": "https://api.github.com/users/felipemello1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/felipemello1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felipemello1/subscriptions", "type": "User", "url": "https://api.github.com/users/felipemello1", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Thanks for reporting this — it looks like a separate but related bug to #7538, which involved sample loss when resuming an `IterableDataset` wrapped in `FormattedExamplesIterable`. That was resolved in #7553 by re-batching the iterable to track offset correctly.\n\nIn this case, the issue seems to arise specifically from applying `.map()` before sharding and checkpointing. That wraps the iterable in `MappedExamplesIterable`, which may not preserve or propagate `shard_example_idx` correctly across `.state_dict()` and `.load_state_dict()` calls.\n\nYou can see that without `.map()`, resume works fine — but with `.map()`, it jumps from sample 9 to 50, skipping the rest of the shard.\n\nI'll dig deeper into how `MappedExamplesIterable` manages offsets and whether it supports proper checkpoint resumption. If not, we might need a fix similar to the one in #7553, or a wrapper to preserve resume metadata.\n\nHappy to help fix it!\n", "Let me know if a dedicated test case is required — happy to add one!" ]
2025-06-21T01:50:03
2025-06-29T07:51:32
null
NONE
null
null
null
null
### Describe the bug

Resume from ckpt skips samples if .map is applied.

Maybe related: https://github.com/huggingface/datasets/issues/7538

### Steps to reproduce the bug

```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

# Create dataset with map transformation
def create_dataset():
    ds = Dataset.from_dict({"id": list(range(100))})
    ds = ds.to_iterable_dataset(num_shards=4)
    ds = ds.map(lambda x: x)  # comment it out to get desired behavior
    ds = split_dataset_by_node(ds, rank=0, world_size=2)
    return ds

ds = create_dataset()

# Iterate and save checkpoint after 10 samples
it = iter(ds)
for idx, sample in enumerate(it):
    if idx == 9:  # Checkpoint after 10 samples
        checkpoint = ds.state_dict()
        print(f"Checkpoint saved at sample: {sample['id']}")
        break

# Continue with original iterator
original_next_samples = []
for idx, sample in enumerate(it):
    original_next_samples.append(sample["id"])
    if idx >= 4:
        break

# Resume from checkpoint
ds_new = create_dataset()
ds_new.load_state_dict(checkpoint)

# Get samples from resumed iterator
it_new = iter(ds_new)
resumed_next_samples = []
for idx, sample in enumerate(it_new):
    resumed_next_samples.append(sample["id"])
    if idx >= 4:
        break

print(f"\nExpected next samples: {original_next_samples}")
print(f"Actual next samples: {resumed_next_samples}")
print(
    f"\n❌ BUG: {resumed_next_samples[0] - original_next_samples[0]} samples were skipped!"
)
```

With map:

```
Checkpoint saved at sample: 9
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [50, 51, 52, 53, 54]
❌ BUG: 40 samples were skipped!
```

### Expected behavior

Without map:

```
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [10, 11, 12, 13, 14]
❌ BUG: 0 samples were skipped!
```

### Environment info

datasets == 3.6.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7630/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7629/comments
https://api.github.com/repos/huggingface/datasets/issues/7629/events
https://github.com/huggingface/datasets/pull/7629
3,161,169,782
PR_kwDODunzps6bTc7b
7,629
Add test for `as_iterable_dataset()` method in DatasetBuilder
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-06-19T19:23:55
2025-06-19T19:23:55
null
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7629.diff", "html_url": "https://github.com/huggingface/datasets/pull/7629", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7629.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7629" }
This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628.

The test:
- Loads a builder using `load_dataset_builder("c4", "en")`
- Runs `download_and_prepare()`
- Streams examples using `builder.as_iterable_dataset(split="train[:100]")`
- Verifies streamed examples contain the "text" field

This ensures that the builder correctly streams data from cached Arrow files.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7629/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7628/comments
https://api.github.com/repos/huggingface/datasets/issues/7628/events
https://github.com/huggingface/datasets/pull/7628
3,161,156,461
PR_kwDODunzps6bTaGk
7,628
Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-06-19T19:15:41
2025-06-19T19:15:41
null
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7628.diff", "html_url": "https://github.com/huggingface/datasets/pull/7628", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7628" }
This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481. It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory. This is useful for large-scale training scenarios where memory is constrained. A test has also been added in `test_builder.py`. Related to: #5481
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7628/timeline
null
null
null
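A usage sketch of the API proposed in PR 7628 above; `as_iterable_dataset()` is the PR's proposed method and is not guaranteed to exist in released versions of `datasets`.

```python
# Sketch of the streaming flow this PR proposes; the as_iterable_dataset() call only
# exists if the PR is merged, so treat this as illustrative rather than released API.
from datasets import load_dataset_builder

builder = load_dataset_builder("c4", "en")
builder.download_and_prepare()                      # writes the cached Arrow files

ids = builder.as_iterable_dataset(split="train")    # proposed method: stream from the Arrow cache
for example in ids.take(3):
    print(example["text"][:80])
```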
https://api.github.com/repos/huggingface/datasets/issues/7627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7627/comments
https://api.github.com/repos/huggingface/datasets/issues/7627/events
https://github.com/huggingface/datasets/issues/7627
3,160,544,390
I_kwDODunzps68YhSG
7,627
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
{ "avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4", "events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}", "followers_url": "https://api.github.com/users/Thunderhead-exe/followers", "following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}", "gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Thunderhead-exe", "id": 118734142, "login": "Thunderhead-exe", "node_id": "U_kgDOBxO9Pg", "organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs", "received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events", "repos_url": "https://api.github.com/users/Thunderhead-exe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions", "type": "User", "url": "https://api.github.com/users/Thunderhead-exe", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "### > Update\n\nThe bottleneck, from what I understand, was making one network request per file\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance\n\nUsing webDataset to transform the large number of files to few .tar files and passing “webdataset” instead of “imagefolder” to the load_dataset function worked perfectly (took only ~11s)" ]
2025-06-19T14:28:41
2025-06-23T12:39:10
2025-06-23T12:39:10
NONE
null
null
null
null
Hi, I’m new to HF datasets and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_.

Here I’m using ±30,000 PIL images from the MNIST data, and it is taking around 12 min to execute, which is a lot! From what I understand, it loads the images into the cache and then builds the dataset.

Please find the execution screenshot below.

Is there a way to optimize this, or am I doing something wrong? Thanks!

![Image](https://github.com/user-attachments/assets/c79257c8-f023-42a9-9e6f-0898b3ea93fe)
{ "avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4", "events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}", "followers_url": "https://api.github.com/users/Thunderhead-exe/followers", "following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}", "gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Thunderhead-exe", "id": 118734142, "login": "Thunderhead-exe", "node_id": "U_kgDOBxO9Pg", "organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs", "received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events", "repos_url": "https://api.github.com/users/Thunderhead-exe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions", "type": "User", "url": "https://api.github.com/users/Thunderhead-exe", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7627/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7627/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7626/comments
https://api.github.com/repos/huggingface/datasets/issues/7626/events
https://github.com/huggingface/datasets/pull/7626
3,159,322,138
PR_kwDODunzps6bNMuF
7,626
feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-06-19T07:41:45
2025-07-28T17:39:12
2025-07-28T17:39:12
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7626.diff", "html_url": "https://github.com/huggingface/datasets/pull/7626", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7626.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7626" }
## Summary

This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified.

## What’s Implemented

- Injected logic at the end of `Dataset.map()` to:
  - Identify untouched columns not in `input_columns` or `remove_columns`
  - Select those columns from the original dataset
  - Concatenate them with the transformed result using `pyarrow.concat_tables`

## Example Behavior

```python
ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
ds2 = ds.map(lambda x: {"c": x["a"] + 10}, input_columns=["a"], remove_columns=["a"])
print(ds2.column_names)  # Output: ['b', 'c']
```

Column `b` is reused from the original dataset.

## Notes

* This keeps disk usage and caching minimal by avoiding full dataset duplication.
* Only triggered when `input_columns` is set.

---

cc @lhoestq @mariosasko for review 🙂
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7626/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7625/comments
https://api.github.com/repos/huggingface/datasets/issues/7625/events
https://github.com/huggingface/datasets/pull/7625
3,159,016,001
PR_kwDODunzps6bMKof
7,625
feat: Add h5folder dataset loader for HDF5 support
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7625). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I guess test failed cause import os, import h5py, and import datasets lines are not alphabetically sorted, or not grouped properly.\r\n\r\n![image](https://github.com/user-attachments/assets/ab73f8f9-da50-4ba8-9b2d-b7c30fce94f5)\r\n", "This commit was accidental - `[Merge branch 'main' into patch-4]`. The \r\n`[chore: fix import order in h5folder.py to satisfy linter]` should solve the import order issue. \r\n\r\n\r\n" ]
2025-06-19T05:39:10
2025-06-26T05:44:26
null
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7625.diff", "html_url": "https://github.com/huggingface/datasets/pull/7625", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7625.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7625" }
### Related Issue

Closes #3113

### What does this PR do?

This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format. It allows users to do:

```python
from datasets import load_dataset

dataset = load_dataset("h5folder", data_dir="path/to/")
```

### 🧩 Design Overview

* Implemented inside `datasets/packaged_modules/h5folder/h5folder.py`
* Based on the `GeneratorBasedBuilder` API
* Uses `h5py` to read HDF5 files and yield examples
* Expects datasets such as `id`, `data`, and `label` inside `data.h5`
* Converts numpy arrays to Python types before yielding

### 🧪 Example `.h5` Structure (for local testing)

```python
import h5py
import numpy as np

with h5py.File("data.h5", "w") as f:
    f.create_dataset("id", data=np.arange(100))
    f.create_dataset("data", data=np.random.randn(100, 10))
    f.create_dataset("label", data=np.random.randint(0, 2, size=100))
```

### ✅ Testing

- The loader logic follows the structure of existing modules like `imagefolder`
- Will rely on Hugging Face CI to validate integration
- Manual testing planned once merged or during feedback

### 📁 Files Added

* `datasets/src/datasets/packaged_modules/h5folder/h5folder.py`

### 📌 Component(s) Affected

* `area/datasets`
* `area/load`

### 📦 Release Note Classification

* `rn/feature` – Adds support for loading `.h5` datasets via `load_dataset("h5folder", ...)`

---

Let me know if any changes or improvements are needed — happy to iterate. Thanks for reviewing!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7625/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7624/comments
https://api.github.com/repos/huggingface/datasets/issues/7624/events
https://github.com/huggingface/datasets/issues/7624
3,156,136,624
I_kwDODunzps68HtKw
7,624
#Dataset Make "image" column appear first in dataset preview UI
{ "avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4", "events_url": "https://api.github.com/users/jcerveto/events{/privacy}", "followers_url": "https://api.github.com/users/jcerveto/followers", "following_url": "https://api.github.com/users/jcerveto/following{/other_user}", "gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jcerveto", "id": 98875217, "login": "jcerveto", "node_id": "U_kgDOBeS3UQ", "organizations_url": "https://api.github.com/users/jcerveto/orgs", "received_events_url": "https://api.github.com/users/jcerveto/received_events", "repos_url": "https://api.github.com/users/jcerveto/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions", "type": "User", "url": "https://api.github.com/users/jcerveto", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! It should follow the same order as the order of the keys in the metadata file", "Hi! Thank you for your answer. \n\nAs you said it, I I forced every key in every JSON to have an order using `collections. OrderedDict` in Python. Now, it works!\n\nTY" ]
2025-06-18T09:25:19
2025-06-20T07:46:43
2025-06-20T07:46:43
NONE
null
null
null
null
Hi! #Dataset

I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub. However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.

I have a couple of questions:
- Is there a way to force the dataset card to display the `"image"` column first?
- Is there currently any way to control or influence the column order in the dataset preview UI?
- Does the order of keys in the .jsonl file or the features argument affect the display order?

Thanks again for your time and help! :blush:
{ "avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4", "events_url": "https://api.github.com/users/jcerveto/events{/privacy}", "followers_url": "https://api.github.com/users/jcerveto/followers", "following_url": "https://api.github.com/users/jcerveto/following{/other_user}", "gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jcerveto", "id": 98875217, "login": "jcerveto", "node_id": "U_kgDOBeS3UQ", "organizations_url": "https://api.github.com/users/jcerveto/orgs", "received_events_url": "https://api.github.com/users/jcerveto/received_events", "repos_url": "https://api.github.com/users/jcerveto/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions", "type": "User", "url": "https://api.github.com/users/jcerveto", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7624/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7623/comments
https://api.github.com/repos/huggingface/datasets/issues/7623/events
https://github.com/huggingface/datasets/pull/7623
3,154,519,684
PR_kwDODunzps6a9Jk5
7,623
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq Moved the logic to FolderBasedBuilder._info() as discussed in previous PR (#7618). Let me know if anything else is needed — happy to update!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7623). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-17T19:16:34
2025-06-18T14:18:41
2025-06-18T14:18:41
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7623.diff", "html_url": "https://github.com/huggingface/datasets/pull/7623", "merged_at": "2025-06-18T14:18:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/7623.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7623" }
### Related Issues/PRs

Fixes #6152

---

### What changes are proposed in this pull request?

This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.).

---

### Why this change?

Previously, when calling:

```python
load_dataset("audiofolder")
```

without specifying `data_dir` or `data_files`, the loader would silently fallback to the **current working directory**, leading to:

* Long loading times
* Unexpected behavior (e.g., scanning unrelated files)

This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method — keeping the logic localized to the specific builder instead of a generic loader function.

---

### How is this PR tested?

* ✅ Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` → a `ValueError` is now raised early.
* ✅ Existing functionality (with valid input) remains unaffected.

---

### Does this PR require documentation update?

* [x] No

---

### Release Notes

#### Is this a user-facing change?

* [x] Yes

> Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory.

---

#### What component(s) does this PR affect?

* [x] `area/datasets`
* [x] `area/load`

---

<a name="release-note-category"></a>

#### How should the PR be classified?

* [x] `rn/bug-fix` - A user-facing bug fix

---

#### Should this be included in the next patch release?

* [x] Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7623/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7623/timeline
null
null
null
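A rough standalone sketch of the kind of check PR 7623 above adds inside `FolderBasedBuilder._info()`; the helper name below is hypothetical and the merged implementation may phrase the error differently.

```python
# Illustrative helper mirroring the early data_dir/data_files validation; the real check
# lives inside the builder's _info() rather than in a free function like this one.
def validate_folder_based_config(data_dir, data_files):
    if not data_dir and not data_files:
        raise ValueError(
            "At least one of data_dir or data_files must be specified for folder-based "
            "datasets, e.g. load_dataset('audiofolder', data_dir='path/to/folder')."
        )

# Example: this raises ValueError, matching the behavior the PR describes.
# validate_folder_based_config(None, None)
```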
https://api.github.com/repos/huggingface/datasets/issues/7622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7622/comments
https://api.github.com/repos/huggingface/datasets/issues/7622/events
https://github.com/huggingface/datasets/pull/7622
3,154,398,557
PR_kwDODunzps6a8v6J
7,622
Guard against duplicate builder_kwargs/config_kwargs in load_dataset_…
{ "avatar_url": "https://avatars.githubusercontent.com/u/149825575?v=4", "events_url": "https://api.github.com/users/Shohail-Ismail/events{/privacy}", "followers_url": "https://api.github.com/users/Shohail-Ismail/followers", "following_url": "https://api.github.com/users/Shohail-Ismail/following{/other_user}", "gists_url": "https://api.github.com/users/Shohail-Ismail/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shohail-Ismail", "id": 149825575, "login": "Shohail-Ismail", "node_id": "U_kgDOCO4oJw", "organizations_url": "https://api.github.com/users/Shohail-Ismail/orgs", "received_events_url": "https://api.github.com/users/Shohail-Ismail/received_events", "repos_url": "https://api.github.com/users/Shohail-Ismail/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shohail-Ismail/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shohail-Ismail/subscriptions", "type": "User", "url": "https://api.github.com/users/Shohail-Ismail", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi folks, this PR fixes the duplicate-kwargs edge case and includes a unit test. Would love a review when you have a moment!\r\n\r\n@zach-huggingface\r\n@SunMarc " ]
2025-06-17T18:28:35
2025-07-23T14:06:20
2025-07-23T14:06:20
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7622.diff", "html_url": "https://github.com/huggingface/datasets/pull/7622", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7622.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7622" }
…builder (#4910)

### What does this PR do?

Fixes edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`.

### Implementation details

- Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` and `config_kwargs`
- Wrote a unit test in `tests/test_load_duplicate_keys.py` to verify the exception is raised correctly

### Fixes

Closes #4910

### Reviewers

@zach-huggingface @SunMarc

Would appreciate your review if you have time - thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/149825575?v=4", "events_url": "https://api.github.com/users/Shohail-Ismail/events{/privacy}", "followers_url": "https://api.github.com/users/Shohail-Ismail/followers", "following_url": "https://api.github.com/users/Shohail-Ismail/following{/other_user}", "gists_url": "https://api.github.com/users/Shohail-Ismail/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shohail-Ismail", "id": 149825575, "login": "Shohail-Ismail", "node_id": "U_kgDOCO4oJw", "organizations_url": "https://api.github.com/users/Shohail-Ismail/orgs", "received_events_url": "https://api.github.com/users/Shohail-Ismail/received_events", "repos_url": "https://api.github.com/users/Shohail-Ismail/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shohail-Ismail/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shohail-Ismail/subscriptions", "type": "User", "url": "https://api.github.com/users/Shohail-Ismail", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7622/timeline
null
null
null
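A minimal standalone sketch of the duplicate-key guard PR 7622 above describes; the actual patch sits inside `load_dataset_builder` and may word the error differently.

```python
# Standalone sketch of the guard-clause idea: refuse keys passed via both kwarg dicts.
def check_no_duplicate_keys(builder_kwargs: dict, config_kwargs: dict) -> None:
    duplicates = sorted(set(builder_kwargs) & set(config_kwargs))
    if duplicates:
        raise TypeError(
            f"Keys {duplicates} were passed in both builder_kwargs and config_kwargs; "
            "pass each key only once."
        )

# Example: this raises TypeError because 'data_dir' appears in both dicts.
# check_no_duplicate_keys({"data_dir": "a"}, {"data_dir": "b"})
```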
https://api.github.com/repos/huggingface/datasets/issues/7621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7621/comments
https://api.github.com/repos/huggingface/datasets/issues/7621/events
https://github.com/huggingface/datasets/pull/7621
3,153,780,963
PR_kwDODunzps6a6rAu
7,621
minor docs data aug
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-17T14:46:57
2025-06-17T14:50:28
2025-06-17T14:47:11
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7621.diff", "html_url": "https://github.com/huggingface/datasets/pull/7621", "merged_at": "2025-06-17T14:47:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/7621.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7621" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7621/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7620/comments
https://api.github.com/repos/huggingface/datasets/issues/7620/events
https://github.com/huggingface/datasets/pull/7620
3,153,565,183
PR_kwDODunzps6a58TP
7,620
Fixes in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7620). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-17T13:41:54
2025-06-17T13:58:26
2025-06-17T13:58:24
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7620.diff", "html_url": "https://github.com/huggingface/datasets/pull/7620", "merged_at": "2025-06-17T13:58:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/7620.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7620" }
before release 4.0 (I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7620/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7620/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7619/comments
https://api.github.com/repos/huggingface/datasets/issues/7619/events
https://github.com/huggingface/datasets/issues/7619
3,153,058,517
I_kwDODunzps6779rV
7,619
`from_list` fails while `from_generator` works for large datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/4028948?v=4", "events_url": "https://api.github.com/users/abdulfatir/events{/privacy}", "followers_url": "https://api.github.com/users/abdulfatir/followers", "following_url": "https://api.github.com/users/abdulfatir/following{/other_user}", "gists_url": "https://api.github.com/users/abdulfatir/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abdulfatir", "id": 4028948, "login": "abdulfatir", "node_id": "MDQ6VXNlcjQwMjg5NDg=", "organizations_url": "https://api.github.com/users/abdulfatir/orgs", "received_events_url": "https://api.github.com/users/abdulfatir/received_events", "repos_url": "https://api.github.com/users/abdulfatir/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abdulfatir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abdulfatir/subscriptions", "type": "User", "url": "https://api.github.com/users/abdulfatir", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@lhoestq any thoughts on this? ", "Thanks for the report! This behavior is expected due to how `from_list()` and `from_generator()` differ internally.\n\n- `from_list()` builds the entire dataset in memory at once, which can easily exceed limits (especially with variable-length arrays or millions of rows). The Arrow error you're seeing (`Value too large to fit in C integer type`) is related to that memory overload.\n- `from_generator()` avoids this issue by batching and streaming data incrementally, which is much more memory-efficient.\n\nSo for large datasets like time series or NLP data with large arrays, `from_generator()` (or `datasets.IterableDataset`) is the recommended approach.\n\nHope this helps clarify the behavior — let me know if you'd like me to point to prior issues/discussions where similar tradeoffs came up!\n", "@ArjunJagdale Yes, it is related to using large dataset but not in the way that you have described. As I understand, the problem here is that `datasets` does not use `LargeList` with 64-bit offsets from PyArrow when using `from_list`. However, with `from_generator` this seems to work okay, likely due to batching. As such, this is more like a bug than an expected outcome. If this is indeed \"expected\", `datasets` should fail more gracefully in these cases with a recommendation to use `from_generator`. ", "Thanks for the clarification — you're absolutely right, this seems tied to the use of 32-bit list offsets in from_list() under the hood. That distinction between List and LargeList in PyArrow is a crucial one, and definitely worth highlighting in the docs or error message. Happy to help if a check or fallback to LargeList makes sense here." ]
2025-06-17T10:58:55
2025-06-29T16:34:44
null
NONE
null
null
null
null
### Describe the bug

I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.

### Steps to reproduce the bug

#### Snippet A (crashes)

```py
from tqdm.auto import tqdm
import numpy as np
import datasets


def data_generator():
    for i in tqdm(range(10_000_000)):
        length = np.random.randint(2048)
        series = np.random.rand(length)
        yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}


data_list = list(data_generator())
ds = datasets.Dataset.from_list(data_list)
```

The last line crashes with

```
ArrowInvalid: Value 2147483761 too large to fit in C integer type
```

#### Snippet B (works)

```py
from tqdm.auto import tqdm
import numpy as np
import datasets


def data_generator():
    for i in tqdm(range(10_000_000)):
        length = np.random.randint(2048)
        series = np.random.rand(length)
        yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}


ds = datasets.Dataset.from_generator(data_generator)
```

### Expected behavior

I expected both the approaches to work or to fail similarly.

### Environment info

```
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.32.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7619/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7619/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7618/comments
https://api.github.com/repos/huggingface/datasets/issues/7618/events
https://github.com/huggingface/datasets/pull/7618
3,148,912,897
PR_kwDODunzps6aqOnm
7,618
fix: raise error when folder-based datasets are loaded without data_dir or data_files
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Great ! Since this logic is specific to one builder class maybe this check can be in the class definition ? I think you can put it in FolderBasedBuilder's `_info()` method." ]
2025-06-16T07:43:59
2025-06-16T12:13:26
null
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7618.diff", "html_url": "https://github.com/huggingface/datasets/pull/7618", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7618.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7618" }
### Related Issues/PRs <!-- Uncomment 'Resolve' if this PR can close the linked items. --> <!-- Resolve --> #6152 --- ### What changes are proposed in this pull request? This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior. **Before this fix**: - When `data_dir` or `data_files` were not provided, the loader defaulted to the current working directory. - This caused unexpected behavior like: - Long loading times - Scanning unintended local files **Now**: - If both `data_dir` and `data_files` are missing, a `ValueError` is raised early with a helpful message. --- ### How is this PR tested? - [x] Manual test via `load_dataset("audiofolder")` with missing `data_dir` - [ ] Existing unit tests (should not break any) - [ ] New tests (if needed, maintainers can guide) --- ### Does this PR require documentation update? - [x] No. You can skip the rest of this section. --- ### Release Notes #### Is this a user-facing change? - [x] Yes. Give a description of this change to be included in the release notes for users. > Adds early error handling for folder-based datasets when neither `data_dir` nor `data_files` is specified, avoiding unintended resolution to the current directory. #### What component(s), interfaces, languages, and integrations does this PR affect? Components: - [x] `area/datasets` - [x] `area/load` --- <a name="release-note-category"></a> #### How should the PR be classified in the release notes? Choose one: - [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes --- #### Should this PR be included in the next patch release? - [x] Yes (this PR will be cherry-picked and included in the next patch release)
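A rough sketch of the check this PR proposes, placed where the reviewer suggests inside `FolderBasedBuilder._info()`; class and attribute names follow the current folder-based builder layout, and this is not the actual patch.

```python
import datasets

class FolderBasedBuilder(datasets.GeneratorBasedBuilder):
    def _info(self):
        # Hypothetical early validation: fail fast instead of silently scanning
        # the current working directory when no data location is given.
        if not self.config.data_dir and not self.config.data_files:
            raise ValueError(
                "At least one of `data_dir` or `data_files` must be specified, "
                "e.g. load_dataset('audiofolder', data_dir='path/to/folder')."
            )
        return datasets.DatasetInfo(features=self.config.features)
```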
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7618/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7617/comments
https://api.github.com/repos/huggingface/datasets/issues/7617/events
https://github.com/huggingface/datasets/issues/7617
3,148,102,085
I_kwDODunzps67pDnF
7,617
Unwanted column padding in nested lists of dicts
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(example):\n if isinstance(example, list):\n return [remove_padding(value) if isinstance(value, (dict, list)) else value for value in example]\n elif isinstance(example, Mapping):\n return {\n key: remove_padding(value) if isinstance(value, (dict, list)) else value\n for key, value in example.items()\n if value is not None\n }\n else:\n raise TypeError(\"Input must be a list or a dictionary.\")\n\n# Example:\nexample = next(iter(dataset))\nexample = remove_padding(example)\n```" ]
2025-06-15T22:06:17
2025-06-16T13:43:31
2025-06-16T13:43:31
MEMBER
null
null
null
null
```python from datasets import Dataset dataset = Dataset.from_dict({ "messages": [ [ {"a": "...",}, {"b": "...",}, ], ] }) print(dataset[0]) ``` What I get: ``` {'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]} ``` What I want: ``` {'messages': [{'a': '...'}, {'b': '...'}]} ``` Is there an easy way to automatically remove these auto-filled null/none values? If not, I probably need a recursive none exclusion function, don't I? Datasets 3.6.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7617/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7617/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7616/comments
https://api.github.com/repos/huggingface/datasets/issues/7616/events
https://github.com/huggingface/datasets/pull/7616
3,144,506,665
PR_kwDODunzps6acSW7
7,616
Torchcodec decoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/49127578?v=4", "events_url": "https://api.github.com/users/TyTodd/events{/privacy}", "followers_url": "https://api.github.com/users/TyTodd/followers", "following_url": "https://api.github.com/users/TyTodd/following{/other_user}", "gists_url": "https://api.github.com/users/TyTodd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TyTodd", "id": 49127578, "login": "TyTodd", "node_id": "MDQ6VXNlcjQ5MTI3NTc4", "organizations_url": "https://api.github.com/users/TyTodd/orgs", "received_events_url": "https://api.github.com/users/TyTodd/received_events", "repos_url": "https://api.github.com/users/TyTodd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TyTodd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TyTodd/subscriptions", "type": "User", "url": "https://api.github.com/users/TyTodd", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq any updates on when this will be merged? Let me know if theres anything you need from my end.", "Btw I plan to release `datasets` 4.0 after your PR, this will be a major milestone :)", "@lhoestq just pushed the new changes.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Great ! I took the liberty to move the AudioDecoder to its own file and make small edits in the docs and docstrings\r\n\r\nIf it looks good to you I think we can merge :)" ]
2025-06-13T19:06:07
2025-06-19T18:25:49
2025-06-19T18:25:49
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7616.diff", "html_url": "https://github.com/huggingface/datasets/pull/7616", "merged_at": "2025-06-19T18:25:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/7616.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7616" }
Closes #7607 ## New signatures ### Audio ```python Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None) Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict Audio.decode_example(self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None) -> "AudioDecoder": ``` ### Video ```python Video(decode: bool = True, stream_index: Optional[int] = None, dimension_order: Literal['NCHW', 'NHWC'] = 'NCHW', num_ffmpeg_threads: int = 1, device: Optional[Union[str, "torch.device"]] = 'cpu', seek_mode: Literal['exact', 'approximate'] = 'exact') Video.encode_example(self, value: Union[str, bytes, bytearray, Example, np.ndarray, "VideoDecoder"]) -> Example: Video.decode_example(self, value: Union[str, Example], token_per_repo_id: Optional[dict[str, Union[bool, str]]] = None, ) -> "VideoDecoder": ``` ## Notes Audio features constructor takes in 1 new optional param stream_index which is passed to the AudioDecoder constructor to select the stream index of a file. Audio feature can now take in torchcodec.decoders.AudioDecoder as input to encode_example() Audio feature decode_example() returns torchcodec.decoders.AudioDecoder Video feature constructor takes in 5 new optional params stream_index, dimension_order, num_ffmpeg_threads, device, seek_mode all of which are passed to VideoDecoder constructor Video feature decode_example() returns torchcodec.decoders.VideoDecoder Video feature can now take in torchcodec.decoders.VideoDecoder as input to encode_example() All test cases have been updated to reflect these changes All documentation has also been updated to reflect these changes. Both VideoDecoder and AudioDecoder when formatted with (np_formatter, tf_formatter, etc) will ignore the type and return themselves. Formatting test cases were updated accordingly to reflect this. (Pretty simple to make this not the case if we want though) ## Errors This test case from `tests/packaged_modules/test_audiofolder.py` ```python @require_librosa @require_sndfile @pytest.mark.parametrize("streaming", [False, True]) def test_data_files_with_metadata_and_archives(streaming, cache_dir, data_files_with_zip_archives): audiofolder = AudioFolder(data_files=data_files_with_zip_archives, cache_dir=cache_dir) audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() for split, data_files in data_files_with_zip_archives.items(): num_of_archives = len(data_files) # the metadata file is inside the archive expected_num_of_audios = 2 * num_of_archives assert split in datasets dataset = list(datasets[split]) assert len(dataset) == expected_num_of_audios # make sure each sample has its own audio (all arrays are different) and metadata assert ( sum(np.array_equal(dataset[0]["audio"].get_all_samples().data.numpy(), example["audio"].get_all_samples().data.numpy()) for example in dataset[1:]) == 0 ) assert len({example["text"] for example in dataset}) == expected_num_of_audios assert all(example["text"] is not None for example in dataset) ``` Fails now because AudioDecoder needs to access the files after the lines below are run, but there seems to be some context issues. The file the decoder is trying to read is closed before the decoder gets the chance to decode it. ```python audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() ```
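A short usage sketch of the new return types described above; the dataset id is a placeholder and torchcodec is assumed to be installed.

```python
from datasets import load_dataset, Audio

ds = load_dataset("user/audio-dataset", split="train")  # placeholder dataset id
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

decoder = ds[0]["audio"]             # torchcodec.decoders.AudioDecoder per this PR
samples = decoder.get_all_samples()  # AudioSamples with .data and .sample_rate
print(samples.sample_rate, samples.data.shape)
```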
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7616/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7616/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7615/comments
https://api.github.com/repos/huggingface/datasets/issues/7615/events
https://github.com/huggingface/datasets/pull/7615
3,143,443,498
PR_kwDODunzps6aYp18
7,615
remove unused code
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7615). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-13T12:37:30
2025-06-13T12:39:59
2025-06-13T12:37:40
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7615.diff", "html_url": "https://github.com/huggingface/datasets/pull/7615", "merged_at": "2025-06-13T12:37:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/7615.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7615" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7615/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7615/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7614/comments
https://api.github.com/repos/huggingface/datasets/issues/7614/events
https://github.com/huggingface/datasets/pull/7614
3,143,381,638
PR_kwDODunzps6aYcbH
7,614
Lazy column
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7614). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-13T12:12:57
2025-06-17T13:08:51
2025-06-17T13:08:49
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7614.diff", "html_url": "https://github.com/huggingface/datasets/pull/7614", "merged_at": "2025-06-17T13:08:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/7614.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7614" }
Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset`, cc @TopCoder2K FYI e.g. `ds[col]` now returns a lazy Column instead of a list This way calling `ds[col][idx]` only loads the required data in memory (bonus: also supports subfields access with `ds[col][subcol][idx]`) the breaking change will be for the next major release, which also includes removal of dataset scripts support close https://github.com/huggingface/datasets/issues/4180
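A small sketch of the access pattern this PR describes, assuming a `datasets` version that includes the change.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "meta": [{"id": 1}, {"id": 2}, {"id": 3}]})

col = ds["text"]            # lazy Column instead of a plain list after this PR
print(col[1])               # only the requested row is loaded into memory
print(ds["meta"]["id"][2])  # subfield access mentioned above
```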
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7614/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7614/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7613/comments
https://api.github.com/repos/huggingface/datasets/issues/7613/events
https://github.com/huggingface/datasets/pull/7613
3,142,819,991
PR_kwDODunzps6aWgr3
7,613
fix parallel push_to_hub in dataset_dict
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7613). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-13T09:02:24
2025-06-13T12:30:23
2025-06-13T12:30:22
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7613.diff", "html_url": "https://github.com/huggingface/datasets/pull/7613", "merged_at": "2025-06-13T12:30:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/7613.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7613" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7613/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7613/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7612/comments
https://api.github.com/repos/huggingface/datasets/issues/7612/events
https://github.com/huggingface/datasets/issues/7612
3,141,905,049
I_kwDODunzps67RaqZ
7,612
Provide an option of robust dataset iterator with error handling
{ "avatar_url": "https://avatars.githubusercontent.com/u/40016222?v=4", "events_url": "https://api.github.com/users/wwwjn/events{/privacy}", "followers_url": "https://api.github.com/users/wwwjn/followers", "following_url": "https://api.github.com/users/wwwjn/following{/other_user}", "gists_url": "https://api.github.com/users/wwwjn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wwwjn", "id": 40016222, "login": "wwwjn", "node_id": "MDQ6VXNlcjQwMDE2MjIy", "organizations_url": "https://api.github.com/users/wwwjn/orgs", "received_events_url": "https://api.github.com/users/wwwjn/received_events", "repos_url": "https://api.github.com/users/wwwjn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wwwjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wwwjn/subscriptions", "type": "User", "url": "https://api.github.com/users/wwwjn", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! Maybe we can add a parameter to the Image() type to make it to return `None` instead of raising an error in case of corruption ? Would that help ?", "Hi! 👋🏼 I just opened PR [#7638](https://github.com/huggingface/datasets/pull/7638) to address this issue.\n\n### 🔧 What it does:\nIt adds an `ignore_decode_errors` flag to the `Image` feature. When set to `True`, corrupted image samples will be skipped (with a warning), and `None` will be returned instead of raising an exception.\n\nThis allows users to stream datasets that may contain some invalid images without breaking the iteration loop:\n\n```python\nfeatures = Features({\n \"image\": Image(decode=True, ignore_decode_errors=True)\n})\n````\n\n### 🧩 Why this helps:\n\n* Prevents full iteration breakdown during `.streaming=True` usage\n* Enables downstream tooling like Flux (see [[Flux#1290](https://github.com/pytorch/torchtitan/pull/1290)](https://github.com/pytorch/torchtitan/pull/1290)) to implement robust loaders now that `datasets` supports graceful handling\n* Keeps current behavior unchanged unless explicitly opted-in\n\nLet me know if you'd like me to follow up with test coverage or additional enhancements!\n\ncc @lhoestq " ]
2025-06-13T00:40:48
2025-06-24T16:52:30
null
NONE
null
null
null
null
### Feature request Adding an option to skip corrupted data samples. Currently, the datasets behavior is to throw an error if a data sample is corrupted, letting the user become aware of and handle the data corruption. When I tried to try-catch the error at the user level, the iterator raised StopIteration when I called next() again. The way I tried to do error handling is: (This doesn't work, unfortunately) ``` # Load the dataset with streaming enabled dataset = load_dataset( "pixparse/cc12m-wds", split="train", streaming=True ) # Get an iterator from the dataset iterator = iter(dataset) while True: try: # Try to get the next example example = next(iterator) # Try to access and process the image image = example["jpg"] pil_image = Image.fromarray(np.array(image)) pil_image.verify() # Verify it's a valid image file except StopIteration: # Code path 1 print("\nStopIteration was raised! Reached the end of dataset") raise StopIteration except Exception as e: # Code path 2 errors += 1 print("Error! Skip this sample") continue else: successful += 1 ``` This is because the `IterableDataset` already throws an error (reaches Code path 2). And if I call next() again, it hits Code path 1. This is because the inner iterator of `IterableDataset` ([code](https://github.com/huggingface/datasets/blob/89bd1f971402acb62805ef110bc1059c38b1c8c6/src/datasets/iterable_dataset.py#L2242)) has been stopped, so calling next() on it will raise StopIteration. So I cannot skip the corrupted data sample in this way. I would also love to hear any suggestions about creating a robust dataloader. Thanks for your help in advance! ### Motivation ## Public dataset corruption might be common A lot of users use public datasets, and a public dataset might contain some corrupted data, especially for datasets with images / videos etc. I totally understand it's the dataset owner's and user's responsibility to ensure data integrity / run data cleaning or preprocessing, but a robust option would make life easier for developers who use the dataset. ## Use cases For example, a robust dataloader would be convenient for users who want to run quick tests on different datasets and choose the one that fits their needs. Users could then iterate over the dataset with `streaming=True` without first downloading it and removing corrupted data samples. ### Your contribution The error handling might not be trivial and might need more careful design.
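One possible workaround today, sketched under the assumption that the failures come from image decoding rather than from reading the archives themselves: disable automatic decoding so iteration never raises, then decode each sample manually and skip the ones that fail.

```python
import io

from PIL import Image
from datasets import load_dataset
from datasets import Image as ImageFeature

ds = load_dataset("pixparse/cc12m-wds", split="train", streaming=True)
# With decode=False the image feature yields {"bytes": ..., "path": ...} instead of a PIL image.
ds = ds.cast_column("jpg", ImageFeature(decode=False))

kept, skipped = 0, 0
for example in ds:
    try:
        img = Image.open(io.BytesIO(example["jpg"]["bytes"]))
        img.verify()  # raises on corrupted files
        kept += 1
    except Exception:
        skipped += 1  # skip this sample and keep iterating
```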
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7612/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7612/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7611/comments
https://api.github.com/repos/huggingface/datasets/issues/7611/events
https://github.com/huggingface/datasets/issues/7611
3,141,383,940
I_kwDODunzps67PbcE
7,611
Code example for dataset.add_column() does not reflect correct way to use function
{ "avatar_url": "https://avatars.githubusercontent.com/u/31388649?v=4", "events_url": "https://api.github.com/users/shaily99/events{/privacy}", "followers_url": "https://api.github.com/users/shaily99/followers", "following_url": "https://api.github.com/users/shaily99/following{/other_user}", "gists_url": "https://api.github.com/users/shaily99/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shaily99", "id": 31388649, "login": "shaily99", "node_id": "MDQ6VXNlcjMxMzg4NjQ5", "organizations_url": "https://api.github.com/users/shaily99/orgs", "received_events_url": "https://api.github.com/users/shaily99/received_events", "repos_url": "https://api.github.com/users/shaily99/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shaily99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaily99/subscriptions", "type": "User", "url": "https://api.github.com/users/shaily99", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @shaily99 \n\nThanks for pointing this out — you're absolutely right!\n\nThe current example in the docstring for add_column() implies in-place modification, which is misleading since add_column() actually returns a new dataset.", "#self-assign\n" ]
2025-06-12T19:42:29
2025-07-17T13:14:18
2025-07-17T13:14:18
NONE
null
null
null
null
https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10 The example seems to suggest that dataset.add_column() can add a column in place; however, this is wrong -- it cannot. It returns a new dataset with the column added to it.
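For reference, a minimal snippet showing the behavior described here: `add_column()` returns a new dataset and must be reassigned, it does not mutate in place.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.add_column("b", ["x", "y", "z"])       # returns a new dataset, original unchanged
print(ds.column_names)                    # ['a']

ds = ds.add_column("b", ["x", "y", "z"])  # reassign to keep the new column
print(ds.column_names)                    # ['a', 'b']
```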
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7611/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7610/comments
https://api.github.com/repos/huggingface/datasets/issues/7610/events
https://github.com/huggingface/datasets/issues/7610
3,141,281,560
I_kwDODunzps67PCcY
7,610
i cant confirm email
{ "avatar_url": "https://avatars.githubusercontent.com/u/187984415?v=4", "events_url": "https://api.github.com/users/lykamspam/events{/privacy}", "followers_url": "https://api.github.com/users/lykamspam/followers", "following_url": "https://api.github.com/users/lykamspam/following{/other_user}", "gists_url": "https://api.github.com/users/lykamspam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lykamspam", "id": 187984415, "login": "lykamspam", "node_id": "U_kgDOCzRqHw", "organizations_url": "https://api.github.com/users/lykamspam/orgs", "received_events_url": "https://api.github.com/users/lykamspam/received_events", "repos_url": "https://api.github.com/users/lykamspam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lykamspam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lykamspam/subscriptions", "type": "User", "url": "https://api.github.com/users/lykamspam", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Will you please clarify the issue by some screenshots or more in-depth explanation?", "![Image](https://github.com/user-attachments/assets/ebe58239-72ef-43f6-a849-35736878fbf3)\nThis is clarify answer. I have not received a letter.\n\n**The graphic at the top shows how I don't get any letter. Can you show in a clear way how you don't get a letter from me?**" ]
2025-06-12T18:58:49
2025-06-27T14:36:47
null
NONE
null
null
null
null
### Describe the bug This is difficult: I can't confirm my email because I don't get any email! I can't post on the forum because I can't confirm my email! I can't contact the help desk because... it doesn't exist on the web page. paragraph 44 ### Steps to reproduce the bug rthjrtrt ### Expected behavior ewtgfwetgf ### Environment info sdgfswdegfwe
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7610/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7610/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7609/comments
https://api.github.com/repos/huggingface/datasets/issues/7609/events
https://github.com/huggingface/datasets/pull/7609
3,140,373,128
PR_kwDODunzps6aOQ_g
7,609
Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab`
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7609). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "not 100% sure either, I tried removing unnecessary checks - let me know if they sound good to you otherwise I'll revert", "I can't reproduce the warning anymore... 🤦🏻‍♂️\r\n", "Ah now I can reproduce!, and I can confirm that the warning is gone when you apply the change in this PR" ]
2025-06-12T13:47:01
2025-06-16T12:14:10
2025-06-16T12:14:08
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7609.diff", "html_url": "https://github.com/huggingface/datasets/pull/7609", "merged_at": "2025-06-16T12:14:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/7609.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7609" }
Not 100% sure about this one, but it seems to be recommended. ``` /fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead. ``` Tests pass locally, and the warning is gone with this change. https://peps.python.org/pep-0626/#backwards-compatibility
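A minimal standalone sketch, not the actual `_dill.py` diff, of the version guard this PR applies; it uses only standard code-object attributes.

```python
import sys

def line_number_table(code) -> bytes:
    # PEP 626: co_linetable replaces co_lnotab on Python 3.10+, and accessing
    # co_lnotab there emits the DeprecationWarning quoted above.
    if sys.version_info >= (3, 10):
        return code.co_linetable
    return code.co_lnotab

print(len(line_number_table((lambda x: x + 1).__code__)))
```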
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7609/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7609/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7608/comments
https://api.github.com/repos/huggingface/datasets/issues/7608/events
https://github.com/huggingface/datasets/pull/7608
3,137,564,259
PR_kwDODunzps6aEr6b
7,608
Tests typing and fixes for push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-11T17:13:52
2025-06-12T21:15:23
2025-06-12T21:15:21
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7608.diff", "html_url": "https://github.com/huggingface/datasets/pull/7608", "merged_at": "2025-06-12T21:15:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/7608.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7608" }
todo: - [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7608/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7608/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7607/comments
https://api.github.com/repos/huggingface/datasets/issues/7607/events
https://github.com/huggingface/datasets/issues/7607
3,135,722,560
I_kwDODunzps6651RA
7,607
Video and audio decoding with torchcodec
{ "avatar_url": "https://avatars.githubusercontent.com/u/49127578?v=4", "events_url": "https://api.github.com/users/TyTodd/events{/privacy}", "followers_url": "https://api.github.com/users/TyTodd/followers", "following_url": "https://api.github.com/users/TyTodd/following{/other_user}", "gists_url": "https://api.github.com/users/TyTodd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TyTodd", "id": 49127578, "login": "TyTodd", "node_id": "MDQ6VXNlcjQ5MTI3NTc4", "organizations_url": "https://api.github.com/users/TyTodd/orgs", "received_events_url": "https://api.github.com/users/TyTodd/received_events", "repos_url": "https://api.github.com/users/TyTodd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TyTodd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TyTodd/subscriptions", "type": "User", "url": "https://api.github.com/users/TyTodd", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Good idea ! let me know if you have any question or if I can help", "@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a VideoReader. However, according to the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.with_format) its supposed to be the type passed into `with_format` (numpy in this case). My implementation with VideoDecoder currently does the latter, is that correct, or should it be a VideoDecoder object instead?\n```\n@require_torchvision\ndef test_dataset_with_video_map_and_formatted(shared_datadir):\n from torchvision.io import VideoReader\n\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path]}\n features = Features({\"video\": Video()})\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n # from bytes\n with open(video_path, \"rb\") as f:\n data = {\"video\": [f.read()]}\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n```", "Hi ! It's maybe more convenient for users to always have a VideoDecoder, since they might only access a few frames and not the full video. So IMO it's fine to always return a VideoDecoder (maybe later we can extend the VideoDecoder to return other types of tensors than numpy arrays though ? 👀 it's not crucial for now though)", "@lhoestq ya that makes sense, looks like this functionality lives in `src/datasets/formatting`, where an exception is made for VideoReader objects to remain as themselves when being formatted. I'll make the necessary changes. ", "@lhoestq I'm assuming this was also the case for torchaudio objects?", "We're not using torchaudio but soundfile. But anyway we unfortunately decode full audio files instead of returning a Reader and it can be interesting to fix this. Currently it always returns a dict {\"array\": np.array(...), \"sampling_rate\": int(...)}, while it would be cool to return a reader with seek() and read() - like methods as for videos.\n\n(there is a way to make the audio change backward compatible anyway by allowing `reader[\"array\"]` to return the full array)", "@lhoestq (sorry for the spam btw)\nLooks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\nThis is from `/src/datasets/formatting/np_formatter.py` line 70\n```\nif config.TORCHVISION_AVAILABLE and \"torchvision\" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to np arrays ?\n```", "Oh cool ya this is something that I could implement with torchcodec. I can add that to the PR as well.", "> Looks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. 
Maybe a performance thing?\n\nyea that was me, I focused on a simple logic to start with, since I knew there was torchcodec coming and maybe wasn't worth it at the time ^^\n\nbut anyway it's fine to start with a logic without formatting to start with and then iterate", "Hey @lhoestq I ran into an error with this test case for the Audio feature\n\n```\n@require_sndfile\n@require_torchcodec\ndef test_dataset_with_audio_feature_map_is_decoded(shared_datadir):\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data = {\"audio\": [audio_path], \"text\": [\"Hello\"]}\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n sample_rate = example[\"audio\"].get_all_samples().sample_rate\n example[\"double_sampling_rate\"] = 2 * sample_rate\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n\n def process_audio_sampling_rate_by_batch(batch):\n double_sampling_rates = []\n for audio in batch[\"audio\"]:\n double_sampling_rates.append(2 * audio.get_all_samples().sample_rate)\n batch[\"double_sampling_rate\"] = double_sampling_rates\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n```\n\nthis is the error below\n```\nsrc/datasets/arrow_writer.py:626: in write_batch\n arrays.append(pa.array(typed_sequence))\n.....\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded - pyarrow.lib.ArrowInvalid: Could not convert <torchcodec.decoders._audio_decoder.AudioDecoder object at 0x138cdd810> with type AudioDecoder: did not recognize Python value type when inferring an Arrow data type\n```\n\nBy the way I copied the test case and ran it on the original implementation of the Video feature, which uses the torchvision backend and I got a similar error.\n```\ndef test_dataset_with_video_feature_map_is_decoded(shared_datadir):\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path], \"text\": [\"Hello\"]}\n features = Features({\"video\": Video(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n metadata = example[\"video\"].get_metadata()\n example[\"double_fps\"] = 2 * metadata[\"video\"][\"fps\"][0]\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past 2*10 is made up!! 
shouldn't pass\n\n def process_audio_sampling_rate_by_batch(batch):\n double_fps = []\n for video in batch[\"video\"]:\n double_fps.append(2 * video.metadata.begin_stream_seconds)\n batch[\"double_fps\"] = double_fps\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past this no reason it should\n```\n\nI was wondering if these error's are expected. They seem to be coming from the fact that the function `_cast_to_python_objects` in `src/datasets/features/features.py` doesn't handle VideoDecoders or AudioDecoders. I was able to fix it and get rid of the error by adding this to the bottom of the function\n```\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, VideoDecoder):\n v = Video()\n return v.encode_example(obj), True\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, AudioDecoder):\n a = Audio()\n return a.encode_example(obj), True\n```\nThis fixed it, but I just want to make sure I'm not adding things that are messing up the intended functionality.", "This is the right fix ! :)", "Btw I just remembered that we were using soundfile because it can support a wide range of audio formats, is it also the case for torchcodec ? including ogg, opus for example", "Yes from what I understand torchcodec supports everything ffmpeg supports.", "Okay just finished. However, I wasn't able to pass this test case:\n```python\n@require_torchcodec\n@require_sndfile\[email protected](\"streaming\", [False, True])\ndef test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n item = dset[0] if not streaming else next(iter(dset))\n assert item.keys() == {\"audio\", \"text\"}\n assert isinstance(item[\"audio\"], AudioDecoder)\n samples = item[\"audio\"].get_all_samples()\n assert samples.sample_rate == 44100\n assert samples.data.shape == (1, 202311)\n```\n\nIt returned this error\n```\nstreaming = False, jsonl_audio_dataset_path = '/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/data2/audio_dataset.jsonl'\nshared_datadir = PosixPath('/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/test_load_dataset_with_audio_f0/data')\n\n @require_torchcodec\n @require_sndfile\n @pytest.mark.parametrize(\"streaming\", [False, True])\n def test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n> dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n\ntests/features/test_audio.py:686: \n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_\nsrc/datasets/load.py:1418: in load_dataset\n builder_instance.download_and_prepare(\nsrc/datasets/builder.py:925: in download_and_prepare\n self._download_and_prepare(\nsrc/datasets/builder.py:1019: in _download_and_prepare\n verify_splits(self.info.splits, split_dict)\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nexpected_splits = {'train': SplitInfo(name='train', num_bytes=2351563, num_examples=10000, shard_lengths=None, dataset_name=None), 'validation': SplitInfo(name='validation', num_bytes=238418, num_examples=1000, shard_lengths=None, dataset_name=None)}\nrecorded_splits = {'train': SplitInfo(name='train', num_bytes=167, num_examples=1, shard_lengths=None, dataset_name='json')}\n\n def verify_splits(expected_splits: Optional[dict], recorded_splits: dict):\n if expected_splits is None:\n logger.info(\"Unable to verify splits sizes.\")\n return\n if len(set(expected_splits) - set(recorded_splits)) > 0:\n> raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))\nE datasets.exceptions.ExpectedMoreSplitsError: {'validation'}\n\nsrc/datasets/utils/info_utils.py:68: ExpectedMoreSplitsError\n```\n\nIt looks like this test case wasn't passing when I forked the repo, so I assume I didn't do anything to break it. I also added this case to `test_video.py`, and it fails there as well. If this looks good, I'll go ahead and submit the PR.", "Awesome ! yes feel free to submit the PR, I can see what I can do for the remaining tests", "@lhoestq just submitted it #7616 " ]
2025-06-11T07:02:30
2025-06-19T18:25:49
2025-06-19T18:25:49
CONTRIBUTOR
null
null
null
null
### Feature request Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video. ### Motivation My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extract the audio tensors directly from MP4 files. Also, I can easily resample video data to whatever fps I like on the fly. I haven't found an easy/efficient way to do this with torchvision. ### Your contribution I’m modifying the Video dataclass to use torchcodec in place of the current backend, starting from a stable commit for a project I’m working on. If it ends up working well, I’m happy to open a PR on main.
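Editor's note: a minimal sketch (not part of the original issue) of the audio-from-MP4 use case described above, based on the `AudioDecoder` calls shown in the test snippets quoted earlier in this dump; the file path is a placeholder and torchcodec is assumed to be installed.

```python
# Illustrative sketch only: pull the audio track straight out of a video file
# with torchcodec, as the feature request describes. "clip.mp4" is a placeholder.
from torchcodec.decoders import AudioDecoder

decoder = AudioDecoder("clip.mp4")      # any ffmpeg-readable container should work
samples = decoder.get_all_samples()     # returns samples with .data and .sample_rate
print(samples.sample_rate, samples.data.shape)
```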
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7607/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7607/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7606/comments
https://api.github.com/repos/huggingface/datasets/issues/7606/events
https://github.com/huggingface/datasets/pull/7606
3,133,848,546
PR_kwDODunzps6Z3_kV
7,606
Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-10T14:35:10
2025-06-11T16:47:28
2025-06-11T16:47:25
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7606.diff", "html_url": "https://github.com/huggingface/datasets/pull/7606", "merged_at": "2025-06-11T16:47:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7606.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7606" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 7, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/7606/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7606/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7605/comments
https://api.github.com/repos/huggingface/datasets/issues/7605/events
https://github.com/huggingface/datasets/pull/7605
3,131,636,882
PR_kwDODunzps6ZwcPp
7,605
Make `push_to_hub` atomic (#7600)
{ "avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4", "events_url": "https://api.github.com/users/sharvil/events{/privacy}", "followers_url": "https://api.github.com/users/sharvil/followers", "following_url": "https://api.github.com/users/sharvil/following{/other_user}", "gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sharvil", "id": 391004, "login": "sharvil", "node_id": "MDQ6VXNlcjM5MTAwNA==", "organizations_url": "https://api.github.com/users/sharvil/orgs", "received_events_url": "https://api.github.com/users/sharvil/received_events", "repos_url": "https://api.github.com/users/sharvil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sharvil/subscriptions", "type": "User", "url": "https://api.github.com/users/sharvil", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7605). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi ! unfortunately we can't allow atomic commits for commits with hundreds of files additions (HF would time out)\r\n\r\nMaybe an alternative would be to retry if there was a commit in between ? this could be the default behavior as well", "Thanks for taking a look – much appreciated!\r\n\r\nI've verified that commits with up to 20,000 files don't time out and the commit time scales linearly with the number of operations enqueued. It took just under 2 minutes to complete (successfully) the 20k file commit.\r\n\r\nThe fundamental issue I'm trying to tackle here is dataset corruption: getting into a state where a dataset on the hub cannot be used when downloaded. Non-atomic commits won't get us there, I think. If, for example, 3 of 5 commits complete and the machine/process calling `push_to_hub` has a network, hardware, or other failure that prevents it from completing the rest of the commits (even with retries) we'll now have some pointer files pointing to the new data and others pointing to the old data => corrupted. While this may seem like an unlikely scenario, it's a regular occurrence at scale.\r\n\r\nIf you still feel strongly that atomic commits are not the right way to go, I can either set it to not be the default or remove it entirely from this PR.\r\n\r\nAs for retries, it's a good idea. In a non-atomic world, the logic gets more complicated:\r\n- keep an explicit queue of pending add/delete operations\r\n- chunkwise pop from queue and commit with `parent_commit` set to previous chunked commit hash\r\n- if `create_commit` fails:\r\n - re-fetch README and set `parent_commit` to latest hash for `revision`\r\n - re-generate dataset card content\r\n - swap old `CommitOperationAdd` with new one for README in the pending queue\r\n- resume chunkwise committing from the queue as above\r\n\r\nEntirely doable, but more involved than I signed up for with this PR.", "Just to clarify – setting the `parent_commit` can be separated from making the commit atomic (which is what I'm suggesting by either atomic commits not the default or removing it from this PR). It's crucial to set the parent commit to avoid the read-modify-write race condition on the README schema." ]
2025-06-09T22:29:38
2025-06-23T19:32:08
2025-06-23T19:32:08
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7605.diff", "html_url": "https://github.com/huggingface/datasets/pull/7605", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7605.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7605" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/60325139?v=4", "events_url": "https://api.github.com/users/lmnt-com/events{/privacy}", "followers_url": "https://api.github.com/users/lmnt-com/followers", "following_url": "https://api.github.com/users/lmnt-com/following{/other_user}", "gists_url": "https://api.github.com/users/lmnt-com/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lmnt-com", "id": 60325139, "login": "lmnt-com", "node_id": "MDEyOk9yZ2FuaXphdGlvbjYwMzI1MTM5", "organizations_url": "https://api.github.com/users/lmnt-com/orgs", "received_events_url": "https://api.github.com/users/lmnt-com/received_events", "repos_url": "https://api.github.com/users/lmnt-com/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lmnt-com/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lmnt-com/subscriptions", "type": "Organization", "url": "https://api.github.com/users/lmnt-com", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7605/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7605/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7604/comments
https://api.github.com/repos/huggingface/datasets/issues/7604/events
https://github.com/huggingface/datasets/pull/7604
3,130,837,169
PR_kwDODunzps6Ztrm_
7,604
Docs and more methods for IterableDataset: push_to_hub, to_parquet...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7604). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-09T16:44:40
2025-06-10T13:15:23
2025-06-10T13:15:21
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7604.diff", "html_url": "https://github.com/huggingface/datasets/pull/7604", "merged_at": "2025-06-10T13:15:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/7604.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7604" }
to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7604/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7603/comments
https://api.github.com/repos/huggingface/datasets/issues/7603/events
https://github.com/huggingface/datasets/pull/7603
3,130,394,563
PR_kwDODunzps6ZsKin
7,603
No TF in win tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7603). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-09T13:56:34
2025-06-09T15:33:31
2025-06-09T15:33:30
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7603.diff", "html_url": "https://github.com/huggingface/datasets/pull/7603", "merged_at": "2025-06-09T15:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/7603.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7603" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7603/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7603/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7602/comments
https://api.github.com/repos/huggingface/datasets/issues/7602/events
https://github.com/huggingface/datasets/pull/7602
3,128,758,924
PR_kwDODunzps6Zmk99
7,602
Enhance error handling and input validation across multiple modules
{ "avatar_url": "https://avatars.githubusercontent.com/u/147746955?v=4", "events_url": "https://api.github.com/users/mohiuddin-khan-shiam/events{/privacy}", "followers_url": "https://api.github.com/users/mohiuddin-khan-shiam/followers", "following_url": "https://api.github.com/users/mohiuddin-khan-shiam/following{/other_user}", "gists_url": "https://api.github.com/users/mohiuddin-khan-shiam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mohiuddin-khan-shiam", "id": 147746955, "login": "mohiuddin-khan-shiam", "node_id": "U_kgDOCM5wiw", "organizations_url": "https://api.github.com/users/mohiuddin-khan-shiam/orgs", "received_events_url": "https://api.github.com/users/mohiuddin-khan-shiam/received_events", "repos_url": "https://api.github.com/users/mohiuddin-khan-shiam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mohiuddin-khan-shiam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mohiuddin-khan-shiam/subscriptions", "type": "User", "url": "https://api.github.com/users/mohiuddin-khan-shiam", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-06-08T23:01:06
2025-06-08T23:01:06
null
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7602.diff", "html_url": "https://github.com/huggingface/datasets/pull/7602", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7602.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7602" }
This PR improves the robustness and user experience by: 1. **Audio Module**: - Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding 2. **DatasetDict**: - Enhanced key access error messages to show available splits when an invalid key is accessed 3. **NonMutableDict**: - Added input validation for the update() method to ensure proper mapping types 4. **Arrow Reader**: - Improved error messages for small dataset percentage splits with suggestions for alternatives 5. **FaissIndex**: - Strengthened input validation with descriptive error messages - Added proper type checking and shape validation for search queries These changes make the code more maintainable and user-friendly by providing actionable feedback when issues arise.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7602/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7602/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7600/comments
https://api.github.com/repos/huggingface/datasets/issues/7600/events
https://github.com/huggingface/datasets/issues/7600
3,127,296,182
I_kwDODunzps66ZsC2
7,600
`push_to_hub` is not concurrency safe (dataset schema corruption)
{ "avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4", "events_url": "https://api.github.com/users/sharvil/events{/privacy}", "followers_url": "https://api.github.com/users/sharvil/followers", "following_url": "https://api.github.com/users/sharvil/following{/other_user}", "gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sharvil", "id": 391004, "login": "sharvil", "node_id": "MDQ6VXNlcjM5MTAwNA==", "organizations_url": "https://api.github.com/users/sharvil/orgs", "received_events_url": "https://api.github.com/users/sharvil/received_events", "repos_url": "https://api.github.com/users/sharvil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sharvil/subscriptions", "type": "User", "url": "https://api.github.com/users/sharvil", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.", "Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)", "Dropping this due to inactivity; we've implemented push_to_hub outside of HF datasets that's concurrency safe. Feel free to use the code I provided as a starting point if there's still interest in addressing this issue.", "Exploring another fix here: https://github.com/huggingface/datasets/issues/7600" ]
2025-06-07T17:28:56
2025-07-31T10:00:50
2025-07-31T10:00:50
NONE
null
null
null
null
### Describe the bug Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable. Consider this scenario: - we have an Arrow dataset - there are `N` configs of the dataset - there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`) - each process calls `push_to_hub` on their particular config when they're done processing - all calls to `push_to_hub` succeed - the `README.md` now has some configs with `new_col` added and some with `new_col` missing Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising). We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand. Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded. ### Steps to reproduce the bug See above. ### Expected behavior Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.2 - `fsspec` version: 2023.9.0
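Editor's note: a hedged sketch (not the `datasets` implementation) of the compare-and-swap behaviour the report asks for, using `huggingface_hub` directly; the repo id and file contents are placeholders.

```python
# Sketch only: commit against an explicit parent commit so that a concurrent
# push fails loudly instead of silently overwriting the dataset card.
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
repo_id = "org/my-dataset"  # placeholder

# Remember which commit the README/schema was read from.
parent = api.repo_info(repo_id, repo_type="dataset").sha

api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=b"...")],
    commit_message="Update config schema",
    parent_commit=parent,  # the commit is rejected if the branch moved in the meantime
)
```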
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/7600/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7600/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7599/comments
https://api.github.com/repos/huggingface/datasets/issues/7599/events
https://github.com/huggingface/datasets/issues/7599
3,125,620,119
I_kwDODunzps66TS2X
7,599
My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl
{ "avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4", "events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}", "followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers", "following_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/following{/other_user}", "gists_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JuanCarlosMartinezSevilla", "id": 97530443, "login": "JuanCarlosMartinezSevilla", "node_id": "U_kgDOBdAySw", "organizations_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/orgs", "received_events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/received_events", "repos_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/subscriptions", "type": "User", "url": "https://api.github.com/users/JuanCarlosMartinezSevilla", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This makes my dataset viewer to only load the images without the labeling of metadata.jsonl.\n\nThanks", "Hi ! this is because we now expect the metadata file to be inside the directory named after the split \"train\" (this way each split can have its own metadata and can be loaded independently)\n\nYou can fix that by configuring it explicitly in the dataset's README.md header:\n\n```yaml\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path:\n - \"train/**/*.png\"\n - \"metadata.jsonl\"\n```\n\n(or by moving the metadata.jsonl in train/ but in this case you also have to modify the content of the JSONL to fix the relative paths to the images)", "Thank you very much, dataset viewer is already working as expected!!" ]
2025-06-06T18:59:00
2025-06-16T15:18:00
2025-06-16T15:18:00
NONE
null
null
null
null
### Describe the bug Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without my modifying anything in the dataset repository, the Dataset viewer is no longer rendering the metadata.jsonl annotations, nor is the metadata downloaded when using load_dataset. Can you please help? Thank you in advance. ### Steps to reproduce the bug from datasets import load_dataset ds = load_dataset("PRAIG/SMB") ds = ds["train"] ### Expected behavior It is expected to have all the metadata available in the jsonl file. Fields like: "score_id", "original_width", "original_height", "regions"... among others. ### Environment info datasets==3.6.0, python 3.13.3 (but the problem is already visible on the Hugging Face dataset page)
{ "avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4", "events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}", "followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers", "following_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/following{/other_user}", "gists_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JuanCarlosMartinezSevilla", "id": 97530443, "login": "JuanCarlosMartinezSevilla", "node_id": "U_kgDOBdAySw", "organizations_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/orgs", "received_events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/received_events", "repos_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/subscriptions", "type": "User", "url": "https://api.github.com/users/JuanCarlosMartinezSevilla", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7599/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7599/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7598/comments
https://api.github.com/repos/huggingface/datasets/issues/7598/events
https://github.com/huggingface/datasets/pull/7598
3,125,184,457
PR_kwDODunzps6ZaclZ
7,598
fix string_to_dict usage for windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7598). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-06T15:54:29
2025-06-06T16:12:22
2025-06-06T16:12:21
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7598.diff", "html_url": "https://github.com/huggingface/datasets/pull/7598", "merged_at": "2025-06-06T16:12:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/7598.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7598" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7598/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7597/comments
https://api.github.com/repos/huggingface/datasets/issues/7597/events
https://github.com/huggingface/datasets/issues/7597
3,123,962,709
I_kwDODunzps66M-NV
7,597
Download datasets from a private hub in 2025
{ "avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4", "events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}", "followers_url": "https://api.github.com/users/DanielSchuhmacher/followers", "following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}", "gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DanielSchuhmacher", "id": 178552926, "login": "DanielSchuhmacher", "node_id": "U_kgDOCqSAXg", "organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs", "received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events", "repos_url": "https://api.github.com/users/DanielSchuhmacher/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions", "type": "User", "url": "https://api.github.com/users/DanielSchuhmacher", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github.com/huggingface/transformers/issues/38634)", "Thank you @lhoestq. Works as described!" ]
2025-06-06T07:55:19
2025-06-13T13:46:00
2025-06-13T13:46:00
NONE
null
null
null
null
### Feature request In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. This issue was raised before here: https://github.com/huggingface/datasets/issues/3679 @juliensimon ### Motivation none ### Your contribution none
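Editor's note: a minimal sketch of the workaround mentioned in the comments above (setting `HF_ENDPOINT`); the endpoint URL and repo id are placeholders, and the variable generally needs to be set before the Hugging Face libraries are imported.

```python
# Sketch only: point the Hugging Face libraries at a private hub deployment.
import os

os.environ["HF_ENDPOINT"] = "https://hub.internal.example.com"  # placeholder URL

from datasets import load_dataset  # import after setting HF_ENDPOINT

ds = load_dataset("my-org/my-private-dataset", split="train")  # placeholder repo id
```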
{ "avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4", "events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}", "followers_url": "https://api.github.com/users/DanielSchuhmacher/followers", "following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}", "gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DanielSchuhmacher", "id": 178552926, "login": "DanielSchuhmacher", "node_id": "U_kgDOCqSAXg", "organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs", "received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events", "repos_url": "https://api.github.com/users/DanielSchuhmacher/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions", "type": "User", "url": "https://api.github.com/users/DanielSchuhmacher", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7597/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7597/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7596/comments
https://api.github.com/repos/huggingface/datasets/issues/7596/events
https://github.com/huggingface/datasets/pull/7596
3,122,595,042
PR_kwDODunzps6ZRkEU
7,596
Add albumentations to use dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5481618?v=4", "events_url": "https://api.github.com/users/ternaus/events{/privacy}", "followers_url": "https://api.github.com/users/ternaus/followers", "following_url": "https://api.github.com/users/ternaus/following{/other_user}", "gists_url": "https://api.github.com/users/ternaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ternaus", "id": 5481618, "login": "ternaus", "node_id": "MDQ6VXNlcjU0ODE2MTg=", "organizations_url": "https://api.github.com/users/ternaus/orgs", "received_events_url": "https://api.github.com/users/ternaus/received_events", "repos_url": "https://api.github.com/users/ternaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ternaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ternaus/subscriptions", "type": "User", "url": "https://api.github.com/users/ternaus", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq ping", "@lhoestq ping", "@lhoestq Thanks. Cleaned up torchvision." ]
2025-06-05T20:39:46
2025-06-17T18:38:08
2025-06-17T14:44:30
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7596.diff", "html_url": "https://github.com/huggingface/datasets/pull/7596", "merged_at": "2025-06-17T14:44:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/7596.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7596" }
1. Fixed a broken link to the list of transforms in torchvision. 2. Extended the section about video image augmentations with an example from Albumentations.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7596/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7596/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7595/comments
https://api.github.com/repos/huggingface/datasets/issues/7595/events
https://github.com/huggingface/datasets/pull/7595
3,121,689,436
PR_kwDODunzps6ZOaFl
7,595
Add `IterableDataset.push_to_hub()`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-05T15:29:32
2025-06-06T16:12:37
2025-06-06T16:12:36
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7595.diff", "html_url": "https://github.com/huggingface/datasets/pull/7595", "merged_at": "2025-06-06T16:12:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/7595.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7595" }
Basic implementation, which writes one shard per input dataset shard. This is to be improved later. Close https://github.com/huggingface/datasets/issues/5665 PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_hub(...)`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7595/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7595/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7594/comments
https://api.github.com/repos/huggingface/datasets/issues/7594/events
https://github.com/huggingface/datasets/issues/7594
3,120,799,626
I_kwDODunzps66A5-K
7,594
Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format)
{ "avatar_url": "https://avatars.githubusercontent.com/u/36810152?v=4", "events_url": "https://api.github.com/users/avishaiElmakies/events{/privacy}", "followers_url": "https://api.github.com/users/avishaiElmakies/followers", "following_url": "https://api.github.com/users/avishaiElmakies/following{/other_user}", "gists_url": "https://api.github.com/users/avishaiElmakies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avishaiElmakies", "id": 36810152, "login": "avishaiElmakies", "node_id": "MDQ6VXNlcjM2ODEwMTUy", "organizations_url": "https://api.github.com/users/avishaiElmakies/orgs", "received_events_url": "https://api.github.com/users/avishaiElmakies/received_events", "repos_url": "https://api.github.com/users/avishaiElmakies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avishaiElmakies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avishaiElmakies/subscriptions", "type": "User", "url": "https://api.github.com/users/avishaiElmakies", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest", "Is it possible to ignore columns when using parquet? ", "Yes, you can pass `columns=...` to load_dataset to select which columns to load, and it is passed to `ParquetConfig` :)", "Ok, i didn't know that. \nAnyway, it would be good to add this to others", "Hi @lhoestq \n\nI'd like to take this up!\n\nAs you suggested, I’ll extend the support for the columns parameter (currently used in ParquetConfig) to JsonConfig as well. This will allow users to selectively load specific keys/columns from .jsonl (or .json) files and ignore the rest — solving the type inconsistency issues in unclean datasets.", "Hi @avishaiElmakies and @lhoestq \n\nJust wanted to let you know that this is now implemented in #7594\nAs suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.\n\n### ✅ Example:\n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"json\", data_files=\"your_data.jsonl\", columns=[\"id\", \"title\"])\nprint(dataset[\"train\"].column_names)\n# Output: ['id', 'title']\n```\n\n### 🔧 Summary of changes:\n\n* Added `columns: Optional[List[str]]` to `JsonConfig`\n* Updated `_generate_tables()` to filter selected columns\n* Forwarded `columns` argument from `load_dataset()` to the config\n* Added test case to validate behavior\n\nLet me know if you'd like the same to be added for CSV or others as a follow-up — happy to help.", "@ArjunJagdale this looks great! Thanks!\nI believe that every format that is supported by `datasets` should probably have this feature since it is very useful and will streamline the api (people will know that they can just use `columns` to select the columns they want, and it will not be dependent on the data format) ", "Thanks @avishaiElmakies — totally agree, making `columns=...` support consistent across all formats would be really helpful for users." ]
2025-06-05T11:12:45
2025-06-28T09:03:00
null
NONE
null
null
null
null
### Feature request Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl). ### Motivation I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (it is not my data and it is too big for me to clean and save on my own hardware). I would like the option to just ignore this column when using `load_dataset`, since I don't need it. I tried to check whether this is already possible but couldn't find a solution; if it is, I would love some help. If it is not currently possible, I would love this feature. ### Your contribution I don't think I can help this time, unfortunately.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7594/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7594/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7593/comments
https://api.github.com/repos/huggingface/datasets/issues/7593/events
https://github.com/huggingface/datasets/pull/7593
3,118,812,368
PR_kwDODunzps6ZE34G
7,593
Fix broken link to albumentations
{ "avatar_url": "https://avatars.githubusercontent.com/u/5481618?v=4", "events_url": "https://api.github.com/users/ternaus/events{/privacy}", "followers_url": "https://api.github.com/users/ternaus/followers", "following_url": "https://api.github.com/users/ternaus/following{/other_user}", "gists_url": "https://api.github.com/users/ternaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ternaus", "id": 5481618, "login": "ternaus", "node_id": "MDQ6VXNlcjU0ODE2MTg=", "organizations_url": "https://api.github.com/users/ternaus/orgs", "received_events_url": "https://api.github.com/users/ternaus/received_events", "repos_url": "https://api.github.com/users/ternaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ternaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ternaus/subscriptions", "type": "User", "url": "https://api.github.com/users/ternaus", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7593). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq ping" ]
2025-06-04T19:00:13
2025-06-05T16:37:02
2025-06-05T16:36:32
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7593.diff", "html_url": "https://github.com/huggingface/datasets/pull/7593", "merged_at": "2025-06-05T16:36:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/7593.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7593" }
A few months back I rewrote all the docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links. In this PR I fixed the link so it points to the most recent Albumentations doc about bounding boxes and their format. Fixed a few typos in the doc as well.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7593/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7593/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7592/comments
https://api.github.com/repos/huggingface/datasets/issues/7592/events
https://github.com/huggingface/datasets/pull/7592
3,118,203,880
PR_kwDODunzps6ZC2so
7,592
Remove scripts altogether
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7592). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi @lhoestq,\r\nI wanted to ask\r\nare you planning to stop supporting dataset builds using `GeneratorBasedBuilder`?\r\n\r\nIf so, could you share the reason why?", "We stopped supporting dataset scripts altogether, whether they are based on GeneratorBasedBuilder or any other builder. This means you can't `load_dataset()` a dataset script anymore. We did this mostly for security reasons which is blocking for many users and also impossible to build upon (e.g. the for the Dataset Viewer on HF)", "Ah, so only the `trust_remote_code` feature of `load_dataset` is deprecated, and\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n \r\nbuilder = load_dataset_builder('cornell-movie-review-data/rotten_tomatoes') \r\nbuilder.download_and_prepare() \r\n```\r\n\r\nwe can still load data using `load_dataset_builder` and `download_and_prepare`, right?\r\nThat's a relief. I thought the removal of `trust_remote_code` in `load_dataset` meant `GeneratorBasedBuilder` was being deprecated too, haha.\r\nGot it, thanks for the clarification!\r\n", "Can you give an example on how to upgrade from using `trust_remote_code`? I used to load_dataset from a script generating my training data in a streaming way.", "For guys who dislike this change +1" ]
2025-06-04T15:14:11
2025-08-04T15:17:05
2025-06-09T16:45:27
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7592.diff", "html_url": "https://github.com/huggingface/datasets/pull/7592", "merged_at": "2025-06-09T16:45:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/7592.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7592" }
TODO: - [x] replace script-based fixtures with no-script fixtures - [x] windaube
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 1, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7592/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7592/timeline
null
null
null
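A commenter in the record above asks how to migrate a script-based streaming loader now that `trust_remote_code` and dataset scripts are gone. A minimal sketch of one possible migration path, using `Dataset.from_generator` / `IterableDataset.from_generator` with a made-up example generator (this is an illustration, not an official migration guide from the maintainers):

```python
from datasets import Dataset, IterableDataset

def generate_examples():
    # stand-in for whatever the old dataset script used to yield
    for i in range(10):
        yield {"id": i, "text": f"example {i}"}

# streaming-style dataset, closest to a script that generated data lazily
streaming_ds = IterableDataset.from_generator(generate_examples)

# or a fully materialized Dataset written to Arrow on disk
ds = Dataset.from_generator(generate_examples)
```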
https://api.github.com/repos/huggingface/datasets/issues/7591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7591/comments
https://api.github.com/repos/huggingface/datasets/issues/7591/events
https://github.com/huggingface/datasets/issues/7591
3,117,816,388
I_kwDODunzps651hpE
7,591
Add num_proc parameter to push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/46050679?v=4", "events_url": "https://api.github.com/users/SwayStar123/events{/privacy}", "followers_url": "https://api.github.com/users/SwayStar123/followers", "following_url": "https://api.github.com/users/SwayStar123/following{/other_user}", "gists_url": "https://api.github.com/users/SwayStar123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SwayStar123", "id": 46050679, "login": "SwayStar123", "node_id": "MDQ6VXNlcjQ2MDUwNjc5", "organizations_url": "https://api.github.com/users/SwayStar123/orgs", "received_events_url": "https://api.github.com/users/SwayStar123/received_events", "repos_url": "https://api.github.com/users/SwayStar123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SwayStar123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SwayStar123/subscriptions", "type": "User", "url": "https://api.github.com/users/SwayStar123", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi @SwayStar123 \n\nI'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that sounds good!\n", "Just a quick update — `push_to_hub()` already had the `num_proc` argument in its signature and was correctly passing it internally to `_push_parquet_shards_to_hub()`.\n\nThe actual change required was inside `_push_parquet_shards_to_hub()` to enable parallel shard uploads using `multiprocessing` when `num_proc > 1`.\n\n@lhoestq @SwayStar123 ", "> Hi @SwayStar123 \n> \n> I'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that sounds good!\n> \n\nHey thanks for working on it. But I'm not a hf dev so I don't know the best way to do it." ]
2025-06-04T13:19:15
2025-06-27T06:13:54
null
NONE
null
null
null
null
### Feature request Add a number-of-processes (`num_proc`) parameter to the `Dataset.push_to_hub` method ### Motivation Shards are currently uploaded serially, which is slow when there are many shards; uploading them in parallel would be much faster
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7591/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7591/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7590/comments
https://api.github.com/repos/huggingface/datasets/issues/7590/events
https://github.com/huggingface/datasets/issues/7590
3,101,654,892
I_kwDODunzps64339s
7,590
`Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema.
{ "avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4", "events_url": "https://api.github.com/users/AHS-uni/events{/privacy}", "followers_url": "https://api.github.com/users/AHS-uni/followers", "following_url": "https://api.github.com/users/AHS-uni/following{/other_user}", "gists_url": "https://api.github.com/users/AHS-uni/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AHS-uni", "id": 183279820, "login": "AHS-uni", "node_id": "U_kgDOCuygzA", "organizations_url": "https://api.github.com/users/AHS-uni/orgs", "received_events_url": "https://api.github.com/users/AHS-uni/received_events", "repos_url": "https://api.github.com/users/AHS-uni/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AHS-uni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AHS-uni/subscriptions", "type": "User", "url": "https://api.github.com/users/AHS-uni", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the description, this seems like an inconsistency with expected behavior.\n\nIf confirmed, I’d be happy to take a shot at investigating and potentially submitting a fix.\n\nAlso looping in @AHS-uni — could you kindly share a minimal JSONL example that reproduces this?\n\nThanks!", "Hello @Flink-ddd \n\nI updated the minimal example and included both JSON and JSONL minimal examples in the Colab notebook. \n\nHere is the minimal JSON file for convenience (can't upload JSONL files).\n\n[mini.json](https://github.com/user-attachments/files/20535145/mini.json)\n\nI've also found a number of issues which describe a similar problem:\n\n[7569](https://github.com/huggingface/datasets/issues/7569) (Open)\n[7137](https://github.com/huggingface/datasets/issues/7137) (Open)\n[7501](https://github.com/huggingface/datasets/issues/7501) (Closed)\n[2434](https://github.com/huggingface/datasets/issues/2434) (Closed)\n\nThe closed issues don't really address the problem (IMO). [7501](https://github.com/huggingface/datasets/issues/7501) provides a workaround (using a Python list instead of `Sequence`), but it seem precarious. ", "Hi ! `Sequence({...})` corresponds to a struct of lists ([docs](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/main_classes#datasets.Features)). This come from Tensorflow Datasets.\n\nIf you want to use a list of structs, you should use `[{...}]`, e.g.\n\n```python\nitem = {\n \"id\": Value(\"string\"),\n \"data\": Value(\"string\"),\n}\n\nfeatures = Features({\n \"list\": [item],\n})\n```", "@lhoestq Thanks for your explanation, which helps me understand the logic behind. But I'm confused how to define that in `README.md`?\n\nMy jsonl data is: \n```\n{\"answers\": [{\"text\": \"text1\", \"label\": \"label1\"}, {\"text\": \"text2\", \"label\": \"label2\"},]}\n{\"answers\": [{\"text\": \"text1\", \"label\": \"label1\"}, {\"text\": \"text2\", \"label\": \"label2\"},]}\n...\n```\n\nMy README.md look like\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n sequence:\n - name: text\n dtype: string\n - name: label\n dtype: string\n```\nI understand `sequence` here is not correct, but what's the correct format? I tried following (`sequence -> dtype`)and seems not the case:\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n dtype:\n - name: text\n sequence: string\n - name: label\n sequence: string\n```", "The `List` type which doesn't have the weird dict behavior of `Sequence` has been added for `datasets` 4.0 (to be released next week). Feel free to install `datasets` from source to try it out :)\nEDIT: it's out !\n\nYou can fix the issue using `List` instead of `Sequence`, e.g. in the case of the original post:\n\n```python\n# Feature spec with List of structs\nitem = {\n \"id\": Value(\"string\"),\n \"data\": Value(\"string\"),\n}\n\nfeatures = Features({\n \"list\": List(item),\n})\n```\n\nfor which the README.md is\n\n```yaml\ndataset_info:\n- config_name: default\n features:\n - name: list\n list:\n - name: id\n dtype: string\n - name: data\n dtype: string\n```", "@lhoestq Thanks! I didn't realize there is a `list` keyword I could use. I thought I had to use `dtype` or something. 
Hope there could be better documentation on the `README.md` formats. I've closed my issue #7137 " ]
2025-05-29T22:53:36
2025-07-19T22:45:08
2025-07-19T22:45:08
NONE
null
null
null
null
### Description When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error: ``` ArrowNotImplementedError: Unsupported cast from list<item: struct<id: string, data: string>> to struct using function cast_struct ``` This occurs even when the `features` schema is explicitly provided and the dataset format supports nested structures natively (e.g., JSON, JSONL). --- ### Minimal Reproduction [Colab Link.](https://colab.research.google.com/drive/1FZPQy6TP3jVd4B3mYKyfQaWNuOAvljUq?usp=sharing) #### Dataset ```python data = [ { "list": [ {"id": "example1", "data": "text"}, ] }, ] ``` #### Schema ```python from datasets import Features, Sequence, Value item = Features({ "id": Value("string"), "data": Value("string"), }) features = Features({ "list": Sequence(item), }) ``` --- ### Tested File Formats The same schema was tested across different formats: | Format | Method | Result | | --------- | --------------------------- | ------------------- | | JSONL | `load_dataset("json", ...)` | Arrow cast error | | JSON | `load_dataset("json", ...)` | Arrow cast error | | In-memory | `Dataset.from_list(...)` | Works as expected | The issue seems not to be in the schema or the data, but in how `load_dataset()` handles the `Sequence(Features(...))` pattern when parsing from files (specifically JSON and JSONL). --- ### Expected Behavior If `features` is explicitly defined as: ```python Features({"list": Sequence(Features({...}))}) ``` Then the data should load correctly across all backends — including from JSON and JSONL — without any Arrow casting errors. This works correctly when loading from memory via `Dataset.from_list`. --- ### Environment * `datasets`: 3.6.0 * `pyarrow`: 20.0.0 * Python: 3.12.10 * OS: Ubuntu 24.04.2 LTS * Notebook: \[Colab test notebook available] ---
{ "avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4", "events_url": "https://api.github.com/users/AHS-uni/events{/privacy}", "followers_url": "https://api.github.com/users/AHS-uni/followers", "following_url": "https://api.github.com/users/AHS-uni/following{/other_user}", "gists_url": "https://api.github.com/users/AHS-uni/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AHS-uni", "id": 183279820, "login": "AHS-uni", "node_id": "U_kgDOCuygzA", "organizations_url": "https://api.github.com/users/AHS-uni/orgs", "received_events_url": "https://api.github.com/users/AHS-uni/received_events", "repos_url": "https://api.github.com/users/AHS-uni/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AHS-uni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AHS-uni/subscriptions", "type": "User", "url": "https://api.github.com/users/AHS-uni", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7590/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7590/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7589/comments
https://api.github.com/repos/huggingface/datasets/issues/7589/events
https://github.com/huggingface/datasets/pull/7589
3,101,119,704
PR_kwDODunzps6YKiyL
7,589
feat: use content defined chunking
{ "avatar_url": "https://avatars.githubusercontent.com/u/961747?v=4", "events_url": "https://api.github.com/users/kszucs/events{/privacy}", "followers_url": "https://api.github.com/users/kszucs/followers", "following_url": "https://api.github.com/users/kszucs/following{/other_user}", "gists_url": "https://api.github.com/users/kszucs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kszucs", "id": 961747, "login": "kszucs", "node_id": "MDQ6VXNlcjk2MTc0Nw==", "organizations_url": "https://api.github.com/users/kszucs/orgs", "received_events_url": "https://api.github.com/users/kszucs/received_events", "repos_url": "https://api.github.com/users/kszucs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kszucs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kszucs/subscriptions", "type": "User", "url": "https://api.github.com/users/kszucs", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Need to set `DEFAULT_MAX_BATCH_SIZE = 1024 * 1024`", "We should consider enabling page indexes by default when writing parquet files to enable page pruning readers like the next dataset viewer https://github.com/huggingface/dataset-viewer/pull/3199" ]
2025-05-29T18:19:41
2025-07-25T11:56:51
null
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7589.diff", "html_url": "https://github.com/huggingface/datasets/pull/7589", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7589.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7589" }
Use content-defined chunking by default when writing Parquet files. - [x] set the parameters in `io.parquet.ParquetDatasetReader` - [x] set the parameters in `arrow_writer.ParquetWriter` This requires a new pyarrow pin ">=21.0.0", which has now been released.
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7589/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7589/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7588/comments
https://api.github.com/repos/huggingface/datasets/issues/7588/events
https://github.com/huggingface/datasets/issues/7588
3,094,012,025
I_kwDODunzps64auB5
7,588
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
{ "avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4", "events_url": "https://api.github.com/users/wkambale/events{/privacy}", "followers_url": "https://api.github.com/users/wkambale/followers", "following_url": "https://api.github.com/users/wkambale/following{/other_user}", "gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wkambale", "id": 43061081, "login": "wkambale", "node_id": "MDQ6VXNlcjQzMDYxMDgx", "organizations_url": "https://api.github.com/users/wkambale/orgs", "received_events_url": "https://api.github.com/users/wkambale/received_events", "repos_url": "https://api.github.com/users/wkambale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wkambale/subscriptions", "type": "User", "url": "https://api.github.com/users/wkambale", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```", "```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```", "Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps", "thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.", "Very helpful, thank you!" ]
2025-05-27T13:46:05
2025-05-30T13:22:52
2025-05-30T01:26:30
NONE
null
null
null
null
### Describe the bug I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate). Now I changed a few hyperparameters to increase the number of tokens for the model, increase the number of Transformer layers, and so on. However, when I try to load the dataset, this error keeps coming up. I have tried everything; I have re-written the code a hundred times, and it keeps coming up. ### Steps to reproduce the bug Imports: ```bash !pip install datasets huggingface_hub fsspec ``` Python code: ```python from datasets import load_dataset HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus" # Load the dataset try: if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME": raise ValueError( "Please provide a valid Hugging Face dataset name." ) dataset = load_dataset(HF_DATASET_NAME) # Omitted code as the error happens on the line above except ValueError as ve: print(f"Configuration Error: {ve}") raise except Exception as e: print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}") raise e ``` I have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps. ### Expected behavior Load the dataset successfully and perform splits (train, test, validation). ### Environment info From the imports, I do not install specific versions of these libraries, so the latest available version is installed. * `datasets` version: latest * `Platform`: Google Colab * `Hardware`: NVIDIA A100 GPU * `Python` version: latest * `huggingface_hub` version: latest * `fsspec` version: latest
{ "avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4", "events_url": "https://api.github.com/users/wkambale/events{/privacy}", "followers_url": "https://api.github.com/users/wkambale/followers", "following_url": "https://api.github.com/users/wkambale/following{/other_user}", "gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wkambale", "id": 43061081, "login": "wkambale", "node_id": "MDQ6VXNlcjQzMDYxMDgx", "organizations_url": "https://api.github.com/users/wkambale/orgs", "received_events_url": "https://api.github.com/users/wkambale/received_events", "repos_url": "https://api.github.com/users/wkambale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wkambale/subscriptions", "type": "User", "url": "https://api.github.com/users/wkambale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7588/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7587/comments
https://api.github.com/repos/huggingface/datasets/issues/7587/events
https://github.com/huggingface/datasets/pull/7587
3,091,834,987
PR_kwDODunzps6XrB8F
7,587
load_dataset splits typing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-26T18:28:40
2025-05-26T18:31:10
2025-05-26T18:29:57
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7587.diff", "html_url": "https://github.com/huggingface/datasets/pull/7587", "merged_at": "2025-05-26T18:29:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/7587.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7587" }
close https://github.com/huggingface/datasets/issues/7583
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7587/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7587/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7586/comments
https://api.github.com/repos/huggingface/datasets/issues/7586/events
https://github.com/huggingface/datasets/issues/7586
3,091,320,431
I_kwDODunzps64Qc5v
7,586
help is appreciated
{ "avatar_url": "https://avatars.githubusercontent.com/u/54931785?v=4", "events_url": "https://api.github.com/users/rajasekarnp1/events{/privacy}", "followers_url": "https://api.github.com/users/rajasekarnp1/followers", "following_url": "https://api.github.com/users/rajasekarnp1/following{/other_user}", "gists_url": "https://api.github.com/users/rajasekarnp1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rajasekarnp1", "id": 54931785, "login": "rajasekarnp1", "node_id": "MDQ6VXNlcjU0OTMxNzg1", "organizations_url": "https://api.github.com/users/rajasekarnp1/orgs", "received_events_url": "https://api.github.com/users/rajasekarnp1/received_events", "repos_url": "https://api.github.com/users/rajasekarnp1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rajasekarnp1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajasekarnp1/subscriptions", "type": "User", "url": "https://api.github.com/users/rajasekarnp1", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "how is this related to this repository ?" ]
2025-05-26T14:00:42
2025-05-26T18:21:57
null
NONE
null
null
null
null
### Feature request https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main ### Motivation AI model development and audio ### Your contribution AI model development and audio
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7586/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7586/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7585/comments
https://api.github.com/repos/huggingface/datasets/issues/7585/events
https://github.com/huggingface/datasets/pull/7585
3,091,227,921
PR_kwDODunzps6Xo-Tw
7,585
Avoid multiple default config names
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-26T13:27:59
2025-06-05T12:41:54
2025-06-05T12:41:52
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7585.diff", "html_url": "https://github.com/huggingface/datasets/pull/7585", "merged_at": "2025-06-05T12:41:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7585.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7585" }
Fix duplicated default config names. Currently, when calling `push_to_hub(set_default=True)` with 2 different config names, both are set as default. Moreover, this will generate an error the next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`: https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/arrow_dataset.py#L5757 https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/utils/metadata.py#L186-L188
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7585/timeline
null
null
null
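As an illustration of the bug fixed by the record above, a minimal sketch of the calls that used to leave two configs marked as default (the repo id is hypothetical) might be:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

# the first config becomes the default one
ds.push_to_hub("username/my-dataset", config_name="first", set_default=True)

# before this fix, pushing a second config with set_default=True left both
# configs flagged as default in the README metadata
ds.push_to_hub("username/my-dataset", config_name="second", set_default=True)
```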
https://api.github.com/repos/huggingface/datasets/issues/7584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7584/comments
https://api.github.com/repos/huggingface/datasets/issues/7584/events
https://github.com/huggingface/datasets/issues/7584
3,090,255,023
I_kwDODunzps64MYyv
7,584
Add LMDB format support
{ "avatar_url": "https://avatars.githubusercontent.com/u/30512160?v=4", "events_url": "https://api.github.com/users/trotsky1997/events{/privacy}", "followers_url": "https://api.github.com/users/trotsky1997/followers", "following_url": "https://api.github.com/users/trotsky1997/following{/other_user}", "gists_url": "https://api.github.com/users/trotsky1997/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/trotsky1997", "id": 30512160, "login": "trotsky1997", "node_id": "MDQ6VXNlcjMwNTEyMTYw", "organizations_url": "https://api.github.com/users/trotsky1997/orgs", "received_events_url": "https://api.github.com/users/trotsky1997/received_events", "repos_url": "https://api.github.com/users/trotsky1997/repos", "site_admin": false, "starred_url": "https://api.github.com/users/trotsky1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trotsky1997/subscriptions", "type": "User", "url": "https://api.github.com/users/trotsky1997", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?" ]
2025-05-26T07:10:13
2025-05-26T18:23:37
null
NONE
null
null
null
null
### Feature request Add LMDB format support for large memory-mapped files ### Motivation Add LMDB format support for large memory-mapped files ### Your contribution I'm trying to add it
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7584/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7584/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7583/comments
https://api.github.com/repos/huggingface/datasets/issues/7583/events
https://github.com/huggingface/datasets/issues/7583
3,088,987,757
I_kwDODunzps64HjZt
7,583
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
{ "avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4", "events_url": "https://api.github.com/users/hierr/events{/privacy}", "followers_url": "https://api.github.com/users/hierr/followers", "following_url": "https://api.github.com/users/hierr/following{/other_user}", "gists_url": "https://api.github.com/users/hierr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hierr", "id": 25069969, "login": "hierr", "node_id": "MDQ6VXNlcjI1MDY5OTY5", "organizations_url": "https://api.github.com/users/hierr/orgs", "received_events_url": "https://api.github.com/users/hierr/received_events", "repos_url": "https://api.github.com/users/hierr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hierr/subscriptions", "type": "User", "url": "https://api.github.com/users/hierr", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-25T02:33:18
2025-05-26T18:29:58
2025-05-26T18:29:58
NONE
null
null
null
null
### Describe the bug The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime. ### Steps to reproduce the bug 1. Use load_dataset with multiple splits e.g.: ``` from datasets import load_dataset ds_train, ds_val, ds_test = load_dataset( "Silly-Machine/TuPyE-Dataset", "binary", split=["train[:75%]", "train[75%:]", "test"] ) ``` 2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"` ### Expected behavior The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.7 - `huggingface_hub` version: 0.32.0 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7583/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7582/comments
https://api.github.com/repos/huggingface/datasets/issues/7582/events
https://github.com/huggingface/datasets/pull/7582
3,083,515,643
PR_kwDODunzps6XPIt7
7,582
fix: Add embed_storage in Pdf feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-22T14:06:29
2025-05-22T14:17:38
2025-05-22T14:17:36
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7582.diff", "html_url": "https://github.com/huggingface/datasets/pull/7582", "merged_at": "2025-05-22T14:17:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/7582.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7582" }
Add the missing `embed_storage` method to the Pdf feature (same as in Audio and Image)
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7582/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7582/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7581/comments
https://api.github.com/repos/huggingface/datasets/issues/7581/events
https://github.com/huggingface/datasets/pull/7581
3,083,080,413
PR_kwDODunzps6XNpm0
7,581
Add missing property on `RepeatExamplesIterable`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42788329?v=4", "events_url": "https://api.github.com/users/SilvanCodes/events{/privacy}", "followers_url": "https://api.github.com/users/SilvanCodes/followers", "following_url": "https://api.github.com/users/SilvanCodes/following{/other_user}", "gists_url": "https://api.github.com/users/SilvanCodes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SilvanCodes", "id": 42788329, "login": "SilvanCodes", "node_id": "MDQ6VXNlcjQyNzg4MzI5", "organizations_url": "https://api.github.com/users/SilvanCodes/orgs", "received_events_url": "https://api.github.com/users/SilvanCodes/received_events", "repos_url": "https://api.github.com/users/SilvanCodes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SilvanCodes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SilvanCodes/subscriptions", "type": "User", "url": "https://api.github.com/users/SilvanCodes", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-22T11:41:07
2025-06-05T12:41:30
2025-06-05T12:41:29
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7581.diff", "html_url": "https://github.com/huggingface/datasets/pull/7581", "merged_at": "2025-06-05T12:41:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/7581.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7581" }
Fixes #7561
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7581/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7581/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7580/comments
https://api.github.com/repos/huggingface/datasets/issues/7580/events
https://github.com/huggingface/datasets/issues/7580
3,082,993,027
I_kwDODunzps63wr2D
7,580
Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False.
{ "avatar_url": "https://avatars.githubusercontent.com/u/48768216?v=4", "events_url": "https://api.github.com/users/s3pi/events{/privacy}", "followers_url": "https://api.github.com/users/s3pi/followers", "following_url": "https://api.github.com/users/s3pi/following{/other_user}", "gists_url": "https://api.github.com/users/s3pi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/s3pi", "id": 48768216, "login": "s3pi", "node_id": "MDQ6VXNlcjQ4NzY4MjE2", "organizations_url": "https://api.github.com/users/s3pi/orgs", "received_events_url": "https://api.github.com/users/s3pi/received_events", "repos_url": "https://api.github.com/users/s3pi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/s3pi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s3pi/subscriptions", "type": "User", "url": "https://api.github.com/users/s3pi", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !" ]
2025-05-22T11:08:16
2025-05-26T18:40:31
null
NONE
null
null
null
null
### Describe the bug When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call. This behavior leads to unnecessary bandwidth usage and longer download times, especially for large datasets, even if the user only intends to use a single split. ### Steps to reproduce the bug dataset_name = "skbose/indian-english-nptel-v0" dataset = load_dataset(dataset_name, token=hf_token, split="test") ### Expected behavior Optimize the download logic so that only the required split is downloaded when streaming=False and a specific split is provided. ### Environment info Dataset: skbose/indian-english-nptel-v0 Platform: M1 Apple Silicon Python version: 3.12.9 datasets>=3.5.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7580/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7580/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7579/comments
https://api.github.com/repos/huggingface/datasets/issues/7579/events
https://github.com/huggingface/datasets/pull/7579
3,081,849,022
PR_kwDODunzps6XJerX
7,579
Fix typos in PDF and Video documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-22T02:27:40
2025-05-22T12:53:49
2025-05-22T12:53:47
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7579.diff", "html_url": "https://github.com/huggingface/datasets/pull/7579", "merged_at": "2025-05-22T12:53:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/7579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7579" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7579/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7579/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7577/comments
https://api.github.com/repos/huggingface/datasets/issues/7577/events
https://github.com/huggingface/datasets/issues/7577
3,080,833,740
I_kwDODunzps63ocrM
7,577
arrow_schema is not compatible with list
{ "avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4", "events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanshen-upwork/followers", "following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanshen-upwork", "id": 164412025, "login": "jonathanshen-upwork", "node_id": "U_kgDOCcy6eQ", "organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs", "received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events", "repos_url": "https://api.github.com/users/jonathanshen-upwork/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanshen-upwork", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, I'll look into it", "Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind", "Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks." ]
2025-05-21T16:37:01
2025-05-26T18:49:51
2025-05-26T18:32:55
NONE
null
null
null
null
### Describe the bug ``` import datasets f = datasets.Features({'x': list[datasets.Value(dtype='int32')]}) f.arrow_schema Traceback (most recent call last): File "datasets/features/features.py", line 1826, in arrow_schema return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)}) ^^^^^^^^^ File "datasets/features/features.py", line 1815, in type return get_nested_type(self) ^^^^^^^^^^^^^^^^^^^^^ File "datasets/features/features.py", line 1252, in get_nested_type return pa.struct( ^^^^^^^^^^ File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type TypeError: DataType expected, got <class 'list'> ``` The following works ``` f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))}) ``` ### Expected behavior according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7577/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7576/comments
https://api.github.com/repos/huggingface/datasets/issues/7576/events
https://github.com/huggingface/datasets/pull/7576
3,080,450,538
PR_kwDODunzps6XEuMz
7,576
Fix regex library warnings
{ "avatar_url": "https://avatars.githubusercontent.com/u/35470921?v=4", "events_url": "https://api.github.com/users/emmanuel-ferdman/events{/privacy}", "followers_url": "https://api.github.com/users/emmanuel-ferdman/followers", "following_url": "https://api.github.com/users/emmanuel-ferdman/following{/other_user}", "gists_url": "https://api.github.com/users/emmanuel-ferdman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emmanuel-ferdman", "id": 35470921, "login": "emmanuel-ferdman", "node_id": "MDQ6VXNlcjM1NDcwOTIx", "organizations_url": "https://api.github.com/users/emmanuel-ferdman/orgs", "received_events_url": "https://api.github.com/users/emmanuel-ferdman/received_events", "repos_url": "https://api.github.com/users/emmanuel-ferdman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emmanuel-ferdman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emmanuel-ferdman/subscriptions", "type": "User", "url": "https://api.github.com/users/emmanuel-ferdman", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7576). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-21T14:31:58
2025-06-05T13:35:16
2025-06-05T12:37:55
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7576.diff", "html_url": "https://github.com/huggingface/datasets/pull/7576", "merged_at": "2025-06-05T12:37:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/7576.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7576" }
# PR Summary This small PR resolves the regex library warnings that appear starting with Python 3.11: ```python DeprecationWarning: 'count' is passed as positional argument ```
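Editor's note (illustration only, not taken from the PR diff): the warning refers to passing `count` positionally to `re.sub`; passing it as a keyword argument avoids the deprecation on recent Python versions. ```python
import re

text = "a  b   c"

# Deprecated style: count passed positionally (triggers the warning on recent Python)
print(re.sub(r"\s+", " ", text, 1))

# Preferred style: count passed as a keyword argument
print(re.sub(r"\s+", " ", text, count=1))
```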
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7576/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7576/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7575/comments
https://api.github.com/repos/huggingface/datasets/issues/7575/events
https://github.com/huggingface/datasets/pull/7575
3,080,228,718
PR_kwDODunzps6XD9gM
7,575
[MINOR:TYPO] Update save_to_disk docstring
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-21T13:22:24
2025-06-05T12:39:13
2025-06-05T12:39:13
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7575.diff", "html_url": "https://github.com/huggingface/datasets/pull/7575", "merged_at": "2025-06-05T12:39:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/7575.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7575" }
Replaces "hub" with "filesystem" in the `save_to_disk` docstring (original shorthand: r/hub/filesystem).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7575/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7575/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7574/comments
https://api.github.com/repos/huggingface/datasets/issues/7574/events
https://github.com/huggingface/datasets/issues/7574
3,079,641,072
I_kwDODunzps63j5fw
7,574
Missing multilingual directions in IWSLT2017 dataset's processing script
{ "avatar_url": "https://avatars.githubusercontent.com/u/79297451?v=4", "events_url": "https://api.github.com/users/andy-joy-25/events{/privacy}", "followers_url": "https://api.github.com/users/andy-joy-25/followers", "following_url": "https://api.github.com/users/andy-joy-25/following{/other_user}", "gists_url": "https://api.github.com/users/andy-joy-25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andy-joy-25", "id": 79297451, "login": "andy-joy-25", "node_id": "MDQ6VXNlcjc5Mjk3NDUx", "organizations_url": "https://api.github.com/users/andy-joy-25/orgs", "received_events_url": "https://api.github.com/users/andy-joy-25/received_events", "repos_url": "https://api.github.com/users/andy-joy-25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andy-joy-25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andy-joy-25/subscriptions", "type": "User", "url": "https://api.github.com/users/andy-joy-25", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue", "cool ! I pinged the owners of the dataset on HF to merge your PRs :)" ]
2025-05-21T09:53:17
2025-05-26T18:36:38
null
NONE
null
null
null
null
### Describe the bug Hi, Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the list of all the configs present in `IWSLT/iwslt2017`. This should not be the case since as mentioned in their original paper (please see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._" and because these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`. Best Regards, Anand ### Steps to reproduce the bug Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`. ### Expected behavior The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use. I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all the 6 missing language pairs (the same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip` but the `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`: so, its unclear why the following comment: _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ has been added as `L71` in `iwslt2017.py`). The `README.md` file in `IWSLT/iwslt2017`must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs which were previously non-existent. ### Environment info - `datasets` version: 3.5.0 - Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.30.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7574/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7574/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7573/comments
https://api.github.com/repos/huggingface/datasets/issues/7573/events
https://github.com/huggingface/datasets/issues/7573
3,076,415,382
I_kwDODunzps63Xl-W
7,573
No Samsum dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4", "events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}", "followers_url": "https://api.github.com/users/IgorKasianenko/followers", "following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}", "gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/IgorKasianenko", "id": 17688220, "login": "IgorKasianenko", "node_id": "MDQ6VXNlcjE3Njg4MjIw", "organizations_url": "https://api.github.com/users/IgorKasianenko/orgs", "received_events_url": "https://api.github.com/users/IgorKasianenko/received_events", "repos_url": "https://api.github.com/users/IgorKasianenko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions", "type": "User", "url": "https://api.github.com/users/IgorKasianenko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n", "Thanks @SP1029 for the update!\nThat will work for now, using it as replacement. Is there a officially recommended way to maintain the CC licensed dataset under the organization account? \nFeel free to close this issue", "> Is there an officially recommended way to maintain a CC-licensed dataset under an organizational account?\n\n@IgorKasianenko, apologies, this is not my area of expertise.\n\n> Please feel free to close this issue.\n\nI have limited access and may not be able to do that. Since you opened it, you would be able to close it.", "dataset_samsum = load_dataset(\"knkarthick/samsum\")\n\nis working" ]
2025-05-20T09:54:35
2025-07-21T18:34:34
2025-06-18T12:52:23
NONE
null
null
null
null
### Describe the bug https://huggingface.co/datasets/Samsung/samsum dataset not found error 404 Originated from https://github.com/meta-llama/llama-cookbook/issues/948 ### Steps to reproduce the bug go to website https://huggingface.co/datasets/Samsung/samsum see the error also downloading it with python throws ``` Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found) ``` ### Expected behavior Dataset exists ### Environment info ``` - `datasets` version: 3.2.0 - Platform: macOS-15.4.1-arm64-arm-64bit - Python version: 3.12.2 - `huggingface_hub` version: 0.26.5 - PyArrow version: 16.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4", "events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}", "followers_url": "https://api.github.com/users/IgorKasianenko/followers", "following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}", "gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/IgorKasianenko", "id": 17688220, "login": "IgorKasianenko", "node_id": "MDQ6VXNlcjE3Njg4MjIw", "organizations_url": "https://api.github.com/users/IgorKasianenko/orgs", "received_events_url": "https://api.github.com/users/IgorKasianenko/received_events", "repos_url": "https://api.github.com/users/IgorKasianenko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions", "type": "User", "url": "https://api.github.com/users/IgorKasianenko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7573/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7572/comments
https://api.github.com/repos/huggingface/datasets/issues/7572/events
https://github.com/huggingface/datasets/pull/7572
3,074,529,251
PR_kwDODunzps6WwsZB
7,572
Fixed typos
{ "avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4", "events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}", "followers_url": "https://api.github.com/users/TopCoder2K/followers", "following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}", "gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TopCoder2K", "id": 47208659, "login": "TopCoder2K", "node_id": "MDQ6VXNlcjQ3MjA4NjU5", "organizations_url": "https://api.github.com/users/TopCoder2K/orgs", "received_events_url": "https://api.github.com/users/TopCoder2K/received_events", "repos_url": "https://api.github.com/users/TopCoder2K/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions", "type": "User", "url": "https://api.github.com/users/TopCoder2K", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)" ]
2025-05-19T17:16:59
2025-06-05T12:25:42
2025-06-05T12:25:41
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7572.diff", "html_url": "https://github.com/huggingface/datasets/pull/7572", "merged_at": "2025-06-05T12:25:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/7572.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7572" }
More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7572/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7572/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7571/comments
https://api.github.com/repos/huggingface/datasets/issues/7571/events
https://github.com/huggingface/datasets/pull/7571
3,074,116,942
PR_kwDODunzps6WvRqi
7,571
fix string_to_dict test
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-19T14:49:23
2025-05-19T14:52:24
2025-05-19T14:49:28
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7571.diff", "html_url": "https://github.com/huggingface/datasets/pull/7571", "merged_at": "2025-05-19T14:49:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/7571.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7571" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7571/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7571/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7570/comments
https://api.github.com/repos/huggingface/datasets/issues/7570/events
https://github.com/huggingface/datasets/issues/7570
3,065,966,529
I_kwDODunzps62vu_B
7,570
Dataset lib seems to be broken after fsspec lib update
{ "avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4", "events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}", "followers_url": "https://api.github.com/users/sleepingcat4/followers", "following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}", "gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sleepingcat4", "id": 81933585, "login": "sleepingcat4", "node_id": "MDQ6VXNlcjgxOTMzNTg1", "organizations_url": "https://api.github.com/users/sleepingcat4/orgs", "received_events_url": "https://api.github.com/users/sleepingcat4/received_events", "repos_url": "https://api.github.com/users/sleepingcat4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions", "type": "User", "url": "https://api.github.com/users/sleepingcat4", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC", "@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally its fine. \n\n```\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: \nThe secret `HF_TOKEN` does not exist in your Colab secrets.\nTo authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\nYou will be able to reuse this secret in all of your notebooks.\nPlease note that authentication is recommended but still optional to access public models or datasets.\n warnings.warn(\nREADME.md: 100%\n 2.88k/2.88k [00:00<00:00, 166kB/s]\nsuno.jsonl.zst: 100%\n 221M/221M [00:05<00:00, 48.6MB/s]\nGenerating train split: \n 18633/0 [00:01<00:00, 13018.92 examples/s]\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1870 try:\n-> 1871 writer.write_table(table)\n 1872 except CastError as cast_error:\n\n17 frames\nTypeError: Couldn't cast array of type\nstruct<id: string, type: string, infill: bool, source: string, continue_at: double, infill_dur_s: double, infill_end_s: double, infill_start_s: double, include_future_s: double, include_history_s: double, infill_context_end_s: double, infill_context_start_s: int64>\nto\n{'id': Value(dtype='string', id=None), 'type': Value(dtype='string', id=None), 'infill': Value(dtype='bool', id=None), 'source': Value(dtype='string', id=None), 'continue_at': Value(dtype='float64', id=None), 'include_history_s': Value(dtype='float64', id=None)}\n\nThe above exception was the direct cause of the following exception:\n\nDatasetGenerationError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1896 if isinstance(e, DatasetGenerationError):\n 1897 raise\n-> 1898 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\n 1899 \n 1900 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\n\nDatasetGenerationError: An error occurred while generating the dataset\n```", "@lhoestq opps sorry the dataset was in .zst which was causing this error rather than being a datasets library fault. After upgrading dataset version Colab is working fine. " ]
2025-05-15T11:45:06
2025-06-13T00:44:27
2025-06-13T00:44:27
NONE
null
null
null
null
### Describe the bug I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec` ### Steps to reproduce the bug from datasets import load_dataset def download_hf(): dataset_name = input("Enter the dataset name: ") subset_name = input("Enter subset name: ") ds = load_dataset(dataset_name, name=subset_name) for split in ds: ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False) download_hf() ### Expected behavior ``` Downloading readme: 100%  1.55k/1.55k [00:00<00:00, 121kB/s] Downloading data files: 100%  1/1 [00:00<00:00,  2.06it/s] Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s] Downloading data: 100%|██████████| 54.2k/54.2k [00:00<00:00, 121kB/s] Extracting data files: 100%  1/1 [00:00<00:00, 35.17it/s] Generating test split:   140/0 [00:00<00:00, 2628.62 examples/s] --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) [<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>() 8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False) 9 ---> 10 download_hf() 2 frames [/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1171 is_local = not is_remote_filesystem(self._fs) 1172 if not is_local: -> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") 1174 if not os.path.exists(self._output_dir): 1175 raise FileNotFoundError( NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` OR ``` Traceback (most recent call last): File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module> download_hf() File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf ds = load_dataset(dataset_name, name=subset_name) File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset builder_instance = load_dataset_builder( File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder dataset_module = dataset_module_factory( File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory raise e1 from None File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed. ``` ### Environment info colab and 3.10 local system
{ "avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4", "events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}", "followers_url": "https://api.github.com/users/sleepingcat4/followers", "following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}", "gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sleepingcat4", "id": 81933585, "login": "sleepingcat4", "node_id": "MDQ6VXNlcjgxOTMzNTg1", "organizations_url": "https://api.github.com/users/sleepingcat4/orgs", "received_events_url": "https://api.github.com/users/sleepingcat4/received_events", "repos_url": "https://api.github.com/users/sleepingcat4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions", "type": "User", "url": "https://api.github.com/users/sleepingcat4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7570/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7570/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7569/comments
https://api.github.com/repos/huggingface/datasets/issues/7569/events
https://github.com/huggingface/datasets/issues/7569
3,061,234,054
I_kwDODunzps62drmG
7,569
Dataset creation is broken if nesting a dict inside a dict inside a list
{ "avatar_url": "https://avatars.githubusercontent.com/u/25732590?v=4", "events_url": "https://api.github.com/users/TimSchneider42/events{/privacy}", "followers_url": "https://api.github.com/users/TimSchneider42/followers", "following_url": "https://api.github.com/users/TimSchneider42/following{/other_user}", "gists_url": "https://api.github.com/users/TimSchneider42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TimSchneider42", "id": 25732590, "login": "TimSchneider42", "node_id": "MDQ6VXNlcjI1NzMyNTkw", "organizations_url": "https://api.github.com/users/TimSchneider42/orgs", "received_events_url": "https://api.github.com/users/TimSchneider42/received_events", "repos_url": "https://api.github.com/users/TimSchneider42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TimSchneider42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TimSchneider42/subscriptions", "type": "User", "url": "https://api.github.com/users/TimSchneider42", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```", "Hi,\n\nThanks for the swift reply! Could you quickly clarify a couple of points?\n\n1. Is there any benefit in using Sequence over normal lists? Especially for longer lists (in my case, up to 256 entries)\n2. When exactly can I use Sequence? If there is a maximum of one level of dictionaries inside, then it's always fine?\n3. When creating the data in the generator, do I need to swap lists and dicts manually, or does that happen automatically?\n\nAlso, the documentation does not seem to mention this limitation of the Sequence type anywhere and encourages users to use it [here](https://huggingface.co/docs/datasets/en/about_dataset_features). In fact, I did not even know that just using a Python list was an option. Maybe the documentation can be improved to mention the limitations of Sequence and highlight that lists can be used instead.\n\nThanks a lot in advance!\n\nBest,\nTim" ]
2025-05-13T21:06:45
2025-05-20T19:25:15
null
NONE
null
null
null
null
### Describe the bug Hey, I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details. Best, Tim ### Steps to reproduce the bug Runing this code: ```python from datasets import Dataset, Features, Sequence, Value def generator(): yield { "a": [{"b": {"c": 0}}], } features = Features( { "a": Sequence( feature={ "b": { "c": Value("int32"), }, }, length=1, ) } ) dataset = Dataset.from_generator(generator, features=features) ``` leads to ``` Generating train split: 1 examples [00:00, 540.85 examples/s] Traceback (most recent call last): File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single num_examples, num_bytes = writer.finalize() ^^^^^^^^^^^^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize self.write_examples_on_file() File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch pa_table = pa.Table.from_arrays(arrays, schema=schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast return call_function("cast", [arr], options, memory_pool) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/test/tools/hf_test2.py", line 23, in <module> dataset = Dataset.from_generator(generator, features=features) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator ).read() ^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read self.builder.download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare self._download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split for job_id, done, content in self._prepare_split_single( File 
"/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset Process finished with exit code 1 ``` ### Expected behavior I expected this code not to lead to an error. I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added): ```python def get_nested_type(schema: FeatureType, level=0) -> pa.DataType: """ get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of generate_from_arrow_type(). It performs double-duty as the implementation of Features.type and handles the conversion of datasets.Feature->pa.struct """ # Nested structures: we allow dict, list/tuples, sequences if isinstance(schema, Features): return pa.struct( {key: get_nested_type(schema[key], level = level + 1) for key in schema} ) # Features is subclass of dict, and dict order is deterministic since Python 3.6 elif isinstance(schema, dict): return pa.struct( {key: get_nested_type(schema[key], level = level + 1) for key in schema} ) # however don't sort on struct types since the order matters elif isinstance(schema, (list, tuple)): if len(schema) != 1: raise ValueError("When defining list feature, you should just provide one example of the inner type") value_type = get_nested_type(schema[0], level = level + 1) return pa.list_(value_type) elif isinstance(schema, LargeList): value_type = get_nested_type(schema.feature, level = level + 1) return pa.large_list(value_type) elif isinstance(schema, Sequence): value_type = get_nested_type(schema.feature, level = level + 1) # We allow to reverse list of dict => dict of list for compatibility with tfds if isinstance(schema.feature, dict) and level == 1: data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type}) else: data_type = pa.list_(value_type, schema.length) return data_type # Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods) return schema() ``` I have honestly no idea what I am doing here, so this might produce other issues for different inputs. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35 - Python version: 3.11.11 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 Also tested it with 3.5.0, same result.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7569/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7569/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7568/comments
https://api.github.com/repos/huggingface/datasets/issues/7568/events
https://github.com/huggingface/datasets/issues/7568
3,060,515,257
I_kwDODunzps62a8G5
7,568
`IterableDatasetDict.map()` call removes `column_names` (in fact info.features)
{ "avatar_url": "https://avatars.githubusercontent.com/u/7893763?v=4", "events_url": "https://api.github.com/users/mombip/events{/privacy}", "followers_url": "https://api.github.com/users/mombip/followers", "following_url": "https://api.github.com/users/mombip/following{/other_user}", "gists_url": "https://api.github.com/users/mombip/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mombip", "id": 7893763, "login": "mombip", "node_id": "MDQ6VXNlcjc4OTM3NjM=", "organizations_url": "https://api.github.com/users/mombip/orgs", "received_events_url": "https://api.github.com/users/mombip/received_events", "repos_url": "https://api.github.com/users/mombip/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mombip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mombip/subscriptions", "type": "User", "url": "https://api.github.com/users/mombip", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the first rows to infer the dataset features.", "Thank you. I understand that “IterableDataset doesn't know what's the output of the function”—that’s true, but:\n\nUnfortunately, the workaround you proposed **doesn’t solve** the problem. `ds.map()` is called multiple times by third-party code (i.e. `SFTTrainer`). To apply your approach, I would have to modify external library code. That’s why I decided to patch the _class_ rather than update `dataset` _objects_ (in fact, updating the object after `map()` was my initial approach, but then I realized I’m not the only one mapping an already-mapped dataset.)\n\nAs a user, I expected that after mapping I would get a new dataset with the correct column names. If, for some reason, that can’t be the default behavior, I would expect an argument—i.e. `auto_resolve_features: bool = False` — to control how my dataset is mapped if following mapping operation are called.\n\nIt’s also problematic that `column_names` are tied to `features`, which is even more confusing and forces you to inspect the source code to understand what’s going on.\n\n**New version of workaround:**\n```python\ndef patch_iterable_dataset_map():\n _orig_map = IterableDataset.map\n\n def _patched_map(self, *args, **kwargs):\n ds = _orig_map(self, *args, **kwargs)\n return ds._resolve_features()\n\n IterableDataset.map = _patched_map\n```", "I see, maybe `.resolve_features()` should be called by default in this case in the SFTTrainer ? (or pass `features=` if the data processing always output the same features)\n\nWe can even support a new parameter `features=\"infer\"` if it would be comfortable to not use internal methods in SFTTrainer", "I think most straightforward solution would be to reinitialize `features` from data after mapping if `feature` argument is not passed. I hink it is more intuitive behavior than just cleaning features. There is also problem in usage `.resolve_features()` in this context. I observed that it leads to `_head()` method execution and it then causes that 5 batches from dataset are iterated (`_head()` defaults to 5 batches). \nI'm not sure how it influences whole process. Are those 5 batches (in my case it's 5000 rows) used only to find `features`. Does final training/eval process \"see\" this items? How it affects IterableDataset state (current position)?", "I checked the source code and while it indeed iterates on the first 5 rows. As a normal iteration, it does record the state in case you call `.state_dict()`, but it doesn't change the starting state. The starting state is always the beginning of the dataset, unless it is explicitly set with `.load_state_dict()`. 
To be clear, if you iterate on the dataset after `._resolve_features()`, it will start from the beginning of the dataset (or from a state you manually pass using `.load_state_dict()`)", "Hi!\nI’ve opened a PR #7658 to address this issue.\n\nThe fix ensures that info.features is only updated if features is not None, preventing accidental loss of schema and column_names.\nPlease let me know if you see any edge cases or have additional concerns!\nAlso, if a test is needed for this case, happy to discuss—the fix is small, but I can add one if the maintainers prefer.\n\nThanks everyone for the clear diagnosis and suggestions in this thread!" ]
2025-05-13T15:45:42
2025-06-30T09:33:47
null
NONE
null
null
null
null
When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relies on `info.features`, it ends up broken (`None`). **Reproduction** 1. Define an IterableDatasetDict with a non-None features schema. 2. my_iterable_dataset_dict contains "text" column. 3. Call: ```Python new_dict = my_iterable_dataset_dict.map( function=my_fn, with_indices=False, batched=True, batch_size=16, ) ``` 4. Observe ```Python new_dict["train"].info.features # {'text': Value(dtype='string', id=None)} new_dict["train"].column_names # ['text'] ``` 5. Call: ```Python new_dict = my_iterable_dataset_dict.map( function=my_fn, with_indices=False, batched=True, batch_size=16, remove_columns=["foo"] ) ``` 6. Observe: ```Python new_dict["train"].info.features # → None new_dict["train"].column_names # → None ``` 5. Internally, in dataset_dict.py this loop omits features ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/dataset_dict.py#L2047C5-L2056C14)): ```Python for split, dataset in self.items(): dataset_dict[split] = dataset.map( function=function, with_indices=with_indices, input_columns=input_columns, batched=batched, batch_size=batch_size, drop_last_batch=drop_last_batch, remove_columns=remove_columns, fn_kwargs=fn_kwargs, # features omitted → defaults to None ) ``` 7. Then inside IterableDataset.map() ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2619C1-L2622C37)) correct `info.features` is replaced by features which is None: ```Python info = self.info.copy() info.features = features # features is None here return IterableDataset(..., info=info, ...) ``` **Suggestion** It looks like this replacement was added intentionally but maybe should be done only if `features` is `not None`. **Workarround:** `SFTTrainer` calls `dataset.map()` several times and then fails on `NoneType` when iterating `dataset.column_names`. I decided to write this patch - works form me. ```python def patch_iterable_dataset_map(): _orig_map = IterableDataset.map def _patched_map(self, *args, **kwargs): if "features" not in kwargs or kwargs["features"] is None: kwargs["features"] = self.info.features return _orig_map(self, *args, **kwargs) IterableDataset.map = _patched_map ```
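Editor's note (sketch under assumptions, not from the original issue): the repo id and column name below are placeholders; it shows the two workarounds discussed in the comments, passing `features=` to `map()` or re-inferring the schema with the internal `_resolve_features()` helper. ```python
from datasets import load_dataset

def lowercase(batch):
    return {"text": [t.lower() for t in batch["text"]]}

# Hypothetical streaming dataset that exposes a non-None features schema
ds = load_dataset("user/some-dataset", split="train", streaming=True)

# Option 1: pass the (unchanged) schema explicitly so info.features survives map()
mapped = ds.map(lowercase, batched=True, batch_size=16, features=ds.features)
print(mapped.column_names)  # e.g. ['text'] instead of None

# Option 2: peek at the first rows to re-infer the schema
mapped = ds.map(lowercase, batched=True, batch_size=16)._resolve_features()
print(mapped.column_names)
```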
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7568/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7568/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7567/comments
https://api.github.com/repos/huggingface/datasets/issues/7567/events
https://github.com/huggingface/datasets/issues/7567
3,058,308,538
I_kwDODunzps62ShW6
7,567
interleave_datasets seed with multiple workers
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?", "here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_info()\n for i in range(10):\n yield {'value': i, 'worker_id': worker_info.id}\n\n\ndef main():\n ds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8))})\n ds = ds.shuffle(buffer_size=100, seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 8, 'worker_id': 0}\n1 {'value': 8, 'worker_id': 1}\n2 {'value': 8, 'worker_id': 2}\n3 {'value': 8, 'worker_id': 3}\n4 {'value': 8, 'worker_id': 4}\n5 {'value': 8, 'worker_id': 5}\n6 {'value': 8, 'worker_id': 6}\n7 {'value': 8, 'worker_id': 7}\n8 {'value': 9, 'worker_id': 0}\n9 {'value': 9, 'worker_id': 1}\n10 {'value': 9, 'worker_id': 2}\n11 {'value': 9, 'worker_id': 3}\n12 {'value': 9, 'worker_id': 4}\n13 {'value': 9, 'worker_id': 5}\n14 {'value': 9, 'worker_id': 6}\n15 {'value': 9, 'worker_id': 7}\n16 {'value': 5, 'worker_id': 0}\n17 {'value': 5, 'worker_id': 1}\n18 {'value': 5, 'worker_id': 2}\n19 {'value': 5, 'worker_id': 3}\n```", "With `interleave_datasets`\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard, value):\n while True:\n yield {'value': value}\n\n\ndef main():\n ds = [\n datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8)), 'value': i})\n for i in range(10)\n ]\n ds = datasets.interleave_datasets(ds, probabilities=[1 / len(ds)] * len(ds), seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 9}\n1 {'value': 9}\n2 {'value': 9}\n3 {'value': 9}\n4 {'value': 9}\n5 {'value': 9}\n6 {'value': 9}\n7 {'value': 9}\n8 {'value': 3}\n9 {'value': 3}\n10 {'value': 3}\n11 {'value': 3}\n12 {'value': 3}\n13 {'value': 3}\n14 {'value': 3}\n15 {'value': 3}\n16 {'value': 9}\n17 {'value': 9}\n18 {'value': 9}\n19 {'value': 9}\n20 {'value': 9}\n21 {'value': 9}\n22 {'value': 9}\n23 {'value': 9}\n```", "Same results after updating to datasets 3.6.0.", "Ah my bad, `shuffle()` uses a global effective seed which is something like `seed + epoch`, which is used to do the same shards shuffle in each worker so that each worker have a non-overlapping set of shards:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2102-L2111\n\nI think we should take into account the `worker_id` in a local seed for the buffer right after this line:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2151-L2153\n\nlike adding a new step that would propagate in the examples iterables or something like that:\n\n```python\nex_iterable = ex_iterable.shift_rngs(value=worker_id)\n```\n\nis this something you'd like to explore ? contributions on this subject are very welcome", "Potentially, but busy. 
If anyone wants to take this up please feel free to, otherwise I may or may not revisit when I have free time.\n\nFor what it's worth I got around this with\n\n```\n\nclass SeedGeneratorWithWorkerIterable(iterable_dataset._BaseExamplesIterable):\n \"\"\"ExamplesIterable that seeds the rng with worker id.\"\"\"\n\n def __init__(\n self,\n ex_iterable: iterable_dataset._BaseExamplesIterable,\n generator: np.random.Generator,\n rank: int = 0,\n ):\n \"\"\"Constructor.\"\"\"\n super().__init__()\n self.ex_iterable = ex_iterable\n self.generator = generator\n self.rank = rank\n\n def _init_state_dict(self) -> dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n return self._state_dict\n\n def __iter__(self):\n \"\"\"Data iterator.\"\"\"\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - self.rank\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\n generator = np.random.default_rng(effective_seed)\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n if self._state_dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n yield from iter(self.ex_iterable)\n\n def shuffle_data_sources(self, generator):\n \"\"\"Shuffle data sources.\"\"\"\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=generator, rank=self.rank)\n\n def shard_data_sources(self, num_shards: int, index: int, contiguous=True): # noqa: FBT002\n \"\"\"Shard data sources.\"\"\"\n ex_iterable = self.ex_iterable.shard_data_sources(num_shards, index, contiguous=contiguous)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=self.generator, rank=index)\n\n @property\n def is_typed(self):\n return self.ex_iterable.is_typed\n\n @property\n def features(self):\n return self.ex_iterable.features\n\n @property\n def num_shards(self) -> int:\n \"\"\"Number of shards.\"\"\"\n return self.ex_iterable.num_shards\n```", "Thanks for the detailed insights!\n\nAfter reviewing the issue and the current implementation in `iterable_dataset.py`, I can confirm the cause:\n\nWhen using `interleave_datasets(..., seed=...)` with `num_workers > 1` (e.g. via `DataLoader`), the same RNG state is shared across workers — which leads to each worker producing identical sample sequences. This is because the seed is not modulated by `worker_id`, unlike the usual approach in `shuffle()` where seed is adjusted using the `epoch`.\n\nAs @lhoestq suggested, a proper fix would involve introducing something like:\n\n```python\nex_iterable = ex_iterable.shift_rngs(worker_id)\n```\n\n@jonathanasdf Also really appreciate the workaround implementation shared above — that was helpful to validate the behavior and will help shape the general solution." ]
2025-05-12T22:38:27
2025-06-29T06:53:59
null
NONE
null
null
null
null
### Describe the bug Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers. Should the seed be modulated with the worker id? ### Steps to reproduce the bug See above ### Expected behavior See above ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.4.1-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7567/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7567/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7566/comments
https://api.github.com/repos/huggingface/datasets/issues/7566/events
https://github.com/huggingface/datasets/issues/7566
3,055,279,344
I_kwDODunzps62G9zw
7,566
terminate called without an active exception; Aborted (core dumped)
{ "avatar_url": "https://avatars.githubusercontent.com/u/18581488?v=4", "events_url": "https://api.github.com/users/alexey-milovidov/events{/privacy}", "followers_url": "https://api.github.com/users/alexey-milovidov/followers", "following_url": "https://api.github.com/users/alexey-milovidov/following{/other_user}", "gists_url": "https://api.github.com/users/alexey-milovidov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexey-milovidov", "id": 18581488, "login": "alexey-milovidov", "node_id": "MDQ6VXNlcjE4NTgxNDg4", "organizations_url": "https://api.github.com/users/alexey-milovidov/orgs", "received_events_url": "https://api.github.com/users/alexey-milovidov/received_events", "repos_url": "https://api.github.com/users/alexey-milovidov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexey-milovidov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexey-milovidov/subscriptions", "type": "User", "url": "https://api.github.com/users/alexey-milovidov", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@alexey-milovidov I followed the code snippet, but am able to successfully execute without any error. Could you please verify if the error persists or there is any additional details.", "@alexey-milovidov else if the problem does not exist please feel free to close this issue.", "```\nmilovidov@milovidov-pc:~/work/datasets$ \n./main.py \nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:05<00:00, 4753.90it/s]\nResolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:00<00:00, 238798.85it/s]\n{'text': \"How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: “Found leg. Seriously.”\\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. And then the journalist took over again.\\n“When weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\\nSo Ray now started to use his phone to follow the storms. He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. 
That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\\n“CLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\\nThe byline? Robert Ray.\\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\\n© 2013 The Associated Press. All rights reserved. Terms and conditions apply. See AP.org for details.\", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%[email protected]/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717}\nterminate called without an active exception\nAborted (core dumped)\nmilovidov@milovidov-pc:~/work/datasets$ \npython3 --version\nPython 3.10.12\n```", "Thank you @alexey-milovidov for the details, was able to reproduce the issue.\n\nFollowing is a preliminary analysis which would help to further isolate the issue:\nOn local: \n- For alternate datasets e.g. `speed/english_quotes_paraphrase` instead of `HuggingFaceFW/fineweb` the code works\n- Multiple calls of `print(next(iter(dataset)))` can be performed successfully before the `terminate` is raised, indicating possibility of issue when connection is closed\n\nOn colab:\n- The above code works properly" ]
2025-05-11T23:05:54
2025-06-23T17:56:02
null
NONE
null
null
null
null
### Describe the bug I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with abort. ### Steps to reproduce the bug 1. `pip install datasets` 2. ``` $ cat main.py #!/usr/bin/env python3 from datasets import load_dataset dataset = load_dataset('HuggingFaceFW/fineweb', split='train', streaming=True) print(next(iter(dataset))) ``` 3. `chmod +x main.py` ``` $ ./main.py README.md: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 43.1k/43.1k [00:00<00:00, 7.04MB/s] Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:05<00:00, 4859.26it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:00<00:00, 54773.56it/s] {'text': "How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: “Found leg. Seriously.”\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. And then the journalist took over again.\n“When weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\nSo Ray now started to use his phone to follow the storms. 
He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\n“CLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\nThe byline? Robert Ray.\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\n© 2013 The Associated Press. All rights reserved. Terms and conditions apply. See AP.org for details.", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%[email protected]/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717} terminate called without an active exception Aborted (core dumped) ``` ### Expected behavior I'm not a proficient Python user, so it might be my own error, but even in that case, the error message should be better. ### Environment info `Successfully installed datasets-3.6.0 dill-0.3.8 hf-xet-1.1.0 huggingface-hub-0.31.1 multiprocess-0.70.16 requests-2.32.3 xxhash-3.5.0` ``` $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=22.04 DISTRIB_CODENAME=jammy DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS" ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7566/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7566/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7565/comments
https://api.github.com/repos/huggingface/datasets/issues/7565/events
https://github.com/huggingface/datasets/pull/7565
3,051,731,207
PR_kwDODunzps6VkFBm
7,565
add check if repo exists for dataset uploading
{ "avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4", "events_url": "https://api.github.com/users/Samoed/events{/privacy}", "followers_url": "https://api.github.com/users/Samoed/followers", "following_url": "https://api.github.com/users/Samoed/following{/other_user}", "gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Samoed", "id": 36135455, "login": "Samoed", "node_id": "MDQ6VXNlcjM2MTM1NDU1", "organizations_url": "https://api.github.com/users/Samoed/orgs", "received_events_url": "https://api.github.com/users/Samoed/received_events", "repos_url": "https://api.github.com/users/Samoed/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Samoed/subscriptions", "type": "User", "url": "https://api.github.com/users/Samoed", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Can you review, please? I don't think that errors in CI are related to my changes" ]
2025-05-09T10:27:00
2025-06-09T14:39:23
null
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7565.diff", "html_url": "https://github.com/huggingface/datasets/pull/7565", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7565.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7565" }
Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error: `Too many requests for https://huggingface.co/datasets/repo/create`. It seems that this issue occurs because the dataset tries to recreate itself every time a split is uploaded. To resolve this, I've added a check to ensure that if the dataset already exists, it won't attempt to recreate it.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7565/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7565/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7564/comments
https://api.github.com/repos/huggingface/datasets/issues/7564/events
https://github.com/huggingface/datasets/pull/7564
3,049,275,226
PR_kwDODunzps6VczLS
7,564
Implementation of iteration over values of a column in an IterableDataset object
{ "avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4", "events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}", "followers_url": "https://api.github.com/users/TopCoder2K/followers", "following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}", "gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TopCoder2K", "id": 47208659, "login": "TopCoder2K", "node_id": "MDQ6VXNlcjQ3MjA4NjU5", "organizations_url": "https://api.github.com/users/TopCoder2K/orgs", "received_events_url": "https://api.github.com/users/TopCoder2K/received_events", "repos_url": "https://api.github.com/users/TopCoder2K/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions", "type": "User", "url": "https://api.github.com/users/TopCoder2K", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "A couple of questions:\r\n1. I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the `en_dataset`\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are `en_dataset` and \"your [???]\" typos? If so, I can fix them in this PR.\r\n2. Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?", "Great ! and chained indexing was easy indeed, thanks :)\r\n\r\nregarding your questions:\r\n\r\n> I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the en_dataset\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are en_dataset and \"your [???]\" typos? If so, I can fix them in this PR.\r\n\r\nOh good catch, both should be fixed indeed. Feel free to open a new PR for those docs fixes\r\n\r\n> Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?\r\n\r\nYep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for `Dataset`", "@lhoestq, thank you for the answers!\r\n\r\n> Yep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for Dataset\r\n\r\n👍, I'll try to add something.\r\n\r\nBy the way, do you have any ideas about why the CI pipelines have failed? Essentially, I've already encountered these problems [here](https://github.com/huggingface/datasets/issues/7381#issuecomment-2863421974).\r\nI think `check_code_quality` has failed due to the usage of `pre-commit`. The problem seems to be the old version of the ruff hook. I've tried `v0.11.8` (the one that was installed with `pip install -e \".[quality]\"`) and `pre-commit` seems to work like `make style` now. However, I don't have any ideas about `pyav` since I don't know what it is...", "I've updated /stream and /access, please check the style and clarity. By the way, I would like to add `IterableDataset.skip` near `IterableDataset.take` to mimic [slicing](https://huggingface.co/docs/datasets/access/#slicing). What do you think?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7564). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-08T14:59:22
2025-05-19T12:15:02
2025-05-19T12:15:02
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7564.diff", "html_url": "https://github.com/huggingface/datasets/pull/7564", "merged_at": "2025-05-19T12:15:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/7564.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7564" }
Refers to [this issue](https://github.com/huggingface/datasets/issues/7381).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7564/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7564/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7563/comments
https://api.github.com/repos/huggingface/datasets/issues/7563/events
https://github.com/huggingface/datasets/pull/7563
3,046,351,253
PR_kwDODunzps6VS0QL
7,563
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T15:18:29
2025-05-07T15:21:05
2025-05-07T15:18:36
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7563.diff", "html_url": "https://github.com/huggingface/datasets/pull/7563", "merged_at": "2025-05-07T15:18:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/7563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7563" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7563/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7562/comments
https://api.github.com/repos/huggingface/datasets/issues/7562/events
https://github.com/huggingface/datasets/pull/7562
3,046,339,430
PR_kwDODunzps6VSxmx
7,562
release: 3.6.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T15:15:13
2025-05-07T15:17:46
2025-05-07T15:15:21
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7562.diff", "html_url": "https://github.com/huggingface/datasets/pull/7562", "merged_at": "2025-05-07T15:15:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/7562.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7562" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7562/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7562/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7561/comments
https://api.github.com/repos/huggingface/datasets/issues/7561/events
https://github.com/huggingface/datasets/issues/7561
3,046,302,653
I_kwDODunzps61kuO9
7,561
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
{ "avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4", "events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}", "followers_url": "https://api.github.com/users/cyanic-selkie/followers", "following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}", "gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyanic-selkie", "id": 32219669, "login": "cyanic-selkie", "node_id": "MDQ6VXNlcjMyMjE5NjY5", "organizations_url": "https://api.github.com/users/cyanic-selkie/orgs", "received_events_url": "https://api.github.com/users/cyanic-selkie/received_events", "repos_url": "https://api.github.com/users/cyanic-selkie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions", "type": "User", "url": "https://api.github.com/users/cyanic-selkie", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-07T15:05:42
2025-06-05T12:41:30
2025-06-05T12:41:30
NONE
null
null
null
null
### Describe the bug When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR. ### Steps to reproduce the bug 1. Create an `IterableDataset`. 2. Call `.repeat(None)` on it. 3. Wrap it in a pytorch `DataLoader` 4. Iterate over it. ### Expected behavior This should work normally. ### Environment info datasets: 3.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7561/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7560/comments
https://api.github.com/repos/huggingface/datasets/issues/7560/events
https://github.com/huggingface/datasets/pull/7560
3,046,265,500
PR_kwDODunzps6VShIc
7,560
fix decoding tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T14:56:14
2025-05-07T14:59:02
2025-05-07T14:56:20
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7560.diff", "html_url": "https://github.com/huggingface/datasets/pull/7560", "merged_at": "2025-05-07T14:56:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/7560.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7560" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7560/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7560/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7559/comments
https://api.github.com/repos/huggingface/datasets/issues/7559/events
https://github.com/huggingface/datasets/pull/7559
3,046,177,078
PR_kwDODunzps6VSNiX
7,559
fix aiohttp import
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T14:31:32
2025-05-07T14:34:34
2025-05-07T14:31:38
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7559.diff", "html_url": "https://github.com/huggingface/datasets/pull/7559", "merged_at": "2025-05-07T14:31:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/7559.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7559" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7559/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7559/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7558/comments
https://api.github.com/repos/huggingface/datasets/issues/7558/events
https://github.com/huggingface/datasets/pull/7558
3,046,066,628
PR_kwDODunzps6VR1gN
7,558
fix regression
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T13:56:03
2025-05-07T13:58:52
2025-05-07T13:56:18
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7558.diff", "html_url": "https://github.com/huggingface/datasets/pull/7558", "merged_at": "2025-05-07T13:56:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/7558.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7558" }
reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition) wanted to apply this change to the original PR but github didn't let me apply it directly - merging this one instead
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7558/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7558/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7557/comments
https://api.github.com/repos/huggingface/datasets/issues/7557/events
https://github.com/huggingface/datasets/pull/7557
3,045,962,076
PR_kwDODunzps6VRenr
7,557
check for empty _formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4", "events_url": "https://api.github.com/users/winglian/events{/privacy}", "followers_url": "https://api.github.com/users/winglian/followers", "following_url": "https://api.github.com/users/winglian/following{/other_user}", "gists_url": "https://api.github.com/users/winglian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/winglian", "id": 381258, "login": "winglian", "node_id": "MDQ6VXNlcjM4MTI1OA==", "organizations_url": "https://api.github.com/users/winglian/orgs", "received_events_url": "https://api.github.com/users/winglian/received_events", "repos_url": "https://api.github.com/users/winglian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/winglian/subscriptions", "type": "User", "url": "https://api.github.com/users/winglian", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting and for the fix ! I tried to reorganize the condition in your PR but didn't get the right permission so. I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind" ]
2025-05-07T13:22:37
2025-05-07T13:57:12
2025-05-07T13:57:12
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7557.diff", "html_url": "https://github.com/huggingface/datasets/pull/7557", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7557.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7557" }
Fixes a regression from #7553 breaking shuffling of iterable datasets <img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7557/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7557/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7556/comments
https://api.github.com/repos/huggingface/datasets/issues/7556/events
https://github.com/huggingface/datasets/pull/7556
3,043,615,210
PR_kwDODunzps6VJlTR
7,556
Add `--merge-pull-request` option for `convert_to_parquet`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts.", "Closing since convert to parquet has been removed... https://github.com/huggingface/datasets/pull/7592#issuecomment-3073053138" ]
2025-05-06T18:05:05
2025-07-18T19:09:10
2025-07-18T19:09:10
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7556.diff", "html_url": "https://github.com/huggingface/datasets/pull/7556", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7556.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7556" }
Closes #7527 Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7556/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7556/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7554/comments
https://api.github.com/repos/huggingface/datasets/issues/7554/events
https://github.com/huggingface/datasets/issues/7554
3,043,089,844
I_kwDODunzps61Yd20
7,554
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
{ "avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4", "events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}", "followers_url": "https://api.github.com/users/sei-eschwartz/followers", "following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}", "gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sei-eschwartz", "id": 50171988, "login": "sei-eschwartz", "node_id": "MDQ6VXNlcjUwMTcxOTg4", "organizations_url": "https://api.github.com/users/sei-eschwartz/orgs", "received_events_url": "https://api.github.com/users/sei-eschwartz/received_events", "repos_url": "https://api.github.com/users/sei-eschwartz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions", "type": "User", "url": "https://api.github.com/users/sei-eschwartz", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```", "Closing in favor of #6832 " ]
2025-05-06T14:43:38
2025-05-07T14:53:45
2025-05-07T14:53:44
NONE
null
null
null
null
### Describe the bug `datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this. ### Steps to reproduce the bug See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing) Or: ```python from datasets import load_dataset dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True) ``` ### Expected behavior I expected only the `test_synth` split to be downloaded and processed. ### Environment info - `datasets` version: 3.5.1 - Platform: Linux-6.1.123+-x86_64-with-glibc2.35 - Python version: 3.11.12 - `huggingface_hub` version: 0.30.2 - PyArrow version: 18.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4", "events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}", "followers_url": "https://api.github.com/users/sei-eschwartz/followers", "following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}", "gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sei-eschwartz", "id": 50171988, "login": "sei-eschwartz", "node_id": "MDQ6VXNlcjUwMTcxOTg4", "organizations_url": "https://api.github.com/users/sei-eschwartz/orgs", "received_events_url": "https://api.github.com/users/sei-eschwartz/received_events", "repos_url": "https://api.github.com/users/sei-eschwartz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions", "type": "User", "url": "https://api.github.com/users/sei-eschwartz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7554/timeline
null
duplicate
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7553/comments
https://api.github.com/repos/huggingface/datasets/issues/7553/events
https://github.com/huggingface/datasets/pull/7553
3,042,953,907
PR_kwDODunzps6VHUNW
7,553
Rebatch arrow iterables before formatted iterable
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Our CI found an issue with this changeset causing a regression with shuffling iterable datasets \r\n<img width=\"884\" alt=\"Screenshot 2025-05-07 at 9 16 52 AM\" src=\"https://github.com/user-attachments/assets/bf7d9c7e-cc14-47da-8da6-d1a345992d7c\" />\r\n" ]
2025-05-06T13:59:58
2025-05-07T13:17:41
2025-05-06T14:03:42
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7553.diff", "html_url": "https://github.com/huggingface/datasets/pull/7553", "merged_at": "2025-05-06T14:03:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/7553.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7553" }
close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7553/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7552/comments
https://api.github.com/repos/huggingface/datasets/issues/7552/events
https://github.com/huggingface/datasets/pull/7552
3,040,258,084
PR_kwDODunzps6U-BUv
7,552
Enable xet in push to hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-05T17:02:09
2025-05-06T12:42:51
2025-05-06T12:42:48
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7552.diff", "html_url": "https://github.com/huggingface/datasets/pull/7552", "merged_at": "2025-05-06T12:42:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/7552.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7552" }
follows https://github.com/huggingface/huggingface_hub/pull/3035 related to https://github.com/huggingface/datasets/issues/7526
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7552/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7552/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7551/comments
https://api.github.com/repos/huggingface/datasets/issues/7551/events
https://github.com/huggingface/datasets/issues/7551
3,038,114,928
I_kwDODunzps61FfRw
7,551
Issue with offline mode and partial dataset cached
{ "avatar_url": "https://avatars.githubusercontent.com/u/353245?v=4", "events_url": "https://api.github.com/users/nrv/events{/privacy}", "followers_url": "https://api.github.com/users/nrv/followers", "following_url": "https://api.github.com/users/nrv/following{/other_user}", "gists_url": "https://api.github.com/users/nrv/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nrv", "id": 353245, "login": "nrv", "node_id": "MDQ6VXNlcjM1MzI0NQ==", "organizations_url": "https://api.github.com/users/nrv/orgs", "received_events_url": "https://api.github.com/users/nrv/received_events", "repos_url": "https://api.github.com/users/nrv/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nrv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrv/subscriptions", "type": "User", "url": "https://api.github.com/users/nrv", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8cdcc21c613'\n\nthen, on the second call, \n```\nconfig_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n```\nthus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"", "Same behavior with version 3.5.1", "Same issue when loading `google/IndicGenBench_flores_in` with `dataset==2.21.0` and `dataset==3.6.0` .", "\n\n\n> It seems the problem comes from builder.py / create_config_id()\n> \n> On the first call, when the cache is empty we have\n> \n> ```\n> config_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n> ```\n> \n> leading to config_id beeing 'default-2935e8cdcc21c613'\n> \n> then, on the second call,\n> \n> ```\n> config_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n> ```\n> \n> thus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"\n\n\nI have identified that the issue indeed lies in the `data_files` within `config_kwargs`. \nThe format and prefix of `data_files` differ depending on whether `HF_HUB_OFFLINE` is set, leading to different final `config_id` values. \nWhen I use other datasets without passing the `data_files` parameter, this issue does not occur.\n\nA possible solution might be to standardize the formatting of `data_files` within the `create_config_id` function." ]
2025-05-04T16:49:37
2025-05-13T03:18:43
null
NONE
null
null
null
null
### Describe the bug Hi, here is an issue related to #4760: after loading a single file from a dataset, it cannot be accessed in offline mode afterwards. ### Steps to reproduce the bug ```python import os # os.environ["HF_HUB_OFFLINE"] = "1" os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx" import datasets dataset_name = "uonlp/CulturaX" data_files = "fr/fr_part_00038.parquet" ds = datasets.load_dataset(dataset_name, split='train', data_files=data_files) print(f"Dataset loaded : {ds}") ``` Once the file has been cached, I rerun with HF_HUB_OFFLINE activated and get this error: ``` ValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e' Available configs in the cache: ['default-2935e8cdcc21c613'] ``` ### Expected behavior Should be able to access the previously cached files ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-5.4.0-215-generic-x86_64-with-glibc2.31 - Python version: 3.12.0 - `huggingface_hub` version: 0.27.0 - PyArrow version: 19.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7551/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7550/comments
https://api.github.com/repos/huggingface/datasets/issues/7550/events
https://github.com/huggingface/datasets/pull/7550
3,037,017,367
PR_kwDODunzps6UzksN
7,550
disable aiohttp dependency for python 3.13t free-threading compat
{ "avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4", "events_url": "https://api.github.com/users/Qubitium/events{/privacy}", "followers_url": "https://api.github.com/users/Qubitium/followers", "following_url": "https://api.github.com/users/Qubitium/following{/other_user}", "gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Qubitium", "id": 417764, "login": "Qubitium", "node_id": "MDQ6VXNlcjQxNzc2NA==", "organizations_url": "https://api.github.com/users/Qubitium/orgs", "received_events_url": "https://api.github.com/users/Qubitium/received_events", "repos_url": "https://api.github.com/users/Qubitium/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions", "type": "User", "url": "https://api.github.com/users/Qubitium", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-03T00:28:18
2025-05-03T00:28:24
2025-05-03T00:28:24
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7550.diff", "html_url": "https://github.com/huggingface/datasets/pull/7550", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7550.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7550" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4", "events_url": "https://api.github.com/users/Qubitium/events{/privacy}", "followers_url": "https://api.github.com/users/Qubitium/followers", "following_url": "https://api.github.com/users/Qubitium/following{/other_user}", "gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Qubitium", "id": 417764, "login": "Qubitium", "node_id": "MDQ6VXNlcjQxNzc2NA==", "organizations_url": "https://api.github.com/users/Qubitium/orgs", "received_events_url": "https://api.github.com/users/Qubitium/received_events", "repos_url": "https://api.github.com/users/Qubitium/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions", "type": "User", "url": "https://api.github.com/users/Qubitium", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7550/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7550/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7549/comments
https://api.github.com/repos/huggingface/datasets/issues/7549/events
https://github.com/huggingface/datasets/issues/7549
3,036,272,015
I_kwDODunzps60-dWP
7,549
TypeError: Couldn't cast array of type string to null on webdataset format dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/117186571?v=4", "events_url": "https://api.github.com/users/narugo1992/events{/privacy}", "followers_url": "https://api.github.com/users/narugo1992/followers", "following_url": "https://api.github.com/users/narugo1992/following{/other_user}", "gists_url": "https://api.github.com/users/narugo1992/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/narugo1992", "id": 117186571, "login": "narugo1992", "node_id": "U_kgDOBvwgCw", "organizations_url": "https://api.github.com/users/narugo1992/orgs", "received_events_url": "https://api.github.com/users/narugo1992/received_events", "repos_url": "https://api.github.com/users/narugo1992/repos", "site_admin": false, "starred_url": "https://api.github.com/users/narugo1992/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/narugo1992/subscriptions", "type": "User", "url": "https://api.github.com/users/narugo1992", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n \"_type\": \"Image\"\n },\n \"json\": {\n \"id\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"width\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"height\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"rating\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"general_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"character_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n }\n }\n },\n \"builder_name\": \"webdataset\",\n \"config_name\": \"default\",\n \"version\": {\n \"version_str\": \"1.0.0\",\n \"description\": null,\n \"major\": 1,\n \"minor\": 0,\n \"patch\": 0\n }\n }\n}\n\n```\n\nwill close this issue if no further issues found" ]
2025-05-02T15:18:07
2025-05-02T15:37:05
null
NONE
null
null
null
null
### Describe the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` got ``` File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 255, in pyarrow.lib.array File "pyarrow/array.pxi", line 117, in pyarrow.lib._handle_arrow_array_protocol File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 258, in __arrow_array__ out = cast_array_to_feature( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2006, in cast_array_to_feature arrays = [ File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2007, in <listcomp> _c(array.field(name) if name in array_fields else null_array, subfeature) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2066, in cast_array_to_feature casted_array_values = _c(array.values, feature.feature) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2103, in cast_array_to_feature return array_cast( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1949, in array_cast raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}") TypeError: Couldn't cast array of type string to null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/load.py", line 2084, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 925, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1001, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1487, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` `datasets==3.5.1` whats wrong its inner json structure is like ```yaml features: - name: "image" dtype: "image" - name: "json.id" dtype: "string" - name: "json.width" dtype: "int32" - name: "json.height" dtype: "int32" - name: "json.rating" sequence: dtype: "string" - name: "json.general_tags" sequence: dtype: "string" - name: 
"json.character_tags" sequence: dtype: "string" ``` i'm 100% sure all the jsons satisfies the abovementioned format. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` ### Expected behavior load the dataset successfully, with the abovementioned json format and webp images ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 3.5.1 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - `huggingface_hub` version: 0.30.2 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7549/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7549/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7548/comments
https://api.github.com/repos/huggingface/datasets/issues/7548/events
https://github.com/huggingface/datasets/issues/7548
3,035,568,851
I_kwDODunzps607xrT
7,548
Python 3.13t (free threads) Compat
{ "avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4", "events_url": "https://api.github.com/users/Qubitium/events{/privacy}", "followers_url": "https://api.github.com/users/Qubitium/followers", "following_url": "https://api.github.com/users/Qubitium/following{/other_user}", "gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Qubitium", "id": 417764, "login": "Qubitium", "node_id": "MDQ6VXNlcjQxNzc2NA==", "organizations_url": "https://api.github.com/users/Qubitium/orgs", "received_events_url": "https://api.github.com/users/Qubitium/received_events", "repos_url": "https://api.github.com/users/Qubitium/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions", "type": "User", "url": "https://api.github.com/users/Qubitium", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will never be used. Text datasets that are not huge, relative to machine spec, and non-multi-modal datasets. \n\nGetting `aiohttp` fixed for `free threading` appeals to be a large task that is not going to be get done in a quick manner. It may be faster to make `aiohttp` optional and not forced build. Otherwise, testing python 3.13t is going to be a painful install. \n\nI have created a fork/branch that temp disables aiohttp import so non-streaming usage of datasets can be tested under python 3.13.t:\n\nhttps://github.com/Qubitium/datasets/tree/disable-aiohttp-depend", "We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?", "> We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?\n\nI am testing transformers + dataset (simple text dataset usage) + GPTQModel for quantization and there were no issues encountered with python 3.13t but my test-case is the base-bare minimal test-case since dataset is not sharded, fully in-memory, text-only, small, not used for training. \n\nOn the technical side, dataset is almost always 100% read-only so there should be zero locking issues but I have not checked the dataset internals so there may be cases where streaming, sharding, and/or cases where datset memory/states are updated needs a per dataset `threading.lock`. \n\nSo yes, making `aiohttp` optional will definitely solve my issue. There is also a companion (datasets and tokenizers usually go hand-in-hand) issue with `Tokenizers` as well but that's simple enough with package version update: https://github.com/huggingface/tokenizers/pull/1774\n", "Ok I see ! Anyway feel free to edit the setup.py to move aiohttp to optional (tests) dependencies and open a PR, we can run the CI to see if it's ok as a change", "actually there is https://github.com/huggingface/datasets/pull/7294/ already, let's see if we can merge it", "wouldn't it be the good reason to switch to `httpx`? 😄 (would require slightly more work, short term agree with https://github.com/huggingface/datasets/issues/7548#issuecomment-2854405923)", "I made `aiohttp` optional in `datasets` 3.6.0 :)\n\n`datasets` doesn't use it directly anyway, it's only used when someone wants to download files from HTTP URLs outside of HF" ]
2025-05-02T09:20:09
2025-05-12T15:11:32
null
NONE
null
null
null
null
### Describe the bug Cannot install `datasets` under `python 3.13t` due to dependency on `aiohttp` and aiohttp cannot be built for free-threading python. The `free threading` support issue in `aiothttp` is active since August 2024! Ouch. https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784 `pip install dataset` ```bash (vm313t) root@gpu-base:~/GPTQModel# pip install datasets WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/datasets/ Collecting datasets Using cached datasets-3.5.1-py3-none-any.whl.metadata (19 kB) Requirement already satisfied: filelock in /root/vm313t/lib/python3.13t/site-packages (from datasets) (3.18.0) Requirement already satisfied: numpy>=1.17 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.2.5) Collecting pyarrow>=15.0.0 (from datasets) Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl.metadata (3.3 kB) Collecting dill<0.3.9,>=0.3.0 (from datasets) Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB) Collecting pandas (from datasets) Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB) Requirement already satisfied: requests>=2.32.2 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.32.3) Requirement already satisfied: tqdm>=4.66.3 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (4.67.1) Collecting xxhash (from datasets) Using cached xxhash-3.5.0-cp313-cp313t-linux_x86_64.whl Collecting multiprocess<0.70.17 (from datasets) Using cached multiprocess-0.70.16-py312-none-any.whl.metadata (7.2 kB) Collecting fsspec<=2025.3.0,>=2023.1.0 (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets) Using cached fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB) Collecting aiohttp (from datasets) Using cached aiohttp-3.11.18.tar.gz (7.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Requirement already satisfied: huggingface-hub>=0.24.0 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (0.30.2) Requirement already satisfied: packaging in /root/vm313t/lib/python3.13t/site-packages (from datasets) (25.0) Requirement already satisfied: pyyaml>=5.1 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (6.0.2) Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->datasets) Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB) Collecting aiosignal>=1.1.2 (from aiohttp->datasets) Using cached aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB) Collecting attrs>=17.3.0 (from aiohttp->datasets) Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB) Collecting frozenlist>=1.1.1 (from aiohttp->datasets) Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB) Collecting multidict<7.0,>=4.5 (from aiohttp->datasets) Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.3 kB) Collecting propcache>=0.2.0 (from aiohttp->datasets) Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB) Collecting yarl<2.0,>=1.17.0 (from aiohttp->datasets) Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (72 kB) Requirement already satisfied: idna>=2.0 in /root/vm313t/lib/python3.13t/site-packages (from yarl<2.0,>=1.17.0->aiohttp->datasets) (3.10) Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/vm313t/lib/python3.13t/site-packages (from huggingface-hub>=0.24.0->datasets) (4.13.2) Requirement already satisfied: charset-normalizer<4,>=2 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (3.4.1) Requirement already satisfied: urllib3<3,>=1.21.1 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2.4.0) Requirement already satisfied: certifi>=2017.4.17 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2025.4.26) Collecting python-dateutil>=2.8.2 (from pandas->datasets) Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB) Collecting pytz>=2020.1 (from pandas->datasets) Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB) Collecting tzdata>=2022.7 (from pandas->datasets) Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB) Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas->datasets) Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB) Using cached datasets-3.5.1-py3-none-any.whl (491 kB) Using cached dill-0.3.8-py3-none-any.whl (116 kB) Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB) Using cached multiprocess-0.70.16-py312-none-any.whl (146 kB) Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (220 kB) Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (404 kB) Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB) Using cached aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB) Using cached attrs-25.3.0-py3-none-any.whl (63 kB) Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (385 kB) Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (282 kB) Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl (42.2 MB) Using cached 
pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.9 MB) Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB) Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB) Using cached six-1.17.0-py2.py3-none-any.whl (11 kB) Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB) Building wheels for collected packages: aiohttp Building wheel for aiohttp (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for aiohttp (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [156 lines of output] ********************* * Accelerated build * ********************* /tmp/pip-build-env-wjqi8_7w/overlay/lib/python3.13t/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated. !! ******************************************************************************** Please consider removing the following classifiers in favor of a SPDX license expression: License :: OSI Approved :: Apache Software License See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! self._finalize_license_expression() running bdist_wheel running build running build_py creating build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_ws.py -> 
build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/compression_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-313t/aiohttp creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/models.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_c.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_py.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket running egg_info writing aiohttp.egg-info/PKG-INFO writing dependency_links to aiohttp.egg-info/dependency_links.txt writing requirements to aiohttp.egg-info/requires.txt writing top-level names to aiohttp.egg-info/top_level.txt reading manifest file 'aiohttp.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'aiohttp' anywhere in distribution warning: no files found matching '*.pyi' anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.lib' found anywhere in distribution warning: no previously-included files matching '*.dll' found anywhere in distribution warning: no previously-included files matching '*.a' found anywhere in distribution warning: no previously-included files matching '*.obj' found anywhere in distribution warning: no previously-included files found matching 'aiohttp/*.html' no previously-included directories found matching 'docs/_build' adding license file 'LICENSE.txt' writing manifest file 'aiohttp.egg-info/SOURCES.txt' copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_http_parser.pyx -> 
build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-313t/aiohttp creating build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/_websocket/mask.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/mask.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_c.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/mask.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/mask.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/reader_c.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash running build_ext building 'aiohttp._websocket.mask' extension creating build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fPIC -I/root/vm313t/include -I/usr/include/python3.13t -c aiohttp/_websocket/mask.c -o build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket/mask.o aiohttp/_websocket/mask.c:1864:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 1864 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c: In function ‘__pyx_f_7aiohttp_10_websocket_4mask__websocket_mask_cython’: aiohttp/_websocket/mask.c:2905:3: warning: ‘Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations] 2905 | if (unlikely(__pyx_assertions_enabled())) { | ^~ In file included from /usr/include/python3.13t/Python.h:76, from aiohttp/_websocket/mask.c:16: /usr/include/python3.13t/cpython/pydebug.h:13:37: note: declared here 13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag; | ^~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c: At top level: aiohttp/_websocket/mask.c:4846:69: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 4846 | static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c:4891:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 
4891 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c: In function ‘__Pyx_CyFunction_CallAsMethod’: aiohttp/_websocket/mask.c:5580:6: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c:1954:45: warning: initialization of ‘int’ from ‘vectorcallfunc’ {aka ‘struct _object * (*)(struct _object *, struct _object * const*, long unsigned int, struct _object *)’} makes integer from pointer without a cast [-Wint-conversion] 1954 | #define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) | ^ aiohttp/_websocket/mask.c:5580:32: note: in expansion of macro ‘__Pyx_CyFunction_func_vectorcall’ 5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c:5583:16: warning: implicit declaration of function ‘__Pyx_PyVectorcall_FastCallDict’ [-Wimplicit-function-declaration] 5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c:5583:16: warning: returning ‘int’ from a function with return type ‘PyObject *’ {aka ‘struct _object *’} makes pointer from integer without a cast [-Wint-conversion] 5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Failed to build installable wheels for some pyproject.toml based projects (aiohttp) ``` ### Steps to reproduce the bug See above ### Expected behavior Install ### Environment info Ubuntu 24.04
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7548/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7547/comments
https://api.github.com/repos/huggingface/datasets/issues/7547/events
https://github.com/huggingface/datasets/pull/7547
3,034,830,291
PR_kwDODunzps6UsTuF
7,547
Avoid global umask for setting file mode.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4", "events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-clancy/followers", "following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-clancy", "id": 1282383, "login": "ryan-clancy", "node_id": "MDQ6VXNlcjEyODIzODM=", "organizations_url": "https://api.github.com/users/ryan-clancy/orgs", "received_events_url": "https://api.github.com/users/ryan-clancy/received_events", "repos_url": "https://api.github.com/users/ryan-clancy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-clancy", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-01T22:24:24
2025-05-06T13:05:00
2025-05-06T13:05:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7547.diff", "html_url": "https://github.com/huggingface/datasets/pull/7547", "merged_at": "2025-05-06T13:05:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/7547.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7547" }
This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the `temp_file` instead. This fixes https://github.com/huggingface/datasets/issues/7536.
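A rough sketch of the approach described above, assuming the fix reads the mode that was actually applied to the temporary file and re-applies it after the move; this is not the exact patch.

```python
# Sketch only: preserve the temp file's observed permissions across shutil.move,
# which may cross filesystems and therefore drop them.
import os
import shutil
import stat

def move_with_mode(temp_file: str, cache_path: str) -> None:
    mode = stat.S_IMODE(os.stat(temp_file).st_mode)  # permissions the temp file ended up with
    shutil.move(temp_file, cache_path)               # may cross filesystems
    os.chmod(cache_path, mode)                       # re-apply them to the destination
```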
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7547/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7547/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7546/comments
https://api.github.com/repos/huggingface/datasets/issues/7546/events
https://github.com/huggingface/datasets/issues/7546
3,034,018,298
I_kwDODunzps6013H6
7,546
Large memory use when loading large datasets to a ZFS pool
{ "avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4", "events_url": "https://api.github.com/users/FredHaa/events{/privacy}", "followers_url": "https://api.github.com/users/FredHaa/followers", "following_url": "https://api.github.com/users/FredHaa/following{/other_user}", "gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredHaa", "id": 6875946, "login": "FredHaa", "node_id": "MDQ6VXNlcjY4NzU5NDY=", "organizations_url": "https://api.github.com/users/FredHaa/orgs", "received_events_url": "https://api.github.com/users/FredHaa/received_events", "repos_url": "https://api.github.com/users/FredHaa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions", "type": "User", "url": "https://api.github.com/users/FredHaa", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?", "Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.", "I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though", "This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive." ]
2025-05-01T14:43:47
2025-05-13T13:30:09
2025-05-13T13:29:53
NONE
null
null
null
null
### Describe the bug When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets. ### Steps to reproduce the bug `uv run --with datasets==3.5.1 python` ```python from datasets import load_dataset load_dataset('MLCommons/peoples_speech', 'clean') load_dataset('mozilla-foundation/common_voice_17_0', 'en') ``` ### Expected behavior I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded. ### Environment info I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4", "events_url": "https://api.github.com/users/FredHaa/events{/privacy}", "followers_url": "https://api.github.com/users/FredHaa/followers", "following_url": "https://api.github.com/users/FredHaa/following{/other_user}", "gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredHaa", "id": 6875946, "login": "FredHaa", "node_id": "MDQ6VXNlcjY4NzU5NDY=", "organizations_url": "https://api.github.com/users/FredHaa/orgs", "received_events_url": "https://api.github.com/users/FredHaa/received_events", "repos_url": "https://api.github.com/users/FredHaa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions", "type": "User", "url": "https://api.github.com/users/FredHaa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7546/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7545/comments
https://api.github.com/repos/huggingface/datasets/issues/7545/events
https://github.com/huggingface/datasets/issues/7545
3,031,617,547
I_kwDODunzps60stAL
7,545
Networked Pull Through Cache
{ "avatar_url": "https://avatars.githubusercontent.com/u/8764173?v=4", "events_url": "https://api.github.com/users/wrmedford/events{/privacy}", "followers_url": "https://api.github.com/users/wrmedford/followers", "following_url": "https://api.github.com/users/wrmedford/following{/other_user}", "gists_url": "https://api.github.com/users/wrmedford/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wrmedford", "id": 8764173, "login": "wrmedford", "node_id": "MDQ6VXNlcjg3NjQxNzM=", "organizations_url": "https://api.github.com/users/wrmedford/orgs", "received_events_url": "https://api.github.com/users/wrmedford/received_events", "repos_url": "https://api.github.com/users/wrmedford/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wrmedford/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wrmedford/subscriptions", "type": "User", "url": "https://api.github.com/users/wrmedford", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2025-04-30T15:16:33
2025-04-30T15:16:33
null
NONE
null
null
null
null
### Feature request Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service. Enable a three-tier cache lookup for datasets: 1. Local on-disk cache 2. Configurable network cache proxy 3. Official Hugging Face Hub ### Motivation - Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets. - Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs. - Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency. - Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/ ### Your contribution I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype. I have limited bandwidth so I would be looking for collaborators if anyone else is interested.
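An illustrative sketch of the proposed three-tier lookup; HF_DATASET_CACHE_NETWORK_LOCATION and every function below are hypothetical and only mirror the proposal, they are not an existing datasets API.

```python
import os

def resolve_dataset(name, local_lookup, network_fetch, hub_fetch):
    """Hypothetical three-tier resolution: local cache -> network cache proxy -> Hub."""
    path = local_lookup(name)
    if path is not None:
        return path                                               # 1. local on-disk cache
    proxy = os.environ.get("HF_DATASET_CACHE_NETWORK_LOCATION")   # proposed setting
    if proxy:
        path = network_fetch(proxy, name)
        if path is not None:
            return path                                           # 2. shared pull-through cache
    return hub_fetch(name)                                        # 3. official Hugging Face Hub
```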
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7545/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7545/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7544/comments
https://api.github.com/repos/huggingface/datasets/issues/7544/events
https://github.com/huggingface/datasets/pull/7544
3,027,024,285
PR_kwDODunzps6UR4Nn
7,544
Add try_original_type to DatasetDict.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4", "events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}", "followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers", "following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}", "gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yoshitomo-matsubara", "id": 11156001, "login": "yoshitomo-matsubara", "node_id": "MDQ6VXNlcjExMTU2MDAx", "organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs", "received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events", "repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions", "type": "User", "url": "https://api.github.com/users/yoshitomo-matsubara", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Sure! I just committed the changes", "@lhoestq \r\nLet me know if there are other things to do before merge or other places to add `try_original_type` argument " ]
2025-04-29T04:39:44
2025-05-05T14:42:49
2025-05-05T14:42:49
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7544.diff", "html_url": "https://github.com/huggingface/datasets/pull/7544", "merged_at": "2025-05-05T14:42:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/7544.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7544" }
This PR resolves #7472 for DatasetDict The previously merged PR #7483 added `try_original_type` to ArrowDataset, but DatasetDict misses `try_original_type` Cc: @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7544/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7544/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7543/comments
https://api.github.com/repos/huggingface/datasets/issues/7543/events
https://github.com/huggingface/datasets/issues/7543
3,026,867,706
I_kwDODunzps60alX6
7,543
The memory-disk mapping failure issue of the map function (resolved, but with some suggestions)
{ "avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4", "events_url": "https://api.github.com/users/jxma20/events{/privacy}", "followers_url": "https://api.github.com/users/jxma20/followers", "following_url": "https://api.github.com/users/jxma20/following{/other_user}", "gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxma20", "id": 76415358, "login": "jxma20", "node_id": "MDQ6VXNlcjc2NDE1MzU4", "organizations_url": "https://api.github.com/users/jxma20/orgs", "received_events_url": "https://api.github.com/users/jxma20/received_events", "repos_url": "https://api.github.com/users/jxma20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxma20/subscriptions", "type": "User", "url": "https://api.github.com/users/jxma20", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-04-29T03:04:59
2025-04-30T02:22:17
2025-04-30T02:22:17
NONE
null
null
null
null
### Describe the bug ## bug When the map function processes a large dataset, it is supposed to spill the data to a cache file on disk and release the corresponding memory once the data has been written. As a result, processing a large dataset with map should only keep roughly `writer_batch_size` examples in memory at a time. However, when I used the map function, memory usage did not actually go down. At first I suspected a memory leak in the program, meaning the memory was not released after the data was written to the cache. But when I used a Linux command to check for recently modified files while the program was running, I found that no new files had been created or modified, which means the dataset was never written to the disk cache at all. ## bug solved After experimenting with the map function's parameters, I discovered the `cache_file_name` parameter. By setting it, the cache file can be written to a directory of my choice, and after making this change the cache file finally appeared. This surprised me at first, but it suggested that writing the cache file to the default location had silently failed, possibly because I do not have root privileges. So I dug into the source code of the map function to find out where the cache file is stored by default and eventually found the function `def _get_cache_file_path(self, fingerprint):`, which generates the cache file path automatically. Its output was `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being stored, which in turn prevented the memory from being released. Changing the cache location to a folder where I have write access resolved the issue. ### Steps to reproduce the bug My code: `train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])` ### Expected behavior Although my bug has been resolved, it still took me nearly a week to search for relevant information and debug the program. If a warning or error message about insufficient write permissions for the cache file were emitted during execution, I could have identified the cause much more quickly, so I hope this aspect can be improved. I am documenting this bug here so that others who encounter a similar issue can resolve it quickly. ### Environment info python: 3.10.15 datasets: 3.5.0
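Below is a minimal, self-contained sketch of the workaround described in the report above: pointing `map`'s `cache_file_name` at a directory the current user can write to, so the processed data is spilled to an Arrow cache file on disk instead of staying in memory. The dataset, mapping function and temporary directory are illustrative and not taken from the original report.

```python
import os
import tempfile

from datasets import Dataset

ds = Dataset.from_dict({"n": list(range(10))})

# Any directory the current user can write to; the default cache location may not be writable.
cache_dir = tempfile.mkdtemp()

ds = ds.map(
    lambda batch: {"n_squared": [n ** 2 for n in batch["n"]]},
    batched=True,
    cache_file_name=os.path.join(cache_dir, "cache-train.arrow"),  # spill results to disk here
)
print(ds[:3])
```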
{ "avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4", "events_url": "https://api.github.com/users/jxma20/events{/privacy}", "followers_url": "https://api.github.com/users/jxma20/followers", "following_url": "https://api.github.com/users/jxma20/following{/other_user}", "gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxma20", "id": 76415358, "login": "jxma20", "node_id": "MDQ6VXNlcjc2NDE1MzU4", "organizations_url": "https://api.github.com/users/jxma20/orgs", "received_events_url": "https://api.github.com/users/jxma20/received_events", "repos_url": "https://api.github.com/users/jxma20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxma20/subscriptions", "type": "User", "url": "https://api.github.com/users/jxma20", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7543/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7542/comments
https://api.github.com/repos/huggingface/datasets/issues/7542/events
https://github.com/huggingface/datasets/pull/7542
3,025,054,630
PR_kwDODunzps6ULHxo
7,542
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-28T14:03:48
2025-04-28T14:08:37
2025-04-28T14:04:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7542.diff", "html_url": "https://github.com/huggingface/datasets/pull/7542", "merged_at": "2025-04-28T14:04:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/7542.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7542" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7542/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7542/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7541/comments
https://api.github.com/repos/huggingface/datasets/issues/7541/events
https://github.com/huggingface/datasets/pull/7541
3,025,045,919
PR_kwDODunzps6ULF7F
7,541
release: 3.5.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7541). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-28T14:00:59
2025-04-28T14:03:38
2025-04-28T14:01:54
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7541.diff", "html_url": "https://github.com/huggingface/datasets/pull/7541", "merged_at": "2025-04-28T14:01:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/7541.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7541" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7541/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7540/comments
https://api.github.com/repos/huggingface/datasets/issues/7540/events
https://github.com/huggingface/datasets/pull/7540
3,024,862,966
PR_kwDODunzps6UKe6T
7,540
support pyarrow 20
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-28T13:01:11
2025-04-28T13:23:53
2025-04-28T13:23:52
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7540.diff", "html_url": "https://github.com/huggingface/datasets/pull/7540", "merged_at": "2025-04-28T13:23:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7540.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7540" }
fix ``` TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7540/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7539/comments
https://api.github.com/repos/huggingface/datasets/issues/7539/events
https://github.com/huggingface/datasets/pull/7539
3,023,311,163
PR_kwDODunzps6UFQ0W
7,539
Fix IterableDataset state_dict shard_example_idx counting
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7539). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi ! FYI I made a PR to fix https://github.com/huggingface/datasets/issues/7538 and it also fixed https://github.com/huggingface/datasets/issues/7475, so if I'm not mistaken this PR is not needed anymore" ]
2025-04-27T20:41:18
2025-05-06T14:24:25
2025-05-06T14:24:24
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7539.diff", "html_url": "https://github.com/huggingface/datasets/pull/7539", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7539.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7539" }
# Fix IterableDataset's state_dict shard_example_idx reporting ## Description This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed. The issue is in the `_iter_arrow` method of the `ArrowExamplesIterable` class where it updates the `shard_example_idx` state by the full length of the batch (`len(pa_table)`) even when we're only partway through processing the examples. ## Changes Modified the `_iter_arrow` method of `ArrowExamplesIterable` to: 1. Track the actual number of examples processed 2. Only increment the `shard_example_idx` by the number of examples actually yielded 3. Handle partial batches correctly ## How to Test I've included a simple test case that demonstrates the fix: ```python from datasets import Dataset # Create a test dataset ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1) # Iterate through part of the dataset for idx, example in enumerate(ds): print(example) if idx == 2: # Stop after 3 examples (0, 1, 2) state_dict = ds.state_dict() print("Checkpoint state_dict:", state_dict) break # Before the fix, the output would show shard_example_idx: 6 # After the fix, it shows shard_example_idx: 3, correctly reflecting the 3 processed examples ``` ## Implementation Details 1. Added logic to track the number of examples actually seen in the current shard 2. Modified the state update to only count examples actually yielded 3. Improved handling of partial batches and skipped examples This fix ensures that checkpointing and resuming works correctly with exactly the expected number of examples, rather than skipping ahead to the end of the batch.
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7539/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7539/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7538/comments
https://api.github.com/repos/huggingface/datasets/issues/7538/events
https://github.com/huggingface/datasets/issues/7538
3,023,280,056
I_kwDODunzps60M5e4
7,538
`IterableDataset` drops samples when resuming from a checkpoint
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable" ]
2025-04-27T19:34:49
2025-05-06T14:04:05
2025-05-06T14:03:42
COLLABORATOR
null
null
null
null
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying examples iterable supports `iter_arrow` and needs to be formatted. In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) by the whole batch size before the batch is fully consumed, which leads to a portion of samples being skipped if iteration of the parent iterable is stopped mid-batch. Perhaps one way to avoid this would be to signal to the child iterable which samples within the chunk have been processed by the parent and which have not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but that is straightforward to implement. The following is a minimal reproducer of the bug: ```python from datasets import Dataset from datasets.distributed import split_dataset_by_node ds = Dataset.from_dict({"n": list(range(24))}) ds = ds.to_iterable_dataset(num_shards=4) world_size = 4 rank = 0 ds_rank = split_dataset_by_node(ds, rank, world_size) it = iter(ds_rank) examples = [] for idx, example in enumerate(it): examples.append(example) if idx == 2: state_dict = ds_rank.state_dict() break ds_rank.load_state_dict(state_dict) it_resumed = iter(ds_rank) examples_resumed = examples[:] for example in it: examples.append(example) for example in it_resumed: examples_resumed.append(example) print("ORIGINAL ITER EXAMPLES:", examples) print("RESUMED ITER EXAMPLES:", examples_resumed) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7538/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7537/comments
https://api.github.com/repos/huggingface/datasets/issues/7537/events
https://github.com/huggingface/datasets/issues/7537
3,018,792,966
I_kwDODunzps6z7yAG
7,537
`datasets.map(..., num_proc=4)` multi-processing fails
{ "avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4", "events_url": "https://api.github.com/users/faaany/events{/privacy}", "followers_url": "https://api.github.com/users/faaany/followers", "following_url": "https://api.github.com/users/faaany/following{/other_user}", "gists_url": "https://api.github.com/users/faaany/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/faaany", "id": 24477841, "login": "faaany", "node_id": "MDQ6VXNlcjI0NDc3ODQx", "organizations_url": "https://api.github.com/users/faaany/orgs", "received_events_url": "https://api.github.com/users/faaany/received_events", "repos_url": "https://api.github.com/users/faaany/repos", "site_admin": false, "starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faaany/subscriptions", "type": "User", "url": "https://api.github.com/users/faaany", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "related: https://github.com/huggingface/datasets/issues/7510\n\nwe need to do more tests to see if latest `dill` is deterministic" ]
2025-04-25T01:53:47
2025-05-06T13:12:08
null
NONE
null
null
null
null
The following code fails in python 3.11+ ```python tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) ``` Error log: ```bash Traceback (most recent call last): File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.12/dist-packages/multiprocess/pool.py", line 114, in worker task = get() ^^^^^ File "/usr/local/lib/python3.12/dist-packages/multiprocess/queues.py", line 371, in get return _ForkingPickler.loads(res) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 327, in loads return load(file, ignore, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 313, in load return Unpickler(file, ignore=ignore, **kwds).load() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 525, in load obj = StockUnpickler.load(self) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 659, in _create_code if len(args) == 16: return CodeType(*args) ^^^^^^^^^^^^^^^ TypeError: code() argument 13 must be str, not int ``` After upgrading dill to the latest 0.4.0 with "pip install --upgrade dill", it can pass. So it seems that there is a compatibility issue between dill 0.3.4 and python 3.11+, because python 3.10 works fine. Is the dill deterministic issue mentioned in https://github.com/huggingface/datasets/blob/main/setup.py#L117) still valid? Any plan to unpin?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7537/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7537/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7536/comments
https://api.github.com/repos/huggingface/datasets/issues/7536/events
https://github.com/huggingface/datasets/issues/7536
3,018,425,549
I_kwDODunzps6z6YTN
7,536
[Errno 13] Permission denied: on `.incomplete` file
{ "avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4", "events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-clancy/followers", "following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-clancy", "id": 1282383, "login": "ryan-clancy", "node_id": "MDQ6VXNlcjEyODIzODM=", "organizations_url": "https://api.github.com/users/ryan-clancy/orgs", "received_events_url": "https://api.github.com/users/ryan-clancy/received_events", "repos_url": "https://api.github.com/users/ryan-clancy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-clancy", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)", "> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?", "Yes for sure", "@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?" ]
2025-04-24T20:52:45
2025-05-06T13:05:01
2025-05-06T13:05:01
CONTRIBUTOR
null
null
null
null
### Describe the bug When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS. It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with 0 changes will usually succeed. Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)? ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset builder_instance.download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare self._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare super()._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) .venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators downloaded_files = dl_manager.download(files) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download downloaded_path_or_paths = map_nested( .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched return thread_map( .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) .venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__ for obj in iterable: ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator yield _result_or_cancel(fs.pop()) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel return fut.result(timeout) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result return self.__get_result() ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result raise self._exception ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run result = self.fn(*self.args, **self.kwargs) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single out = cached_path(url_or_filename, download_config=download_config) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path output_path = get_from_cache( 
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get fs.get_file(path, temp_file.name, callback=callback) .venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper return sync(self.loop, func, *args, **kwargs) .venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync raise return_result .venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner result[0] = await coro _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70> rpath = '<my-bucket>/<my-prefix>/img_1.jpg' lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0> version_id = None, kwargs = {} _open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120> body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0> content_length = 521923, failed_reads = 0, bytes_read = 0 async def _get_file( self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs ): if os.path.isdir(lpath): return bucket, key, vers = self.split_path(rpath) async def _open_file(range: int): kw = self.req_kw.copy() if range: kw["Range"] = f"bytes={range}-" resp = await self._call_s3( "get_object", Bucket=bucket, Key=key, **version_id_kw(version_id or vers), **kw, ) return resp["Body"], resp.get("ContentLength", None) body, content_length = await _open_file(range=0) callback.set_size(content_length) failed_reads = 0 bytes_read = 0 try: > with open(lpath, "wb") as f0: E PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' .venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError ``` ### Steps to reproduce the bug I believe this is a race condition and cannot reliably re-produce it, but it happens fairly frequently in our GitHub Actions tests and can also be re-produced (with lesser frequency) on cloud VMs. ### Expected behavior The dataset loads properly with no permission denied error. ### Environment info - `datasets` version: 3.5.0 - Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.12.10 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
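As a follow-up to the umask hypothesis above, here is a minimal sketch of one possible mitigation: serializing the umask manipulation and file creation behind a lock so that concurrent download threads cannot observe each other's temporary umask. The function and lock names are illustrative, and this is only the idea discussed in the comments, not the fix that was actually merged.

```python
import os
import threading

_umask_lock = threading.Lock()  # hypothetical module-level lock shared by all download threads


def create_file_with_umask(path: str, umask: int = 0o022):
    """Open `path` for writing while keeping the process-wide umask change atomic."""
    with _umask_lock:
        previous = os.umask(umask)      # os.umask is process-global, hence the lock
        try:
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o666)
        finally:
            os.umask(previous)          # always restore the previous umask
    return os.fdopen(fd, "wb")
```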
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7536/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7535/comments
https://api.github.com/repos/huggingface/datasets/issues/7535/events
https://github.com/huggingface/datasets/pull/7535
3,018,289,872
PR_kwDODunzps6T0lm3
7,535
Change dill version in requirements
{ "avatar_url": "https://avatars.githubusercontent.com/u/98061329?v=4", "events_url": "https://api.github.com/users/JGrel/events{/privacy}", "followers_url": "https://api.github.com/users/JGrel/followers", "following_url": "https://api.github.com/users/JGrel/following{/other_user}", "gists_url": "https://api.github.com/users/JGrel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JGrel", "id": 98061329, "login": "JGrel", "node_id": "U_kgDOBdhMEQ", "organizations_url": "https://api.github.com/users/JGrel/orgs", "received_events_url": "https://api.github.com/users/JGrel/received_events", "repos_url": "https://api.github.com/users/JGrel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JGrel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JGrel/subscriptions", "type": "User", "url": "https://api.github.com/users/JGrel", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7535). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-24T19:44:28
2025-05-19T14:51:29
null
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7535.diff", "html_url": "https://github.com/huggingface/datasets/pull/7535", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7535" }
Change dill version to >=0.3.9,<0.4.5 and check for errors
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7535/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7535/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7534/comments
https://api.github.com/repos/huggingface/datasets/issues/7534/events
https://github.com/huggingface/datasets/issues/7534
3,017,259,407
I_kwDODunzps6z17mP
7,534
TensorFlow RaggedTensor Support (batch-level)
{ "avatar_url": "https://avatars.githubusercontent.com/u/7490199?v=4", "events_url": "https://api.github.com/users/Lundez/events{/privacy}", "followers_url": "https://api.github.com/users/Lundez/followers", "following_url": "https://api.github.com/users/Lundez/following{/other_user}", "gists_url": "https://api.github.com/users/Lundez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lundez", "id": 7490199, "login": "Lundez", "node_id": "MDQ6VXNlcjc0OTAxOTk=", "organizations_url": "https://api.github.com/users/Lundez/orgs", "received_events_url": "https://api.github.com/users/Lundez/received_events", "repos_url": "https://api.github.com/users/Lundez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lundez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lundez/subscriptions", "type": "User", "url": "https://api.github.com/users/Lundez", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Keras doesn't support other inputs other than tf.data.Dataset objects ? it's a bit painful to have to support and maintain this kind of integration\n\nIs there a way to use a `datasets.Dataset` with outputs formatted as tensors / ragged tensors instead ? like in https://huggingface.co/docs/datasets/use_with_tensorflow#dataset-format", "I'll give it a try when I get the time. But quite sure I already tested the `with_format` approach.\n\nKeras when using TF as backend converts the datasets into `tf.data.Dataset`, much like you do.", "Hi @Lundez! Thanks for raising this — very valid point, especially for Object Detection use-cases.\n\nYou're right that np_get_batch currently enforces numpy batching, which breaks RaggedTensor support due to its inability to handle nested structures. This likely needs a redesign to allow TensorFlow-native batching in specific formats.\n\nBefore diving into a code change though, could you confirm:\n\nDoes `.with_format(\"tensorflow\")` (without batching) return a `tf.data.Dataset` that works if batching is deferred to `model.fit()`?\n\nHave you tried something like:\n\n```python\ntf_dataset = dataset.with_format(\"tensorflow\").to_tf_dataset(\n columns=[\"image\", \"labels\"],\n label_cols=None,\n batch_size=None # No batching here\n)\nmodel.fit(tf_dataset.batch(BATCH_SIZE)) # Use RaggedTensor batching here\n```\n\nIf this works, it might be worth updating the documentation rather than changing batching logic inside datasets itself.\n\nThat said, happy to explore changes if batching needs to be supported natively for RaggedTensor. Just flagging that it’d require some careful design due to existing numpy assumptions.", "Hi, we've had to move on for now. \n\nWe have actually also moved to dense tensors to make it possible to xla complie the training. \n\nBut I'll check when I'm back from vacation which is far into the future. \n\nThanks" ]
2025-04-24T13:14:52
2025-06-30T17:03:39
null
NONE
null
null
null
null
### Feature request Hi, currently datasets does not support RaggedTensor output at the batch level. When building an object detection dataset with TensorFlow I need RaggedTensors, as that is how bounding boxes and classes are expected from the Keras model's point of view. Currently an error is thrown saying that "Nested Data is not supported". It'd be very helpful if this was fixed! :) ### Motivation Enabling object detection pipelines for TensorFlow. ### Your contribution With guidance I'd happily help make the PR. The problematic part is the current implementation with the data collator and the later enforcement of `np.array` (at the end of `np_get_batch` in `tf_utils.py`), since `numpy` does not support raggedness.
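To make the raggedness point concrete, here is a small sketch (with made-up bounding-box values) of why a per-image list of boxes cannot be packed into a regular `numpy` batch but can be represented as a `tf.RaggedTensor`. It only illustrates the limitation described above and is not code from the library.

```python
import tensorflow as tf

# Two images with a different number of bounding boxes each (values are made up).
boxes_per_image = [
    [[0.1, 0.2, 0.3, 0.4]],                              # image 0: one box
    [[0.0, 0.0, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]],        # image 1: two boxes
]

# np.array(boxes_per_image) cannot produce a regular array here because the rows have
# different lengths, which is roughly what np_get_batch runs into when batching.
ragged = tf.ragged.constant(boxes_per_image)  # shape: (2, None, 4)
print(ragged.shape)
```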
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7534/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7534/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7533/comments
https://api.github.com/repos/huggingface/datasets/issues/7533/events
https://github.com/huggingface/datasets/pull/7533
3,015,075,086
PR_kwDODunzps6TpraJ
7,533
Add custom fingerprint support to `from_generator`
{ "avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4", "events_url": "https://api.github.com/users/simonreise/events{/privacy}", "followers_url": "https://api.github.com/users/simonreise/followers", "following_url": "https://api.github.com/users/simonreise/following{/other_user}", "gists_url": "https://api.github.com/users/simonreise/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simonreise", "id": 43753582, "login": "simonreise", "node_id": "MDQ6VXNlcjQzNzUzNTgy", "organizations_url": "https://api.github.com/users/simonreise/orgs", "received_events_url": "https://api.github.com/users/simonreise/received_events", "repos_url": "https://api.github.com/users/simonreise/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simonreise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonreise/subscriptions", "type": "User", "url": "https://api.github.com/users/simonreise", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "This is great !\r\n\r\nWhat do you think of passing `config_id=` directly to the builder instead of just the suffix ? This would be a power user argument though, or for internal use. And in from_generator the new argument can be `fingerprint=` as in `Dataset.__init__()`\r\n\r\nThe `config_id` can be defined using something like `config_id = \"default-fingerprint=\" + fingerprint`\r\n\r\nI feel ike this could make the Dataset API more coherent if we avoid introducing a new argument while we can juste use `fingerprint=`", "I looked into this issue and the original cause makes total sense — hashing a large generator is clearly inefficient and fragile for big datasets.\r\n\r\nPR #7533 looks like a robust and flexible solution! It cleanly separates the fingerprinting responsibility by letting users pass `fingerprint=` (now `config_id=`), which avoids hashing heavy objects like generators but still preserves caching logic.\r\n", "@lhoestq could you please re-review the changes I made?", "@lhoestq ping\r\nI also added a simple test for the `fingerprint` parameter" ]
2025-04-23T19:31:35
2025-07-10T09:29:35
null
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7533.diff", "html_url": "https://github.com/huggingface/datasets/pull/7533", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7533.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7533" }
This PR adds a `dataset_id_suffix` parameter to the `Dataset.from_generator` function. `Dataset.from_generator` passes all of its arguments to `BuilderConfig.create_config_id`, including the generator function itself, and `BuilderConfig.create_config_id` tries to hash all of those arguments, which can take a large amount of time or even cause a MemoryError if the dataset processed in the generator function is large enough. This PR allows the user to pass a custom fingerprint (`dataset_id_suffix`) to be used as the suffix in the dataset name instead of the one generated by hashing the arguments. This PR is a possible solution to #7513
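A minimal sketch of how the proposed argument would be used, assuming the reviewer-suggested name `fingerprint=` from the discussion above; the argument is part of this PR's proposal and not of a released `datasets` API, and the generator here is just a toy example.

```python
from datasets import Dataset


def gen():
    # Stand-in for a generator over a large corpus that would be expensive to hash.
    for i in range(3):
        yield {"x": i}


# The caller supplies the cache identity explicitly, so `datasets` does not need to
# hash the generator (and whatever large state it closes over) to build the config id.
ds = Dataset.from_generator(gen, fingerprint="my-dataset-v1")
print(ds[0])
```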
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7533/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7532/comments
https://api.github.com/repos/huggingface/datasets/issues/7532/events
https://github.com/huggingface/datasets/pull/7532
3,009,546,204
PR_kwDODunzps6TW8Ss
7,532
Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7532). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Your clarification in your comment at https://github.com/huggingface/datasets/issues/7480#issuecomment-2833640084 sounds great, would you like to update this PR to include it ?", "Hi @lhoestq, I’ve updated the documentation to reflect the clarifications discussed in #7480. Let me know if anything else is needed!\r\n" ]
2025-04-22T00:23:13
2025-05-06T15:54:38
2025-05-06T15:54:38
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7532.diff", "html_url": "https://github.com/huggingface/datasets/pull/7532", "merged_at": "2025-05-06T15:54:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/7532.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7532" }
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for datasets stored in Arrow format. This addition is based on the discussion in https://github.com/huggingface/datasets/issues/7457, where users noted that this variable is absent from the documentation even though it is supported. The update adds a new section to `cache.mdx` that explains how to use `HF_DATASETS_CACHE` with an example. This change aims to improve clarity and help users better manage their cache directories when working in shared environments or with limited local storage. Closes #7457.
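For reference, a minimal sketch of the kind of example such a documentation section might show: setting `HF_DATASETS_CACHE` before `datasets` is imported so the Arrow cache lands in a custom directory. The path and dataset name are illustrative, not taken from the PR.

```python
import os

# Must be set before `datasets` is imported, since the cache location is read at import time.
os.environ["HF_DATASETS_CACHE"] = "/scratch/my_user/hf_datasets_cache"

from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # Arrow files are written under the custom cache dir
print(len(ds))
```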
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7532/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7532/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7531/comments
https://api.github.com/repos/huggingface/datasets/issues/7531/events
https://github.com/huggingface/datasets/issues/7531
3,008,914,887
I_kwDODunzps6zWGXH
7,531
Deepspeed reward training hangs at end of training with Dataset.from_list
{ "avatar_url": "https://avatars.githubusercontent.com/u/60710414?v=4", "events_url": "https://api.github.com/users/Matt00n/events{/privacy}", "followers_url": "https://api.github.com/users/Matt00n/followers", "following_url": "https://api.github.com/users/Matt00n/following{/other_user}", "gists_url": "https://api.github.com/users/Matt00n/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Matt00n", "id": 60710414, "login": "Matt00n", "node_id": "MDQ6VXNlcjYwNzEwNDE0", "organizations_url": "https://api.github.com/users/Matt00n/orgs", "received_events_url": "https://api.github.com/users/Matt00n/received_events", "repos_url": "https://api.github.com/users/Matt00n/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Matt00n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Matt00n/subscriptions", "type": "User", "url": "https://api.github.com/users/Matt00n", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! How big is the dataset ? if you load it using `from_list`, the dataset lives in memory and has to be copied to every gpu process, which can be slow.\n\nIt's fasted if you load it from JSON files from disk, because in that case the dataset in converted to Arrow and loaded from disk using memory mapping. Memory mapping allows to quickly reload the dataset in other processes.\n\nMaybe we can change `from_list` and other methods to always use the disk though, instead of loading in memory, WDYT ?", "Thanks for raising this! As lhoestq mentioned, the root cause seems to be that `Dataset.from_list()` creates an in-memory dataset, which causes issues with DeepSpeed across multiple GPUs due to the cost of copying that memory to all processes.\n\nUsing `load_dataset(\"json\", ...)` works because Hugging Face datasets then convert the data to Apache Arrow and use **memory mapping**, which avoids this copying overhead.\n\nPossible improvement could be to add an option like `use_disk=True` to `Dataset.from_list()` to allow users to write to Arrow + memory-map the dataset, enabling compatibility with multi-process settings like DeepSpeed, while keeping the current fast behavior by default.\n\nWould love to hear if this direction sounds acceptable before attempting a PR.\n" ]
2025-04-21T17:29:20
2025-06-29T06:20:45
null
NONE
null
null
null
null
There seems to be a weird interaction between Deepspeed, the `Dataset.from_list` method, and trl's `RewardTrainer`. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training, and running the same script with Deepspeed on a single GPU works without hanging. The issue persisted across a wide range of Deepspeed configs and training arguments. The issue went away when storing the exact same dataset as a JSON file and using `dataset = load_dataset("json", ...)`. Here is my training script:

```python
import pickle
import os
import random
import warnings

import torch
from datasets import load_dataset, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer, ModelConfig

####################################### Reward model #################################################

# Explicitly set arguments
model_name_or_path = "Qwen/Qwen2.5-1.5B"
output_dir = "Qwen2-0.5B-Reward-LoRA"
per_device_train_batch_size = 2
num_train_epochs = 5
gradient_checkpointing = True
learning_rate = 1.0e-4
logging_steps = 25
eval_strategy = "steps"
eval_steps = 50
max_length = 2048
torch_dtype = "auto"
trust_remote_code = False

model_args = ModelConfig(
    model_name_or_path=model_name_or_path,
    model_revision=None,
    trust_remote_code=trust_remote_code,
    torch_dtype=torch_dtype,
    lora_task_type="SEQ_CLS",  # Make sure task type is seq_cls
)

training_args = RewardConfig(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    num_train_epochs=num_train_epochs,
    gradient_checkpointing=gradient_checkpointing,
    learning_rate=learning_rate,
    logging_steps=logging_steps,
    eval_strategy=eval_strategy,
    eval_steps=eval_steps,
    max_length=max_length,
    gradient_checkpointing_kwargs=dict(use_reentrant=False),
    center_rewards_coefficient=0.01,
    fp16=False,
    bf16=True,
    save_strategy="no",
    dataloader_num_workers=0,
    # deepspeed="./configs/deepspeed_config.json",
)

################
# Model & Tokenizer
################
model_kwargs = dict(
    revision=model_args.model_revision,
    use_cache=False if training_args.gradient_checkpointing else True,
    torch_dtype=model_args.torch_dtype,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_args.model_name_or_path, use_fast=True
)
model = AutoModelForSequenceClassification.from_pretrained(
    model_args.model_name_or_path,
    num_labels=1,
    trust_remote_code=model_args.trust_remote_code,
    **model_kwargs
)

# Align padding tokens between tokenizer and model
model.config.pad_token_id = tokenizer.pad_token_id

# If post-training a base model, use ChatML as the default template
if tokenizer.chat_template is None:
    model, tokenizer = setup_chat_format(model, tokenizer)

if model_args.use_peft and model_args.lora_task_type != "SEQ_CLS":
    warnings.warn(
        "You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs"
        " Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.",
        UserWarning,
    )

##############
# Load dataset
##############
with open('./prefs.pkl', 'rb') as fh:
    loaded_data = pickle.load(fh)

random.shuffle(loaded_data)

dataset = []
for a_wins, a, b in loaded_data:
    if a_wins == 0:
        a, b = b, a
    dataset.append({'chosen': a, 'rejected': b})

dataset = Dataset.from_list(dataset)

# Split the dataset into training and evaluation sets
train_eval_split = dataset.train_test_split(test_size=0.15, shuffle=True, seed=42)

# Access the training and evaluation datasets
train_dataset = train_eval_split['train']
eval_dataset = train_eval_split['test']

##########
# Training
##########
trainer = RewardTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```

Replacing `dataset = Dataset.from_list(dataset)` with

```python
with open('./prefs.json', 'w') as fh:
    json.dump(dataset, fh)

dataset = load_dataset("json", data_files="./prefs.json", split='train')
```

resolves the issue.
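A related workaround, untested for this exact setup and using a hypothetical local path, would be to keep `Dataset.from_list` but round-trip through `save_to_disk` so the dataset is reloaded memory-mapped instead of staying in RAM:

```python
from datasets import Dataset, load_from_disk

dataset = Dataset.from_list(dataset)        # still built in memory once
dataset.save_to_disk("./prefs_arrow")       # hypothetical path; writes Arrow files to disk
dataset = load_from_disk("./prefs_arrow")   # reloaded memory-mapped, like load_dataset("json", ...)
```

In a multi-process launch the save step should ideally run only once (e.g. on rank 0) before the other ranks load from the same path.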
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7531/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }