| column | type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | null |
| assignees | list |
| milestone | null |
| comments | int64 |
| created_at | timestamp[ms] |
| updated_at | timestamp[ms] |
| closed_at | timestamp[ms] |
| author_association | string |
| type | null |
| active_lock_reason | null |
| draft | bool |
| pull_request | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | null |
| state_reason | null |
| sub_issues_summary | dict |
| issue_dependencies_summary | dict |
| is_pull_request | bool |
https://api.github.com/repos/huggingface/datasets/issues/7728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7728/events
|
https://github.com/huggingface/datasets/issues/7728
| 3,298,854,904 |
I_kwDODunzps7EoIf4
| 7,728 |
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
{
"login": "efsotr",
"id": 104755879,
"node_id": "U_kgDOBj5ypw",
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efsotr",
"html_url": "https://github.com/efsotr",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"repos_url": "https://api.github.com/users/efsotr/repos",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T04:04:50 | 2025-08-07T07:31:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When loading a dataset, the split info derived from `data_files` does not overwrite the original dataset info, so verification fails.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
### Expected behavior
No error
### Environment info
datasets 4.0.0
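A possible workaround until this is fixed (assuming it is acceptable to skip split-size verification for a shard subset) is the `verification_mode` argument of `load_dataset`, which disables the checks that raise `NonMatchingSplitsSizesError`; the actual download call is left commented out here:

```python
# Sketch of a workaround: disable split-size verification so that loading a
# subset of shards via data_files does not compare recorded sizes against the
# original split info.
load_kwargs = {
    "path": "allenai/c4",
    "name": "en",
    "data_files": {"train": "en/c4-train.00000-of-01024.json.gz"},
    "verification_mode": "no_checks",  # skip expected-vs-recorded size checks
}
# from datasets import load_dataset
# traindata = load_dataset(**load_kwargs)  # uncomment to actually download
print(load_kwargs["verification_mode"])
```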
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7727/events
|
https://github.com/huggingface/datasets/issues/7727
| 3,295,718,578 |
I_kwDODunzps7EcKyy
| 7,727 |
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
{
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null |
NONE
| null | null | null | null |
### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper url joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
datasets 4.0.0
datasets 3.4.0
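As a stopgap (assuming editing the README front matter is acceptable), the `./` prefix can be stripped before the pattern reaches the hub URL join; `posixpath.normpath` is a no-op for patterns that are already clean:

```python
import posixpath

# Hypothetical normalization step: strip the leading "./" that breaks
# hf:// URL joining, while leaving already-clean patterns untouched.
patterns = ["./images/xyz/*.jpg", "images/xyz/*.jpg"]
normalized = [posixpath.normpath(p) for p in patterns]
print(normalized)  # both become 'images/xyz/*.jpg'
```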
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7726
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7726/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7726/events
|
https://github.com/huggingface/datasets/pull/7726
| 3,293,789,832 |
PR_kwDODunzps6iO_oF
| 7,726 |
fix(webdataset): don't .lower() field_name
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 3 | 2025-08-05T16:57:09 | 2025-08-20T16:35:55 | 2025-08-20T16:35:55 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7726",
"html_url": "https://github.com/huggingface/datasets/pull/7726",
"diff_url": "https://github.com/huggingface/datasets/pull/7726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7726.patch",
"merged_at": "2025-08-20T16:35:55"
}
|
This fixes cases where keys have upper case identifiers
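A minimal sketch of the failure mode (with a hypothetical WebDataset-style sample, not the real loader code): lowercasing the field name breaks lookups for keys that contain upper-case characters:

```python
# Hypothetical sample keyed by file extension, as in a WebDataset shard.
sample = {"__key__": "0001", "Image.JPG": b"...", "json": b"{}"}

field_name = "Image.JPG"
assert field_name in sample              # exact match works
assert field_name.lower() not in sample  # "image.jpg" is not a key -> lookup fails
print("lowercasing drops upper-case field names")
```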
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7724/events
|
https://github.com/huggingface/datasets/issues/7724
| 3,292,315,241 |
I_kwDODunzps7EPL5p
| 7,724 |
Can not stepinto load_dataset.py?
|
{
"login": "micklexqg",
"id": 13776012,
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micklexqg",
"html_url": "https://github.com/micklexqg",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null |
NONE
| null | null | null | null |
I set a breakpoint in `load_dataset.py` and tried to debug my data-loading code, but execution never stops at any of the breakpoints. Can `load_dataset.py` not be stepped into?
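One likely explanation (an assumption, since the screenshot failed to upload) is that the breakpoint targets the wrong file: `load_dataset` is defined in `datasets/load.py`, not in a file named `load_dataset.py`. A general way to find the right file for a breakpoint, demonstrated here on a stdlib function so it runs without `datasets` installed:

```python
import inspect
import json  # stand-in module for demonstration

# Find the file that actually defines a function, so the debugger breakpoint
# can be set in the right place.
def source_of(func):
    return inspect.getsourcefile(func)

print(source_of(json.loads))  # .../json/__init__.py
# e.g. source_of(datasets.load_dataset) would point at datasets/load.py
```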
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7723/events
|
https://github.com/huggingface/datasets/issues/7723
| 3,289,943,261 |
I_kwDODunzps7EGIzd
| 7,723 |
Don't remove `trust_remote_code` arg!!!
|
{
"login": "autosquid",
"id": 758925,
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/autosquid",
"html_url": "https://github.com/autosquid",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"repos_url": "https://api.github.com/users/autosquid/repos",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null |
NONE
| null | null | null | null |
### Feature request
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
Add the `trust_remote_code` arg back, please!
### Motivation
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7722
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7722/events
|
https://github.com/huggingface/datasets/issues/7722
| 3,289,741,064 |
I_kwDODunzps7EFXcI
| 7,722 |
Out of memory even though using load_dataset(..., streaming=True)
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null |
NONE
| null | null | null | null |
### Describe the bug
I am iterating over a large dataset that I load with streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```python
import os

import soundfile as sf
from datasets import load_dataset
from tqdm import tqdm

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
    try:
        sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect a small memory footprint, with memory being freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printed the sample, but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
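To narrow down whether the growth comes from Python-level allocations (rather than, say, a native Arrow buffer), `tracemalloc` snapshots around the loop are a cheap diagnostic. This sketch uses a dummy generator in place of the streamed dataset; swapping in `load_dataset(..., streaming=True)` would profile the real loop:

```python
import tracemalloc

# Dummy generator standing in for the streamed dataset.
def fake_stream(n):
    for _ in range(n):
        yield {"audio": {"array": [0.0] * 10, "sampling_rate": 16000}}

tracemalloc.start()
before = tracemalloc.take_snapshot()
for i, sample in enumerate(fake_stream(1000)):
    pass  # processing step goes here
after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")
print(stats[0])  # largest allocation delta by source line
```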
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7721/events
|
https://github.com/huggingface/datasets/issues/7721
| 3,289,426,104 |
I_kwDODunzps7EEKi4
| 7,721 |
Bad split error message when using percentages
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 2 | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null |
NONE
| null | null | null | null |
### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library raises this error:
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
Edit: The same happens with a split like _train[:90000]_
### Steps to reproduce the bug
```python
from datasets import load_dataset

for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0
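Percent slices are resolved against precomputed split sizes, which streaming mode does not use, so a possible workaround (assuming the total example count is known or can be approximated) is to emulate the slice with skip/take, as `IterableDataset.skip`/`IterableDataset.take` do. Sketched here on a plain iterable rather than a real streamed dataset:

```python
# Emulate train[start:stop] on a stream by index, like skip(start).take(stop-start).
def take_slice(stream, start, stop):
    for i, ex in enumerate(stream):
        if i >= stop:
            break
        if i >= start:
            yield ex

# Toy stand-in for a streamed dataset of 100 examples; the "10%:20%" slice.
examples = list(take_slice(range(100), 10, 20))
print(examples[0], examples[-1])  # 10 19
```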
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7720/events
|
https://github.com/huggingface/datasets/issues/7720
| 3,287,150,513 |
I_kwDODunzps7D7e-x
| 7,720 |
Datasets 4.0 map function causing column not found
|
{
"login": "Darejkal",
"id": 55143337,
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darejkal",
"html_url": "https://github.com/Darejkal",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 3 | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null |
NONE
| null | null | null | null |
### Describe the bug
A column returned by `map` is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction. After running `get_total_audio_length`, it errors out because `data` does not have a `duration` column.
```python
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
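For reference, the expected contract of `map` (sketched on plain dicts, not the real implementation) is that the keys returned by the function are merged into each example, so `duration` should appear in the result:

```python
# Toy model of Dataset.map's column-merging contract (not datasets code).
def toy_map(rows, fn):
    return [{**row, **fn(row)} for row in rows]

rows = [{"audio": {"array": [0.0] * 32000, "sampling_rate": 16000}}]
out = toy_map(
    rows,
    lambda x: {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]},
)
print(out[0]["duration"])  # 2.0
```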
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7719/events
|
https://github.com/huggingface/datasets/issues/7719
| 3,285,928,491 |
I_kwDODunzps7D20or
| 7,719 |
Specify dataset columns types in typehint
|
{
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null |
NONE
| null | null | null | null |
### Feature request
Make `Dataset` optionally generic, to support dataset usage with type annotations, like it was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of dataset objects, but they're a bit poor in type hints. E.g. we can specify this for a `DataLoader`:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for `datasets` we can only describe the columns' types in comments:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
### Your contribution
I can create draft implementation
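A minimal sketch of what the requested feature could look like (a hypothetical `TypedDataset` wrapper, not an existing `datasets` API; a real implementation would make `Dataset` itself subscriptable like `DataLoader[T]`):

```python
from typing import Generic, TypedDict, TypeVar

T = TypeVar("T")

class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]

# Hypothetical generic wrapper: the type parameter documents the row schema.
class TypedDataset(Generic[T]):
    def __init__(self, rows: list[T]) -> None:
        self.rows = rows

    def __getitem__(self, i: int) -> T:
        return self.rows[i]

ds: TypedDataset[QueryInput] = TypedDataset([{"query": ["q"], "instruction": ["i"]}])
print(ds[0]["query"])
```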
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7718
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7718/events
|
https://github.com/huggingface/datasets/pull/7718
| 3,284,221,177 |
PR_kwDODunzps6hvJ6R
| 7,718 |
add support for pyarrow string view in features
|
{
"login": "onursatici",
"id": 5051569,
"node_id": "MDQ6VXNlcjUwNTE1Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/onursatici",
"html_url": "https://github.com/onursatici",
"followers_url": "https://api.github.com/users/onursatici/followers",
"following_url": "https://api.github.com/users/onursatici/following{/other_user}",
"gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/onursatici/subscriptions",
"organizations_url": "https://api.github.com/users/onursatici/orgs",
"repos_url": "https://api.github.com/users/onursatici/repos",
"events_url": "https://api.github.com/users/onursatici/events{/privacy}",
"received_events_url": "https://api.github.com/users/onursatici/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-01T14:58:39 | 2025-08-13T13:09:44 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7718",
"html_url": "https://github.com/huggingface/datasets/pull/7718",
"diff_url": "https://github.com/huggingface/datasets/pull/7718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7718.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7748
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7748/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7748/events
|
https://github.com/huggingface/datasets/pull/7748
| 3,347,137,663 |
PR_kwDODunzps6k-adX
| 7,748 |
docs: Streaming best practices
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-23T00:18:43 | 2025-08-23T00:18:43 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7748",
"html_url": "https://github.com/huggingface/datasets/pull/7748",
"diff_url": "https://github.com/huggingface/datasets/pull/7748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7748.patch",
"merged_at": null
}
|
Add a new 'Streaming best practices' page with practical patterns and pitfalls for large-scale/production use of IterableDataset. Includes examples for batched map with remove_columns, deterministic shuffling with set_epoch, multi-worker sharding, checkpoint/resume, and persistence to Parquet/Hub. Linked from How-to > General usage, next to Stream.
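One of the patterns the page covers, "batched map with remove_columns", can be sketched on plain dicts (the real call is `IterableDataset.map(fn, batched=True, remove_columns=[...])`; the column names here are illustrative):

```python
# Toy model of batched map + remove_columns: derive new columns from a batch
# and drop a column that is no longer needed.
batch = {"text": ["a", "bb"], "meta": [1, 2]}

def add_lengths(batch):
    return {"length": [len(t) for t in batch["text"]]}

# Equivalent of map(add_lengths, batched=True, remove_columns=["meta"]).
new_batch = {**{k: v for k, v in batch.items() if k != "meta"}, **add_lengths(batch)}
print(new_batch)  # {'text': ['a', 'bb'], 'length': [1, 2]}
```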
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7747
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7747/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7747/events
|
https://github.com/huggingface/datasets/pull/7747
| 3,347,098,038 |
PR_kwDODunzps6k-Rtd
| 7,747 |
Add wikipedia-2023-redirects dataset
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-22T23:49:53 | 2025-08-22T23:49:53 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7747",
"html_url": "https://github.com/huggingface/datasets/pull/7747",
"diff_url": "https://github.com/huggingface/datasets/pull/7747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7747.patch",
"merged_at": null
}
|
Title: Add wikipedia-2023-redirects dataset (redirect resolution + pageviews)
**Summary**
- New dataset loader: wikipedia_2023_redirects
- Canonical Wikipedia pages enriched with:
- redirects (aliases pointing to the page)
- 2023 pageviews (aggregated)
- Streaming support; robust parsing; license notes included
- Tests with tiny dummy data (XML + TSVs); covers streaming
**Motivation**
RAG/retrieval often benefits from:
- Query expansion via redirect aliases
- Popularity prior via pageviews
This loader offers a practical, maintenance-light way to access canonical pages alongside their redirect aliases and 2023 pageview totals.
**Features**
- id: string
- title: string
- url: string
- text: string
- redirects: list[string]
- pageviews_2023: int32
- timestamp: string
**Licensing**
- Wikipedia text: CC BY-SA 3.0 (attribution and share-alike apply)
- Pageviews: public domain
The PR docs mention both, and the module docstring cites sources.
**Notes**
- The URLs in _get_urls_for_config are wired to dummy files for tests. In production, these would point to Wikimedia dumps:
- XML page dumps: https://dumps.wikimedia.org/
- Pageviews: https://dumps.wikimedia.org/other/pageviews/
- The schema is intentionally simple and stable. Pageview aggregation is per-title sum across 2023.
**Testing**
- make style && make quality
- pytest -q tests/test_dataset_wikipedia_2023_redirects.py
**Example**
```python
from datasets import load_dataset
ds = load_dataset("wikipedia_2023_redirects", split="train")
print(ds[0]["title"], ds[0]["redirects"][:5], ds[0]["pageviews_2023"])
```
**Acknowledgements**
- Wikipedia/Wikimedia Foundation for the source data
- Hugging Face Datasets for the dataset infrastructure
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7746
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7746/events
|
https://github.com/huggingface/datasets/issues/7746
| 3,345,391,211 |
I_kwDODunzps7HZp5r
| 7,746 |
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
{
"login": "Awesome075",
"id": 187888489,
"node_id": "U_kgDOCzLzaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Awesome075",
"html_url": "https://github.com/Awesome075",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-22T12:52:03 | 2025-08-23T12:34:39 | null |
NONE
| null | null | null | null |
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
Could the maintainers please either guide me through updating the official `multi_news` dataset or update it themselves to use this working Parquet version? This would involve updating the canonical pointer for `multi_news` to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7745/events
|
https://github.com/huggingface/datasets/issues/7745
| 3,345,286,773 |
I_kwDODunzps7HZQZ1
| 7,745 |
Audio mono argument no longer supported, despite class documentation
|
{
"login": "jheitz",
"id": 5666041,
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitz",
"html_url": "https://github.com/jheitz",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"repos_url": "https://api.github.com/users/jheitz/repos",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-22T12:15:41 | 2025-08-24T18:22:41 | null |
NONE
| null | null | null | null |
### Describe the bug
The `mono` argument has been removed from `Audio`, but the class documentation still lists it. Either update the documentation, or re-introduce the flag (and the corresponding logic to convert the audio to mono).
### Steps to reproduce the bug
`Audio(sampling_rate=16000, mono=True)` raises the error:
```
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
```
However, the class documentation says:
```
Args:
    sampling_rate (`int`, *optional*):
        Target sampling rate. If `None`, the native sampling rate is used.
    mono (`bool`, defaults to `True`):
        Whether to convert the audio signal to mono by averaging samples across
        channels.
    [...]
```
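A workaround sketch (my addition, not from the report; the channel-first layout is an assumption about the input) that averages channels manually, mimicking the documented `mono=True` behavior:

```python
import numpy as np

def to_mono(array):
    """Average a multi-channel signal down to mono.
    Assumes a channel-first (channels, samples) layout; 1-D input passes through."""
    array = np.asarray(array, dtype=np.float32)
    if array.ndim > 1:
        array = array.mean(axis=0)
    return array

stereo = np.array([[1.0, 3.0], [3.0, 5.0]])  # 2 channels x 2 samples
print(to_mono(stereo))  # [2. 4.]
```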
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7744/events
|
https://github.com/huggingface/datasets/issues/7744
| 3,343,510,686 |
I_kwDODunzps7HSeye
| 7,744 |
dtype: ClassLabel is not parsed correctly in `features.py`
|
{
"login": "cmatKhan",
"id": 43553003,
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmatKhan",
"html_url": "https://github.com/cmatKhan",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-21T23:28:50 | 2025-08-21T23:28:50 | null |
NONE
| null | null | null | null |
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This YAML in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string` to use a different dtype and avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7743
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7743/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7743/events
|
https://github.com/huggingface/datasets/pull/7743
| 3,342,611,297 |
PR_kwDODunzps6ku8Jw
| 7,743 |
Refactor HDF5 and preserve tree structure
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-21T17:28:17 | 2025-08-25T18:04:33 | null |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7743",
"html_url": "https://github.com/huggingface/datasets/pull/7743",
"diff_url": "https://github.com/huggingface/datasets/pull/7743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7743.patch",
"merged_at": null
}
|
Closes #7741. Followup to #7690
- Recursive parsing and feature inference, to preserve the tree structure of the file. Note this means we now visit all links in the file. It also means we have to call `combine_chunks` on any large non-root datasets.
- Support for `complex64` (two `float32`s, used to be converted to two `float64`s)
- Support for ndim complex, compound, more field types for compound (due to reusing the main parser, compound types are treated like groups)
- Cleaned up varlen support
- Always do feature inference and always cast to features (used to cast to schema)
- Updated tests to use `load_dataset` instead of internal APIs
- Removed `columns` in config. Have to give Features (i.e., must specify types) if filtering
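The tree-preserving inference can be sketched in plain Python (a hypothetical helper with no h5py dependency; plain values stand in for HDF5 datasets) to show nesting being kept rather than flattened into `/`-joined keys:

```python
def infer_nested_features(node):
    """Recursively walk a group-like dict, preserving the tree structure.
    Leaves (stand-ins for HDF5 datasets) map to a type-name placeholder."""
    if isinstance(node, dict):  # group: recurse into children
        return {name: infer_nested_features(child) for name, child in node.items()}
    return type(node).__name__  # dataset: stand-in for real dtype inference

tree = {"meta": {"id": 1, "coords": {"x": 0.5}}, "label": "cat"}
print(infer_nested_features(tree))
# {'meta': {'id': 'int', 'coords': {'x': 'float'}}, 'label': 'str'}
```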
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7742
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7742/events
|
https://github.com/huggingface/datasets/issues/7742
| 3,336,704,928 |
I_kwDODunzps7G4hOg
| 7,742 |
module 'pyarrow' has no attribute 'PyExtensionType'
|
{
"login": "mnedelko",
"id": 6106392,
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnedelko",
"html_url": "https://github.com/mnedelko",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-20T06:14:33 | 2025-08-20T06:23:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When importing certain libraries, users will encounter the following error, which can be traced back to the `datasets` library:
`module 'pyarrow' has no attribute 'PyExtensionType'`
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow version 21.0.0 doesn’t have PyExtensionType. This was changed in newer versions of PyArrow. The
PyExtensionType class was renamed to ExtensionType in PyArrow 13.0.0 and later versions.
**Issue Solution**
Making the following changes to the library files should temporarily resolve the issue.
I will submit a PR to the `datasets` library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
  521       self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
```python
!pip install ragas
from ragas import evaluate
```
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7741/events
|
https://github.com/huggingface/datasets/issues/7741
| 3,334,848,656 |
I_kwDODunzps7GxcCQ
| 7,741 |
Preserve tree structure when loading HDF5
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-19T15:42:05 | 2025-08-22T00:41:46 | null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
### Your contribution
I'll open a PR (#7743)
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7740
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7740/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7740/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7740/events
|
https://github.com/huggingface/datasets/pull/7740
| 3,334,693,293 |
PR_kwDODunzps6kUMKM
| 7,740 |
Document HDF5 support
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-19T14:53:04 | 2025-08-21T19:56:58 | null |
CONTRIBUTOR
| null | null | true |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7740",
"html_url": "https://github.com/huggingface/datasets/pull/7740",
"diff_url": "https://github.com/huggingface/datasets/pull/7740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7740.patch",
"merged_at": null
}
|
I think these are at least the main places where we should put content. Ideally the content is not just repeated verbatim in the final version.
ref #7690
- [ ] Wait for #7743 to land
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7740/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7739
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7739/events
|
https://github.com/huggingface/datasets/issues/7739
| 3,331,537,762 |
I_kwDODunzps7Gkzti
| 7,739 |
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
{
"login": "evmaki",
"id": 15764776,
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evmaki",
"html_url": "https://github.com/evmaki",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"repos_url": "https://api.github.com/users/evmaki/repos",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-18T17:28:38 | 2025-08-18T17:28:38 | null |
NONE
| null | null | null | null |
PR #7634 replaced the `Sequence` feature with `List` in 4.0.0, so datasets saved with version 4.0.0 that use that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7738
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7738/events
|
https://github.com/huggingface/datasets/issues/7738
| 3,328,948,690 |
I_kwDODunzps7Ga7nS
| 7,738 |
Allow saving multi-dimensional ndarray with dynamic shapes
|
{
"login": "ryan-minato",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-minato",
"html_url": "https://github.com/ryan-minato",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 1 | 2025-08-18T02:23:51 | 2025-08-22T03:15:19 | null |
NONE
| null | null | null | null |
### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data whose dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
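A minimal stdlib-only sketch of the proposed layout (the names and record format are illustrative, not an actual `datasets` API; the `dtype` field is omitted for brevity): flatten the nested values, record the shape alongside the flat data, and rebuild the nested structure on access.

```python
def flatten(nested):
    """Flatten a (rectangular) nested list and record its shape."""
    shape = []
    probe = nested
    while isinstance(probe, list):
        shape.append(len(probe))
        probe = probe[0]
    flat = nested
    for _ in range(len(shape) - 1):
        flat = [x for sub in flat for x in sub]
    return {"shape": tuple(shape), "data": flat}

def unflatten(record):
    """Rebuild the nested list from the flat data and the stored shape."""
    data, shape = record["data"], record["shape"]
    for dim in reversed(shape[1:]):
        data = [data[i:i + dim] for i in range(0, len(data), dim)]
    return data

arr = [[1, 2, 3], [4, 5, 6]]  # shape (2, 3); another row could be (5, 224, 224)
rec = flatten(arr)            # {"shape": (2, 3), "data": [1, 2, 3, 4, 5, 6]}
assert unflatten(rec) == arr
```

Because each record carries its own shape, rows in the same column can have different dimensions, which is the key property a fixed-shape Array feature cannot provide.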
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7737
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7737/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7737/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7737/events
|
https://github.com/huggingface/datasets/pull/7737
| 3,318,670,801 |
PR_kwDODunzps6jf5io
| 7,737 |
docs: Add column overwrite example to batch mapping guide
|
{
"login": "Sanjaykumar030",
"id": 183703408,
"node_id": "U_kgDOCvMXcA",
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanjaykumar030",
"html_url": "https://github.com/Sanjaykumar030",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-13T14:20:19 | 2025-08-25T17:54:00 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7737",
"html_url": "https://github.com/huggingface/datasets/pull/7737",
"diff_url": "https://github.com/huggingface/datasets/pull/7737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7737.patch",
"merged_at": null
}
|
This PR adds a complementary example showing the **column-overwriting** pattern, which is both more direct and more flexible for many transformations.
### Proposed Change
The original `remove_columns` example remains untouched. Below it, this PR introduces an alternative approach that overwrites an existing column during batch mapping.
This teaches users a core `.map()` capability for in-place transformations without extra intermediate steps.
**New Example:**
> ```python
> >>> from datasets import Dataset
> >>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
> # Overwrite "a" directly to duplicate each value
> >>> duplicated_dataset = dataset.map(
> ... lambda batch: {"a": [x for x in batch["a"] for _ in range(2)]},
> ... batched=True
> ... )
> >>> duplicated_dataset
> Dataset({
> features: ['a'],
> num_rows: 6
> })
> >>> duplicated_dataset["a"]
> [0, 0, 1, 1, 2, 2]
> ```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7737/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7736
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7736/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7736/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7736/events
|
https://github.com/huggingface/datasets/pull/7736
| 3,311,618,096 |
PR_kwDODunzps6jIWQ3
| 7,736 |
Fix type hint `train_test_split`
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 1 | 2025-08-11T20:46:53 | 2025-08-13T13:13:50 | 2025-08-13T13:13:48 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7736",
"html_url": "https://github.com/huggingface/datasets/pull/7736",
"diff_url": "https://github.com/huggingface/datasets/pull/7736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7736.patch",
"merged_at": "2025-08-13T13:13:48"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7736/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7735
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7735/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7735/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7735/events
|
https://github.com/huggingface/datasets/pull/7735
| 3,310,514,828 |
PR_kwDODunzps6jEq5w
| 7,735 |
fix largelist repr
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 1 | 2025-08-11T15:17:42 | 2025-08-11T15:39:56 | 2025-08-11T15:39:54 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7735",
"html_url": "https://github.com/huggingface/datasets/pull/7735",
"diff_url": "https://github.com/huggingface/datasets/pull/7735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7735.patch",
"merged_at": "2025-08-11T15:39:54"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7735/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7734
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7734/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7734/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7734/events
|
https://github.com/huggingface/datasets/pull/7734
| 3,306,519,239 |
PR_kwDODunzps6i4pmA
| 7,734 |
Fixing __getitem__ of datasets which behaves inconsistent to documentation when setting _format_type to None
|
{
"login": "awagen",
"id": 40367113,
"node_id": "MDQ6VXNlcjQwMzY3MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awagen",
"html_url": "https://github.com/awagen",
"followers_url": "https://api.github.com/users/awagen/followers",
"following_url": "https://api.github.com/users/awagen/following{/other_user}",
"gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
"organizations_url": "https://api.github.com/users/awagen/orgs",
"repos_url": "https://api.github.com/users/awagen/repos",
"events_url": "https://api.github.com/users/awagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/awagen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 2 | 2025-08-09T15:52:54 | 2025-08-17T07:23:00 | 2025-08-17T07:23:00 |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7734",
"html_url": "https://github.com/huggingface/datasets/pull/7734",
"diff_url": "https://github.com/huggingface/datasets/pull/7734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7734.patch",
"merged_at": null
}
|
Setting `_format_type` to `None` should return plain Python objects, but as of 4.0.0 it returns a `Column`. This breaks libraries such as sentence-transformers (e.g. during generation of hard negatives) that expect plain Python.
|
{
"login": "awagen",
"id": 40367113,
"node_id": "MDQ6VXNlcjQwMzY3MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awagen",
"html_url": "https://github.com/awagen",
"followers_url": "https://api.github.com/users/awagen/followers",
"following_url": "https://api.github.com/users/awagen/following{/other_user}",
"gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
"organizations_url": "https://api.github.com/users/awagen/orgs",
"repos_url": "https://api.github.com/users/awagen/repos",
"events_url": "https://api.github.com/users/awagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/awagen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7734/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7733
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7733/events
|
https://github.com/huggingface/datasets/issues/7733
| 3,304,979,299 |
I_kwDODunzps7E_ftj
| 7,733 |
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
{
"login": "dennys246",
"id": 27898715,
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennys246",
"html_url": "https://github.com/dennys246",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"repos_url": "https://api.github.com/users/dennys246/repos",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-08T19:10:58 | 2025-08-12T00:54:58 | null |
NONE
| null | null | null | null |
### Describe the bug
I’m not sure if this is a bug or a feature and I just don’t fully understand how dataset loading is supposed to work, but there appears to be a bug in how locally stored `Image()` columns are accessed. I’ve uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack) but have run into a lot of trouble getting the images handled properly (at least in the way I’d expect them to be handled).
I find that I cannot use relative paths for loading images, whether remotely from the Hugging Face repo or from a local repository. Any time I do, it simply prepends my current working directory to the relative path. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the dataset object structure, which I can’t imagine is intended. So I have to use URLs, since an absolute path on my system obviously wouldn’t work for others. The URLs work OK, but despite having the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often run into HTTP errors from over-requesting the data).
Or maybe relative image paths aren’t intended to be loaded directly through the datasets library as images, and should be kept as strings for the user to handle? If so, I feel like you’re missing out on some pretty seamless functionality.
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
    - name: image
      dtype: Image
    - name: file_path
      dtype: Image
```
3. Initialize the dataset locally; make sure your working directory is not the dataset directory root:
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Access one of the samples and you’ll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it simply looks under current working directory + relative path:
```python
>>> dataset['train'][0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
           ^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
                       ^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
    image = PIL.Image.open(path)
            ^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
    fp = builtins.open(filename, "rb")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and `Image()` to load the locally hosted data using path/to/local/rocky_mountain_snowpack/ (which I pass to `datasets.load_dataset()`, or which you resolve on the backend) + the relative path.
Instead it appears to load from my current working directory + the relative path.
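A hedged workaround sketch (the helper below is hypothetical, not part of the `datasets` API): resolve each stored relative path against the dataset root before decoding, rather than relying on the current working directory.

```python
from pathlib import Path

def resolve_against_root(dataset_root, relative_path):
    """Join a stored relative image path onto the dataset root directory,
    so decoding does not depend on the current working directory."""
    return str(Path(dataset_root) / relative_path)

# One could then rewrite the column before decoding, e.g. (hypothetical):
# root = "path/to/local/rocky_mountain_snowpack"
# ds = ds.map(lambda ex: {"file_path": resolve_against_root(root, ex["file_path"])})
```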
### Environment info
Tested on:
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7732
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7732/events
|
https://github.com/huggingface/datasets/issues/7732
| 3,304,673,383 |
I_kwDODunzps7E-VBn
| 7,732 |
webdataset: key errors when `field_name` has upper case characters
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-08T16:56:42 | 2025-08-08T16:56:42 | null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When using a webdataset, each sample can be a collection of different "fields", like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
If the field name contains upper-case characters, the HF webdataset integration throws a `KeyError` when trying to load the dataset,
e.g. from a dataset (since updated so that it no longer throws this error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
### Steps to reproduce the bug
A unit test was added in https://github.com/huggingface/datasets/pull/7726;
it fails without the fix proposed in the same PR.
### Expected behavior
Not throwing a key error.
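A stdlib-only sketch of the apparent mismatch (illustrative names, not the actual `webdataset.py` code): the example dict holds the key in lowercased form, while the decode step looks it up with the original field-name casing.

```python
DECODERS = {"npy": lambda raw: raw}  # stand-in for the real decoder table

def decode_field_buggy(example, field_name):
    data_extension = field_name.split(".")[-1]
    if data_extension in DECODERS:
        # KeyError for upper-case field names: the dict only holds the
        # lowercased key.
        return DECODERS[data_extension](example[field_name])

def decode_field_fixed(example, field_name):
    data_extension = field_name.split(".")[-1]
    if data_extension in DECODERS:
        # Normalize the lookup to match how the key was stored.
        return DECODERS[data_extension](example[field_name.lower()])

example = {"processed_log_imu_magnetometer_value.npy": b"\x00"}
field = "processed_log_IMU_magnetometer_value.npy"
try:
    decode_field_buggy(example, field)
except KeyError:
    pass  # reproduces the reported error
assert decode_field_fixed(example, field) == b"\x00"
```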
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7731/events
|
https://github.com/huggingface/datasets/issues/7731
| 3,303,637,075 |
I_kwDODunzps7E6YBT
| 7,731 |
Add the possibility of a backend for audio decoding
|
{
"login": "intexcor",
"id": 142020129,
"node_id": "U_kgDOCHcOIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/intexcor",
"html_url": "https://github.com/intexcor",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"repos_url": "https://api.github.com/users/intexcor/repos",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 2 | 2025-08-08T11:08:56 | 2025-08-20T16:29:33 | null |
NONE
| null | null | null | null |
### Feature request
Add the possibility of selecting a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but torchcodec requires ffmpeg, which is problematic to install in environments like Colab. Therefore, I suggest adding a decoder selection option when loading the dataset.
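If it helps, here is a workaround sketch that avoids the torchcodec/ffmpeg dependency today: disable automatic decoding with `Audio(decode=False)` and decode the raw bytes yourself. The stdlib `wave` module below handles plain PCM WAV only; the `datasets` lines at the bottom are illustrative and not executed here.

```python
import io
import wave

def decode_wav(raw: bytes) -> tuple[bytes, int]:
    """Decode PCM WAV bytes into (frames, sampling_rate) without ffmpeg."""
    with wave.open(io.BytesIO(raw)) as f:
        return f.readframes(f.getnframes()), f.getframerate()

# Illustrative usage with datasets (not executed here):
# from datasets import Audio
# ds = ds.cast_column("audio", Audio(decode=False))  # rows become {"bytes", "path"}
# frames, sr = decode_wav(ds[0]["audio"]["bytes"])
```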
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7730
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7730/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7730/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7730/events
|
https://github.com/huggingface/datasets/pull/7730
| 3,301,907,242 |
PR_kwDODunzps6iqTZI
| 7,730 |
Grammar fix: correct "showed" to "shown" in fingerprint.py
|
{
"login": "brchristian",
"id": 2460418,
"node_id": "MDQ6VXNlcjI0NjA0MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brchristian",
"html_url": "https://github.com/brchristian",
"followers_url": "https://api.github.com/users/brchristian/followers",
"following_url": "https://api.github.com/users/brchristian/following{/other_user}",
"gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brchristian/subscriptions",
"organizations_url": "https://api.github.com/users/brchristian/orgs",
"repos_url": "https://api.github.com/users/brchristian/repos",
"events_url": "https://api.github.com/users/brchristian/events{/privacy}",
"received_events_url": "https://api.github.com/users/brchristian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 0 | 2025-08-07T21:22:56 | 2025-08-13T18:34:30 | 2025-08-13T13:12:56 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7730",
"html_url": "https://github.com/huggingface/datasets/pull/7730",
"diff_url": "https://github.com/huggingface/datasets/pull/7730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7730.patch",
"merged_at": "2025-08-13T13:12:56"
}
|
This PR corrects a small grammatical issue in the outputs of fingerprint.py:
```diff
- "This warning is only showed once. Subsequent hashing failures won't be showed."
+ "This warning is only shown once. Subsequent hashing failures won't be shown."
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7730/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7729/events
|
https://github.com/huggingface/datasets/issues/7729
| 3,300,672,954 |
I_kwDODunzps7EvEW6
| 7,729 |
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
{
"login": "SaleemMalikAI",
"id": 115183904,
"node_id": "U_kgDOBt2RIA",
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaleemMalikAI",
"html_url": "https://github.com/SaleemMalikAI",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T14:07:23 | 2025-08-07T14:07:23 | null |
NONE
| null | null | null | null |
> Hi, is there any solution for that error? I tried installing this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but please tell me how to install a PyTorch version built for GPU.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7728/events
|
https://github.com/huggingface/datasets/issues/7728
| 3,298,854,904 |
I_kwDODunzps7EoIf4
| 7,728 |
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
{
"login": "efsotr",
"id": 104755879,
"node_id": "U_kgDOBj5ypw",
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efsotr",
"html_url": "https://github.com/efsotr",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"repos_url": "https://api.github.com/users/efsotr/repos",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T04:04:50 | 2025-08-07T07:31:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When loading a dataset, the split info implied by `data_files` does not override the original split info from the repository, leading to verification errors.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
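As a workaround (until the split info is overridden automatically), split-size verification can be disabled via the `verification_mode` argument of `load_dataset`; it is shown here as a kwargs dict only, to avoid triggering the large download:

```python
# verification_mode="no_checks" disables the split-size verification that
# raises NonMatchingSplitsSizesError / ExpectedMoreSplitsError when
# data_files covers only part of the original splits.
load_kwargs = dict(
    path="allenai/c4",
    name="en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",
)
# from datasets import load_dataset
# traindata = load_dataset(**load_kwargs)
```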
### Expected behavior
No error
### Environment info
datasets 4.0.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7727/events
|
https://github.com/huggingface/datasets/issues/7727
| 3,295,718,578 |
I_kwDODunzps7EcKyy
| 7,727 |
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
{
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null |
NONE
| null | null | null | null |
### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will fail with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine.
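For illustration, a hypothetical sketch of the normalization the hub-backed resolver could apply: stripping the leading `./` before joining the pattern onto the repo URL. `resolve_pattern` is a made-up helper for this sketch, not the `datasets` implementation.

```python
import posixpath

def resolve_pattern(base: str, pattern: str) -> str:
    # normpath collapses a leading "./" so the join produces a clean URL
    if pattern.startswith("./"):
        pattern = posixpath.normpath(pattern)
    return posixpath.join(base, pattern)

resolve_pattern("hf://datasets/repoid/filesystem_path", "./images/xyz/*.jpg")
# -> "hf://datasets/repoid/filesystem_path/images/xyz/*.jpg"
```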
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
datasets 4.0.0 and datasets 3.4.0 both reproduce the issue.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7726
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7726/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7726/events
|
https://github.com/huggingface/datasets/pull/7726
| 3,293,789,832 |
PR_kwDODunzps6iO_oF
| 7,726 |
fix(webdataset): don't .lower() field_name
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 3 | 2025-08-05T16:57:09 | 2025-08-20T16:35:55 | 2025-08-20T16:35:55 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7726",
"html_url": "https://github.com/huggingface/datasets/pull/7726",
"diff_url": "https://github.com/huggingface/datasets/pull/7726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7726.patch",
"merged_at": "2025-08-20T16:35:55"
}
|
This fixes cases where keys have uppercase identifiers.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7724/events
|
https://github.com/huggingface/datasets/issues/7724
| 3,292,315,241 |
I_kwDODunzps7EPL5p
| 7,724 |
Can not stepinto load_dataset.py?
|
{
"login": "micklexqg",
"id": 13776012,
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micklexqg",
"html_url": "https://github.com/micklexqg",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null |
NONE
| null | null | null | null |
I set a breakpoint in `load_dataset.py` and tried to debug my data-loading code, but execution does not stop at any breakpoint. Is it the case that `load_dataset.py` cannot be stepped into?
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7723/events
|
https://github.com/huggingface/datasets/issues/7723
| 3,289,943,261 |
I_kwDODunzps7EGIzd
| 7,723 |
Don't remove `trust_remote_code` arg!!!
|
{
"login": "autosquid",
"id": 758925,
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/autosquid",
"html_url": "https://github.com/autosquid",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"repos_url": "https://api.github.com/users/autosquid/repos",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null |
NONE
| null | null | null | null |
### Feature request
Please add the `trust_remote_code` arg back. Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios.
### Motivation
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios.
### Your contribution
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7722
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7722/events
|
https://github.com/huggingface/datasets/issues/7722
| 3,289,741,064 |
I_kwDODunzps7EFXcI
| 7,722 |
Out of memory even though using load_dataset(..., streaming=True)
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null |
NONE
| null | null | null | null |
### Describe the bug
I am iterating over a large dataset that I load with `streaming=True` to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time until I eventually run into an OOM.
### Steps to reproduce the bug
```python
import os

import soundfile as sf
from datasets import load_dataset
from tqdm import tqdm

# NSFW_TARGET_FOLDER is defined elsewhere in the original script
ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f"audio{i}.wav")
    try:
        sf.write(target_file, sample["audio"]["array"], samplerate=sample["audio"]["sampling_rate"])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect a small memory footprint, with memory freed after each iteration of the loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printing the sample, but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7721/events
|
https://github.com/huggingface/datasets/issues/7721
| 3,289,426,104 |
I_kwDODunzps7EEKi4
| 7,721 |
Bad split error message when using percentages
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 2 | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null |
NONE
| null | null | null | null |
### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library raises this error:
```
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
```
Edit: the same happens with a split like `train[:90000]`.
### Steps to reproduce the bug
```
for split in range(10):
split_str = f"train[{split*10}%:{(split+1)*10}%]"
print(f"Processing split {split_str}...")
ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
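Note that sliced splits may simply not be supported with `streaming=True` (streaming generally expects a plain split name such as `train`). For non-streaming loads, here is a pure-Python sketch of how 10% boundaries could partition a dataset without gaps or overlaps; the round-to-nearest boundary rule is an assumption (`datasets` documents several rounding modes).

```python
def pct_to_index(pct: int, num_examples: int) -> int:
    # Map an absolute percent boundary to a row index (assumed rounding rule).
    return round(pct / 100 * num_examples)

def ten_pct_bounds(num_examples: int) -> list[tuple[int, int]]:
    # Ten contiguous [start, end) slices covering every row exactly once.
    return [(pct_to_index(p, num_examples), pct_to_index(p + 10, num_examples))
            for p in range(0, 100, 10)]

bounds = ten_pct_bounds(104)  # works even though 104 is not divisible by 10
```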
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
python 3.12.11
ubuntu 24
datasets 4.0.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7720/events
|
https://github.com/huggingface/datasets/issues/7720
| 3,287,150,513 |
I_kwDODunzps7D7e-x
| 7,720 |
Datasets 4.0 map function causing column not found
|
{
"login": "Darejkal",
"id": 55143337,
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darejkal",
"html_url": "https://github.com/Darejkal",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 3 | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null |
NONE
| null | null | null | null |
### Describe the bug
A column added by `map` is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction below. After running `get_total_audio_length`, it errors out because `data` has no `duration` column:
```python
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
The new `datasets.Dataset` instance should have the new column attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7719/events
|
https://github.com/huggingface/datasets/issues/7719
| 3,285,928,491 |
I_kwDODunzps7D20or
| 7,719 |
Specify dataset columns types in typehint
|
{
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null |
NONE
| null | null | null | null |
### Feature request
Make `Dataset` optionally generic so it can be used with type annotations, as was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we use a lot of `Dataset` objects, but they are poorly covered by type hints. E.g., we can specify this for a `DataLoader`:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for `datasets` we can only document the expected columns in comments:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
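A minimal sketch of what this could look like, using a hypothetical `TypedDataset` wrapper (the name and class are illustrative only, not part of the `datasets` API) that carries its row type the same way `DataLoader[T]` does:

```python
from typing import Generic, TypedDict, TypeVar

T = TypeVar("T")


class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]


class TypedDataset(Generic[T]):
    """Hypothetical generic Dataset stand-in: stores columns and exposes
    the row type T to type checkers, mirroring DataLoader[T]."""

    def __init__(self, columns: T) -> None:
        self.columns = columns

    def __getitem__(self, key: str):
        return self.columns[key]  # type: ignore[index]


def queries_loader() -> TypedDataset[QueryInput]:
    # Type checkers now know this dataset has `query` and `instruction` columns
    return TypedDataset({"query": ["q1"], "instruction": ["i1"]})


ds = queries_loader()
print(ds["query"])  # ['q1']
```

With this shape, `mypy`/`pyright` can flag access to a column that is not declared in the `TypedDict`, which is the behavior the feature request is after.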
### Your contribution
I can create a draft implementation.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7718
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7718/events
|
https://github.com/huggingface/datasets/pull/7718
| 3,284,221,177 |
PR_kwDODunzps6hvJ6R
| 7,718 |
add support for pyarrow string view in features
|
{
"login": "onursatici",
"id": 5051569,
"node_id": "MDQ6VXNlcjUwNTE1Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/onursatici",
"html_url": "https://github.com/onursatici",
"followers_url": "https://api.github.com/users/onursatici/followers",
"following_url": "https://api.github.com/users/onursatici/following{/other_user}",
"gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/onursatici/subscriptions",
"organizations_url": "https://api.github.com/users/onursatici/orgs",
"repos_url": "https://api.github.com/users/onursatici/repos",
"events_url": "https://api.github.com/users/onursatici/events{/privacy}",
"received_events_url": "https://api.github.com/users/onursatici/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-01T14:58:39 | 2025-08-13T13:09:44 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7718",
"html_url": "https://github.com/huggingface/datasets/pull/7718",
"diff_url": "https://github.com/huggingface/datasets/pull/7718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7718.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7748
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7748/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7748/events
|
https://github.com/huggingface/datasets/pull/7748
| 3,347,137,663 |
PR_kwDODunzps6k-adX
| 7,748 |
docs: Streaming best practices
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-23T00:18:43 | 2025-08-23T00:18:43 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7748",
"html_url": "https://github.com/huggingface/datasets/pull/7748",
"diff_url": "https://github.com/huggingface/datasets/pull/7748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7748.patch",
"merged_at": null
}
|
Add a new 'Streaming best practices' page with practical patterns and pitfalls for large-scale/production use of IterableDataset. Includes examples for batched map with remove_columns, deterministic shuffling with set_epoch, multi-worker sharding, checkpoint/resume, and persistence to Parquet/Hub. Linked from How-to > General usage, next to Stream.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7747
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7747/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7747/events
|
https://github.com/huggingface/datasets/pull/7747
| 3,347,098,038 |
PR_kwDODunzps6k-Rtd
| 7,747 |
Add wikipedia-2023-redirects dataset
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-22T23:49:53 | 2025-08-22T23:49:53 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7747",
"html_url": "https://github.com/huggingface/datasets/pull/7747",
"diff_url": "https://github.com/huggingface/datasets/pull/7747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7747.patch",
"merged_at": null
}
|
Title: Add wikipedia-2023-redirects dataset (redirect resolution + pageviews)
Summary
- New dataset loader: wikipedia_2023_redirects
- Canonical Wikipedia pages enriched with:
- redirects (aliases pointing to the page)
- 2023 pageviews (aggregated)
- Streaming support; robust parsing; license notes included
- Tests with tiny dummy data (XML + TSVs); covers streaming
Motivation
RAG/retrieval often benefits from:
- Query expansion via redirect aliases
- Popularity prior via pageviews
This loader offers a practical, maintenance-light way to access canonical pages alongside their redirect aliases and 2023 pageview totals.
Features
- id: string
- title: string
- url: string
- text: string
- redirects: list[string]
- pageviews_2023: int32
- timestamp: string
Licensing
- Wikipedia text: CC BY-SA 3.0 (attribution and share-alike apply)
- Pageviews: public domain
The PR docs mention both, and the module docstring cites sources.
Notes
- The URLs in _get_urls_for_config are wired to dummy files for tests. In production, these would point to Wikimedia dumps:
- XML page dumps: https://dumps.wikimedia.org/
- Pageviews: https://dumps.wikimedia.org/other/pageviews/
- The schema is intentionally simple and stable. Pageview aggregation is per-title sum across 2023.
Testing
- make style && make quality
- pytest -q tests/test_dataset_wikipedia_2023_redirects.py
Example
```python
from datasets import load_dataset
ds = load_dataset("wikipedia_2023_redirects", split="train")
print(ds[0]["title"], ds[0]["redirects"][:5], ds[0]["pageviews_2023"])
```
Acknowledgements
- Wikipedia/Wikimedia Foundation for the source data
- Hugging Face Datasets for the dataset infrastructure
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7746
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7746/events
|
https://github.com/huggingface/datasets/issues/7746
| 3,345,391,211 |
I_kwDODunzps7HZp5r
| 7,746 |
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
{
"login": "Awesome075",
"id": 187888489,
"node_id": "U_kgDOCzLzaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Awesome075",
"html_url": "https://github.com/Awesome075",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-22T12:52:03 | 2025-08-23T12:34:39 | null |
NONE
| null | null | null | null |
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
Could the maintainers please either guide me through updating the official `multi_news` dataset or update it themselves to use this working Parquet version? This would involve updating the canonical pointer for "multi_news" to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7745/events
|
https://github.com/huggingface/datasets/issues/7745
| 3,345,286,773 |
I_kwDODunzps7HZQZ1
| 7,745 |
Audio mono argument no longer supported, despite class documentation
|
{
"login": "jheitz",
"id": 5666041,
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitz",
"html_url": "https://github.com/jheitz",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"repos_url": "https://api.github.com/users/jheitz/repos",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-22T12:15:41 | 2025-08-24T18:22:41 | null |
NONE
| null | null | null | null |
### Describe the bug
The `Audio` feature no longer accepts the `mono` argument, even though the class documentation still describes it. Either the documentation should be updated, or the flag (and the corresponding logic to convert the audio to mono) should be re-introduced.
### Steps to reproduce the bug
`Audio(sampling_rate=16000, mono=True)` raises the error
`TypeError: Audio.__init__() got an unexpected keyword argument 'mono'`
However, the class documentation says:
Args:
sampling_rate (`int`, *optional*):
Target sampling rate. If `None`, the native sampling rate is used.
mono (`bool`, defaults to `True`):
Whether to convert the audio signal to mono by averaging samples across
channels.
[...]
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
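In the meantime, the downmix can be done manually after decoding, e.g. by averaging channels with NumPy. A sketch, assuming a decoded array shaped `(channels, samples)` (`datasets` may already return a 1-D array for mono sources):

```python
import numpy as np


def to_mono(audio_array: np.ndarray) -> np.ndarray:
    """Average channels to mono; pass 1-D (already-mono) input through."""
    if audio_array.ndim == 1:
        return audio_array
    return audio_array.mean(axis=0)


stereo = np.array([[1.0, 3.0], [3.0, 5.0]])  # 2 channels, 2 samples
print(to_mono(stereo))  # [2. 4.]
```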
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7744/events
|
https://github.com/huggingface/datasets/issues/7744
| 3,343,510,686 |
I_kwDODunzps7HSeye
| 7,744 |
dtype: ClassLabel is not parsed correctly in `features.py`
|
{
"login": "cmatKhan",
"id": 43553003,
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmatKhan",
"html_url": "https://github.com/cmatKhan",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-21T23:28:50 | 2025-08-21T23:28:50 | null |
NONE
| null | null | null | null |
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string` there to avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
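One possible fix (a sketch, not an actual patch) is to make the camel-case conversion upper-case only the first character of each segment instead of calling `str.capitalize()`, which lowercases the tail. That way `ClassLabel` round-trips unchanged while `class_label` still converts correctly:

```python
import itertools
import re

_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")


def snakecase_to_camelcase(name: str) -> str:
    """Convert snake_case to CamelCase without lowercasing existing capitals."""
    parts = _single_underscore_re.split(name)
    parts = [_multiple_underscores_re.split(p) for p in parts]
    # str.capitalize() would turn "ClassLabel" into "Classlabel";
    # upper-casing only the first character preserves interior capitals.
    return "".join(
        p[0].upper() + p[1:] for p in itertools.chain.from_iterable(parts) if p
    )


print(snakecase_to_camelcase("class_label"))  # ClassLabel
print(snakecase_to_camelcase("ClassLabel"))   # ClassLabel
```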
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7743
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7743/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7743/events
|
https://github.com/huggingface/datasets/pull/7743
| 3,342,611,297 |
PR_kwDODunzps6ku8Jw
| 7,743 |
Refactor HDF5 and preserve tree structure
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-21T17:28:17 | 2025-08-25T18:04:33 | null |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7743",
"html_url": "https://github.com/huggingface/datasets/pull/7743",
"diff_url": "https://github.com/huggingface/datasets/pull/7743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7743.patch",
"merged_at": null
}
|
Closes #7741. Followup to #7690
- Recursive parsing and feature inference, to preserve the tree structure of the file. Note this means we now visit all links in the file. It also means we have to call `combine_chunks` on any large non-root datasets.
- Support for `complex64` (two `float32`s, used to be converted to two `float64`s)
- Support for ndim complex, compound, more field types for compound (due to reusing the main parser, compound types are treated like groups)
- Cleaned up varlen support
- Always do feature inference and always cast to features (used to cast to schema)
- Updated tests to use `load_dataset` instead of internal APIs
- Removed `columns` from the config; filtering now requires passing `Features` (i.e., types must be specified)
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7742
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7742/events
|
https://github.com/huggingface/datasets/issues/7742
| 3,336,704,928 |
I_kwDODunzps7G4hOg
| 7,742 |
module 'pyarrow' has no attribute 'PyExtensionType'
|
{
"login": "mnedelko",
"id": 6106392,
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnedelko",
"html_url": "https://github.com/mnedelko",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-20T06:14:33 | 2025-08-20T06:23:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When importing certain libraries, users will encounter the following error, which can be traced back to the `datasets` library:
`module 'pyarrow' has no attribute 'PyExtensionType'`
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs for the following reason; I will submit a PR with the fix below.
**Issue Reason**
PyArrow 21.0.0 no longer provides `PyExtensionType`: the class was deprecated in PyArrow 13.0.0 and removed in later releases, with `pa.ExtensionType` as its supported replacement.
**Issue Solution**
Making the following changes to the installed library files should temporarily resolve the issue.
I will submit a PR to the `datasets` library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
> 521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default=“Array5D”, init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
!pip install ragas
from ragas import evaluate
### Expected behavior
The required package installs and imports without issue.
### Environment info
In Jupyter Notebook.
venv
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7741/events
|
https://github.com/huggingface/datasets/issues/7741
| 3,334,848,656 |
I_kwDODunzps7GxcCQ
| 7,741 |
Preserve tree structure when loading HDF5
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-19T15:42:05 | 2025-08-22T00:41:46 | null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
### Your contribution
I'll open a PR (#7743)
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7740
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7740/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7740/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7740/events
|
https://github.com/huggingface/datasets/pull/7740
| 3,334,693,293 |
PR_kwDODunzps6kUMKM
| 7,740 |
Document HDF5 support
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-19T14:53:04 | 2025-08-21T19:56:58 | null |
CONTRIBUTOR
| null | null | true |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7740",
"html_url": "https://github.com/huggingface/datasets/pull/7740",
"diff_url": "https://github.com/huggingface/datasets/pull/7740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7740.patch",
"merged_at": null
}
|
I think these are at least the main places where we should put content. Ideally the content is not just repeated verbatim in the final version
ref #7690
- [ ] Wait for #7743 to land
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7740/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7739
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7739/events
|
https://github.com/huggingface/datasets/issues/7739
| 3,331,537,762 |
I_kwDODunzps7Gkzti
| 7,739 |
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
{
"login": "evmaki",
"id": 15764776,
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evmaki",
"html_url": "https://github.com/evmaki",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"repos_url": "https://api.github.com/users/evmaki/repos",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-18T17:28:38 | 2025-08-18T17:28:38 | null |
NONE
| null | null | null | null |
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, the whole collection becomes unusable, and we have no way of "fixing" it: I can load the dataset in 4.0.0, but I can't re-save it with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7738
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7738/events
|
https://github.com/huggingface/datasets/issues/7738
| 3,328,948,690 |
I_kwDODunzps7Ga7nS
| 7,738 |
Allow saving multi-dimensional ndarray with dynamic shapes
|
{
"login": "ryan-minato",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-minato",
"html_url": "https://github.com/ryan-minato",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 1 | 2025-08-18T02:23:51 | 2025-08-22T03:15:19 | null |
NONE
| null | null | null | null |
### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data whose dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
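The proposed record layout can be sketched in plain Python. These are hypothetical `encode`/`decode` helpers, not part of datasets; the field names follow the example above, and plain lists stand in for ndarray buffers so the sketch runs without numpy:

```python
# Sketch of the proposed {"shape", "dtype", "data"} storage format for
# variable-shape arrays (hypothetical helpers, for illustration only).

def encode(nested, dtype):
    """Flatten a nested list into a {"shape", "dtype", "data"} record."""
    shape = []
    node = nested
    while isinstance(node, list):
        shape.append(len(node))
        node = node[0]
    flat = nested
    for _ in range(len(shape) - 1):
        # One pass per extra dimension: merge sublists into one flat list.
        flat = [x for sub in flat for x in sub]
    return {"shape": tuple(shape), "dtype": dtype, "data": flat}

def decode(record):
    """Rebuild the nested list from the flat record."""
    data, shape = record["data"], record["shape"]
    for dim in reversed(shape[1:]):
        # Re-chunk the flat data, innermost dimension first.
        data = [data[i:i + dim] for i in range(0, len(data), dim)]
    return data

rec = encode([[1, 2, 3], [4, 5, 6]], "uint8")
assert rec["shape"] == (2, 3)
assert decode(rec) == [[1, 2, 3], [4, 5, 6]]
```

Because the shape travels with each record rather than living in the schema, every row in the column can have different dimensions.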
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7737
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7737/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7737/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7737/events
|
https://github.com/huggingface/datasets/pull/7737
| 3,318,670,801 |
PR_kwDODunzps6jf5io
| 7,737 |
docs: Add column overwrite example to batch mapping guide
|
{
"login": "Sanjaykumar030",
"id": 183703408,
"node_id": "U_kgDOCvMXcA",
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanjaykumar030",
"html_url": "https://github.com/Sanjaykumar030",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-13T14:20:19 | 2025-08-25T17:54:00 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7737",
"html_url": "https://github.com/huggingface/datasets/pull/7737",
"diff_url": "https://github.com/huggingface/datasets/pull/7737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7737.patch",
"merged_at": null
}
|
This PR adds a complementary example showing the **column-overwriting** pattern, which is both more direct and more flexible for many transformations.
### Proposed Change
The original `remove_columns` example remains untouched. Below it, this PR introduces an alternative approach that overwrites an existing column during batch mapping.
This teaches users a core `.map()` capability for in-place transformations without extra intermediate steps.
**New Example:**
> ```python
> >>> from datasets import Dataset
> >>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
> # Overwrite "a" directly to duplicate each value
> >>> duplicated_dataset = dataset.map(
> ... lambda batch: {"a": [x for x in batch["a"] for _ in range(2)]},
> ... batched=True
> ... )
> >>> duplicated_dataset
> Dataset({
> features: ['a'],
> num_rows: 6
> })
> >>> duplicated_dataset["a"]
> [0, 0, 1, 1, 2, 2]
> ```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7737/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7736
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7736/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7736/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7736/events
|
https://github.com/huggingface/datasets/pull/7736
| 3,311,618,096 |
PR_kwDODunzps6jIWQ3
| 7,736 |
Fix type hint `train_test_split`
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 1 | 2025-08-11T20:46:53 | 2025-08-13T13:13:50 | 2025-08-13T13:13:48 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7736",
"html_url": "https://github.com/huggingface/datasets/pull/7736",
"diff_url": "https://github.com/huggingface/datasets/pull/7736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7736.patch",
"merged_at": "2025-08-13T13:13:48"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7736/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7735
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7735/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7735/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7735/events
|
https://github.com/huggingface/datasets/pull/7735
| 3,310,514,828 |
PR_kwDODunzps6jEq5w
| 7,735 |
fix largelist repr
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 1 | 2025-08-11T15:17:42 | 2025-08-11T15:39:56 | 2025-08-11T15:39:54 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7735",
"html_url": "https://github.com/huggingface/datasets/pull/7735",
"diff_url": "https://github.com/huggingface/datasets/pull/7735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7735.patch",
"merged_at": "2025-08-11T15:39:54"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7735/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7734
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7734/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7734/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7734/events
|
https://github.com/huggingface/datasets/pull/7734
| 3,306,519,239 |
PR_kwDODunzps6i4pmA
| 7,734 |
Fixing __getitem__ of datasets which behaves inconsistent to documentation when setting _format_type to None
|
{
"login": "awagen",
"id": 40367113,
"node_id": "MDQ6VXNlcjQwMzY3MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awagen",
"html_url": "https://github.com/awagen",
"followers_url": "https://api.github.com/users/awagen/followers",
"following_url": "https://api.github.com/users/awagen/following{/other_user}",
"gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
"organizations_url": "https://api.github.com/users/awagen/orgs",
"repos_url": "https://api.github.com/users/awagen/repos",
"events_url": "https://api.github.com/users/awagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/awagen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 2 | 2025-08-09T15:52:54 | 2025-08-17T07:23:00 | 2025-08-17T07:23:00 |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7734",
"html_url": "https://github.com/huggingface/datasets/pull/7734",
"diff_url": "https://github.com/huggingface/datasets/pull/7734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7734.patch",
"merged_at": null
}
|
Setting _format_type to None should return plain Python objects, but as of 4.0.0 it returns Column. This breaks libraries such as sentence-transformers (e.g. during generation of hard negatives) where plain Python objects are expected.
|
{
"login": "awagen",
"id": 40367113,
"node_id": "MDQ6VXNlcjQwMzY3MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awagen",
"html_url": "https://github.com/awagen",
"followers_url": "https://api.github.com/users/awagen/followers",
"following_url": "https://api.github.com/users/awagen/following{/other_user}",
"gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
"organizations_url": "https://api.github.com/users/awagen/orgs",
"repos_url": "https://api.github.com/users/awagen/repos",
"events_url": "https://api.github.com/users/awagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/awagen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7734/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7733
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7733/events
|
https://github.com/huggingface/datasets/issues/7733
| 3,304,979,299 |
I_kwDODunzps7E_ftj
| 7,733 |
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
{
"login": "dennys246",
"id": 27898715,
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennys246",
"html_url": "https://github.com/dennys246",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"repos_url": "https://api.github.com/users/dennys246/repos",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-08T19:10:58 | 2025-08-12T00:54:58 | null |
NONE
| null | null | null | null |
### Describe the bug
I’m not sure if this is a bug or a feature and I just don’t fully understand how dataset loading is meant to work, but it appears there may be a bug with how locally stored Image() features are being accessed. I’ve uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack) but I’ve run into a ton of trouble trying to have the images handled properly (at least in the way I’d expect them to be handled).
I find that I cannot use relative paths for loading images remotely from the Hugging Face repo or from a local repository. Any time I do, it simply prepends my current working directory to the relative path. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the dataset object structure, which I cannot imagine you intended. So I have to use URLs, since an absolute path on my system obviously wouldn’t work for others. The URL works OK, but despite having the dataset locally downloaded, it appears to be redownloaded every time I train my snowGAN model on it (and oftentimes I run into HTTP errors for over-requesting the data).
Or maybe image relative paths aren't intended to be loaded directly through your datasets library as images and should be kept as strings for the user to handle? If so I feel like you’re missing out on some pretty seamless functionality
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
    - name: image
      dtype: Image
    - name: file_path
      dtype: Image
---
```
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Access one of the samples and you’ll get an error that the image was not found in current/working/directory/preprocessed/cores/image_1.png, showing that it’s simply looking in the current working directory + the relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
image = PIL.Image.open(path)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and Image() to load the locally hosted data using the path/to/local/rocky_mountain_snowpack/ root (which I pass to my datasets.load_dataset() call, or which you resolve on the backend) + the relative path.
Instead it appears to load from my current working directory + the relative path.
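The expected resolution rule can be sketched as follows. `resolve_image_path` is a hypothetical helper used only to illustrate the report, not datasets internals; `PurePosixPath` keeps the example deterministic across platforms:

```python
# Sketch of the behavior the report expects: resolve a stored relative
# image path against the dataset root, not the current working directory.

from pathlib import PurePosixPath

def resolve_image_path(dataset_root: str, relative_path: str) -> PurePosixPath:
    """Join the dataset root and the relative path stored in the dataset."""
    return PurePosixPath(dataset_root) / relative_path

p = resolve_image_path(
    "path/to/local/rocky_mountain_snowpack",
    "preprocessed/cores/image_1.png",
)
print(p)  # path/to/local/rocky_mountain_snowpack/preprocessed/cores/image_1.png
```

The reported behavior corresponds to joining against `Path.cwd()` instead of `dataset_root`, which only works when the working directory happens to be the dataset root.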
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7732
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7732/events
|
https://github.com/huggingface/datasets/issues/7732
| 3,304,673,383 |
I_kwDODunzps7E-VBn
| 7,732 |
webdataset: key errors when `field_name` has upper case characters
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-08T16:56:42 | 2025-08-08T16:56:42 | null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When using a webdataset, each sample can be a collection of different "fields", like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
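The grouping convention above can be sketched in plain Python. This is a standalone illustration of the webdataset naming scheme, not the HF loader code: files sharing the prefix before the first dot in the basename form one sample, and the remainder (`left.jpg`, `right.jpg`, `json`) is the field name:

```python
# Sketch of the webdataset sample/field convention (illustration only,
# not the Hugging Face webdataset loader).

import os
from collections import defaultdict

def group_samples(paths):
    """Group file paths into samples keyed by the basename prefix."""
    samples = defaultdict(dict)
    for path in paths:
        base = os.path.basename(path)
        # Everything before the first dot is the sample key;
        # everything after it is the field name.
        key, _, field_name = base.partition(".")
        samples[key][field_name] = path
    return dict(samples)

files = [
    "images17/image194.left.jpg",
    "images17/image194.right.jpg",
    "images17/image194.json",
]
print(group_samples(files))
```

A case-sensitive field name such as `Left.jpg` only round-trips if the loader preserves its case when building and looking up these per-sample dicts, which is where the reported KeyError comes in.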
if the field name contains upper-case characters, the HF webdataset integration throws a KeyError when trying to load the dataset,
e.g. from this dataset (now updated so that it no longer throws the error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
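A minimal sketch (an assumption about the mechanism, not the actual `datasets` code) of where the case sensitivity bites: the example dict is keyed by the raw field name, so lowercasing the name before the decoder lookup misses any upper-case key:

```python
def split_member_name(name):
    """Split a tar member name into (example key, field name) at the first dot."""
    example_key, field_name = name.split(".", 1)
    return example_key, field_name

key, field = split_member_name("images17/image194.left.jpg")
assert (key, field) == ("images17/image194", "left.jpg")

# If the example dict is keyed by the raw field name but the lookup
# lowercases it first, any upper-case field name is missed:
example = {"IMU_magnetometer_value.npy": b"..."}
assert "IMU_magnetometer_value.npy".lower() not in example  # -> KeyError path
```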
### Steps to reproduce the bug
A unit test was added in https://github.com/huggingface/datasets/pull/7726;
it fails without the fix proposed in the same PR.
### Expected behavior
Not throwing a key error.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7731/events
|
https://github.com/huggingface/datasets/issues/7731
| 3,303,637,075 |
I_kwDODunzps7E6YBT
| 7,731 |
Add the possibility of a backend for audio decoding
|
{
"login": "intexcor",
"id": 142020129,
"node_id": "U_kgDOCHcOIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/intexcor",
"html_url": "https://github.com/intexcor",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"repos_url": "https://api.github.com/users/intexcor/repos",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 2 | 2025-08-08T11:08:56 | 2025-08-20T16:29:33 | null |
NONE
| null | null | null | null |
### Feature request
Add the possibility of choosing a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install in environments such as Colab. Therefore, I suggest adding a decoder selection option when loading the dataset.
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed.
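A minimal sketch of what a pluggable decoder selection could look like (a hypothetical API shape — none of these names exist in `datasets`), so a soundfile-based backend could be chosen when ffmpeg is unavailable:

```python
# Hypothetical decoder registry; "register_decoder" and "DECODERS" are
# illustrative names, not part of the datasets library.
DECODERS = {}

def register_decoder(name):
    def wrap(fn):
        DECODERS[name] = fn
        return fn
    return wrap

@register_decoder("dummy")
def dummy_decoder(raw_bytes):
    # A trivial decoder standing in for a soundfile- or torchcodec-based one.
    return {"array": list(raw_bytes), "sampling_rate": 16_000}

decoded = DECODERS["dummy"](b"\x01\x02")
assert decoded == {"array": [1, 2], "sampling_rate": 16_000}
```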
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7730
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7730/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7730/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7730/events
|
https://github.com/huggingface/datasets/pull/7730
| 3,301,907,242 |
PR_kwDODunzps6iqTZI
| 7,730 |
Grammar fix: correct "showed" to "shown" in fingerprint.py
|
{
"login": "brchristian",
"id": 2460418,
"node_id": "MDQ6VXNlcjI0NjA0MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brchristian",
"html_url": "https://github.com/brchristian",
"followers_url": "https://api.github.com/users/brchristian/followers",
"following_url": "https://api.github.com/users/brchristian/following{/other_user}",
"gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brchristian/subscriptions",
"organizations_url": "https://api.github.com/users/brchristian/orgs",
"repos_url": "https://api.github.com/users/brchristian/repos",
"events_url": "https://api.github.com/users/brchristian/events{/privacy}",
"received_events_url": "https://api.github.com/users/brchristian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 0 | 2025-08-07T21:22:56 | 2025-08-13T18:34:30 | 2025-08-13T13:12:56 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7730",
"html_url": "https://github.com/huggingface/datasets/pull/7730",
"diff_url": "https://github.com/huggingface/datasets/pull/7730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7730.patch",
"merged_at": "2025-08-13T13:12:56"
}
|
This PR corrects a small grammatical issue in the outputs of fingerprint.py:
```diff
- "This warning is only showed once. Subsequent hashing failures won't be showed."
+ "This warning is only shown once. Subsequent hashing failures won't be shown."
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7730/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7729/events
|
https://github.com/huggingface/datasets/issues/7729
| 3,300,672,954 |
I_kwDODunzps7EvEW6
| 7,729 |
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
{
"login": "SaleemMalikAI",
"id": 115183904,
"node_id": "U_kgDOBt2RIA",
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaleemMalikAI",
"html_url": "https://github.com/SaleemMalikAI",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T14:07:23 | 2025-08-07T14:07:23 | null |
NONE
| null | null | null | null |
> Hi, is there any solution for that error? I tried installing this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but how do I install a PyTorch version that is built for GPU?
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7728/events
|
https://github.com/huggingface/datasets/issues/7728
| 3,298,854,904 |
I_kwDODunzps7EoIf4
| 7,728 |
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
{
"login": "efsotr",
"id": 104755879,
"node_id": "U_kgDOBj5ypw",
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efsotr",
"html_url": "https://github.com/efsotr",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"repos_url": "https://api.github.com/users/efsotr/repos",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T04:04:50 | 2025-08-07T07:31:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When loading a dataset, the split info derived from the files passed via `data_files` does not overwrite the split info recorded in the original dataset metadata.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
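For context, a minimal sketch (an assumption about the mechanism, not the actual `datasets` code) of why the first error fires: the expected split sizes come from the full dataset's recorded metadata, while the recorded sizes reflect only the shards selected via `data_files`. Passing `verification_mode="no_checks"` to `load_dataset` is a possible workaround to skip this comparison.

```python
# Numbers taken from the error message above: full "allenai/c4" (en) metadata
# vs. the single-shard subset selected via data_files.
expected = {"train": 364_868_892, "validation": 364_608}
recorded = {"train": 356_317, "validation": 45_576}

# Sketch of the verification step: any split whose example count differs
# from the recorded metadata would trigger NonMatchingSplitsSizesError.
mismatched = [name for name in expected if expected[name] != recorded.get(name)]
assert mismatched == ["train", "validation"]
```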
### Expected behavior
No error
### Environment info
datasets 4.0.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7727/events
|
https://github.com/huggingface/datasets/issues/7727
| 3,295,718,578 |
I_kwDODunzps7EcKyy
| 7,727 |
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
{
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null |
NONE
| null | null | null | null |
### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
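As a sketch of the expected normalization (an assumption about where the joining goes wrong, not the actual `datasets` code), stripping the `./` prefix from the relative pattern before joining it onto the repo root produces the intended URL:

```python
import posixpath

repo_root = "hf://datasets/repoid/filesystem_path"
pattern = "./images/xyz/*.jpg"

# Normalizing the relative pattern before joining avoids the bad URL
# hf://datasets/repoid/filesystem_path/./images/xyz/*.jpg
joined = posixpath.join(repo_root, posixpath.normpath(pattern))
assert joined == "hf://datasets/repoid/filesystem_path/images/xyz/*.jpg"
```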
### Environment info
Reproduces with both datasets 4.0.0 and datasets 3.4.0.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7726
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7726/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7726/events
|
https://github.com/huggingface/datasets/pull/7726
| 3,293,789,832 |
PR_kwDODunzps6iO_oF
| 7,726 |
fix(webdataset): don't .lower() field_name
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 3 | 2025-08-05T16:57:09 | 2025-08-20T16:35:55 | 2025-08-20T16:35:55 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7726",
"html_url": "https://github.com/huggingface/datasets/pull/7726",
"diff_url": "https://github.com/huggingface/datasets/pull/7726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7726.patch",
"merged_at": "2025-08-20T16:35:55"
}
|
This fixes cases where keys have upper case identifiers
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7724/events
|
https://github.com/huggingface/datasets/issues/7724
| 3,292,315,241 |
I_kwDODunzps7EPL5p
| 7,724 |
Can not stepinto load_dataset.py?
|
{
"login": "micklexqg",
"id": 13776012,
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micklexqg",
"html_url": "https://github.com/micklexqg",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null |
NONE
| null | null | null | null |
I set a breakpoint in `load_dataset.py` and try to debug my data loading code, but execution does not stop at any breakpoint. Can `load_dataset.py` not be stepped into?
<!-- Failed to upload "截图 2025-08-05 17-25-18.png" -->
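One common cause (an assumption here, since no environment details are given) is that the debugger sets the breakpoint in a different copy of the file than the one Python actually imports. Checking the imported module's path rules this out; the stdlib `json` module stands in for `datasets.load` below, in case `datasets` is not installed:

```python
import inspect
import json  # stand-in for datasets.load

# inspect.getsourcefile returns the path of the module Python imported;
# a breakpoint must be set in exactly this file for the debugger to stop.
path = inspect.getsourcefile(json)
assert path.endswith("__init__.py")
```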
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7723/events
|
https://github.com/huggingface/datasets/issues/7723
| 3,289,943,261 |
I_kwDODunzps7EGIzd
| 7,723 |
Don't remove `trust_remote_code` arg!!!
|
{
"login": "autosquid",
"id": 758925,
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/autosquid",
"html_url": "https://github.com/autosquid",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"repos_url": "https://api.github.com/users/autosquid/repos",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null |
NONE
| null | null | null | null |
### Feature request
Defaulting it to False is a nice balance; we need to be able to manually set it to True in certain scenarios!
Please add the `trust_remote_code` arg back!
### Motivation
Defaulting it to False is a nice balance; we need to be able to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance; we need to be able to manually set it to True in certain scenarios!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7722
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7722/events
|
https://github.com/huggingface/datasets/issues/7722
| 3,289,741,064 |
I_kwDODunzps7EFXcI
| 7,722 |
Out of memory even though using load_dataset(..., streaming=True)
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null |
NONE
| null | null | null | null |
### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```
import os
import soundfile as sf
from tqdm import tqdm
from datasets import load_dataset

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i,sample in enumerate(tqdm(ds)):
target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
try:
sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
except Exception as e:
print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect to have a small memory footprint and memory being freed after each iteration of the for loop. Instead the memory usage is increasing. I tried to remove the logic to write the sound file and just print the sample but the issue remains the same.
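As a mitigation sketch (assuming the growth comes from references kept alive across iterations, which is unverified), references can be dropped eagerly and garbage collection forced periodically while draining the stream:

```python
import gc

def drain(iterable, collect_every=1000):
    """Yield (index, item) while eagerly dropping references and collecting garbage."""
    for i, sample in enumerate(iterable):
        yield i, sample
        del sample  # drop our reference before the next iteration
        if i % collect_every == 0:
            gc.collect()

# Usage with a small stand-in iterable instead of a streaming dataset:
seen = [i for i, _ in drain(range(5), collect_every=2)]
assert seen == [0, 1, 2, 3, 4]
```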
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7721/events
|
https://github.com/huggingface/datasets/issues/7721
| 3,289,426,104 |
I_kwDODunzps7EEKi4
| 7,721 |
Bad split error message when using percentages
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 2 | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null |
NONE
| null | null | null | null |
### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
```
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
```
Edit: Same happens with a split like _train[:90000]_
### Steps to reproduce the bug
```python
for split in range(10):
split_str = f"train[{split*10}%:{(split+1)*10}%]"
print(f"Processing split {split_str}...")
ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
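Percent slices appear not to be applied in streaming mode, so a workaround sketch (assuming the total row count — here a hypothetical 900_000 — is known up front) is to compute the slice bounds yourself and emulate each 10% slice with `IterableDataset.skip`/`take`:

```python
# Hedged workaround sketch: emulate train[i*10%:(i+1)*10%] with skip/take
# on a streaming dataset. The total row count (900_000) is a placeholder.
def slice_bounds(total, parts):
    """Return (start, length) pairs covering `total` rows in `parts` chunks."""
    chunk = total // parts
    return [(i * chunk, chunk) for i in range(parts)]

bounds = slice_bounds(900_000, 10)
# for start, length in bounds:
#     ds = load_dataset("user/dataset", split="train", streaming=True)
#     part = ds.skip(start).take(length)
print(bounds[0], bounds[-1])  # (0, 90000) (810000, 90000)
```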
### Environment info
python 3.12.11
ubuntu 24
datasets 4.0.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7720/events
|
https://github.com/huggingface/datasets/issues/7720
| 3,287,150,513 |
I_kwDODunzps7D7e-x
| 7,720 |
Datasets 4.0 map function causing column not found
|
{
"login": "Darejkal",
"id": 55143337,
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darejkal",
"html_url": "https://github.com/Darejkal",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 3 | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null |
NONE
| null | null | null | null |
### Describe the bug
Column returned after mapping is not found in new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction: after running `get_total_audio_length`, the call errors out because `data` does not have a `duration` column.
```python
def compute_duration(x):
return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}
def get_total_audio_length(dataset):
data = dataset.map(compute_duration, num_proc=NUM_PROC)
print(data)
durations=data["duration"]
total_seconds = sum(durations)
return total_seconds
```
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7719/events
|
https://github.com/huggingface/datasets/issues/7719
| 3,285,928,491 |
I_kwDODunzps7D20or
| 7,719 |
Specify dataset columns types in typehint
|
{
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null |
NONE
| null | null | null | null |
### Feature request
Make `Dataset` optionally generic, so column types can be expressed in type annotations, like it was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of `Dataset` objects, but they're a bit poor in typehints. E.g. we can specify this for a dataloader:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for `datasets` we can only describe the columns in comments:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
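A minimal sketch of what the generic form could look like (hypothetical names throughout — `Dataset` itself is not generic today, so `TypedDataset` below only illustrates the requested shape):

```python
from typing import Generic, TypedDict, TypeVar

T = TypeVar("T")

class TypedDataset(Generic[T]):
    """Hypothetical stand-in for a generic datasets.Dataset: the type
    parameter T carries the column schema in annotations."""

class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]

def queries_dataset() -> TypedDataset[QueryInput]:
    # A real implementation would return a datasets.Dataset; this sketch
    # only demonstrates that the annotation carries the schema.
    return TypedDataset()

ds = queries_dataset()
```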
### Your contribution
I can create draft implementation
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7718
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7718/events
|
https://github.com/huggingface/datasets/pull/7718
| 3,284,221,177 |
PR_kwDODunzps6hvJ6R
| 7,718 |
add support for pyarrow string view in features
|
{
"login": "onursatici",
"id": 5051569,
"node_id": "MDQ6VXNlcjUwNTE1Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/onursatici",
"html_url": "https://github.com/onursatici",
"followers_url": "https://api.github.com/users/onursatici/followers",
"following_url": "https://api.github.com/users/onursatici/following{/other_user}",
"gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/onursatici/subscriptions",
"organizations_url": "https://api.github.com/users/onursatici/orgs",
"repos_url": "https://api.github.com/users/onursatici/repos",
"events_url": "https://api.github.com/users/onursatici/events{/privacy}",
"received_events_url": "https://api.github.com/users/onursatici/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-01T14:58:39 | 2025-08-13T13:09:44 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7718",
"html_url": "https://github.com/huggingface/datasets/pull/7718",
"diff_url": "https://github.com/huggingface/datasets/pull/7718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7718.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7748
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7748/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7748/events
|
https://github.com/huggingface/datasets/pull/7748
| 3,347,137,663 |
PR_kwDODunzps6k-adX
| 7,748 |
docs: Streaming best practices
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-23T00:18:43 | 2025-08-23T00:18:43 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7748",
"html_url": "https://github.com/huggingface/datasets/pull/7748",
"diff_url": "https://github.com/huggingface/datasets/pull/7748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7748.patch",
"merged_at": null
}
|
Add a new 'Streaming best practices' page with practical patterns and pitfalls for large-scale/production use of IterableDataset. Includes examples for batched map with remove_columns, deterministic shuffling with set_epoch, multi-worker sharding, checkpoint/resume, and persistence to Parquet/Hub. Linked from How-to > General usage, next to Stream.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7747
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7747/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7747/events
|
https://github.com/huggingface/datasets/pull/7747
| 3,347,098,038 |
PR_kwDODunzps6k-Rtd
| 7,747 |
Add wikipedia-2023-redirects dataset
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-22T23:49:53 | 2025-08-22T23:49:53 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7747",
"html_url": "https://github.com/huggingface/datasets/pull/7747",
"diff_url": "https://github.com/huggingface/datasets/pull/7747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7747.patch",
"merged_at": null
}
|
Title: Add wikipedia-2023-redirects dataset (redirect resolution + pageviews)
Summary
- New dataset loader: wikipedia_2023_redirects
- Canonical Wikipedia pages enriched with:
- redirects (aliases pointing to the page)
- 2023 pageviews (aggregated)
- Streaming support; robust parsing; license notes included
- Tests with tiny dummy data (XML + TSVs); covers streaming
Motivation
RAG/retrieval often benefits from:
- Query expansion via redirect aliases
- Popularity prior via pageviews
This loader offers a practical, maintenance-light way to access canonical pages alongside their redirect aliases and 2023 pageview totals.
Features
- id: string
- title: string
- url: string
- text: string
- redirects: list[string]
- pageviews_2023: int32
- timestamp: string
Licensing
- Wikipedia text: CC BY-SA 3.0 (attribution and share-alike apply)
- Pageviews: public domain
The PR docs mention both, and the module docstring cites sources.
Notes
- The URLs in _get_urls_for_config are wired to dummy files for tests. In production, these would point to Wikimedia dumps:
- XML page dumps: https://dumps.wikimedia.org/
- Pageviews: https://dumps.wikimedia.org/other/pageviews/
- The schema is intentionally simple and stable. Pageview aggregation is per-title sum across 2023.
Testing
- make style && make quality
- pytest -q tests/test_dataset_wikipedia_2023_redirects.py
Example
```python
from datasets import load_dataset
ds = load_dataset("wikipedia_2023_redirects", split="train")
print(ds[0]["title"], ds[0]["redirects"][:5], ds[0]["pageviews_2023"])
```
Acknowledgements
- Wikipedia/Wikimedia Foundation for the source data
- Hugging Face Datasets for the dataset infrastructure
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7746
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7746/events
|
https://github.com/huggingface/datasets/issues/7746
| 3,345,391,211 |
I_kwDODunzps7HZp5r
| 7,746 |
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
{
"login": "Awesome075",
"id": 187888489,
"node_id": "U_kgDOCzLzaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Awesome075",
"html_url": "https://github.com/Awesome075",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-22T12:52:03 | 2025-08-23T12:34:39 | null |
NONE
| null | null | null | null |
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
Could the maintainers please either guide me through updating, or themselves update, the official `multi_news` dataset to use this working Parquet version? This would involve pointing the canonical "multi_news" name at the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7745/events
|
https://github.com/huggingface/datasets/issues/7745
| 3,345,286,773 |
I_kwDODunzps7HZQZ1
| 7,745 |
Audio mono argument no longer supported, despite class documentation
|
{
"login": "jheitz",
"id": 5666041,
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitz",
"html_url": "https://github.com/jheitz",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"repos_url": "https://api.github.com/users/jheitz/repos",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-22T12:15:41 | 2025-08-24T18:22:41 | null |
NONE
| null | null | null | null |
### Describe the bug
The `Audio` feature no longer accepts the `mono` argument, even though the class documentation still lists it. Either update the documentation, or re-introduce the flag (and the corresponding logic to convert the audio to mono).
### Steps to reproduce the bug
Calling
```python
Audio(sampling_rate=16000, mono=True)
```
raises:
```
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
```
However, the class documentation says:
```
Args:
    sampling_rate (`int`, *optional*):
        Target sampling rate. If `None`, the native sampling rate is used.
    mono (`bool`, defaults to `True`):
        Whether to convert the audio signal to mono by averaging samples across
        channels.
    [...]
```
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
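Until the flag returns (or the docs are fixed), one hedged workaround is to average the channels yourself inside a `map` function. A plain-Python sketch of the averaging step (plain lists stand in for the decoded audio array; the `map` call in the comment is illustrative, not the library's documented API for this):

```python
def to_mono(channels):
    """Average equal-length per-channel sample lists into one mono track."""
    n = len(channels)
    return [sum(samples) / n for samples in zip(*channels)]

# e.g. (illustrative): ds.map(lambda ex: {"audio_mono": to_mono(ex["audio"]["array"])})
stereo = [[0.0, 1.0, 0.5], [1.0, 0.0, 0.5]]
print(to_mono(stereo))  # [0.5, 0.5, 0.5]
```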
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7744/events
|
https://github.com/huggingface/datasets/issues/7744
| 3,343,510,686 |
I_kwDODunzps7HSeye
| 7,744 |
dtype: ClassLabel is not parsed correctly in `features.py`
|
{
"login": "cmatKhan",
"id": 43553003,
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmatKhan",
"html_url": "https://github.com/cmatKhan",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-21T23:28:50 | 2025-08-21T23:28:50 | null |
NONE
| null | null | null | null |
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string` to use a different dtype and avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
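A possible fix (a sketch only, not the library's actual code) is to leave segments that already contain an interior uppercase letter untouched instead of `.capitalize()`-ing them down to `Classlabel`:

```python
import itertools
import re

_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")

def snakecase_to_camelcase_preserving(name):
    """Hypothetical fix: capitalize snake_case segments, but keep segments
    that already carry an interior uppercase letter (e.g. 'ClassLabel')."""
    parts = _single_underscore_re.split(name)
    parts = itertools.chain.from_iterable(_multiple_underscores_re.split(p) for p in parts)
    return "".join(
        p if any(c.isupper() for c in p[1:]) else p.capitalize()
        for p in parts
        if p != ""
    )

print(snakecase_to_camelcase_preserving("ClassLabel"))   # ClassLabel
print(snakecase_to_camelcase_preserving("class_label"))  # ClassLabel
```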
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7743
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7743/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7743/events
|
https://github.com/huggingface/datasets/pull/7743
| 3,342,611,297 |
PR_kwDODunzps6ku8Jw
| 7,743 |
Refactor HDF5 and preserve tree structure
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-21T17:28:17 | 2025-08-25T18:04:33 | null |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7743",
"html_url": "https://github.com/huggingface/datasets/pull/7743",
"diff_url": "https://github.com/huggingface/datasets/pull/7743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7743.patch",
"merged_at": null
}
|
Closes #7741. Followup to #7690
- Recursive parsing and feature inference, to preserve the tree structure of the file. Note this means we now visit all links in the file. It also means we have to call `combine_chunks` on any large non-root datasets.
- Support for `complex64` (two `float32`s, used to be converted to two `float64`s)
- Support for ndim complex, compound, more field types for compound (due to reusing the main parser, compound types are treated like groups)
- Cleaned up varlen support
- Always do feature inference and always cast to features (used to cast to schema)
- Updated tests to use `load_dataset` instead of internal APIs
- Removed `columns` in config. Have to give Features (i.e., must specify types) if filtering
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7742
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7742/events
|
https://github.com/huggingface/datasets/issues/7742
| 3,336,704,928 |
I_kwDODunzps7G4hOg
| 7,742 |
module 'pyarrow' has no attribute 'PyExtensionType'
|
{
"login": "mnedelko",
"id": 6106392,
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnedelko",
"html_url": "https://github.com/mnedelko",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-20T06:14:33 | 2025-08-20T06:23:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When importing certain libraries, users encounter the following error, which can be traced back to the datasets library:
`module 'pyarrow' has no attribute 'PyExtensionType'`
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs for the following reason; I will submit a PR with the fix below:
**Issue Reason**
PyArrow 21.0.0 no longer provides `PyExtensionType`: the class was deprecated in PyArrow 13.0.0 in favor of `ExtensionType` and has since been removed.
**Issue Solution**
Making the changes below to the installed library files should temporarily resolve the issue.
I will submit a PR to the datasets library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
!pip install ragas
from ragas import evaluate
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7741/events
|
https://github.com/huggingface/datasets/issues/7741
| 3,334,848,656 |
I_kwDODunzps7GxcCQ
| 7,741 |
Preserve tree structure when loading HDF5
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-19T15:42:05 | 2025-08-22T00:41:46 | null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
### Your contribution
I'll open a PR (#7743)
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7740
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7740/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7740/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7740/events
|
https://github.com/huggingface/datasets/pull/7740
| 3,334,693,293 |
PR_kwDODunzps6kUMKM
| 7,740 |
Document HDF5 support
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-19T14:53:04 | 2025-08-21T19:56:58 | null |
CONTRIBUTOR
| null | null | true |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7740",
"html_url": "https://github.com/huggingface/datasets/pull/7740",
"diff_url": "https://github.com/huggingface/datasets/pull/7740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7740.patch",
"merged_at": null
}
|
I think these are at least the main places where we should put content. Ideally the content is not just repeated verbatim across them in the final version.
ref #7690
- [ ] Wait for #7743 to land
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7740/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7739
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7739/events
|
https://github.com/huggingface/datasets/issues/7739
| 3,331,537,762 |
I_kwDODunzps7Gkzti
| 7,739 |
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
{
"login": "evmaki",
"id": 15764776,
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evmaki",
"html_url": "https://github.com/evmaki",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"repos_url": "https://api.github.com/users/evmaki/repos",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-18T17:28:38 | 2025-08-18T17:28:38 | null |
NONE
| null | null | null | null |
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7738
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7738/events
|
https://github.com/huggingface/datasets/issues/7738
| 3,328,948,690 |
I_kwDODunzps7Ga7nS
| 7,738 |
Allow saving multi-dimensional ndarray with dynamic shapes
|
{
"login": "ryan-minato",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-minato",
"html_url": "https://github.com/ryan-minato",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 1 | 2025-08-18T02:23:51 | 2025-08-22T03:15:19 | null |
NONE
| null | null | null | null |
### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data whose dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
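Absent such a feature, one workaround today is to store each array as a flat list plus shape/dtype metadata in ordinary columns and reshape on access. A minimal pure-Python sketch of that encoding (the helper names are made up):

```python
# Workaround sketch: encode a variably-shaped nested list as flat data plus
# shape metadata, so it fits in ordinary (non-fixed-shape) columns.
def encode_array(nested, dtype="float32"):
    shape, probe = [], nested
    while isinstance(probe, list):  # infer shape from nesting depth
        shape.append(len(probe))
        probe = probe[0]
    flat = []
    def walk(v):
        if isinstance(v, list):
            for item in v:
                walk(item)
        else:
            flat.append(v)
    walk(nested)
    return {"shape": shape, "dtype": dtype, "data": flat}

def decode_array(record):
    def build(flat, shape):
        if len(shape) == 1:
            return list(flat[: shape[0]])
        step = len(flat) // shape[0]
        return [build(flat[i * step : (i + 1) * step], shape[1:])
                for i in range(shape[0])]
    return build(record["data"], record["shape"])

rec = encode_array([[1, 2], [3, 4], [5, 6]])
# rec["shape"] == [3, 2]; decode_array(rec) round-trips the input
```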
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7737
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7737/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7737/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7737/events
|
https://github.com/huggingface/datasets/pull/7737
| 3,318,670,801 |
PR_kwDODunzps6jf5io
| 7,737 |
docs: Add column overwrite example to batch mapping guide
|
{
"login": "Sanjaykumar030",
"id": 183703408,
"node_id": "U_kgDOCvMXcA",
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanjaykumar030",
"html_url": "https://github.com/Sanjaykumar030",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-13T14:20:19 | 2025-08-25T17:54:00 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7737",
"html_url": "https://github.com/huggingface/datasets/pull/7737",
"diff_url": "https://github.com/huggingface/datasets/pull/7737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7737.patch",
"merged_at": null
}
|
This PR adds a complementary example showing the **column-overwriting** pattern, which is both more direct and more flexible for many transformations.
### Proposed Change
The original `remove_columns` example remains untouched. Below it, this PR introduces an alternative approach that overwrites an existing column during batch mapping.
This teaches users a core `.map()` capability for in-place transformations without extra intermediate steps.
**New Example:**
> ```python
> >>> from datasets import Dataset
> >>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
> # Overwrite "a" directly to duplicate each value
> >>> duplicated_dataset = dataset.map(
> ... lambda batch: {"a": [x for x in batch["a"] for _ in range(2)]},
> ... batched=True
> ... )
> >>> duplicated_dataset
> Dataset({
> features: ['a'],
> num_rows: 6
> })
> >>> duplicated_dataset["a"]
> [0, 0, 1, 1, 2, 2]
> ```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7737/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7736
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7736/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7736/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7736/events
|
https://github.com/huggingface/datasets/pull/7736
| 3,311,618,096 |
PR_kwDODunzps6jIWQ3
| 7,736 |
Fix type hint `train_test_split`
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 1 | 2025-08-11T20:46:53 | 2025-08-13T13:13:50 | 2025-08-13T13:13:48 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7736",
"html_url": "https://github.com/huggingface/datasets/pull/7736",
"diff_url": "https://github.com/huggingface/datasets/pull/7736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7736.patch",
"merged_at": "2025-08-13T13:13:48"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7736/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7735
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7735/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7735/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7735/events
|
https://github.com/huggingface/datasets/pull/7735
| 3,310,514,828 |
PR_kwDODunzps6jEq5w
| 7,735 |
fix largelist repr
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 1 | 2025-08-11T15:17:42 | 2025-08-11T15:39:56 | 2025-08-11T15:39:54 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7735",
"html_url": "https://github.com/huggingface/datasets/pull/7735",
"diff_url": "https://github.com/huggingface/datasets/pull/7735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7735.patch",
"merged_at": "2025-08-11T15:39:54"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7735/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7734
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7734/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7734/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7734/events
|
https://github.com/huggingface/datasets/pull/7734
| 3,306,519,239 |
PR_kwDODunzps6i4pmA
| 7,734 |
Fixing __getitem__ of datasets which behaves inconsistent to documentation when setting _format_type to None
|
{
"login": "awagen",
"id": 40367113,
"node_id": "MDQ6VXNlcjQwMzY3MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awagen",
"html_url": "https://github.com/awagen",
"followers_url": "https://api.github.com/users/awagen/followers",
"following_url": "https://api.github.com/users/awagen/following{/other_user}",
"gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
"organizations_url": "https://api.github.com/users/awagen/orgs",
"repos_url": "https://api.github.com/users/awagen/repos",
"events_url": "https://api.github.com/users/awagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/awagen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 2 | 2025-08-09T15:52:54 | 2025-08-17T07:23:00 | 2025-08-17T07:23:00 |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7734",
"html_url": "https://github.com/huggingface/datasets/pull/7734",
"diff_url": "https://github.com/huggingface/datasets/pull/7734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7734.patch",
"merged_at": null
}
|
Setting _format_type to None should return plain Python objects, but as of 4.0.0 it returns Column. This breaks libraries such as sentence-transformers (e.g., in hard-negative generation) where plain Python is expected.
|
{
"login": "awagen",
"id": 40367113,
"node_id": "MDQ6VXNlcjQwMzY3MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awagen",
"html_url": "https://github.com/awagen",
"followers_url": "https://api.github.com/users/awagen/followers",
"following_url": "https://api.github.com/users/awagen/following{/other_user}",
"gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
"organizations_url": "https://api.github.com/users/awagen/orgs",
"repos_url": "https://api.github.com/users/awagen/repos",
"events_url": "https://api.github.com/users/awagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/awagen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7734/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7733
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7733/events
|
https://github.com/huggingface/datasets/issues/7733
| 3,304,979,299 |
I_kwDODunzps7E_ftj
| 7,733 |
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
{
"login": "dennys246",
"id": 27898715,
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennys246",
"html_url": "https://github.com/dennys246",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"repos_url": "https://api.github.com/users/dennys246/repos",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-08T19:10:58 | 2025-08-12T00:54:58 | null |
NONE
| null | null | null | null |
### Describe the bug
I’m not sure if this is a bug or a feature request, and I may just not fully understand how dataset loading is meant to work, but there appears to be a bug in how locally stored Image() features are accessed. I’ve uploaded a new dataset to the Hugging Face Hub (rmdig/rocky_mountain_snowpack), but I’ve run into a lot of trouble getting the images handled properly (at least in the way I’d expect them to be handled).
I find that I cannot use relative paths for loading images remotely from the Hugging Face repo or from a local repository. Any time I do, it simply prepends my current working directory to the relative path. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the Dataset object structure, which I cannot imagine you intended. So I have to use URLs, since an absolute path on my system obviously wouldn’t work for others. The URLs work OK, but despite having the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often run into HTTP errors for over-requesting the data).
Or maybe relative image paths aren’t intended to be loaded directly through the datasets library as images and should be kept as strings for the user to handle? If so, I feel like you’re missing out on some pretty seamless functionality.
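The behavior described is consistent with how relative paths resolve in Python generally: `PIL.Image.open` ultimately calls `builtins.open`, which interprets a relative path against the current working directory, not the dataset root. A small stdlib illustration (the dataset root below is a placeholder):

```python
import os

# A relative image path resolves against os.getcwd(), not the dataset root,
# which is why the traceback shows the CWD prepended to the stored path.
rel = "preprocessed/cores/image_1.png"
resolved = os.path.abspath(rel)  # what builtins.open effectively sees

# One workaround: join the stored relative path against the dataset root
# before opening (or before casting the column to Image).
dataset_root = "/path/to/rocky_mountain_snowpack"  # placeholder
fixed = os.path.join(dataset_root, rel)
```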
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
    - name: image
      dtype: Image
    - name: file_path
      dtype: Image
```
3. Initialize the dataset locally; make sure your working directory is not the dataset root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Call one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it's simply looking in the current working directory + relative path
```
>>> dataset['train'][0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
           ^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
                       ^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
    image = PIL.Image.open(path)
            ^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
    fp = builtins.open(filename, "rb")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and Image() to resolve the relative paths against path/to/local/rocky_mountain_snowpack/ (the path I pass to datasets.load_dataset(), or the repo root you handle on the backend).
Instead it resolves against my current working directory + relative path.
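Until that works, a possible workaround (a sketch; the `dataset_root` value and the `file_path` column name are assumptions about this dataset) is to keep `file_path` as a string in the YAML and resolve it against the dataset root yourself before casting the column to `Image()`:

```python
import os

def resolve_image_path(dataset_root, relative_path):
    # Resolve a relative file_path entry against the dataset root,
    # not against the current working directory.
    return os.path.normpath(os.path.join(dataset_root, relative_path))

# Hypothetical usage with datasets:
#   ds = load_dataset(root)
#   ds = ds.map(lambda ex: {"file_path": resolve_image_path(root, ex["file_path"])})
#   ds = ds.cast_column("file_path", datasets.Image())
print(resolve_image_path("/data/rocky_mountain_snowpack", "preprocessed/cores/image_1.png"))
```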
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7732
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7732/events
|
https://github.com/huggingface/datasets/issues/7732
| 3,304,673,383 |
I_kwDODunzps7E-VBn
| 7,732 |
webdataset: key errors when `field_name` has upper case characters
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-08T16:56:42 | 2025-08-08T16:56:42 | null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When using a WebDataset, each sample is a collection of different "fields",
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
If the field_name contains upper-case characters, the HF WebDataset integration throws a KeyError when trying to load the dataset,
e.g. from a dataset (since updated so that it no longer throws this error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
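A minimal sketch of the mismatch (hypothetical names, not the actual datasets internals): the pipeline stores keys verbatim, but the decoder lookup previously lowercased the field name, so any key with upper-case characters missed:

```python
def decode_fields(example, decoders):
    # Decode each field by its extension. Looking up
    # example[field_name.lower()] instead of example[field_name]
    # is what raised the KeyError shown above.
    for field_name in list(example):
        ext = field_name.split(".")[-1]
        if ext in decoders:
            example[field_name] = decoders[ext](example[field_name])
    return example

sample = {"processed_log_IMU_magnetometer_value.npy": b"raw"}
decoded = decode_fields(sample, {"npy": lambda b: b.decode()})
```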
### Steps to reproduce the bug
A unit test was added in https://github.com/huggingface/datasets/pull/7726;
it fails without the fix proposed in the same PR.
### Expected behavior
No KeyError should be thrown.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7731/events
|
https://github.com/huggingface/datasets/issues/7731
| 3,303,637,075 |
I_kwDODunzps7E6YBT
| 7,731 |
Add the possibility of a backend for audio decoding
|
{
"login": "intexcor",
"id": 142020129,
"node_id": "U_kgDOCHcOIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/intexcor",
"html_url": "https://github.com/intexcor",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"repos_url": "https://api.github.com/users/intexcor/repos",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 2 | 2025-08-08T11:08:56 | 2025-08-20T16:29:33 | null |
NONE
| null | null | null | null |
### Feature request
Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but torchcodec requires ffmpeg, which is problematic to install in environments such as Google Colab. Therefore, I suggest adding a decoder selection when loading the dataset.
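As an illustration of what a pure-Python fallback backend could look like (a stdlib-only sketch for uncompressed 16-bit PCM WAV, not a drop-in torchcodec replacement):

```python
import io
import struct
import wave

def decode_wav_bytes(data):
    # Decode 16-bit PCM WAV bytes to (samples, sample_rate)
    # without ffmpeg/torchcodec.
    with wave.open(io.BytesIO(data)) as w:
        frames = w.readframes(w.getnframes())
        samples = struct.unpack("<" + "h" * (len(frames) // 2), frames)
        return list(samples), w.getframerate()

# Build a tiny in-memory WAV to demonstrate the round trip.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(struct.pack("<4h", 0, 100, -100, 0))

samples, rate = decode_wav_bytes(buf.getvalue())
```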
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7730
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7730/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7730/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7730/events
|
https://github.com/huggingface/datasets/pull/7730
| 3,301,907,242 |
PR_kwDODunzps6iqTZI
| 7,730 |
Grammar fix: correct "showed" to "shown" in fingerprint.py
|
{
"login": "brchristian",
"id": 2460418,
"node_id": "MDQ6VXNlcjI0NjA0MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brchristian",
"html_url": "https://github.com/brchristian",
"followers_url": "https://api.github.com/users/brchristian/followers",
"following_url": "https://api.github.com/users/brchristian/following{/other_user}",
"gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brchristian/subscriptions",
"organizations_url": "https://api.github.com/users/brchristian/orgs",
"repos_url": "https://api.github.com/users/brchristian/repos",
"events_url": "https://api.github.com/users/brchristian/events{/privacy}",
"received_events_url": "https://api.github.com/users/brchristian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 0 | 2025-08-07T21:22:56 | 2025-08-13T18:34:30 | 2025-08-13T13:12:56 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7730",
"html_url": "https://github.com/huggingface/datasets/pull/7730",
"diff_url": "https://github.com/huggingface/datasets/pull/7730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7730.patch",
"merged_at": "2025-08-13T13:12:56"
}
|
This PR corrects a small grammatical issue in the outputs of fingerprint.py:
```diff
- "This warning is only showed once. Subsequent hashing failures won't be showed."
+ "This warning is only shown once. Subsequent hashing failures won't be shown."
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7730/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7729/events
|
https://github.com/huggingface/datasets/issues/7729
| 3,300,672,954 |
I_kwDODunzps7EvEW6
| 7,729 |
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
{
"login": "SaleemMalikAI",
"id": 115183904,
"node_id": "U_kgDOBt2RIA",
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaleemMalikAI",
"html_url": "https://github.com/SaleemMalikAI",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T14:07:23 | 2025-08-07T14:07:23 | null |
NONE
| null | null | null | null |
> Hi, is there any solution for this error? I tried installing
`pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html`
This works fine on CPU, but how do I install a PyTorch build that works with the GPU?
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7728/events
|
https://github.com/huggingface/datasets/issues/7728
| 3,298,854,904 |
I_kwDODunzps7EoIf4
| 7,728 |
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
{
"login": "efsotr",
"id": 104755879,
"node_id": "U_kgDOBj5ypw",
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efsotr",
"html_url": "https://github.com/efsotr",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"repos_url": "https://api.github.com/users/efsotr/repos",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-07T04:04:50 | 2025-08-07T07:31:47 | null |
NONE
| null | null | null | null |
### Describe the bug
When loading dataset, the info specified by `data_files` did not overwrite the original info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
### Expected behavior
No error
### Environment info
datasets 4.0.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7727/events
|
https://github.com/huggingface/datasets/issues/7727
| 3,295,718,578 |
I_kwDODunzps7EcKyy
| 7,727 |
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
{
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null |
NONE
| null | null | null | null |
### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same local directory works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe that this directory loads correctly with `load_dataset("filesystem_path", "some_config")`.
4. Observe exceptions when you load it with `load_dataset("repoid/filesystem_path", "some_config")`.
### Expected behavior
`./` prefix should be interpreted correctly
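The fix presumably amounts to normalizing the pattern before URL joining; a sketch of the expected behavior (the helper name is hypothetical, not the datasets internal):

```python
import posixpath

def normalize_pattern(pattern):
    # "./images/xyz/*.jpg" and "images/xyz/*.jpg" should resolve to the
    # same hf:// path; normpath strips the leading "./".
    return posixpath.normpath(pattern)

print(normalize_pattern("./images/xyz/*.jpg"))
```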
### Environment info
datasets 4.0.0 and 3.4.0 both reproduce the issue
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7726
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7726/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7726/events
|
https://github.com/huggingface/datasets/pull/7726
| 3,293,789,832 |
PR_kwDODunzps6iO_oF
| 7,726 |
fix(webdataset): don't .lower() field_name
|
{
"login": "YassineYousfi",
"id": 29985433,
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineYousfi",
"html_url": "https://github.com/YassineYousfi",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null | 3 | 2025-08-05T16:57:09 | 2025-08-20T16:35:55 | 2025-08-20T16:35:55 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7726",
"html_url": "https://github.com/huggingface/datasets/pull/7726",
"diff_url": "https://github.com/huggingface/datasets/pull/7726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7726.patch",
"merged_at": "2025-08-20T16:35:55"
}
|
This fixes cases where keys have upper case identifiers
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7726/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7724/events
|
https://github.com/huggingface/datasets/issues/7724
| 3,292,315,241 |
I_kwDODunzps7EPL5p
| 7,724 |
Can not stepinto load_dataset.py?
|
{
"login": "micklexqg",
"id": 13776012,
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micklexqg",
"html_url": "https://github.com/micklexqg",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null |
NONE
| null | null | null | null |
I set a breakpoint in "load_dataset.py" and try to debug my data-loading code, but execution never stops at any breakpoint; can "load_dataset.py" not be stepped into?
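One common cause is setting breakpoints in a cloned checkout while the interpreter imports a separately installed copy of the package. Checking which file is actually imported (a general-purpose sketch) usually clears this up:

```python
import importlib.util

def installed_source_path(module_name):
    # Return the file the interpreter will actually import for
    # module_name; breakpoints must go in this file, not in a clone.
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# e.g. installed_source_path("datasets.load") shows the load.py in use.
print(installed_source_path("json"))
```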
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7723/events
|
https://github.com/huggingface/datasets/issues/7723
| 3,289,943,261 |
I_kwDODunzps7EGIzd
| 7,723 |
Don't remove `trust_remote_code` arg!!!
|
{
"login": "autosquid",
"id": 758925,
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/autosquid",
"html_url": "https://github.com/autosquid",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"repos_url": "https://api.github.com/users/autosquid/repos",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null |
NONE
| null | null | null | null |
### Feature request
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
Please add the `trust_remote_code` arg back!
### Motivation
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7722
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7722/events
|
https://github.com/huggingface/datasets/issues/7722
| 3,289,741,064 |
I_kwDODunzps7EFXcI
| 7,722 |
Out of memory even though using load_dataset(..., streaming=True)
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 0 | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null |
NONE
| null | null | null | null |
### Describe the bug
I am iterating over a large dataset that I load with streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage still increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```python
import os

import soundfile as sf
from datasets import load_dataset
from tqdm import tqdm

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')  # target folder constant defined elsewhere
    try:
        sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```
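To help pin down where the growth comes from, here is a debugging sketch of mine (using only the standard library; `iterate_with_memory_log` is a hypothetical helper, not part of `datasets`) that logs Python-level allocation stats while consuming any iterable:

```python
import tracemalloc

def iterate_with_memory_log(iterable, every=1000):
    """Yield items from `iterable`, periodically printing Python allocation
    stats so you can tell whether the growth is in Python objects or in
    native (e.g. Arrow) buffers that tracemalloc cannot see."""
    tracemalloc.start()
    try:
        for i, sample in enumerate(iterable):
            if i % every == 0:
                current, peak = tracemalloc.get_traced_memory()
                print(f"step {i}: current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
            yield sample
    finally:
        tracemalloc.stop()
```

Wrapping the streaming dataset (`for i, sample in enumerate(iterate_with_memory_log(ds)):`) should show whether tracked Python memory keeps climbing; if it stays flat while process RSS grows, the leak is likely in native buffers rather than Python objects.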
### Expected behavior
I'd expect a small memory footprint, with memory being freed after each iteration of the loop. Instead, memory usage keeps increasing. I tried removing the file-writing logic and just printing the sample, but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7721/events
|
https://github.com/huggingface/datasets/issues/7721
| 3,289,426,104 |
I_kwDODunzps7EEKi4
| 7,721 |
Bad split error message when using percentages
|
{
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 2 | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null |
NONE
| null | null | null | null |
### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I slice it in 10% steps as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits).
When doing so, the library raises this error:
```
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
```
Edit: The same happens with a split like _train[:90000]_.
### Steps to reproduce the bug
```python
from datasets import load_dataset

for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
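A possible workaround until percent slices are resolved for streaming loads (a sketch under my own assumptions: `percent_slice_bounds` is a hypothetical helper, and it relies on `IterableDataset.skip`/`.take` plus knowing the row count up front, e.g. from the dataset card) is to compute the absolute row bounds yourself and slice the plain `train` split:

```python
def percent_slice_bounds(num_rows, start_pct, end_pct):
    """Map a `train[start%:end%]` slice onto absolute row indices using
    floor rounding, so consecutive 10% chunks tile the dataset exactly."""
    return num_rows * start_pct // 100, num_rows * end_pct // 100

# Hypothetical usage with a streaming load (90_000 is an assumed row count):
# ds = load_dataset("user/dataset", split="train", streaming=True)
# start, end = percent_slice_bounds(90_000, 0, 10)
# chunk = ds.skip(start).take(end - start)
```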
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
python 3.12.11
ubuntu 24
dataset 4.0.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7720/events
|
https://github.com/huggingface/datasets/issues/7720
| 3,287,150,513 |
I_kwDODunzps7D7e-x
| 7,720 |
Datasets 4.0 map function causing column not found
|
{
"login": "Darejkal",
"id": 55143337,
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darejkal",
"html_url": "https://github.com/Darejkal",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 3 | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null |
NONE
| null | null | null | null |
### Describe the bug
A column added by `map` is not present on the returned dataset instance.
### Steps to reproduce the bug
Code for reproduction: after running `get_total_audio_length`, it errors out because `data` has no `duration` column.
```python
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    # NUM_PROC is defined elsewhere in the reporter's script
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
The new `datasets.Dataset` instance returned by `map` should have the new column attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7719/events
|
https://github.com/huggingface/datasets/issues/7719
| 3,285,928,491 |
I_kwDODunzps7D20or
| 7,719 |
Specify dataset columns types in typehint
|
{
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null |
NONE
| null | null | null | null |
### Feature request
Make `Dataset` optionally generic so it can be used with type annotations, as was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of dataset objects, but they're a bit poor in type hints. E.g. we can specify this for a `DataLoader`:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for `datasets` we can currently only document the columns in a comment or docstring:
```python
from datasets import Dataset

QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str`."""
```
### Your contribution
I can create draft implementation
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7718
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7718/events
|
https://github.com/huggingface/datasets/pull/7718
| 3,284,221,177 |
PR_kwDODunzps6hvJ6R
| 7,718 |
add support for pyarrow string view in features
|
{
"login": "onursatici",
"id": 5051569,
"node_id": "MDQ6VXNlcjUwNTE1Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/onursatici",
"html_url": "https://github.com/onursatici",
"followers_url": "https://api.github.com/users/onursatici/followers",
"following_url": "https://api.github.com/users/onursatici/following{/other_user}",
"gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/onursatici/subscriptions",
"organizations_url": "https://api.github.com/users/onursatici/orgs",
"repos_url": "https://api.github.com/users/onursatici/repos",
"events_url": "https://api.github.com/users/onursatici/events{/privacy}",
"received_events_url": "https://api.github.com/users/onursatici/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null | 1 | 2025-08-01T14:58:39 | 2025-08-13T13:09:44 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7718",
"html_url": "https://github.com/huggingface/datasets/pull/7718",
"diff_url": "https://github.com/huggingface/datasets/pull/7718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7718.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7718/timeline
| null | null | null | null | true |