Dataset Viewer

Columns (name: dtype, observed range):
- _id: string, length 24
- id: string, length 5 to 121
- author: string, length 2 to 42
- cardData: string, length 2 to 1.07M
- disabled: bool, 2 classes
- gated: string, 3 classes (False, auto, manual)
- lastModified: timestamp[ns], 2021-02-05 16:03:35 to 2025-08-21 13:14:28
- likes: int64, 0 to 8.79k
- trendingScore: float64, 0 to 99
- private: bool, 1 class
- sha: string, length 40
- description: string, length 0 to 6.67k, nullable
- downloads: int64, 0 to 3.17M
- downloadsAllTime: int64, 0 to 143M
- tags: list, length 1 to 7.92k
- createdAt: timestamp[ns], 2022-03-02 23:29:22 to 2025-08-21 13:11:06
- paperswithcode_id: string, 677 classes
- citation: string, length 0 to 10.7k, nullable
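A listing with these fields can be pulled straight from the Hub API. Below is a minimal sketch using `huggingface_hub.list_datasets`; the `sort="likes"` argument and the exact set of attributes populated on each `DatasetInfo` (likes, downloads, tags) are assumptions about the current API rather than something stated in this table.

```python
from huggingface_hub import list_datasets

# Fetch the ten most-liked datasets; each result is a DatasetInfo object
# whose attributes roughly mirror the columns above (id, author, likes,
# downloads, tags, created/modified timestamps). Some attributes may be
# None unless explicitly requested from the API.
for info in list_datasets(sort="likes", direction=-1, limit=10):
    print(info.id, info.author, info.likes, info.downloads, len(info.tags or []))
```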
_id: 63990f21cc50af73d29ecfa3 | id: fka/awesome-chatgpt-prompts | author: fka | cardData:
{"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]}
disabled: false | gated: False | lastModified: 2025-01-06T00:02:53 | likes: 8,787 | trendingScore: 99 | private: false | sha: 68ba7694e23014788dcc8ab5afe613824f45a05c | description:
🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License: CC-0
downloads: 37,062 | downloadsAllTime: 258,206 | tags:
[
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
]
createdAt: 2022-12-13T23:47:45 | paperswithcode_id: null | citation: null

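Given the `format:csv` tag and single-config layout above, the repository can usually be opened directly with the `datasets` library; a minimal sketch, assuming the default split name is `train`:

```python
from datasets import load_dataset

# Load the CSV-backed prompt collection into an in-memory Dataset.
prompts = load_dataset("fka/awesome-chatgpt-prompts", split="train")

print(prompts)      # features and row count
print(prompts[0])   # first prompt record as a plain dict
```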
_id: 682600d8e6a0ae86702e3da9 | id: nvidia/Granary | author: nvidia | cardData:
{"license": "cc-by-3.0", "task_categories": ["automatic-speech-recognition", "translation"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "ru", "sk", "sl", "sv", "uk"], "pretty_name": "Granary", "size_categories": ["10M<n<100M"], "tags": ["granary", "multilingual", "nemo"], "configs": [{"config_name": "sv_voxpopuli", "data_files": [{"path": "sv/voxpopuli/sv_asr.jsonl", "split": "asr"}, {"path": "sv/voxpopuli/sv_ast-en.jsonl", "split": "ast"}]}, {"config_name": "sv_ytc", "data_files": [{"path": "sv/ytc/sv_asr.jsonl", "split": "asr"}, {"path": "sv/ytc/sv_ast-en.jsonl", "split": "ast"}]}, {"config_name": "mt_voxpopuli", "data_files": [{"path": "mt/voxpopuli/mt_ast-en.jsonl", "split": "ast"}, {"path": "mt/voxpopuli/mt_asr.jsonl", "split": "asr"}]}, {"config_name": "sk_voxpopuli", "data_files": [{"path": "sk/voxpopuli/sk_asr.jsonl", "split": "asr"}, {"path": "sk/voxpopuli/sk_ast-en.jsonl", "split": "ast"}]}, {"config_name": "sk_ytc", "data_files": [{"path": "sk/ytc/sk_asr.jsonl", "split": "asr"}, {"path": "sk/ytc/sk_ast-en.jsonl", "split": "ast"}]}, {"config_name": "it_voxpopuli", "data_files": [{"path": "it/voxpopuli/it_asr.jsonl", "split": "asr"}, {"path": "it/voxpopuli/it_ast-en.jsonl", "split": "ast"}]}, {"config_name": "it_ytc", "data_files": [{"path": "it/ytc/it_asr.jsonl", "split": "asr"}, {"path": "it/ytc/it_ast-en.jsonl", "split": "ast"}]}, {"config_name": "en_voxpopuli", "data_files": [{"path": "en/voxpopuli/en_asr.jsonl", "split": "asr"}]}, {"config_name": "en_ytc", "data_files": [{"path": "en/ytc/en_asr.jsonl", "split": "asr"}]}, {"config_name": "en_librilight", "data_files": [{"path": "en/librilight/en_asr.jsonl", "split": "asr"}]}, {"config_name": "en_yodas", "data_files": [{"path": "en/yodas/en_asr.jsonl", "split": "asr"}]}, {"config_name": "pt_voxpopuli", "data_files": [{"path": "pt/voxpopuli/pt_ast-en.jsonl", "split": "ast"}, {"path": "pt/voxpopuli/pt_asr.jsonl", "split": "asr"}]}, {"config_name": "pt_ytc", "data_files": [{"path": "pt/ytc/pt_ast-en.jsonl", "split": "ast"}, {"path": "pt/ytc/pt_asr.jsonl", "split": "asr"}]}, {"config_name": "lv_voxpopuli", "data_files": [{"path": "lv/voxpopuli/lv_ast-en.jsonl", "split": "ast"}, {"path": "lv/voxpopuli/lv_asr.jsonl", "split": "asr"}]}, {"config_name": "lv_ytc", "data_files": [{"path": "lv/ytc/lv_ast-en.jsonl", "split": "ast"}, {"path": "lv/ytc/lv_asr.jsonl", "split": "asr"}]}, {"config_name": "ro_voxpopuli", "data_files": [{"path": "ro/voxpopuli/ro_ast-en.jsonl", "split": "ast"}, {"path": "ro/voxpopuli/ro_asr.jsonl", "split": "asr"}]}, {"config_name": "ro_ytc", "data_files": [{"path": "ro/ytc/ro_ast-en.jsonl", "split": "ast"}, {"path": "ro/ytc/ro_asr.jsonl", "split": "asr"}]}, {"config_name": "pl_voxpopuli", "data_files": [{"path": "pl/voxpopuli/pl_asr.jsonl", "split": "asr"}, {"path": "pl/voxpopuli/pl_ast-en.jsonl", "split": "ast"}]}, {"config_name": "pl_ytc", "data_files": [{"path": "pl/ytc/pl_asr.jsonl", "split": "asr"}, {"path": "pl/ytc/pl_ast-en.jsonl", "split": "ast"}]}, {"config_name": "sl_voxpopuli", "data_files": [{"path": "sl/voxpopuli/sl_ast-en.jsonl", "split": "ast"}, {"path": "sl/voxpopuli/sl_asr.jsonl", "split": "asr"}]}, {"config_name": "sl_ytc", "data_files": [{"path": "sl/ytc/sl_ast-en.jsonl", "split": "ast"}, {"path": "sl/ytc/sl_asr.jsonl", "split": "asr"}]}, {"config_name": "cs_voxpopuli", "data_files": [{"path": "cs/voxpopuli/cs_asr.jsonl", "split": "asr"}, {"path": "cs/voxpopuli/cs_ast-en.jsonl", "split": "ast"}]}, 
{"config_name": "cs_ytc", "data_files": [{"path": "cs/ytc/cs_asr.jsonl", "split": "asr"}, {"path": "cs/ytc/cs_ast-en.jsonl", "split": "ast"}]}, {"config_name": "cs_yodas", "data_files": [{"path": "cs/yodas/cs_asr.jsonl", "split": "asr"}, {"path": "cs/yodas/cs_ast-en.jsonl", "split": "ast"}]}, {"config_name": "el_voxpopuli", "data_files": [{"path": "el/voxpopuli/el_asr.jsonl", "split": "asr"}, {"path": "el/voxpopuli/el_ast-en.jsonl", "split": "ast"}]}, {"config_name": "el_ytc", "data_files": [{"path": "el/ytc/el_asr.jsonl", "split": "asr"}, {"path": "el/ytc/el_ast-en.jsonl", "split": "ast"}]}, {"config_name": "hu_voxpopuli", "data_files": [{"path": "hu/voxpopuli/hu_asr.jsonl", "split": "asr"}, {"path": "hu/voxpopuli/hu_ast-en.jsonl", "split": "ast"}]}, {"config_name": "hu_ytc", "data_files": [{"path": "hu/ytc/hu_asr.jsonl", "split": "asr"}, {"path": "hu/ytc/hu_ast-en.jsonl", "split": "ast"}]}, {"config_name": "lt_voxpopuli", "data_files": [{"path": "lt/voxpopuli/lt_asr.jsonl", "split": "asr"}, {"path": "lt/voxpopuli/lt_ast-en.jsonl", "split": "ast"}]}, {"config_name": "lt_ytc", "data_files": [{"path": "lt/ytc/lt_asr.jsonl", "split": "asr"}, {"path": "lt/ytc/lt_ast-en.jsonl", "split": "ast"}]}, {"config_name": "et_voxpopuli", "data_files": [{"path": "et/voxpopuli/et_asr.jsonl", "split": "asr"}, {"path": "et/voxpopuli/et_ast-en.jsonl", "split": "ast"}]}, {"config_name": "et_ytc", "data_files": [{"path": "et/ytc/et_asr.jsonl", "split": "asr"}, {"path": "et/ytc/et_ast-en.jsonl", "split": "ast"}]}, {"config_name": "fr_voxpopuli", "data_files": [{"path": "fr/voxpopuli/fr_ast-en.jsonl", "split": "ast"}, {"path": "fr/voxpopuli/fr_asr.jsonl", "split": "asr"}]}, {"config_name": "fr_ytc", "data_files": [{"path": "fr/ytc/fr_ast-en.jsonl", "split": "ast"}, {"path": "fr/ytc/fr_asr.jsonl", "split": "asr"}]}, {"config_name": "da_voxpopuli", "data_files": [{"path": "da/voxpopuli/da_asr.jsonl", "split": "asr"}, {"path": "da/voxpopuli/da_ast-en.jsonl", "split": "ast"}]}, {"config_name": "da_ytc", "data_files": [{"path": "da/ytc/da_asr.jsonl", "split": "asr"}, {"path": "da/ytc/da_ast-en.jsonl", "split": "ast"}]}, {"config_name": "da_yodas", "data_files": [{"path": "da/yodas/da_asr.jsonl", "split": "asr"}, {"path": "da/yodas/da_ast-en.jsonl", "split": "ast"}]}, {"config_name": "bg_voxpopuli", "data_files": [{"path": "bg/voxpopuli/bg_asr.jsonl", "split": "asr"}, {"path": "bg/voxpopuli/bg_ast-en.jsonl", "split": "ast"}]}, {"config_name": "bg_ytc", "data_files": [{"path": "bg/ytc/bg_asr.jsonl", "split": "asr"}, {"path": "bg/ytc/bg_ast-en.jsonl", "split": "ast"}]}, {"config_name": "bg_yodas", "data_files": [{"path": "bg/yodas/bg_asr.jsonl", "split": "asr"}, {"path": "bg/yodas/bg_ast-en.jsonl", "split": "ast"}]}, {"config_name": "es_voxpopuli", "data_files": [{"path": "es/voxpopuli/es_asr.jsonl", "split": "asr"}, {"path": "es/voxpopuli/es_ast-en.jsonl", "split": "ast"}]}, {"config_name": "es_ytc", "data_files": [{"path": "es/ytc/es_asr.jsonl", "split": "asr"}, {"path": "es/ytc/es_ast-en.jsonl", "split": "ast"}]}, {"config_name": "nl_voxpopuli", "data_files": [{"path": "nl/voxpopuli/nl_ast-en.jsonl", "split": "ast"}, {"path": "nl/voxpopuli/nl_asr.jsonl", "split": "asr"}]}, {"config_name": "nl_ytc", "data_files": [{"path": "nl/ytc/nl_ast-en.jsonl", "split": "ast"}, {"path": "nl/ytc/nl_asr.jsonl", "split": "asr"}]}, {"config_name": "hr_voxpopuli", "data_files": [{"path": "hr/voxpopuli/hr_ast-en.jsonl", "split": "ast"}, {"path": "hr/voxpopuli/hr_asr.jsonl", "split": "asr"}]}, {"config_name": "hr_ytc", "data_files": 
[{"path": "hr/ytc/hr_ast-en.jsonl", "split": "ast"}, {"path": "hr/ytc/hr_asr.jsonl", "split": "asr"}]}, {"config_name": "fi_voxpopuli", "data_files": [{"path": "fi/voxpopuli/fi_asr.jsonl", "split": "asr"}, {"path": "fi/voxpopuli/fi_ast-en.jsonl", "split": "ast"}]}, {"config_name": "fi_ytc", "data_files": [{"path": "fi/ytc/fi_asr.jsonl", "split": "asr"}, {"path": "fi/ytc/fi_ast-en.jsonl", "split": "ast"}]}, {"config_name": "uk_ytc", "data_files": [{"path": "uk/ytc/uk_asr.jsonl", "split": "asr"}, {"path": "uk/ytc/uk_ast-en.jsonl", "split": "ast"}]}, {"config_name": "de_voxpopuli", "data_files": [{"path": "de/voxpopuli/de_asr.jsonl", "split": "asr"}, {"path": "de/voxpopuli/de_ast-en.jsonl", "split": "ast"}]}, {"config_name": "de_ytc", "data_files": [{"path": "de/ytc/de_asr.jsonl", "split": "asr"}, {"path": "de/ytc/de_ast-en.jsonl", "split": "ast"}]}, {"config_name": "de_yodas", "data_files": [{"path": "de/yodas/de_asr.jsonl", "split": "asr"}, {"path": "de/yodas/de_ast-en.jsonl", "split": "ast"}]}, {"config_name": "sv", "data_files": [{"path": ["sv/voxpopuli/sv_asr.jsonl", "sv/ytc/sv_asr.jsonl"], "split": "asr"}, {"path": ["sv/voxpopuli/sv_ast-en.jsonl", "sv/ytc/sv_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "mt", "data_files": [{"path": ["mt/voxpopuli/mt_ast-en.jsonl"], "split": "ast"}, {"path": ["mt/voxpopuli/mt_asr.jsonl"], "split": "asr"}]}, {"config_name": "sk", "data_files": [{"path": ["sk/voxpopuli/sk_asr.jsonl", "sk/ytc/sk_asr.jsonl"], "split": "asr"}, {"path": ["sk/voxpopuli/sk_ast-en.jsonl", "sk/ytc/sk_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "it", "data_files": [{"path": ["it/voxpopuli/it_asr.jsonl", "it/ytc/it_asr.jsonl"], "split": "asr"}, {"path": ["it/voxpopuli/it_ast-en.jsonl", "it/ytc/it_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "en", "data_files": [{"path": ["en/voxpopuli/en_asr.jsonl", "en/ytc/en_asr.jsonl", "en/librilight/en_asr.jsonl", "en/yodas/en_asr.jsonl"], "split": "asr"}]}, {"config_name": "pt", "data_files": [{"path": ["pt/voxpopuli/pt_ast-en.jsonl", "pt/ytc/pt_ast-en.jsonl"], "split": "ast"}, {"path": ["pt/voxpopuli/pt_asr.jsonl", "pt/ytc/pt_asr.jsonl"], "split": "asr"}]}, {"config_name": "lv", "data_files": [{"path": ["lv/voxpopuli/lv_ast-en.jsonl", "lv/ytc/lv_ast-en.jsonl"], "split": "ast"}, {"path": ["lv/voxpopuli/lv_asr.jsonl", "lv/ytc/lv_asr.jsonl"], "split": "asr"}]}, {"config_name": "ro", "data_files": [{"path": ["ro/voxpopuli/ro_ast-en.jsonl", "ro/ytc/ro_ast-en.jsonl"], "split": "ast"}, {"path": ["ro/voxpopuli/ro_asr.jsonl", "ro/ytc/ro_asr.jsonl"], "split": "asr"}]}, {"config_name": "pl", "data_files": [{"path": ["pl/voxpopuli/pl_asr.jsonl", "pl/ytc/pl_asr.jsonl"], "split": "asr"}, {"path": ["pl/voxpopuli/pl_ast-en.jsonl", "pl/ytc/pl_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "sl", "data_files": [{"path": ["sl/voxpopuli/sl_ast-en.jsonl", "sl/ytc/sl_ast-en.jsonl"], "split": "ast"}, {"path": ["sl/voxpopuli/sl_asr.jsonl", "sl/ytc/sl_asr.jsonl"], "split": "asr"}]}, {"config_name": "cs", "data_files": [{"path": ["cs/voxpopuli/cs_asr.jsonl", "cs/ytc/cs_asr.jsonl", "cs/yodas/cs_asr.jsonl"], "split": "asr"}, {"path": ["cs/voxpopuli/cs_ast-en.jsonl", "cs/ytc/cs_ast-en.jsonl", "cs/yodas/cs_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "el", "data_files": [{"path": ["el/voxpopuli/el_asr.jsonl", "el/ytc/el_asr.jsonl"], "split": "asr"}, {"path": ["el/voxpopuli/el_ast-en.jsonl", "el/ytc/el_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "hu", "data_files": [{"path": ["hu/voxpopuli/hu_asr.jsonl", "hu/ytc/hu_asr.jsonl"], "split": 
"asr"}, {"path": ["hu/voxpopuli/hu_ast-en.jsonl", "hu/ytc/hu_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "lt", "data_files": [{"path": ["lt/voxpopuli/lt_asr.jsonl", "lt/ytc/lt_asr.jsonl"], "split": "asr"}, {"path": ["lt/voxpopuli/lt_ast-en.jsonl", "lt/ytc/lt_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "et", "data_files": [{"path": ["et/voxpopuli/et_asr.jsonl", "et/ytc/et_asr.jsonl"], "split": "asr"}, {"path": ["et/voxpopuli/et_ast-en.jsonl", "et/ytc/et_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "fr", "data_files": [{"path": ["fr/voxpopuli/fr_ast-en.jsonl", "fr/ytc/fr_ast-en.jsonl"], "split": "ast"}, {"path": ["fr/voxpopuli/fr_asr.jsonl", "fr/ytc/fr_asr.jsonl"], "split": "asr"}]}, {"config_name": "da", "data_files": [{"path": ["da/voxpopuli/da_asr.jsonl", "da/ytc/da_asr.jsonl", "da/yodas/da_asr.jsonl"], "split": "asr"}, {"path": ["da/voxpopuli/da_ast-en.jsonl", "da/ytc/da_ast-en.jsonl", "da/yodas/da_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "bg", "data_files": [{"path": ["bg/voxpopuli/bg_asr.jsonl", "bg/ytc/bg_asr.jsonl", "bg/yodas/bg_asr.jsonl"], "split": "asr"}, {"path": ["bg/voxpopuli/bg_ast-en.jsonl", "bg/ytc/bg_ast-en.jsonl", "bg/yodas/bg_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "es", "data_files": [{"path": ["es/voxpopuli/es_asr.jsonl", "es/ytc/es_asr.jsonl"], "split": "asr"}, {"path": ["es/voxpopuli/es_ast-en.jsonl", "es/ytc/es_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "nl", "data_files": [{"path": ["nl/voxpopuli/nl_ast-en.jsonl", "nl/ytc/nl_ast-en.jsonl"], "split": "ast"}, {"path": ["nl/voxpopuli/nl_asr.jsonl", "nl/ytc/nl_asr.jsonl"], "split": "asr"}]}, {"config_name": "hr", "data_files": [{"path": ["hr/voxpopuli/hr_ast-en.jsonl", "hr/ytc/hr_ast-en.jsonl"], "split": "ast"}, {"path": ["hr/voxpopuli/hr_asr.jsonl", "hr/ytc/hr_asr.jsonl"], "split": "asr"}]}, {"config_name": "fi", "data_files": [{"path": ["fi/voxpopuli/fi_asr.jsonl", "fi/ytc/fi_asr.jsonl"], "split": "asr"}, {"path": ["fi/voxpopuli/fi_ast-en.jsonl", "fi/ytc/fi_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "uk", "data_files": [{"path": ["uk/ytc/uk_asr.jsonl"], "split": "asr"}, {"path": ["uk/ytc/uk_ast-en.jsonl"], "split": "ast"}]}, {"config_name": "de", "data_files": [{"path": ["de/voxpopuli/de_asr.jsonl", "de/ytc/de_asr.jsonl", "de/yodas/de_asr.jsonl"], "split": "asr"}, {"path": ["de/voxpopuli/de_ast-en.jsonl", "de/ytc/de_ast-en.jsonl", "de/yodas/de_ast-en.jsonl"], "split": "ast"}]}]}
disabled: false | gated: False | lastModified: 2025-08-14T15:05:28 | likes: 99 | trendingScore: 99 | private: false | sha: 834bfb1011cb5d4efe52fd8e9f3501026647bef3 | description:
Granary: Speech Recognition and Translation Dataset in 25 European Languages
Granary is a large-scale, open-source multilingual speech dataset covering 25 European languages for Automatic Speech Recognition (ASR) and Automatic Speech Translation (AST) tasks.
Overview
Granary addresses the scarcity of high-quality speech data for low-resource languages by consolidating multiple datasets under a unified framework:
🗣️ ~1M hours of high-quality… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Granary.
downloads: 9,671 | downloadsAllTime: 9,671 | tags:
[
"task_categories:automatic-speech-recognition",
"task_categories:translation",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sv",
"language:uk",
"license:cc-by-3.0",
"size_categories:100M<n<1B",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.00899",
"arxiv:2505.13404",
"region:us",
"granary",
"multilingual",
"nemo"
]
createdAt: 2025-05-15T14:57:28 | paperswithcode_id: null | citation: null

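The `configs` block in the Granary card defines one config per language/source pair (for example `de_voxpopuli`) plus per-language aggregates (`de`, `fr`, ...), each exposing `asr` and `ast` splits. A sketch of loading one pair, with names taken from that block:

```python
from datasets import load_dataset

# German VoxPopuli portion of Granary: transcription (asr) and
# speech-translation-to-English (ast) manifests as separate splits.
de_asr = load_dataset("nvidia/Granary", "de_voxpopuli", split="asr")
de_ast = load_dataset("nvidia/Granary", "de_voxpopuli", split="ast")

print(de_asr)
print(de_ast)
```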
_id: 6891e8dbfab7a43a5a3c3ec2 | id: nvidia/Llama-Nemotron-VLM-Dataset-v1 | author: nvidia | cardData:
{"license": "cc-by-4.0", "task_categories": ["visual-question-answering", "image-text-to-text", "image-to-text"], "pretty_name": "Llama-Nemotron-VLM-Dataset v1", "size_categories": ["n>1T"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "conversations", "sequence": {"struct": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}}, {"name": "metadata", "struct": [{"name": "pdf", "dtype": "string"}, {"name": "page_number", "dtype": "int32"}, {"name": "url", "dtype": "string"}]}], "splits": [{"name": "captioning_1", "num_bytes": null, "num_examples": 21953}, {"name": "captioning_2", "num_bytes": null, "num_examples": 109765}, {"name": "ocr_1", "num_bytes": null, "num_examples": 14525}, {"name": "ocr_2", "num_bytes": null, "num_examples": 29108}, {"name": "ocr_3", "num_bytes": null, "num_examples": 14533}, {"name": "ocr_4", "num_bytes": null, "num_examples": 193310}, {"name": "ocr_5", "num_bytes": null, "num_examples": 188569}, {"name": "ocr_6", "num_bytes": null, "num_examples": 48369}, {"name": "ocr_7", "num_bytes": null, "num_examples": 25281}, {"name": "ocr_8", "num_bytes": null, "num_examples": 57137}, {"name": "ocr_9", "num_bytes": null, "num_examples": 224170}, {"name": "ocr_10", "num_bytes": null, "num_examples": 19379}, {"name": "vqa_1", "num_bytes": null, "num_examples": 1278221}, {"name": "vqa_2", "num_bytes": null, "num_examples": 503275}, {"name": "vqa_3", "num_bytes": null, "num_examples": 34602}, {"name": "vqa_4", "num_bytes": null, "num_examples": 23571}, {"name": "vqa_5", "num_bytes": null, "num_examples": 971}, {"name": "vqa_6", "num_bytes": null, "num_examples": 199}, {"name": "vqa_7", "num_bytes": null, "num_examples": 15050}, {"name": "vqa_8", "num_bytes": null, "num_examples": 15121}, {"name": "vqa_9", "num_bytes": null, "num_examples": 46745}], "download_size": null, "dataset_size": null}, "configs": [{"config_name": "default", "data_files": [{"split": "captioning_1", "path": "captioning_1.jsonl"}, {"split": "captioning_2", "path": "captioning_2.jsonl"}, {"split": "ocr_1", "path": "ocr_1.jsonl"}, {"split": "ocr_2", "path": "ocr_2.jsonl"}, {"split": "ocr_3", "path": "ocr_3.jsonl"}, {"split": "ocr_4", "path": "ocr_4.jsonl"}, {"split": "ocr_5", "path": "ocr_5.jsonl"}, {"split": "ocr_6", "path": "ocr_6.jsonl"}, {"split": "ocr_7", "path": "ocr_7.jsonl"}, {"split": "ocr_8", "path": "ocr_8.jsonl"}, {"split": "ocr_9", "path": "ocr_9.jsonl"}, {"split": "ocr_10", "path": "ocr_10.jsonl"}, {"split": "vqa_1", "path": "vqa_1.jsonl"}, {"split": "vqa_2", "path": "vqa_2.jsonl"}, {"split": "vqa_3", "path": "vqa_3.jsonl"}, {"split": "vqa_4", "path": "vqa_4.jsonl"}, {"split": "vqa_5", "path": "vqa_5.jsonl"}, {"split": "vqa_6", "path": "vqa_6.jsonl"}, {"split": "vqa_7", "path": "vqa_7.jsonl"}, {"split": "vqa_8", "path": "vqa_8.jsonl"}, {"split": "vqa_9", "path": "vqa_9.jsonl"}]}]}
disabled: false | gated: False | lastModified: 2025-08-19T15:03:46 | likes: 110 | trendingScore: 49 | private: false | sha: ef85bef68f178201160a657abdd0b18d752166d5 | description:
Llama-Nemotron-VLM-Dataset v1
Versions
| Date | Commit | Changes |
|---|---|---|
| 11.08.2025 | bdb3899 | Initial release |
| 18.08.2025 | 5abc7df | Fixes bug (ocr_1 and ocr_3 images were swapped) |
| 19.08.2025 | head | Update instructions for ocr_9 |
Data Description
This dataset is a compilation of high quality VLM post-training datasets that support NVIDIA’s release of https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1.
NVIDIA Llama Nemotron Nano VL is a vision language… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Llama-Nemotron-VLM-Dataset-v1.
downloads: 2,974 | downloadsAllTime: 2,974 | tags:
[
"task_categories:visual-question-answering",
"task_categories:image-text-to-text",
"task_categories:image-to-text",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2501.14818",
"arxiv:2502.04223",
"region:us"
]
createdAt: 2025-08-05T11:19:55 | paperswithcode_id: null | citation: null

_id: 689629e0f60856afd8fa16ec | id: allenai/WildChat-4.8M | author: allenai | cardData:
{"license": "odc-by", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "question-answering"], "pretty_name": "WildChat-4.8M", "dataset_info": {"features": [{"name": "conversation_hash", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us]"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "created", "dtype": "int64"}, {"name": "header", "struct": [{"name": "accept-language", "dtype": "string"}, {"name": "user-agent", "dtype": "string"}]}, {"name": "hashed_ip", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "toxic", "dtype": "bool"}, {"name": "redacted", "dtype": "bool"}, {"name": "state", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "openai_id", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "temperature", "dtype": "float64"}, {"name": "timestamp", "dtype": "timestamp[us]"}, {"name": "token_counter", "dtype": "int64"}, {"name": "top_p", "dtype": "float64"}, {"name": "turn_identifier", "dtype": "int64"}, {"name": "system_fingerprint", "dtype": "string"}, {"name": "usage", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "completion_tokens_details", "struct": [{"name": "reasoning_tokens", "dtype": "int64"}, {"name": "text_tokens", "dtype": "int64"}, {"name": "audio_tokens", "dtype": "int64"}, {"name": "accepted_prediction_tokens", "dtype": "int64"}, {"name": "rejected_prediction_tokens", "dtype": "int64"}]}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "total_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "struct": [{"name": "cached_tokens", "dtype": "int64"}, {"name": "audio_tokens", "dtype": "int64"}]}]}]}, {"name": "turn", "dtype": "int64"}, {"name": "language", "dtype": "string"}, {"name": "openai_moderation", "list": [{"name": "categories", "struct": [{"name": "harassment", "dtype": "bool"}, {"name": "harassment/threatening", "dtype": "bool"}, {"name": "harassment_threatening", "dtype": "bool"}, {"name": "hate", "dtype": "bool"}, {"name": "hate/threatening", "dtype": "bool"}, {"name": "hate_threatening", "dtype": "bool"}, {"name": "illicit", "dtype": "bool"}, {"name": "illicit/violent", "dtype": "bool"}, {"name": "illicit_violent", "dtype": "bool"}, {"name": "self-harm", "dtype": "bool"}, {"name": "self-harm/instructions", "dtype": "bool"}, {"name": "self-harm/intent", "dtype": "bool"}, {"name": "self_harm", "dtype": "bool"}, {"name": "self_harm_instructions", "dtype": "bool"}, {"name": "self_harm_intent", "dtype": "bool"}, {"name": "sexual", "dtype": "bool"}, {"name": "sexual/minors", "dtype": "bool"}, {"name": "sexual_minors", "dtype": "bool"}, {"name": "violence", "dtype": "bool"}, {"name": "violence/graphic", "dtype": "bool"}, {"name": "violence_graphic", "dtype": "bool"}]}, {"name": "category_applied_input_types", "struct": [{"name": "harassment", "list": "string"}, {"name": "harassment/threatening", "list": "string"}, {"name": "harassment_threatening", "list": "string"}, {"name": "hate", "list": "string"}, {"name": "hate/threatening", "list": "string"}, {"name": "hate_threatening", "list": "string"}, {"name": "illicit", "list": "string"}, {"name": "illicit/violent", "list": "string"}, {"name": "illicit_violent", "list": "string"}, {"name": "self-harm", "list": "string"}, {"name": "self-harm/instructions", "list": "string"}, {"name": "self-harm/intent", "list": "string"}, {"name": "self_harm", "list": "string"}, {"name": "self_harm_instructions", "list": "string"}, 
{"name": "self_harm_intent", "list": "string"}, {"name": "sexual", "list": "string"}, {"name": "sexual/minors", "list": "string"}, {"name": "sexual_minors", "list": "string"}, {"name": "violence", "list": "string"}, {"name": "violence/graphic", "list": "string"}, {"name": "violence_graphic", "list": "string"}]}, {"name": "category_scores", "struct": [{"name": "harassment", "dtype": "float64"}, {"name": "harassment/threatening", "dtype": "float64"}, {"name": "harassment_threatening", "dtype": "float64"}, {"name": "hate", "dtype": "float64"}, {"name": "hate/threatening", "dtype": "float64"}, {"name": "hate_threatening", "dtype": "float64"}, {"name": "illicit", "dtype": "float64"}, {"name": "illicit/violent", "dtype": "float64"}, {"name": "illicit_violent", "dtype": "float64"}, {"name": "self-harm", "dtype": "float64"}, {"name": "self-harm/instructions", "dtype": "float64"}, {"name": "self-harm/intent", "dtype": "float64"}, {"name": "self_harm", "dtype": "float64"}, {"name": "self_harm_instructions", "dtype": "float64"}, {"name": "self_harm_intent", "dtype": "float64"}, {"name": "sexual", "dtype": "float64"}, {"name": "sexual/minors", "dtype": "float64"}, {"name": "sexual_minors", "dtype": "float64"}, {"name": "violence", "dtype": "float64"}, {"name": "violence/graphic", "dtype": "float64"}, {"name": "violence_graphic", "dtype": "float64"}]}, {"name": "flagged", "dtype": "bool"}]}, {"name": "detoxify_moderation", "list": [{"name": "identity_attack", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "sexual_explicit", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}]}, {"name": "toxic", "dtype": "bool"}, {"name": "redacted", "dtype": "bool"}, {"name": "state", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "hashed_ip", "dtype": "string"}, {"name": "header", "struct": [{"name": "accept-language", "dtype": "string"}, {"name": "user-agent", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 42645714270.23995, "num_examples": 3199860}], "download_size": 15282293424, "dataset_size": 42645714270.23995}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["instruction-finetuning"]}
disabled: false | gated: False | lastModified: 2025-08-11T15:12:58 | likes: 85 | trendingScore: 40 | private: false | sha: c827c6df8fcf008219ffaffa4d1dd77491099367 | description:
Dataset Card for WildChat-4.8M
Dataset Description
Interactive Search Tool: https://wildvisualizer.com
WildChat paper: https://arxiv.org/abs/2405.01470
WildVis paper: https://arxiv.org/abs/2409.03753
Point of Contact: Yuntian Deng
Dataset Summary
WildChat-4.8M is a collection of 3,199,860 conversations between human users and ChatGPT. This version only contains non-toxic user inputs and ChatGPT responses, as flagged by the OpenAI Moderations API or… See the full description on the dataset page: https://huggingface.co/datasets/allenai/WildChat-4.8M.
downloads: 2,683 | downloadsAllTime: 2,683 | tags:
[
"task_categories:text-generation",
"task_categories:question-answering",
"license:odc-by",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2405.01470",
"arxiv:2409.03753",
"arxiv:2406.04770",
"arxiv:2406.08464",
"region:us",
"instruction-finetuning"
]
createdAt: 2025-08-08T16:46:24 | paperswithcode_id: null | citation: null

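With one `default` config, a single `train` split of about 3.2M conversations, and a roughly 15 GB Parquet download listed above, streaming is a reasonable way to inspect WildChat-4.8M without materializing it; a sketch:

```python
from datasets import load_dataset

# Stream the Parquet shards instead of downloading them up front.
wildchat = load_dataset("allenai/WildChat-4.8M", split="train", streaming=True)

# Peek at the first conversation; field names come from the features
# declared in the card above.
first = next(iter(wildchat))
print(first["model"], first["turn"], first["language"])
```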
_id: 676f70846bf205795346d2be | id: FreedomIntelligence/medical-o1-reasoning-SFT | author: FreedomIntelligence | cardData:
{"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en", "zh"], "tags": ["medical", "biology"], "configs": [{"config_name": "en", "data_files": "medical_o1_sft.json"}, {"config_name": "zh", "data_files": "medical_o1_sft_Chinese.json"}, {"config_name": "en_mix", "data_files": "medical_o1_sft_mix.json"}, {"config_name": "zh_mix", "data_files": "medical_o1_sft_mix_Chinese.json"}]}
disabled: false | gated: False | lastModified: 2025-04-22T15:11:21 | likes: 844 | trendingScore: 23 | private: false | sha: fc2c9e8a37b38f38da6d449564a8c350b244aef4 | description:
News
[2025/04/22] We split the data and kept only the medical SFT dataset (medical_o1_sft.json). The file medical_o1_sft_mix.json contains a mix of medical and general instruction data.
[2025/02/22] We released the distilled dataset from Deepseek-R1 based on medical verifiable problems. You can use it to initialize your models with the reasoning chain from Deepseek-R1.
[2024/12/25] We open-sourced the medical reasoning dataset for SFT, built on medical verifiable problems and an LLM… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT.
downloads: 14,234 | downloadsAllTime: 103,111 | tags:
[
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
]
createdAt: 2024-12-28T03:29:08 | paperswithcode_id: null | citation: null

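The card above defines four configs mapping to separate JSON files: `en`, `zh`, `en_mix`, and `zh_mix`. A sketch of loading one of them, assuming the split name defaults to `train`:

```python
from datasets import load_dataset

# "en" maps to medical_o1_sft.json per the configs block above; use
# "zh", "en_mix", or "zh_mix" for the other files.
med_en = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
print(med_en.column_names)
```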
_id: 6894b0114a3a3a74e938e413 | id: miromind-ai/MiroVerse-v0.1 | author: miromind-ai | cardData:
{"license": "cc-by-nc-4.0", "task_categories": ["question-answering"], "language": ["en"], "tags": ["deep research", "agent", "miromind"], "size_categories": ["100K<n<1M"], "configs": [{"config_name": "MiroVerse-v0.1-all", "data_files": [{"split": "train", "path": "zip_sft/MiroVerse-v0_1-SFT.zip"}]}, {"config_name": "MiroVerse-2WikiMultihopQA", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-2WikiMultihopQA.jsonl"}]}, {"config_name": "MiroVerse-HotpotQA", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-HotpotQA.jsonl"}]}, {"config_name": "MiroVerse-MegaScience", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-MegaScience.jsonl"}]}, {"config_name": "MiroVerse-MuSiQue", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-MuSiQue.jsonl"}]}, {"config_name": "MiroVerse-OneGen-TrainDataset-MultiHopQA", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-OneGen-TrainDataset-MultiHopQA.jsonl"}]}, {"config_name": "MiroVerse-QA-Expert-Multi-Hop-V1.0", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-QA-Expert-Multi-Hop-V1.0.jsonl"}]}, {"config_name": "MiroVerse-TaskCraft", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-TaskCraft.jsonl"}]}, {"config_name": "MiroVerse-Voyager1.0", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-Voyager1.0.jsonl"}]}, {"config_name": "MiroVerse-WebDancer", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-WebDancer.jsonl"}]}, {"config_name": "MiroVerse-WebShaper", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-WebShaper.jsonl"}]}, {"config_name": "MiroVerse-WebWalkerQA-Silver", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-WebWalkerQA-Silver.jsonl"}]}, {"config_name": "MiroVerse-WikiTables", "data_files": [{"split": "train", "path": "jsonl_sft/MiroVerse-WikiTables.jsonl"}]}, {"config_name": "MiroVerse-DPO", "data_files": [{"split": "MuSiQue_8B_DPO", "path": "dpo/MiroThinker-8B-DPO-Data.json"}, {"split": "MuSiQue_14B_DPO", "path": "dpo/MiroThinker-14B-DPO-Data.json"}, {"split": "MuSiQue_32B_DPO", "path": "dpo/MiroThinker-32B-DPO-Data.json"}]}]}
disabled: false | gated: auto | lastModified: 2025-08-14T07:36:42 | likes: 58 | trendingScore: 21 | private: false | sha: f7fefa7ec9415e13ca7b5f9cfc35fa00a4653ea0 | description:
MiroVerse: A Reproducible, Full-Trajectory, Ever-Growing Deep Research Dataset
🔥 News & Updates
MiroVerse v0.1 has been released. This dataset can be used with our training framework, MiroTrain. In MiroVerse v0.1, we provide both SFT and DPO data, making it easy to reproduce MiroThinker-v0.1’s benchmark performance on Qwen3. Give it a try!
The initial release of MiroVerse (v0.1) is coming this Friday—stay tuned!
🔥 First Batch of MiroVerse… See the full description on the dataset page: https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1.
downloads: 1,026 | downloadsAllTime: 1,026 | tags:
[
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"modality:text",
"region:us",
"deep research",
"agent",
"miromind"
]
createdAt: 2025-08-07T13:54:25 | paperswithcode_id: null | citation: null

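MiroVerse's configs separate the per-source SFT subsets from a `MiroVerse-DPO` config whose splits are named per model size. A sketch using names from the configs block above; since the repo is gated with `auto` approval, accepting the terms and passing a Hub token may be required:

```python
from datasets import load_dataset

# One SFT subset (JSONL) and the DPO preference data for the 8B model.
taskcraft = load_dataset("miromind-ai/MiroVerse-v0.1", "MiroVerse-TaskCraft", split="train")
dpo_8b = load_dataset("miromind-ai/MiroVerse-v0.1", "MiroVerse-DPO", split="MuSiQue_8B_DPO")

print(taskcraft)
print(dpo_8b)
```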
_id: 6874b288e705a6646d49dd70 | id: xlangai/AgentNet | author: xlangai | cardData:
{"language": ["en"], "license": "mit", "task_categories": ["image-text-to-text"], "tags": ["Computer-Use", "Agent"]}
disabled: false | gated: False | lastModified: 2025-08-15T03:39:43 | likes: 31 | trendingScore: 20 | private: false | sha: b92269e2b42b18a12826036744def62beba60b4c | description:
OpenCUA: Open Foundations for Computer-Use Agents
🌐 Website
📝 Paper
💻 Code
AgentNet Dataset
AgentNet is the first large-scale desktop computer-use agent trajectory dataset, containing 22.6K human-annotated computer-use tasks across Windows, macOS, and Ubuntu systems.
Applications
This dataset enables training and evaluation of:
Vision-language-action (VLA) models for computer use
Multi-modal agents for desktop automation
GUI… See the full description on the dataset page: https://huggingface.co/datasets/xlangai/AgentNet.
downloads: 8,783 | downloadsAllTime: 8,783 | tags:
[
"task_categories:image-text-to-text",
"language:en",
"license:mit",
"arxiv:2508.09123",
"region:us",
"Computer-Use",
"Agent"
]
createdAt: 2025-07-14T07:32:24 | paperswithcode_id: null | citation: null

_id: 689d79028af09495df3c959b | id: nvidia/Nemotron-CC-v2 | author: nvidia | cardData:
{"license": "other", "task_categories": ["text-generation"], "extra_gated_prompt": "By clicking \u201cAgree\u201d I confirm I have read and agree to NVIDIA Data Agreement for Model Training and agree that I intend to use this data for model training purposes only. (https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Dataset-sample/raw/main/LICENSE.md) ", "extra_gated_fields": {"Company": "text", "Institutional Email": "text", "I agree to use this dataset for model training purposes ONLY": "checkbox"}, "configs": [{"config_name": "High-Quality", "data_files": [{"path": "High-Quality/*.parquet", "split": "train"}]}, {"config_name": "High-Quality-Synthetic", "data_files": [{"path": "High-Quality-Synthetic/*.parquet", "split": "train"}]}, {"config_name": "Medium-High-Quality", "data_files": [{"path": "Medium-High-Quality/*.parquet", "split": "train"}]}, {"config_name": "Medium-Quality", "data_files": [{"path": "Medium-Quality/*.parquet", "split": "train"}]}, {"config_name": "Diverse-QA", "data_files": [{"path": "Diverse-QA/*.parquet", "split": "train"}]}, {"config_name": "Translated-Diverse-QA", "data_files": [{"path": "Translated-Diverse-QA/*.parquet", "split": "train"}]}], "track_downloads": true}
disabled: false | gated: manual | lastModified: 2025-08-20T16:20:07 | likes: 20 | trendingScore: 20 | private: false | sha: 1f2339f67cfec5b489c1be22f1609dec81f88cfd | description:
Nemotron-Pre-Training-Dataset-v1 Release
Data Overview
This pretraining dataset, for generative AI model training, preserves high-value math and code while enriching it with diverse multilingual Q&A, fueling the next generation of intelligent, globally-capable models.
This dataset supports NVIDIA Nemotron Nano 2, a family of large language models (LLMs) that consists of the NVIDIA-Nemotron-Nano-9B-v2, NVIDIA-Nemotron-Nano-9B-v2-Base, and NVIDIA-Nemotron-Nano-12B-v2-Base… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-CC-v2.
downloads: 31 | downloadsAllTime: 31 | tags:
[
"task_categories:text-generation",
"license:other",
"size_categories:1B<n<10B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
]
createdAt: 2025-08-14T05:49:54 | paperswithcode_id: null | citation: null

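Nemotron-CC-v2 is gated with manual approval (see `gated: manual` above), so access has to be granted on the dataset page and a token supplied before any config can be read. A sketch, with config names taken from the card:

```python
from huggingface_hub import login
from datasets import load_dataset

# Gated dataset: request access on the dataset page first, then authenticate
# (or pass token=... directly to load_dataset).
login()

# Stream one quality bucket rather than downloading all Parquet shards.
hq = load_dataset("nvidia/Nemotron-CC-v2", "High-Quality", split="train", streaming=True)
print(next(iter(hq)).keys())
```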
_id: 66212f29fb07c3e05ad0432e | id: HuggingFaceFW/fineweb | author: HuggingFaceFW | cardData:
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", 
"path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": 
"CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", 
"path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
disabled: false | gated: False | lastModified: 2025-07-11T20:16:53 | likes: 2,319 | trendingScore: 19 | private: false | sha: 9bb295ddab0e05d785b879661af7260fed5140fc | description:
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 18.5T tokens (originally 15T tokens) of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large-scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb.
downloads: 307,138 | downloadsAllTime: 4,794,276 | tags:
[
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"modality:tabular",
"modality:text",
"arxiv:2306.01116",
"arxiv:2109.07445",
"arxiv:2406.17557",
"doi:10.57967/hf/2493",
"region:us"
]
createdAt: 2024-04-18T14:33:13 | paperswithcode_id: null | citation: null

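Besides the per-crawl `CC-MAIN-*` configs, the FineWeb card defines `sample-10BT`, `sample-100BT`, and `sample-350BT` subsets. A sketch of streaming the smallest sample; the `text` and `url` field names are assumptions about the row schema rather than something shown in this listing:

```python
from datasets import load_dataset

# Stream the ~10B-token sample; a per-crawl config such as
# "CC-MAIN-2024-10" can be substituted to read a single snapshot.
fineweb = load_dataset("HuggingFaceFW/fineweb", "sample-10BT", split="train", streaming=True)

for i, doc in enumerate(fineweb):
    print(doc["url"], len(doc["text"]))
    if i == 2:
        break
```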
_id: 689aeabe723f825ffb6d2635 | id: Codatta/MM-Food-100K | author: Codatta | cardData:
{"license": "openrail", "task_categories": ["image-classification", "image-to-text"], "language": ["en"], "size_categories": ["100K<n<1M"]}
disabled: false | gated: False | lastModified: 2025-08-18T07:00:35 | likes: 20 | trendingScore: 18 | private: false | sha: 47afd00e23f527d952949d2699bbf39646da0d0d | description:
Overview
This project aims to introduce and release a comprehensive food image dataset designed specifically for computer vision tasks, particularly food recognition, classification, and nutritional analysis. We hope this dataset will provide a reliable resource for researchers and developers to advance the field of food AI. By publishing on Hugging Face, we expect to foster community collaboration and accelerate innovation in applications such as smart recipe recommendations, meal… See the full description on the dataset page: https://huggingface.co/datasets/Codatta/MM-Food-100K.
downloads: 387 | downloadsAllTime: 387 | tags:
[
"task_categories:image-classification",
"task_categories:image-to-text",
"language:en",
"license:openrail",
"size_categories:100K<n<1M",
"format:csv",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2508.10429",
"region:us"
]
createdAt: 2025-08-12T07:18:22 | paperswithcode_id: null | citation: null

_id: 688a94c5f5bb58bbd66655fb | id: allenai/MoNaCo_Benchmark | author: allenai | cardData:
{"license": "odc-by", "language": ["en"], "task_categories": ["question-answering", "table-question-answering"], "pretty_name": "MoNaCo"}
disabled: false | gated: auto | lastModified: 2025-08-18T16:23:59 | likes: 17 | trendingScore: 17 | private: false | sha: 8ca42024c346d5933bbe5f72db7bf117484b95c6 | description:
Website | Paper | Blogpost
MoNaCo Dataset Card
MoNaCo: More Natural and Complex Questions for Reasoning Across Dozens of Documents
MoNaCo is a benchmark of 1,315 human-written time-consuming questions that require retrieval, filtering and aggregation across text and tables --- with an average of 43.3 distinct documents per question!
The broad scope of MoNaCo questions makes it ideal as an LLM benchmark for at least five different settings:
Factuality: Evaluating models’ parametric… See the full description on the dataset page: https://huggingface.co/datasets/allenai/MoNaCo_Benchmark.
downloads: 132 | downloadsAllTime: 132 | tags:
[
"task_categories:question-answering",
"task_categories:table-question-answering",
"language:en",
"license:odc-by",
"arxiv:2508.11133",
"region:us"
]
createdAt: 2025-07-30T21:55:17 | paperswithcode_id: null | citation: null

_id: 68895c3182e38006a8e9aa94 | id: nvidia/Nemotron-Post-Training-Dataset-v1 | author: nvidia | cardData:
{"dataset_info": {"features": [{"name": "uuid", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "generator", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "reasoning", "dtype": "string"}, {"name": "messages", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "tool_calls", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "function", "struct": [{"name": "name", "dtype": "string"}, {"name": "arguments", "dtype": "string"}]}]}]}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "chat", "num_bytes": 3824039827, "num_examples": 746622}, {"name": "code", "num_bytes": 91391705833, "num_examples": 1896395}, {"name": "math", "num_bytes": 79173786238, "num_examples": 2044407}, {"name": "stem", "num_bytes": 329529074790, "num_examples": 20662167}, {"name": "tool_calling", "num_bytes": 6395081261, "num_examples": 310051}], "download_size": 203373185595, "dataset_size": 510313687949}, "configs": [{"config_name": "default", "data_files": [{"split": "chat", "path": "data/chat-*"}, {"split": "code", "path": "data/code-*"}, {"split": "math", "path": "data/math-*"}, {"split": "stem", "path": "data/stem-*"}, {"split": "tool_calling", "path": "data/tool-*"}]}], "license": "cc-by-4.0"}
disabled: false | gated: False | lastModified: 2025-08-01T20:25:24 | likes: 125 | trendingScore: 16 | private: false | sha: 053ba262368bf80c5864d36524731271662be115 | description:
Nemotron-Post-Training-Dataset-v1 Release
This dataset is a compilation of SFT data that supports improvements of math, code, stem, general reasoning, and tool calling capabilities of the original Llama instruct model Llama-3.3-Nemotron-Super-49B-v1.5.
Llama-3.3-Nemotron-Super-49B-v1.5 is an LLM which is a derivative of Meta Llama-3.3-70B-Instruct (AKA the reference model).
Llama-3.3-Nemotron-Super-49B-v1.5 offers a great tradeoff between model accuracy and efficiency. Efficiency… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v1.
downloads: 24,223 | downloadsAllTime: 24,223 | tags:
[
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.00949",
"region:us"
]
createdAt: 2025-07-29T23:41:37 | paperswithcode_id: null | citation: null

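Here the single `default` config carries five named splits (`chat`, `code`, `math`, `stem`, `tool_calling`), so subsetting happens through split selection; a sketch:

```python
from datasets import load_dataset

# tool_calling is the smallest split (~310K rows per the card above);
# stem is ~20.7M rows, so consider streaming=True for that one.
tools = load_dataset("nvidia/Nemotron-Post-Training-Dataset-v1", split="tool_calling")

print(tools)
print(tools[0]["messages"][0]["role"])
```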
_id: 689c0f182b384a7895b8a620 | id: ttchungc/PRELUDE | author: ttchungc | cardData:
{"configs": [{"config_name": "default", "data_files": [{"split": "subset", "path": "subset.parquet"}, {"split": "all", "path": "all.parquet"}, {"split": "public", "path": "public.parquet"}]}], "language": ["zh", "en"], "pretty_name": "PRELUDE: A Benchmark Designed to Require Global Comprehension and Reasoning over Long Contexts", "task_categories": ["question-answering", "text-generation", "text-classification"], "tags": ["question-answering", "long content reasoning", "narrative reasoning", "bilingual"], "size_categories": ["n<1K"]}
disabled: false | gated: False | lastModified: 2025-08-14T11:28:39 | likes: 16 | trendingScore: 16 | private: false | sha: c9aae0c1bce05335c759e26b36450b693e7a12ad | description:
Dataset Card for PRELUDE
Dataset Card Authors
Mo Yu*, Tsz Ting Chung*, Chulun Zhou*, Tong Li*, Rui Lu*, Jiangnan Li*, Liyan Xu*, Haoshu Lu, Ning Zhang, Jing Li, Jie Zhou
downloads: 282 | downloadsAllTime: 282 | tags:
[
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"language:zh",
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2508.09848",
"region:us",
"question-answering",
"long content reasoning",
"narrative reasoning",
"bilingual"
]
createdAt: 2025-08-13T04:05:44 | paperswithcode_id: null | citation: null

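PRELUDE's `default` config maps three Parquet files to the splits `subset`, `all`, and `public`; a minimal sketch of loading the public split:

```python
from datasets import load_dataset

# Split names ("subset", "all", "public") come from the configs block above.
prelude_public = load_dataset("ttchungc/PRELUDE", split="public")
print(prelude_public)
```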
_id: 6899dde80d9cbf5281d007f8 | id: Yejy53/Echo-4o-Image | author: Yejy53 | cardData:
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-to-image"], "configs": [{"config_name": "default", "data_files": "Surrel-Fantasy-Image/images/0-5000.tar.gz", "default": true}], "tags": ["gpt4o", "synthetic"], "license": "mit"}
disabled: false | gated: False | lastModified: 2025-08-19T12:47:16 | likes: 21 | trendingScore: 15 | private: false | sha: 6018b97fa2d894ddf74a1b7378075c5451ad6432 | description:
Echo-4o-Image Dataset
Paper | Project Page | Code
Introduction
Echo-4o-Image is a 180K-scale synthetic dataset generated by GPT-4o, designed to advance open-source models in image generation. While real-world image datasets are valuable, synthetic images offer crucial advantages, especially in addressing blind spots in real-world coverage:
Complementing Rare Scenarios: Synthetic data can generate examples for scenarios less represented in real-world datasets, such as… See the full description on the dataset page: https://huggingface.co/datasets/Yejy53/Echo-4o-Image.
downloads: 3,256 | downloadsAllTime: 3,256 | tags:
[
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2508.09987",
"region:us",
"gpt4o",
"synthetic"
]
createdAt: 2025-08-11T12:11:20 | paperswithcode_id: null | citation: null

_id: 689e705664eb45be366848ed | id: We-Math/We-Math2.0-Standard | author: We-Math | cardData:
{"license": "cc-by-nc-4.0", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "knowledge-level1", "dtype": "string"}, {"name": "knowledge-level2", "dtype": "string"}, {"name": "knowledge-level3", "dtype": "string"}, {"name": "knowledge-level4", "dtype": "string"}, {"name": "knowledge", "dtype": "string"}, {"name": "principle", "dtype": "string"}, {"name": "idx", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "standard", "num_bytes": 187530345.434, "num_examples": 5843}], "download_size": 241860062, "dataset_size": 187530345.434}}
disabled: false | gated: False | lastModified: 2025-08-19T16:57:17 | likes: 15 | trendingScore: 15 | private: false | sha: b176aac586fec023856bf6897fb4cf741f04e2b3 | description:
Dataset Card for We-Math 2.0
GitHub | Paper | Website
We-Math 2.0 is a unified system designed to comprehensively enhance the mathematical reasoning capabilities of Multimodal Large Language Models (MLLMs).
It integrates a structured mathematical knowledge system, model-centric data space modeling, and a reinforcement learning (RL)-based training paradigm to achieve both broad conceptual coverage and robust reasoning performance across varying difficulty levels.
The key… See the full description on the dataset page: https://huggingface.co/datasets/We-Math/We-Math2.0-Standard.
| 466 | 466 |
[
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2508.10433",
"region:us"
] | 2025-08-14T23:25:10 | null | null |
688cf1c35243ffa37516d87b
|
HuggingFaceH4/Multilingual-Thinking
|
HuggingFaceH4
|
{"viewer": true, "dataset_info": {"features": [{"name": "reasoning_language", "dtype": "string"}, {"name": "developer", "dtype": "string"}, {"name": "user", "dtype": "string"}, {"name": "analysis", "dtype": "string"}, {"name": "final", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "thinking", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8900623, "num_examples": 1000}], "download_size": 5290171, "dataset_size": 8900623}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["en", "de", "fr", "es", "it"], "pretty_name": "Multilingual-Thinking", "size_categories": ["1K<n<10K"]}
| false |
False
| 2025-08-07T08:14:11 | 63 | 13 | false |
f423949d2726f5a5633ea10ac45bc1ea1e0de6e7
|
Dataset summary
Multilingual-Thinking is a reasoning dataset where the chain-of-thought has been translated from English into one of 4 languages: Spanish, French, Italian, and German. The dataset was created by sampling 1k training samples from the SystemChat subset of SmolTalk2 and translating the reasoning traces with another language model.
This dataset was used in the OpenAI Cookbook to fine-tune the OpenAI gpt-oss models.
You can load the dataset using:
from datasets import… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking.
| 13,786 | 13,786 |
[
"task_categories:text-generation",
"language:en",
"language:de",
"language:fr",
"language:es",
"language:it",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-08-01T16:56:35 | null | null |
684d00f237c9aa4418cf8d65
|
lxucs/CapRetrieval
|
lxucs
|
{"license": "apache-2.0", "task_categories": ["text-retrieval"], "language": ["zh"], "tags": ["text", "retrieval"], "size_categories": ["1K<n<10K"], "configs": [{"config_name": "passages", "data_files": [{"split": "test", "path": "passages/test*"}]}, {"config_name": "queries", "data_files": [{"split": "test", "path": "queries/test*"}]}]}
| false |
False
| 2025-08-19T09:03:52 | 12 | 12 | false |
a17764a5626a1bcbc25e8b06514f9877b97facb0
|
The dataset CapRetrieval introduced in Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings.
CapRetrieval is prepared in Chinese; the English version of CapRetrieval is available at CapRetrievalEn, sharing the same queries, passages and labels.
Introduction
CapRetrieval evaluates the fine-grained embedding matching (dense passage retrieval) in Chinese, tailored towards a practical image search scenario:
Candidate passages are image captions… See the full description on the dataset page: https://huggingface.co/datasets/lxucs/CapRetrieval.
| 39 | 174 |
[
"task_categories:text-retrieval",
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.08592",
"region:us",
"text",
"retrieval"
] | 2025-06-14T04:56:18 | null | null |
689e70861c433ece934b3ad9
|
We-Math/We-Math2.0-Pro
|
We-Math
|
{"license": "cc-by-nc-4.0", "dataset_info": {"features": [{"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "difficulty", "dtype": "string"}, {"name": "knowledge points", "sequence": "string"}, {"name": "idx", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "pro", "num_bytes": 717964159.664, "num_examples": 4552}], "download_size": 97424709, "dataset_size": 717964159.664}}
| false |
False
| 2025-08-19T17:04:39 | 12 | 12 | false |
c1d9f3ccea7361069f0442362e781d1ae7a28e94
|
Dataset Card for We-Math 2.0
GitHub | Paper | Website
We-Math 2.0 is a unified system designed to comprehensively enhance the mathematical reasoning capabilities of Multimodal Large Language Models (MLLMs).
It integrates a structured mathematical knowledge system, model-centric data space modeling, and a reinforcement learning (RL)-based training paradigm to achieve both broad conceptual coverage and robust reasoning performance across varying difficulty levels.
The key… See the full description on the dataset page: https://huggingface.co/datasets/We-Math/We-Math2.0-Pro.
| 421 | 421 |
[
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2508.10433",
"region:us"
] | 2025-08-14T23:25:58 | null | null |
68a439cc3a24e2df78a05f0a
|
lxucs/CapRetrievalEn
|
lxucs
|
{"license": "apache-2.0", "task_categories": ["text-retrieval"], "language": ["en"], "tags": ["text", "retrieval"], "size_categories": ["1K<n<10K"], "configs": [{"config_name": "passages", "data_files": [{"split": "test", "path": "passages/test*"}]}, {"config_name": "queries", "data_files": [{"split": "test", "path": "queries/test*"}]}]}
| false |
False
| 2025-08-19T08:58:02 | 12 | 12 | false |
456773dd808700b2e95ac4a18edd239601fe813a
|
The english version of CapRetrieval introduced in Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings.
Queries and passages are translated automatically by GPT-4.1; all IDs and labels are kept the same as CapRetrieval. A few labels thus are not entirely accurate due to different language traits and expressions, but most labels should remain consistent.
CapRetrieval evaluates the fine-grained embedding matching (dense passage retrieval) in Chinese… See the full description on the dataset page: https://huggingface.co/datasets/lxucs/CapRetrievalEn.
| 53 | 53 |
[
"task_categories:text-retrieval",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.08592",
"region:us",
"text",
"retrieval"
] | 2025-08-19T08:46:04 | null | null |
6860e1f20ec2862c77415b90
|
YiboZhang2001/TexVerse
|
YiboZhang2001
|
{"license": "odc-by", "language": ["en"]}
| false |
False
| 2025-08-18T11:15:06 | 19 | 11 | false |
a4fe9f473d6628e06c7187604a01c0d8b13de5e2
|
TexVerse: A Universe of 3D Objects with High-Resolution Textures
Yibo Zhang1,2, Li Zhang1,3, Rui Ma2 *, Nan Cao1,4
1Shanghai Innovation Institute
2Jilin University
3Fudan University
4Tongji University
* Corresponding Author
TexVerse is a large-scale 3D dataset featuring high-resolution textures. Its key characteristics include:
Scale & Source: TexVerse dataset has 858,669 unique 3D models curated from Sketchfab, including 158,518… See the full description on the dataset page: https://huggingface.co/datasets/YiboZhang2001/TexVerse.
| 154,545 | 154,575 |
[
"language:en",
"license:odc-by",
"arxiv:2508.10868",
"region:us"
] | 2025-06-29T06:49:22 | null | null |
689430e6d5dd6bec1f194b1c
|
HelpingAI/Intermediate-Thinking-130k
|
HelpingAI
|
{"license": "apache-2.0", "task_categories": ["text-generation"], "language": ["af", "ar", "bn", "bg", "ca", "zh", "cs", "da", "nl", "en", "et", "fi", "fr", "de", "el", "he", "hi", "hu", "id", "it", "ja", "ko", "mr", "no", "fa", "pl", "pt", "ro", "ru", "so", "es", "sw", "sv", "tl", "ta", "te", "th", "tr", "uk", "ur", "vi", "cy"], "tags": ["intermediate-thinking", "mathematical-reasoning", "logical-reasoning", "self-correction", "structured-thinking"], "pretty_name": "Intermediate Thinking Dataset"}
| false |
False
| 2025-08-07T06:04:45 | 28 | 11 | false |
7791d84cfb9d0b68b2ae5bcef3411eaf0342a70b
|
Intermediate-Thinking-130k
A comprehensive dataset of 135,000 high-quality samples designed to advance language model reasoning capabilities through structured intermediate thinking processes. This dataset enables training and evaluation of models with sophisticated self-correction and iterative reasoning abilities across 42 languages.
Overview
Intermediate-Thinking-130k addresses a fundamental limitation in current language models: their inability to pause, reflect, and… See the full description on the dataset page: https://huggingface.co/datasets/HelpingAI/Intermediate-Thinking-130k.
| 959 | 959 |
[
"task_categories:text-generation",
"language:af",
"language:ar",
"language:bn",
"language:bg",
"language:ca",
"language:zh",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:de",
"language:el",
"language:he",
"language:hi",
"language:hu",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:mr",
"language:no",
"language:fa",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:so",
"language:es",
"language:sw",
"language:sv",
"language:tl",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:cy",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"intermediate-thinking",
"mathematical-reasoning",
"logical-reasoning",
"self-correction",
"structured-thinking"
] | 2025-08-07T04:51:50 | null | null |
689c3b49b81bb6c772345d05
|
DeepMount00/OpenItalianData
|
DeepMount00
|
{"dataset_info": {"features": [{"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3970741180, "num_examples": 2021922}], "download_size": 2251106125, "dataset_size": 3970741180}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["it"], "size_categories": ["1M<n<10M"]}
| false |
False
| 2025-08-21T12:19:08 | 14 | 11 | false |
c40c42d2cea188457d39e4986561b5b1b2f123cb
| null | 2,484 | 2,484 |
[
"task_categories:text-generation",
"language:it",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-08-13T07:14:17 | null | null |
689cca62d870fb1a8441783b
|
nvidia/Nemotron-Post-Training-Dataset-v2
|
nvidia
|
{"dataset_info": {"features": [{"name": "uuid", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "generator", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "reasoning", "dtype": "string"}, {"name": "messages", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "stem", "num_bytes": 807639463, "num_examples": 355000}, {"name": "chat", "num_bytes": 5971361114, "num_examples": 627720}, {"name": "math", "num_bytes": 507431890, "num_examples": 239467}, {"name": "code", "num_bytes": 980267419, "num_examples": 175000}, {"name": "multilingual_ja", "num_bytes": 18014250907, "num_examples": 975202}, {"name": "multilingual_de", "num_bytes": 18891078015, "num_examples": 1015314}, {"name": "multilingual_it", "num_bytes": 18724137501, "num_examples": 1016503}, {"name": "multilingual_es", "num_bytes": 16273052735, "num_examples": 935704}, {"name": "multilingual_fr", "num_bytes": 18231554197, "num_examples": 1001504}], "download_size": 44423886661, "dataset_size": 98400773241}, "configs": [{"config_name": "default", "data_files": [{"split": "stem", "path": "data/stem-*"}, {"split": "chat", "path": "data/chat-*"}, {"split": "math", "path": "data/math-*"}, {"split": "code", "path": "data/code-*"}, {"split": "multilingual_ja", "path": "data/multilingual_ja-*"}, {"split": "multilingual_de", "path": "data/multilingual_de-*"}, {"split": "multilingual_it", "path": "data/multilingual_it-*"}, {"split": "multilingual_es", "path": "data/multilingual_es-*"}, {"split": "multilingual_fr", "path": "data/multilingual_fr-*"}]}], "license": "cc-by-4.0", "language": ["en", "de", "it", "fr", "es", "ja"], "extra_gated_fields": {"Company": "text", "Institutional Email": "text"}}
| false |
auto
| 2025-08-21T04:29:18 | 11 | 11 | false |
5c89e01dd720ae0f4058445ed49c5fb68a03c76e
|
Nemotron-Post-Training-Dataset-v2 Release
Data Overview
This dataset adds to NVIDIA’s post-training dataset releases with an extension of SFT and RL data into five target languages: Spanish, French, German, Italian and Japanese. The data supports improvements of math, code, general reasoning, and instruction following capabilities of the NVIDIA-Nemotron-Nano-9B-v2-Base, in support of release of NVIDIA-Nemotron-Nano-8B-v2-Reasoning.
NVIDIA-Nemotron-Nano-9B is a family of… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v2.
| 88 | 88 |
[
"language:en",
"language:de",
"language:it",
"language:fr",
"language:es",
"language:ja",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2508.14444",
"region:us"
] | 2025-08-13T17:24:50 | null | null |
639244f571c51c43091df168
|
Anthropic/hh-rlhf
|
Anthropic
|
{"license": "mit", "tags": ["human-feedback"]}
| false |
False
| 2023-05-26T18:47:34 | 1,404 | 10 | false |
09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa
|
Dataset Card for HH-RLHF
Dataset Summary
This repository provides access to two different kinds of data:
Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. These data are meant to train preference (or reward) models for subsequent RLHF training. These data are not meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/hh-rlhf.
| 16,322 | 1,631,034 |
[
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2204.05862",
"region:us",
"human-feedback"
] | 2022-12-08T20:11:33 | null | null |
6655eb19d17e141dcb546ed5
|
HuggingFaceFW/fineweb-edu
|
HuggingFaceFW
|
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb-Edu", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}], "features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}, {"name": "token_count", "dtype": "int64"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": 
"CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
| false |
False
| 2025-07-11T20:16:53 | 737 | 10 | false |
87f09149ef4734204d70ed1d046ddc9ca3f2b8f9
|
📚 FineWeb-Edu
1.3 trillion tokens of the finest educational data the 🌐 web has to offer
Paper: https://arxiv.org/abs/2406.17557
What is it?
📚 FineWeb-Edu dataset consists of 1.3T tokens and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.
To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by LLama3-70B-Instruct. We then… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu.
| 92,382 | 3,898,239 |
[
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.17557",
"arxiv:2404.14219",
"arxiv:2401.10020",
"arxiv:2109.07445",
"doi:10.57967/hf/2497",
"region:us"
] | 2024-05-28T14:32:57 | null | null |
689d797321a2764d78695569
|
nvidia/Nemotron-Pretraining-Code-v1
|
nvidia
|
{"license": "other", "task_categories": ["text-generation"], "extra_gated_prompt": "By clicking \u201cAgree\u201d I confirm I have read and agree to NVIDIA Data Agreement for Model Training and agree that I intend to use this data for model training purposes only. (https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Dataset-sample/raw/main/LICENSE.md) ", "extra_gated_fields": {"Company": "text", "Institutional Email": "text", "I agree to use this dataset for model training purposes ONLY": "checkbox"}, "configs": [{"config_name": "Synthetic-Code", "data_files": [{"path": "Synthetic-Code/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-Code-Metadata", "data_files": [{"path": "Nemotron-Code-Metadata/*.parquet", "split": "train"}]}], "track_downloads": true}
| false |
manual
| 2025-08-20T16:20:10 | 10 | 10 | false |
c7e681692e63630bea1d8419ed3e2080c57fb03e
|
Nemotron-Pre-Training-Dataset-v1 Release
Data Overview
This pretraining dataset, for generative AI model training, preserves high-value math and code while enriching it with diverse multilingual Q&A, fueling the next generation of intelligent, globally-capable models.
This dataset supports NVIDIA Nemotron Nano 2, a family of large language models (LLMs) that consists of the NVIDIA-Nemotron-Nano-9B-v2, NVIDIA-Nemotron-Nano-9B-v2-Base, and NVIDIA-Nemotron-Nano-12B-v2-Base… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Code-v1.
| 104 | 104 |
[
"task_categories:text-generation",
"license:other",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-08-14T05:51:47 | null | null |
689f5fa0e6d760e64838621f
|
bytedance-research/UNO-1M
|
bytedance-research
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-to-image", "image-to-image"], "tags": ["text-to-image", "image-to-image"], "configs": [{"config_name": "train", "data_files": "uno_1m_total_labels.json"}]}
| false |
False
| 2025-08-17T13:29:29 | 10 | 10 | false |
f25bb61db6d6d66d82f41d1e613c0e04ba342e84
|
Less-to-More Generalization: Unlocking More Controllability by In-Context Generation
Overview
UNO-1M is a large dataset (~1M paired images) constructed by the in-context generation pipeline introduced in the UNO paper. Its advantages include highly diverse categories (>365 categories), high-resolution images (around 1024x1024), variable resolutions (different aspect ratios), high quality (produced by state-of-the-art text-to-image models), and high subject… See the full description on the dataset page: https://huggingface.co/datasets/bytedance-research/UNO-1M.
| 795 | 795 |
[
"task_categories:text-to-image",
"task_categories:image-to-image",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2504.02160",
"region:us",
"text-to-image",
"image-to-image"
] | 2025-08-15T16:26:08 | null | null |
625552d2b339bb03abe3432d
|
openai/gsm8k
|
openai
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]}
| false |
False
| 2024-01-04T12:05:15 | 839 | 9 | false |
e53f048856ff4f594e959d75785d2c2d37b678ee
|
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k.
| 384,924 | 6,455,315 |
[
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2110.14168",
"region:us",
"math-word-problems"
] | 2022-04-12T10:22:10 |
gsm8k
| null |
6532270e829e1dc2f293d6b8
|
gaia-benchmark/GAIA
|
gaia-benchmark
|
{"language": ["en"], "pretty_name": "General AI Assistants Benchmark", "extra_gated_prompt": "To avoid contamination and data leakage, you agree to not reshare this dataset outside of a gated or private repository on the HF hub.", "extra_gated_fields": {"I agree to not reshare the GAIA submissions set according to the above conditions": "checkbox"}}
| false |
auto
| 2025-02-13T08:36:12 | 420 | 9 | false |
897f2dfbb5c952b5c3c1509e648381f9c7b70316
|
GAIA dataset
GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc).
We added gating to prevent bots from scraping the dataset. Please do not reshare the validation or test set in a crawlable format.
Data and leaderboard
GAIA is made of more than 450 non-trivial question with an unambiguous answer, requiring different levels of tooling and autonomy to solve. It… See the full description on the dataset page: https://huggingface.co/datasets/gaia-benchmark/GAIA.
| 9,749 | 87,837 |
[
"language:en",
"arxiv:2311.12983",
"region:us"
] | 2023-10-20T07:06:54 | null | |
685a3e532ffa3324700102d5
|
interstellarninja/hermes_reasoning_tool_use
|
interstellarninja
|
{"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "tools", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "scenario_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 392137224, "num_examples": 51004}], "download_size": 128188655, "dataset_size": 392137224}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["question-answering"], "language": ["en"], "tags": ["tool-use", "json-mode", "reasoning", "rl"], "size_categories": ["10K<n<100K"]}
| false |
False
| 2025-08-05T13:50:58 | 102 | 9 | false |
55d824b623303055d5a76eb6ab12861b80a4ee20
|
TL;DR
51 004 ShareGPT conversations that teach LLMs when, how and whether to call tools. Built with the Nous Research Atropos RL stack in Atropos using a custom MultiTurnToolCallingEnv, and aligned with BFCL v3 evaluation scenarios. Released by @interstellarninja under Apache-2.0.
1 Dataset Highlights
| Count | Split | Scenarios covered | Size |
| --- | --- | --- | --- |
| 51 004 | train | single-turn · multi-turn · multi-step · relevance | 392 MB |
Each row: OpenAI-style conversations… See the full description on the dataset page: https://huggingface.co/datasets/interstellarninja/hermes_reasoning_tool_use.
| 2,590 | 2,908 |
[
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"tool-use",
"json-mode",
"reasoning",
"rl"
] | 2025-06-24T05:57:39 | null | null |
686176a165816f63e6edee56
|
theaidealab/workflows
|
theaidealab
|
nan
| false |
False
| 2025-08-09T06:53:55 | 38 | 9 | false |
c44baa69f397a4cd0b48638b909742b19b0befa8
| null | 10,358 | 13,184 |
[
"region:us"
] | 2025-06-29T17:23:45 | null | null |
End of preview.

Changelog
NEW Changes July 25th
- Added a `baseModels` field to the `models` split, which shows the models that the user tagged as base models for that model.
Example:
```json
{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
```
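For context, here is a minimal sketch of how such an entry could be read from the `models` split with the `datasets` library. It is not an official loader: the repository id below is a placeholder, and the sketch assumes `baseModels` arrives either as a parsed list of entries shaped like the example above or as a JSON-encoded string of that structure.

```python
# Minimal sketch, not an official loader. Assumptions: the repo id is a
# placeholder, and baseModels is either a list of entries shaped like the
# example above or a JSON-encoded string of that structure.
import json

from datasets import load_dataset

models = load_dataset("your-org/hub-stats", split="models", streaming=True)  # hypothetical repo id

for row in models:
    raw = row.get("baseModels")
    if not raw:
        continue
    entries = json.loads(raw) if isinstance(raw, str) else raw
    if isinstance(entries, dict):  # tolerate a single entry instead of a list
        entries = [entries]
    for entry in entries:
        base_ids = [m["id"] for m in entry.get("models", [])]
        print(row["id"], entry.get("relation"), base_ids)
```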
NEW Changes July 9th
- Fixed an issue with the `gguf` column where an integer overflow had broken the import pipeline for a few weeks ✅
NEW Changes Feb 27th
- Added new fields on the `models` split: `downloadsAllTime`, `safetensors`, `gguf`
- Added new field on the `datasets` split: `downloadsAllTime`
- Added new split: `papers`, which is all of the Daily Papers (see the loading sketch below)
Updated Daily
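A similarly hedged loading sketch for the splits and fields listed in the Feb 27th entry; the repository id is again a placeholder, and the split and column names are assumed to match the changelog exactly.

```python
# Minimal sketch, assuming a placeholder repo id and that split/column names
# match the changelog exactly (models, datasets, papers, downloadsAllTime).
from datasets import load_dataset

repo = "your-org/hub-stats"  # hypothetical repo id

# The papers split collects the Daily Papers.
papers = load_dataset(repo, split="papers")
print(papers[0])

# downloadsAllTime is available on both the models and datasets splits.
ds = load_dataset(repo, split="datasets")
top5 = ds.sort("downloadsAllTime", reverse=True).select(range(5))
for row in top5:
    print(row["id"], row["downloadsAllTime"])
```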
Downloads last month: 1,941