Dataset Viewer
Auto-converted to Parquet

Columns (string columns report value-length ranges, numeric and timestamp columns report min/max, class columns report distinct-value counts):

Column             Type            Min                    Max
_id                stringlengths   24                     24
id                 stringlengths   5                      121
author             stringlengths   2                      42
cardData           stringlengths   2                      1.09M
disabled           bool            2 classes
gated              stringclasses   3 values
lastModified       timestamp[ns]   2021-02-05 16:03:35    2025-07-31 13:13:09
likes              int64           0                      8.52k
trendingScore      float64         0                      78
private            bool            1 class
sha                stringlengths   40                     40
description        stringlengths   0                      6.67k
downloads          int64           0                      2.72M
downloadsAllTime   int64           0                      143M
tags               listlengths     1                      7.92k
createdAt          timestamp[ns]   2022-03-02 23:29:22    2025-07-31 13:11:52
paperswithcode_id  stringclasses   677 values
citation           stringlengths   0                      10.7k

Sample rows follow, one field per line; cardData, description, and tags values are kept verbatim.
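Given this schema, cardData arrives as a JSON string and tags as a list of strings. A minimal sketch of how the records might be inspected with pandas; the parquet filename is a placeholder for this page's auto-converted export, not a path given anywhere above:

```python
import json

import pandas as pd

# Placeholder path: point this at the auto-converted Parquet export of this dataset.
df = pd.read_parquet("datasets_metadata.parquet")

# Rank by all-time downloads and decode each record's card metadata.
for _, row in df.sort_values("downloadsAllTime", ascending=False).head(5).iterrows():
    card = json.loads(row["cardData"]) if row["cardData"] else {}
    print(row["id"], row["likes"], card.get("license"))
```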

_id: 63990f21cc50af73d29ecfa3
id: fka/awesome-chatgpt-prompts
author: fka
cardData:
{"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]}
disabled: false
gated: False
lastModified: 2025-01-06T00:02:53
likes: 8,516
trendingScore: 78
private: false
sha: 68ba7694e23014788dcc8ab5afe613824f45a05c
description:
🧠 Awesome ChatGPT Prompts [CSV dataset]. This is a Dataset Repository of Awesome ChatGPT Prompts. View All Prompts on GitHub. License: CC-0
downloads: 33,140
downloadsAllTime: 230,082
tags:
[ "task_categories:question-answering", "license:cc0-1.0", "size_categories:n<1K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "ChatGPT" ]
createdAt: 2022-12-13T23:47:45
paperswithcode_id: null
citation: null
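Since the tags above mark this repository as a CSV-format dataset usable with the datasets library, a minimal loading sketch; column names are discovered at load time rather than assumed:

```python
from datasets import load_dataset

# CSV-backed repo per the tags above; the builder infers the columns from the file.
ds = load_dataset("fka/awesome-chatgpt-prompts", split="train")
print(ds.num_rows, ds.column_names)
```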

_id: 687141e6df15b094718f28be
id: NousResearch/Hermes-3-Dataset
author: NousResearch
cardData:
{"license": "apache-2.0"}
disabled: false
gated: False
lastModified: 2025-07-11T17:43:25
likes: 265
trendingScore: 52
private: false
sha: b1fddbdcae4e6714889365d1e6ce266a45289cc9
description:
downloads: 6,664
downloadsAllTime: 6,664
tags:
[ "license:apache-2.0", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
createdAt: 2025-07-11T16:55:02
paperswithcode_id: null
citation: null

_id: 687a0c02efb93725cd663b85
id: MegaScience/MegaScience
author: MegaScience
cardData:
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "tags": ["science", "reasoning"], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3719840088, "num_examples": 1253230}], "download_size": 1878947811, "dataset_size": 3719840088}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
disabled: false
gated: False
lastModified: 2025-07-24T04:55:24
likes: 60
trendingScore: 44
private: false
sha: 8df5586005374acba25aecc4f5469ce30fec605c
description:
MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning Code: https://github.com/GAIR-NLP/MegaScience Project Page: https://huggingface.co/MegaScience MegaScience is a large-scale mixture of high-quality open-source datasets consisting of 1.25 million instances. We first collect multiple public datasets, then conduct comprehensive ablation studies across different data selection methods to identify the optimal approach for each dataset, thereby… See the full description on the dataset page: https://huggingface.co/datasets/MegaScience/MegaScience.
downloads: 3,737
downloadsAllTime: 3,737
tags:
[ "task_categories:text-generation", "language:en", "license:cc-by-nc-sa-4.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2507.16812", "region:us", "science", "reasoning" ]
createdAt: 2025-07-18T08:55:30
paperswithcode_id: null
citation: null
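The card above declares a single default config with a train split and string features question, answer, subject, reference_answer, and source; a minimal loading sketch under those declarations:

```python
from datasets import load_dataset

# One "default" config with one "train" split, per the dataset_info above.
ds = load_dataset("MegaScience/MegaScience", split="train")
ex = ds[0]
print(ex["subject"], "|", ex["question"][:120])
```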

_id: 68895c3182e38006a8e9aa94
id: nvidia/Nemotron-Post-Training-Dataset-v1
author: nvidia
cardData:
{"dataset_info": {"features": [{"name": "uuid", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "generator", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "reasoning", "dtype": "string"}, {"name": "messages", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "tool_calls", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "function", "struct": [{"name": "name", "dtype": "string"}, {"name": "arguments", "dtype": "string"}]}]}]}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "chat", "num_bytes": 3824039827, "num_examples": 746622}, {"name": "code", "num_bytes": 91391705833, "num_examples": 1896395}, {"name": "math", "num_bytes": 79173786238, "num_examples": 2044407}, {"name": "stem", "num_bytes": 329529074790, "num_examples": 20662167}, {"name": "tool", "num_bytes": 6395081261, "num_examples": 310051}], "download_size": 203373185595, "dataset_size": 510313687949}, "configs": [{"config_name": "default", "data_files": [{"split": "chat", "path": "data/chat-*"}, {"split": "code", "path": "data/code-*"}, {"split": "math", "path": "data/math-*"}, {"split": "stem", "path": "data/stem-*"}, {"split": "tool_calling", "path": "data/tool-*"}]}], "license": "cc-by-4.0"}
disabled: false
gated: False
lastModified: 2025-07-31T07:28:58
likes: 42
trendingScore: 42
private: false
sha: 06d0aef56fb542903a8d368d93ef54428cef0f61
description:
Nemotron-Post-Training-Dataset-v1 Release This dataset is a compilation of SFT data that supports improvements of math, code, stem, general reasoning, and tool calling capabilities of the original Llama instruct model Llama-3.3-Nemotron-Super-49B-v1.5. Llama-3.3-Nemotron-Super-49B-v1.5 is an LLM which is a derivative of Meta Llama-3.3-70B-Instruct (AKA the reference model). Llama-3.3-Nemotron-Super-49B-v1.5 offers a great tradeoff between model accuracy and efficiency. Efficiency… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v1.
downloads: 582
downloadsAllTime: 582
tags:
[ "license:cc-by-4.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2505.00949", "region:us" ]
createdAt: 2025-07-29T23:41:37
paperswithcode_id: null
citation: null
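Per the configs above, the default config exposes chat, code, math, stem, and tool_calling splits, with messages stored as role/content lists; a minimal streaming sketch that avoids downloading the roughly 200 GB of shards:

```python
from datasets import load_dataset

# Stream one record from the (smallest) chat split rather than fetching all shards.
ds = load_dataset("nvidia/Nemotron-Post-Training-Dataset-v1",
                  split="chat", streaming=True)
row = next(iter(ds))
print(row["category"], [m["role"] for m in row["messages"]])
```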

_id: 685a3e532ffa3324700102d5
id: interstellarninja/hermes_reasoning_tool_use
author: interstellarninja
cardData:
{"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "tools", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "scenario_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 392137224, "num_examples": 51004}], "download_size": 128188655, "dataset_size": 392137224}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["question-answering"], "language": ["en"], "tags": ["tool-use", "json-mode", "reasoning", "rl"], "size_categories": ["10K<n<100K"]}
disabled: false
gated: False
lastModified: 2025-07-23T11:19:25
likes: 76
trendingScore: 36
private: false
sha: cf5c4ed24134666ffb642fd34bc38fa9ff2ca909
description: null
downloads: 1,369
downloadsAllTime: 1,541
tags:
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "tool-use", "json-mode", "reasoning", "rl" ]
createdAt: 2025-06-24T05:57:39
paperswithcode_id: null
citation: null

_id: 684975418fb1ad8c76edc770
id: microsoft/rStar-Coder
author: microsoft
cardData:
{"pretty_name": "rStar-Coder", "configs": [{"config_name": "synthetic_sft", "data_files": [{"split": "train", "path": "synthetic_sft/*.parquet"}]}, {"config_name": "synthetic_rl", "data_files": [{"split": "train", "path": "synthetic_rl/*.parquet"}]}, {"config_name": "synthetic_rl_testcase", "data_files": [{"split": "train", "path": "synthetic_rl_testcase/*.parquet"}]}, {"config_name": "seed_sft", "data_files": [{"split": "train", "path": "seed_sft/*.parquet"}]}, {"config_name": "seed_testcase", "data_files": [{"split": "train", "path": "seed_testcase/*.parquet"}]}], "license": "cc-by-4.0"}
disabled: false
gated: False
lastModified: 2025-07-20T06:11:10
likes: 160
trendingScore: 26
private: false
sha: 3a7a0a0636ec96e3c1ec42ebe79ade467caa040d
description:
rStar-Coder Dataset Project GitHub | Paper Dataset Description rStar-Coder is a large-scale competitive code problem dataset containing 418K programming problems, 580K long-reasoning solutions, and rich test cases of varying difficulty levels. This dataset aims to enhance code reasoning capabilities in large language models, particularly in handling competitive code problems. Experiments on Qwen models (1.5B-14B) across various code reasoning benchmarks demonstrate… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/rStar-Coder.
downloads: 11,691
downloadsAllTime: 11,706
tags:
[ "license:cc-by-4.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2505.21297", "region:us" ]
createdAt: 2025-06-11T12:23:29
paperswithcode_id: null
citation: null
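The card above defines five named configs (synthetic_sft, synthetic_rl, synthetic_rl_testcase, seed_sft, seed_testcase), each with a train split over parquet shards; a minimal sketch that discovers the fields rather than assuming them, since the card declares no features:

```python
from datasets import load_dataset

# Each subset is a named config; inspect the columns of the first streamed record.
ds = load_dataset("microsoft/rStar-Coder", "synthetic_sft",
                  split="train", streaming=True)
print(list(next(iter(ds)).keys()))
```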

_id: 688710657650ffcfbe174277
id: zai-org/CC-Bench-trajectories
author: zai-org
cardData:
{"license": "mit", "task_categories": ["text-generation"], "language": ["en", "zh"], "tags": ["code", "agent", "coding", "trajectory", "benchmark"], "size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.parquet"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "task_id", "dtype": "int64"}, {"name": "trajectory", "dtype": "string"}, {"name": "model_name", "dtype": "string"}, {"name": "task_category", "dtype": "string"}, {"name": "user_messages", "dtype": "int64"}, {"name": "assistant_messages", "dtype": "int64"}, {"name": "total_input_tokens", "dtype": "int64"}, {"name": "total_output_tokens", "dtype": "int64"}, {"name": "total_tokens", "dtype": "int64"}, {"name": "tool_calls", "dtype": "int64"}, {"name": "tool_failures", "dtype": "int64"}, {"name": "failure_rate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 21608817, "num_examples": 208}], "download_size": 21608817, "dataset_size": 21608817}}
disabled: false
gated: False
lastModified: 2025-07-28T12:08:16
likes: 25
trendingScore: 25
private: false
sha: f6fd4b2c2c26cf3e1b6447c1749e24cb6699dd28
description:
CC-Bench Trajectories Overview To evaluate GLM-4.5's agentic coding capabilities in real-world scenarios, we build CC-Bench (using Claude Code as the agentic coding testbed) to conduct comprehensive testing against Claude-4-Sonnet, Kimi-K2, and Qwen3-Coder using 52 carefully designed coding tasks spanning multiple development domains. This dataset contains complete agentic trajectories of all 52 coding tasks with four models. Test Dataset Our evaluation dataset consists… See the full description on the dataset page: https://huggingface.co/datasets/zai-org/CC-Bench-trajectories.
downloads: 2,681
downloadsAllTime: 2,681
tags:
[ "task_categories:text-generation", "language:en", "language:zh", "license:mit", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "code", "agent", "coding", "trajectory", "benchmark" ]
createdAt: 2025-07-28T05:53:41
paperswithcode_id: null
citation: null

_id: 687c6f08386709ad79871f40
id: UCSC-VLAA/GPT-Image-Edit-1.5M
author: UCSC-VLAA
cardData:
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-image"], "pretty_name": "GPT-Image-Edit-1.5M", "tags": ["image", "image-editing", "instruction-tuning", "instruction-guided", "multimodal"], "library_name": "datasets"}
disabled: false
gated: False
lastModified: 2025-07-30T16:38:38
likes: 24
trendingScore: 22
private: false
sha: b56063b84ae60196cfcb1d0bbc29502c3d0178cd
description:
GPT-Image-Edit-1.5M A Million-Scale, GPT-Generated Image Dataset 📃Arxiv | 🌐 Project Page | 💻Github GPT-Image-Edit-1.5M is a comprehensive image editing dataset that is built upon HQ-Edit, UltraEdit, OmniEdit and Complex-Edit, with all output images regenerated with GPT-Image-1. 📣 News [2025.07.27] 🤗 We release GPT-Image-Edit, a state-of-the-art image editing model with 1.5M high-quality editing samples. All data, models, training code and evaluation code are… See the full description on the dataset page: https://huggingface.co/datasets/UCSC-VLAA/GPT-Image-Edit-1.5M.
downloads: 7,967
downloadsAllTime: 7,967
tags:
[ "task_categories:image-to-image", "language:en", "license:cc-by-4.0", "size_categories:1M<n<10M", "format:webdataset", "modality:image", "modality:text", "library:datasets", "library:webdataset", "library:mlcroissant", "arxiv:2507.21033", "region:us", "image", "image-editing", "instruction-tuning", "instruction-guided", "multimodal" ]
createdAt: 2025-07-20T04:22:32
paperswithcode_id: null
citation: null

_id: 68328f9074e873192976717f
id: multimodal-reasoning-lab/Zebra-CoT
author: multimodal-reasoning-lab
cardData:
{"license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["any-to-any", "image-text-to-text", "visual-question-answering"], "tags": ["visual-reasoning", "multimodal", "chain-of-thought"], "dataset_info": [{"config_name": "2D Visual Reasoning - Visual Jigsaw", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 12582901580.818, "num_examples": 21899}], "download_size": 12050671761, "dataset_size": 12582901580.818}, {"config_name": "2D Visual Reasoning - Visual Search", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 13219910500, "num_examples": 30000}], "download_size": 12844156433, "dataset_size": 13219910500}, {"config_name": "3D Visual Reasoning - Embodied CoT", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "problem_image_2", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}, {"name": "reasoning_image_9", "dtype": "image"}, {"name": "reasoning_image_10", "dtype": "image"}, {"name": "reasoning_image_11", "dtype": "image"}, {"name": "reasoning_image_12", "dtype": "image"}, {"name": "reasoning_image_13", "dtype": "image"}, {"name": "reasoning_image_14", "dtype": "image"}, {"name": "reasoning_image_15", "dtype": "image"}, {"name": "reasoning_image_16", "dtype": "image"}, {"name": "reasoning_image_17", "dtype": "image"}, {"name": "reasoning_image_18", "dtype": "image"}, {"name": "reasoning_image_19", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3951703486.138, "num_examples": 22666}], "download_size": 3915085114, "dataset_size": 3951703486.138}, {"config_name": "3D Visual Reasoning - Multi-Hop Objects Counting", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 19515039955, "num_examples": 10000}], "download_size": 19790655896, "dataset_size": 19515039955}, {"config_name": "3D Visual Reasoning - Robot Planning", "features": [{"name": "Question", 
"dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "problem_image_2", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}, {"name": "reasoning_image_9", "dtype": "image"}, {"name": "reasoning_image_10", "dtype": "image"}, {"name": "reasoning_image_11", "dtype": "image"}, {"name": "reasoning_image_12", "dtype": "image"}, {"name": "reasoning_image_13", "dtype": "image"}, {"name": "reasoning_image_14", "dtype": "image"}, {"name": "reasoning_image_15", "dtype": "image"}, {"name": "reasoning_image_16", "dtype": "image"}, {"name": "reasoning_image_17", "dtype": "image"}, {"name": "reasoning_image_18", "dtype": "image"}, {"name": "reasoning_image_19", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1898502775.976, "num_examples": 6944}], "download_size": 1942223260, "dataset_size": 1898502775.976}, {"config_name": "Scientific Reasoning - Chemistry", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 163590839.116, "num_examples": 4666}], "download_size": 146028450, "dataset_size": 163590839.116}, {"config_name": "Scientific Reasoning - Competitive Programming", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 34048163.736, "num_examples": 1207}], "download_size": 22819479, "dataset_size": 34048163.736}, {"config_name": "Scientific Reasoning - Geometry", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 166153915.182, "num_examples": 1058}], "download_size": 71915579, "dataset_size": 166153915.182}, {"config_name": "Scientific Reasoning - Graph Algorithms", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}, {"name": "reasoning_image_9", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3137613949, "num_examples": 
10000}], "download_size": 2795027626, "dataset_size": 3137613949}, {"config_name": "Scientific Reasoning - Physics", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "problem_image_2", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 860083796.85, "num_examples": 7090}], "download_size": 350630960, "dataset_size": 860083796.85}, {"config_name": "Visual Logic & Strategic Games - ARC-AGI", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "problem_image_2", "dtype": "image"}, {"name": "problem_image_3", "dtype": "image"}, {"name": "problem_image_4", "dtype": "image"}, {"name": "problem_image_5", "dtype": "image"}, {"name": "problem_image_6", "dtype": "image"}, {"name": "problem_image_7", "dtype": "image"}, {"name": "problem_image_8", "dtype": "image"}, {"name": "problem_image_9", "dtype": "image"}, {"name": "problem_image_10", "dtype": "image"}, {"name": "problem_image_11", "dtype": "image"}, {"name": "problem_image_12", "dtype": "image"}, {"name": "problem_image_13", "dtype": "image"}, {"name": "problem_image_14", "dtype": "image"}, {"name": "problem_image_15", "dtype": "image"}, {"name": "problem_image_16", "dtype": "image"}, {"name": "problem_image_17", "dtype": "image"}, {"name": "problem_image_18", "dtype": "image"}, {"name": "problem_image_19", "dtype": "image"}, {"name": "problem_image_20", "dtype": "image"}, {"name": "problem_image_21", "dtype": "image"}, {"name": "problem_image_22", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}, {"name": "reasoning_image_9", "dtype": "image"}, {"name": "reasoning_image_10", "dtype": "image"}, {"name": "reasoning_image_11", "dtype": "image"}, {"name": "reasoning_image_12", "dtype": "image"}, {"name": "reasoning_image_13", "dtype": "image"}, {"name": "reasoning_image_14", "dtype": "image"}, {"name": "reasoning_image_15", "dtype": "image"}, {"name": "reasoning_image_16", "dtype": "image"}, {"name": "reasoning_image_17", "dtype": "image"}, {"name": "reasoning_image_18", "dtype": "image"}, {"name": "reasoning_image_19", "dtype": "image"}, {"name": "reasoning_image_20", "dtype": "image"}, {"name": "reasoning_image_21", "dtype": "image"}, {"name": "reasoning_image_22", "dtype": "image"}, {"name": "reasoning_image_23", "dtype": "image"}, {"name": "reasoning_image_24", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3491272989, "num_examples": 2000}], "download_size": 1089272199, "dataset_size": 3491272989}, {"config_name": "Visual Logic & Strategic Games - Checkers", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": 
"reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}, {"name": "reasoning_image_9", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 798412047.376, "num_examples": 2753}], "download_size": 784302007, "dataset_size": 798412047.376}, {"config_name": "Visual Logic & Strategic Games - Chess", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3914720265.394, "num_examples": 20483}], "download_size": 3872363943, "dataset_size": 3914720265.394}, {"config_name": "Visual Logic & Strategic Games - Ciphers", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}, {"name": "reasoning_image_8", "dtype": "image"}, {"name": "reasoning_image_9", "dtype": "image"}, {"name": "reasoning_image_10", "dtype": "image"}, {"name": "reasoning_image_11", "dtype": "image"}, {"name": "reasoning_image_12", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2351943317.756, "num_examples": 6589}], "download_size": 1956729740, "dataset_size": 2351943317.756}, {"config_name": "Visual Logic & Strategic Games - Connect Four", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1602931211.081, "num_examples": 2029}], "download_size": 1570393636, "dataset_size": 1602931211.081}, {"config_name": "Visual Logic & Strategic Games - Maze", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5418428166, "num_examples": 20000}], "download_size": 5958257563, "dataset_size": 
5418428166}, {"config_name": "Visual Logic & Strategic Games - RPM", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1113615987, "num_examples": 3000}], "download_size": 631931331, "dataset_size": 1113615987}, {"config_name": "Visual Logic & Strategic Games - Tetris", "features": [{"name": "Question", "dtype": "string"}, {"name": "Text Reasoning Trace", "dtype": "string"}, {"name": "Final Answer", "dtype": "string"}, {"name": "problem_image_1", "dtype": "image"}, {"name": "reasoning_image_1", "dtype": "image"}, {"name": "reasoning_image_2", "dtype": "image"}, {"name": "reasoning_image_3", "dtype": "image"}, {"name": "reasoning_image_4", "dtype": "image"}, {"name": "reasoning_image_5", "dtype": "image"}, {"name": "reasoning_image_6", "dtype": "image"}, {"name": "reasoning_image_7", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2745762328, "num_examples": 10000}], "download_size": 1176544601, "dataset_size": 2745762328}], "configs": [{"config_name": "2D Visual Reasoning - Visual Jigsaw", "data_files": [{"split": "train", "path": "2D Visual Reasoning - Visual Jigsaw/train-*"}]}, {"config_name": "2D Visual Reasoning - Visual Search", "data_files": [{"split": "train", "path": "2D Visual Reasoning - Visual Search/train-*"}]}, {"config_name": "3D Visual Reasoning - Embodied CoT", "data_files": [{"split": "train", "path": "3D Visual Reasoning - Embodied CoT/train-*"}]}, {"config_name": "3D Visual Reasoning - Multi-Hop Objects Counting", "data_files": [{"split": "train", "path": "3D Visual Reasoning - Multi-Hop Objects Counting/train-*"}]}, {"config_name": "3D Visual Reasoning - Robot Planning", "data_files": [{"split": "train", "path": "3D Visual Reasoning - Robot Planning/train-*"}]}, {"config_name": "Scientific Reasoning - Chemistry", "data_files": [{"split": "train", "path": "Scientific Reasoning - Chemistry/train-*"}]}, {"config_name": "Scientific Reasoning - Competitive Programming", "data_files": [{"split": "train", "path": "Scientific Reasoning - Competitive Programming/train-*"}]}, {"config_name": "Scientific Reasoning - Geometry", "data_files": [{"split": "train", "path": "Scientific Reasoning - Geometry/train-*"}]}, {"config_name": "Scientific Reasoning - Graph Algorithms", "data_files": [{"split": "train", "path": "Scientific Reasoning - Graph Algorithms/train-*"}]}, {"config_name": "Scientific Reasoning - Physics", "data_files": [{"split": "train", "path": "Scientific Reasoning - Physics/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - ARC-AGI", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - ARC-AGI/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - Checkers", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - Checkers/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - Chess", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - Chess/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - Ciphers", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - Ciphers/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - Connect Four", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - Connect Four/train-*"}]}, 
{"config_name": "Visual Logic & Strategic Games - Maze", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - Maze/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - RPM", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - RPM/train-*"}]}, {"config_name": "Visual Logic & Strategic Games - Tetris", "data_files": [{"split": "train", "path": "Visual Logic & Strategic Games - Tetris/train-*"}]}]}
disabled: false
gated: False
lastModified: 2025-07-26T02:00:54
likes: 34
trendingScore: 20
private: false
sha: 0be141b18cb0986c3fa79f77daaec562622f1b1d
description:
Zebra‑CoT A diverse large-scale dataset for interleaved vision‑language reasoning traces. Dataset Description Zebra‑CoT is a diverse large‑scale dataset with 182,384 samples containing logically coherent interleaved text‑image reasoning traces across four major categories: scientific reasoning, 2D visual reasoning, 3D visual reasoning, and visual logic & strategic games. Dataset Structure Each example in Zebra‑CoT consists of: Problem statement:… See the full description on the dataset page: https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT.
downloads: 5,879
downloadsAllTime: 7,077
tags:
[ "task_categories:any-to-any", "task_categories:image-text-to-text", "task_categories:visual-question-answering", "license:cc-by-nc-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2507.16746", "region:us", "visual-reasoning", "multimodal", "chain-of-thought" ]
createdAt: 2025-05-25T03:33:36
paperswithcode_id: null
citation: null
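Config names in the card above contain spaces and ampersands and must be passed verbatim; a minimal streaming sketch for the chess subset, whose declared features are Question, Text Reasoning Trace, Final Answer, plus image columns:

```python
from datasets import load_dataset

# Config names are passed exactly as declared in the card, spaces included.
ds = load_dataset("multimodal-reasoning-lab/Zebra-CoT",
                  "Visual Logic & Strategic Games - Chess",
                  split="train", streaming=True)
row = next(iter(ds))
print(row["Final Answer"], type(row["problem_image_1"]))
```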

_id: 66e0b225bd62a1da48328722
id: common-pile/caselaw_access_project
author: common-pile
cardData:
{"task_categories": ["text-generation"], "language": ["en"], "pretty_name": "Caselaw Access Project"}
disabled: false
gated: False
lastModified: 2025-06-06T03:51:23
likes: 176
trendingScore: 16
private: false
sha: 3c2cb5080b3a16a04d8d8d07b28eaec7c1ba7a90
description:
Caselaw Access Project Description This dataset contains 6.7 million cases from the Caselaw Access Project and Court Listener. The Caselaw Access Project consists of nearly 40 million pages of U.S. federal and state court decisions and judges’ opinions from the last 365 years. In addition, Court Listener adds over 900 thousand cases scraped from 479 courts. The Caselaw Access Project and Court Listener source legal data from a wide variety of resources such as the… See the full description on the dataset page: https://huggingface.co/datasets/common-pile/caselaw_access_project.
downloads: 5,481
downloadsAllTime: 7,110
tags:
[ "task_categories:text-generation", "language:en", "size_categories:1M<n<10M", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "arxiv:2506.05209", "region:us" ]
createdAt: 2024-09-10T20:55:01
paperswithcode_id: null
citation: null

_id: 67d3479522a51de18affff22
id: nvidia/Llama-Nemotron-Post-Training-Dataset
author: nvidia
cardData:
{"license": "cc-by-4.0", "configs": [{"config_name": "SFT", "data_files": [{"split": "code", "path": "SFT/code/*.jsonl"}, {"split": "math", "path": "SFT/math/*.jsonl"}, {"split": "science", "path": "SFT/science/*.jsonl"}, {"split": "chat", "path": "SFT/chat/*.jsonl"}, {"split": "safety", "path": "SFT/safety/*.jsonl"}], "default": true}, {"config_name": "RL", "data_files": [{"split": "instruction_following", "path": "RL/instruction_following/*.jsonl"}]}]}
disabled: false
gated: False
lastModified: 2025-05-08T17:51:50
likes: 547
trendingScore: 15
private: false
sha: ab2a40d258a6a4d9d4c277d702aeea445081766c
description:
Llama-Nemotron-Post-Training-Dataset-v1.1 Release Update [4/8/2025]: v1.1: We are releasing an additional 2.2M Math and 500K Code Reasoning Data in support of our release of Llama-3.1-Nemotron-Ultra-253B-v1. 🎉 Data Overview This dataset is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model, in support of NVIDIA’s release of… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset.
downloads: 7,455
downloadsAllTime: 36,738
tags:
[ "license:cc-by-4.0", "size_categories:1M<n<10M", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "arxiv:2505.00949", "region:us" ]
createdAt: 2025-03-13T21:01:09
paperswithcode_id: null
citation: null
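The card above exposes two configs over JSONL shards: SFT (splits code, math, science, chat, safety; marked default) and RL (split instruction_following); a minimal sketch selecting a config and one of its declared splits:

```python
from datasets import load_dataset

# Pick the config first, then one of its declared splits; stream to avoid a full download.
sft_math = load_dataset("nvidia/Llama-Nemotron-Post-Training-Dataset",
                        "SFT", split="math", streaming=True)
print(list(next(iter(sft_math)).keys()))
```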

_id: 6807af7004bb82059e072037
id: deepvk/NonverbalTTS
author: deepvk
cardData:
{"tags": ["audio"], "license": "apache-2.0", "language": ["en"], "pretty_name": "NonverbalTTS", "size_categories": ["1K<n<10K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "default/train/**"}, {"split": "dev", "path": "default/dev/**"}, {"split": "test", "path": "default/test/**"}, {"split": "other", "path": "default/other/**"}]}], "task_categories": ["text-to-speech"]}
disabled: false
gated: False
lastModified: 2025-07-22T14:47:53
likes: 29
trendingScore: 15
private: false
sha: de245c4a2b70f564f85f84b421635d4f5d6ff2ea
description:
NonverbalTTS Dataset 🎵🗣️ NonverbalTTS is a 17-hour open-access English speech corpus with aligned text annotations for nonverbal vocalizations (NVs) and emotional categories, designed to advance expressive text-to-speech (TTS) research. Key Features ✨ 17 hours of high-quality speech data 10 NV types: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing 8 emotion categories: Angry, disgusted, fearful, happy… See the full description on the dataset page: https://huggingface.co/datasets/deepvk/NonverbalTTS.
downloads: 908
downloadsAllTime: 1,043
tags:
[ "task_categories:text-to-speech", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2507.13155", "arxiv:2409.09546", "region:us", "audio" ]
createdAt: 2025-04-22T15:02:08
paperswithcode_id: null
citation: null

_id: 67bb71f1aca0fe22d1e84b44
id: allenai/CoSyn-400K
author: allenai
cardData:
{"license": "odc-by", "task_categories": ["visual-question-answering"], "dataset_info": [{"config_name": "chart", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25262691844.136, "num_examples": 116814}, {"name": "validation", "num_bytes": 220083787.264, "num_examples": 1024}], "download_size": 24927449477, "dataset_size": 25482775631.4}, {"config_name": "chemical", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 282021984.062, "num_examples": 8942}, {"name": "validation", "num_bytes": 4186180, "num_examples": 128}], "download_size": 276447943, "dataset_size": 286208164.062}, {"config_name": "circuit", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 405803895.22, "num_examples": 10470}, {"name": "validation", "num_bytes": 5126755, "num_examples": 128}], "download_size": 392176815, "dataset_size": 410930650.22}, {"config_name": "diagram", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6647512945.646, "num_examples": 34963}, {"name": "validation", "num_bytes": 194765398, "num_examples": 1024}], "download_size": 6695298322, "dataset_size": 6842278343.646}, {"config_name": "document", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20408059180.798, "num_examples": 71282}, {"name": 
"validation", "num_bytes": 287297344.304, "num_examples": 1024}], "download_size": 20220923713, "dataset_size": 20695356525.102}, {"config_name": "graphic", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 401715264.464, "num_examples": 26968}, {"name": "validation", "num_bytes": 15527102.264, "num_examples": 1024}], "download_size": 360711845, "dataset_size": 417242366.728}, {"config_name": "math", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6288774127.884, "num_examples": 66714}, {"name": "validation", "num_bytes": 97463564.56, "num_examples": 1024}], "download_size": 6245281939, "dataset_size": 6386237692.444}, {"config_name": "music", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 436496623.452, "num_examples": 11969}, {"name": "validation", "num_bytes": 4754704, "num_examples": 128}], "download_size": 397428056, "dataset_size": 441251327.452}, {"config_name": "nutrition", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1445696898.35, "num_examples": 6931}, {"name": "validation", "num_bytes": 27712685, "num_examples": 128}], "download_size": 1410256975, "dataset_size": 1473409583.35}, {"config_name": "table", "features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "qa_pairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "figure_type", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}, {"name": "data", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7026511042.24, 
"num_examples": 46518}, {"name": "validation", "num_bytes": 152040498.064, "num_examples": 1024}], "download_size": 6918074537, "dataset_size": 7178551540.304}], "configs": [{"config_name": "chart", "data_files": [{"split": "train", "path": "chart/train-*"}, {"split": "validation", "path": "chart/validation-*"}]}, {"config_name": "chemical", "data_files": [{"split": "train", "path": "chemical/train-*"}, {"split": "validation", "path": "chemical/validation-*"}]}, {"config_name": "circuit", "data_files": [{"split": "train", "path": "circuit/train-*"}, {"split": "validation", "path": "circuit/validation-*"}]}, {"config_name": "diagram", "data_files": [{"split": "train", "path": "diagram/train-*"}, {"split": "validation", "path": "diagram/validation-*"}]}, {"config_name": "document", "data_files": [{"split": "train", "path": "document/train-*"}, {"split": "validation", "path": "document/validation-*"}]}, {"config_name": "graphic", "data_files": [{"split": "train", "path": "graphic/train-*"}, {"split": "validation", "path": "graphic/validation-*"}]}, {"config_name": "math", "data_files": [{"split": "train", "path": "math/train-*"}, {"split": "validation", "path": "math/validation-*"}]}, {"config_name": "music", "data_files": [{"split": "train", "path": "music/train-*"}, {"split": "validation", "path": "music/validation-*"}]}, {"config_name": "nutrition", "data_files": [{"split": "train", "path": "nutrition/train-*"}, {"split": "validation", "path": "nutrition/validation-*"}]}, {"config_name": "table", "data_files": [{"split": "train", "path": "table/train-*"}, {"split": "validation", "path": "table/validation-*"}]}]}
disabled: false
gated: False
lastModified: 2025-02-28T19:14:42
likes: 32
trendingScore: 13
private: false
sha: 86e46e1fd5e754d056169f0fb38f06c6997ff7de
description:
CoSyn-400k CoSyn-400k is a collection of synthetic question-answer pairs about a very diverse range of computer-generated images. The data was created by using the Claude large language model to generate code that can be executed to render an image, and using GPT-4o mini to generate Q/A pairs based on the code (without using the rendered image). The code used to generate this data is open source. Synthetic pointing data is available in a separate repo. Quick links: 📃 CoSyn… See the full description on the dataset page: https://huggingface.co/datasets/allenai/CoSyn-400K.
downloads: 2,172
downloadsAllTime: 16,916
tags:
[ "task_categories:visual-question-answering", "license:odc-by", "size_categories:100K<n<1M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2502.14846", "arxiv:2409.17146", "region:us" ]
createdAt: 2025-02-23T19:07:29
paperswithcode_id: null
citation: null
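Per the configs above, each image category (chart, chemical, circuit, diagram, document, graphic, math, music, nutrition, table) is its own config with train and validation splits, and qa_pairs is a sequence of question/explanation/answer; a minimal sketch:

```python
from datasets import load_dataset

# The small validation split of one category is enough to inspect the layout.
ds = load_dataset("allenai/CoSyn-400K", "chart", split="validation")
ex = ds[0]
# "sequence" features decode as a dict of parallel lists.
print(ex["metadata"]["figure_type"], ex["qa_pairs"]["question"][0])
```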

_id: 6837854ff36dbe5068b5d602
id: open-thoughts/OpenThoughts3-1.2M
author: open-thoughts
cardData:
{"dataset_info": {"features": [{"name": "difficulty", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 59763369750, "num_examples": 1200000}], "download_size": 28188197544, "dataset_size": 59763369750}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "tags": ["reasoning", "mathematics", "code", "science"], "library_name": "datasets"}
disabled: false
gated: False
lastModified: 2025-06-09T16:14:06
likes: 147
trendingScore: 12
private: false
sha: 61bcf9d4eb38b30295efc2021227a63cc5bb34c8
description:
paper | dataset | model [!NOTE] We have released a paper for OpenThoughts! See our paper here. OpenThoughts3-1.2M Open-source state-of-the-art reasoning dataset with 1.2M rows. 🚀 OpenThoughts3-1.2M is the third iteration in our line of OpenThoughts datasets, building on our previous OpenThoughts-114k and OpenThoughts2-1M. This time around, we scale even further and generate our dataset in a much more systematic way -- OpenThoughts3-1.2M is the result of a… See the full description on the dataset page: https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M.
downloads: 11,811
downloadsAllTime: 36,632
tags:
[ "task_categories:text-generation", "license:apache-2.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2506.04178", "region:us", "reasoning", "mathematics", "code", "science" ]
createdAt: 2025-05-28T21:51:11
paperswithcode_id: null
citation: null
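The card above declares difficulty, source, domain, and a ShareGPT-style conversations list of {from, value} pairs; a minimal streaming sketch:

```python
from datasets import load_dataset

# Stream rather than materializing the full 1.2M-row train split.
ds = load_dataset("open-thoughts/OpenThoughts3-1.2M",
                  split="train", streaming=True)
row = next(iter(ds))
print(row["domain"], row["difficulty"], row["conversations"][0]["from"])
```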

_id: 683fd7b68de3ffc58390f5e2
id: XenArcAI/MathX-5M
author: XenArcAI
cardData:
{"license": "mit", "tags": ["Mathematics", "XenArcAI", "High-Performance-Math", "Sparse-Math-Optimization", "Deep-Learning-Mathematics", "Math-Reasoning-LLM", "Symbolic-Math", "Computational-Mathematics", "ML-Math", "HPC-AI", "Numerical-Computing"], "task_categories": ["question-answering", "text-generation"], "size_categories": ["50GB"]}
disabled: false
gated: False
lastModified: 2025-07-26T05:19:46
likes: 53
trendingScore: 12
private: false
sha: 718166a53a74e462705d55b0c9f9d40448a7ff20
description:
XenArcAI Note: This dataset is part of the MathX lineup by XenArcAI; several datasets are available in this same lineup, whose main focus is to provide very high-quality datasets for model training and fine-tuning. This dataset is curated from high-quality public sources and enhanced with synthetic data from both closed and open-source models. It serves as a strong foundation for instruction-based model tuning and fine-tuning, offering one of the most refined and extensive corpora… See the full description on the dataset page: https://huggingface.co/datasets/XenArcAI/MathX-5M.
downloads: 5,539
downloadsAllTime: 6,433
tags:
[ "task_categories:question-answering", "task_categories:text-generation", "license:mit", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "Mathematics", "XenArcAI", "High-Performance-Math", "Sparse-Math-Optimization", "Deep-Learning-Mathematics", "Math-Reasoning-LLM", "Symbolic-Math", "Computational-Mathematics", "ML-Math", "HPC-AI", "Numerical-Computing" ]
createdAt: 2025-06-04T05:20:54
paperswithcode_id: null
citation: null

_id: 68733036a88d572f1c84c9db
id: StyleXX/OmniStyle-150k
author: StyleXX
cardData:
{"license": "apache-2.0"}
disabled: false
gated: False
lastModified: 2025-07-23T08:00:36
likes: 13
trendingScore: 12
private: false
sha: b9264acb310d31e48b7115e958f1594226e63304
description:
OmniStyle-150K Dataset OmniStyle-150K is a high-quality triplet dataset specifically designed to support generalizable, controllable, and high-resolution image style transfer. Each triplet includes a content image, a style reference image, and the corresponding stylized result. 📦 Dataset Structure OmniStyle-150K/: Stylized result images content/: Original content images style/: Style reference images Each file in the OmniStyle-150K/ folder is named using the… See the full description on the dataset page: https://huggingface.co/datasets/StyleXX/OmniStyle-150k.
downloads: 419
downloadsAllTime: 419
tags:
[ "license:apache-2.0", "region:us" ]
createdAt: 2025-07-13T04:04:06
paperswithcode_id: null
citation: null

_id: 6878963273bedf813f4fef37
id: spatialverse/InteriorGS
author: spatialverse
cardData:
{"viewer": false, "license": "other", "license_name": "interiorgs-terms-of-use", "license_link": "https://kloudsim-usa-cos.kujiale.com/InteriorGS/InteriorGS_Terms_of_Use.pdf"}
disabled: false
gated: auto
lastModified: 2025-07-25T06:38:13
likes: 12
trendingScore: 12
private: false
sha: f41811680802f1e9f95f9f44658b79751ce76c63
description:
InteriorGS: 3D Gaussian Splatting Dataset of Semantically Labeled Indoor Scenes A comprehensive indoor scene dataset featuring 3D Gaussian representations with semantic annotations and spatial occupancy information. Sample from the InteriorGS dataset. The dataset provides high-quality 3D Gaussian Splatting (3DGS) representations along with instance-level semantic bounding boxes and occupancy maps indicating agent-accessible areas. The red and yellow trajectories… See the full description on the dataset page: https://huggingface.co/datasets/spatialverse/InteriorGS.
downloads: 653
downloadsAllTime: 653
tags:
[ "license:other", "region:us" ]
createdAt: 2025-07-17T06:20:34
paperswithcode_id: null
citation: null

_id: 676f70846bf205795346d2be
id: FreedomIntelligence/medical-o1-reasoning-SFT
author: FreedomIntelligence
cardData:
{"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en", "zh"], "tags": ["medical", "biology"], "configs": [{"config_name": "en", "data_files": "medical_o1_sft.json"}, {"config_name": "zh", "data_files": "medical_o1_sft_Chinese.json"}, {"config_name": "en_mix", "data_files": "medical_o1_sft_mix.json"}, {"config_name": "zh_mix", "data_files": "medical_o1_sft_mix_Chinese.json"}]}
disabled: false
gated: False
lastModified: 2025-04-22T15:11:21
likes: 802
trendingScore: 11
private: false
sha: fc2c9e8a37b38f38da6d449564a8c350b244aef4
description:
News [2025/04/22] We split the data and kept only the medical SFT dataset (medical_o1_sft.json). The file medical_o1_sft_mix.json contains a mix of medical and general instruction data. [2025/02/22] We released the distilled dataset from Deepseek-R1 based on medical verifiable problems. You can use it to initialize your models with the reasoning chain from Deepseek-R1. [2024/12/25] We open-sourced the medical reasoning dataset for SFT, built on medical verifiable problems and an LLM… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT.
downloads: 8,714
downloadsAllTime: 91,712
tags:
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "language:zh", "license:apache-2.0", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2412.18925", "region:us", "medical", "biology" ]
createdAt: 2024-12-28T03:29:08
paperswithcode_id: null
citation: null
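The card above maps four configs to JSON files by language and mix (en, zh, en_mix, zh_mix); a minimal sketch for the English medical SFT subset:

```python
from datasets import load_dataset

# The config selects the JSON file; JSON-backed repos expose a single "train" split.
ds = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
print(ds.num_rows, ds.column_names)
```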

_id: 67bc84052cedbdaed9ee5c82
id: atalaydenknalbant/rawg-games-dataset
author: atalaydenknalbant
cardData:
{"license": "cc0-1.0", "task_categories": ["sentence-similarity", "summarization", "feature-extraction"], "tags": ["games", "video-games"]}
disabled: false
gated: False
lastModified: 2025-07-22T01:33:53
likes: 25
trendingScore: 11
private: false
sha: e8c649971a9c36836ffd1bea1334184d247fd59d
description:
Description The RAWG Games Dataset contains video game records gathered directly from the RAWG API. It includes essential fields such as game id, title, release date, rating, genres, platforms, descriptive tags, Metacritic score, developers, publishers, playtime, and a detailed description. The data was collected to support studies, trend analysis, and insights into the gaming industry. Each field is aligned with the specifications provided in the RAWG API documentation.… See the full description on the dataset page: https://huggingface.co/datasets/atalaydenknalbant/rawg-games-dataset.
downloads: 398
downloadsAllTime: 1,177
tags:
[ "task_categories:sentence-similarity", "task_categories:summarization", "task_categories:feature-extraction", "license:cc0-1.0", "size_categories:100K<n<1M", "format:parquet", "modality:image", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "games", "video-games" ]
createdAt: 2025-02-24T14:36:53
paperswithcode_id: null
citation: null

_id: 6858e379f9dc599076596798
id: facebook/seamless-interaction
author: facebook
cardData:
{"license": "cc-by-nc-4.0", "configs": [{"config_name": "improvised", "data_files": [{"split": "dev", "path": ["improvised/dev/**/*"]}, {"split": "test", "path": ["improvised/test/**/*"]}, {"split": "train", "path": ["improvised/train/**/*"]}]}, {"config_name": "naturalistic", "data_files": [{"split": "dev", "path": ["naturalistic/dev/**/*"]}, {"split": "test", "path": ["naturalistic/test/**/*"]}, {"split": "train", "path": ["naturalistic/train/**/*"]}]}], "tags": ["webdataset", "audio", "video"], "pretty_name": "Seamless Interaction"}
disabled: false
gated: False
lastModified: 2025-07-14T20:45:08
likes: 125
trendingScore: 11
private: false
sha: ba9e212ab927ba05bfd80778f53bf9de69f65e3b
description:
Seamless Interaction Dataset A large-scale multimodal dataset of 4,000+ hours of human interactions for AI research 🖼️ Blog 🌐 Website 🎮 Demo 📦 GitHub 📄 Paper Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals. The Seamless Interaction Dataset is a large-scale collection of over 4,000 hours of face-to-face interaction footage from more than 4,000 participants in… See the full description on the dataset page: https://huggingface.co/datasets/facebook/seamless-interaction.
downloads: 157,632
downloadsAllTime: 166,368
tags:
[ "license:cc-by-nc-4.0", "modality:audio", "modality:video", "library:webdataset", "region:us", "webdataset", "audio", "video" ]
createdAt: 2025-06-23T05:17:45
paperswithcode_id: null
citation: null

_id: 686321460e836b7a4c5621fa
id: atalaydenknalbant/MathCaptcha10k
author: atalaydenknalbant
cardData:
{"license": "cc-by-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ocr_text", "dtype": "string"}, {"name": "result", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 60582512, "num_examples": 10000}, {"name": "test", "num_bytes": 70989855.334, "num_examples": 11766}], "download_size": 132297385, "dataset_size": 131572367.334}, "task_categories": ["question-answering"], "tags": ["captcha", "math", "mathcaptcha", "math-captcha", "mvccaptcha"]}
disabled: false
gated: False
lastModified: 2025-07-06T21:35:38
likes: 17
trendingScore: 11
private: false
sha: 34d0caf9c175034bae863678c28128fc06ab1d61
description:
Dataset Details Dataset Name: MathCaptcha10k Curated by: Atalay Denknalbant License: Creative Commons Attribution 4.0 International (CC BY 4.0) Repository: https://www.kaggle.com/datasets/atalaydenknalbant/mathcaptcha10k Dataset Description A corpus of 10 000 synthetic arithmetic‐captcha images rendered at 200×70 px. Each image contains exactly two base-10 numbers (1–2 digits), a single + or – operator, an = sign and a trailing question mark (e.g.… See the full description on the dataset page: https://huggingface.co/datasets/atalaydenknalbant/MathCaptcha10k.
downloads: 1,865
downloadsAllTime: 1,934
tags:
[ "task_categories:question-answering", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "captcha", "math", "mathcaptcha", "math-captcha", "mvccaptcha" ]
createdAt: 2025-06-30T23:44:06
paperswithcode_id: null
citation: null
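Per the dataset_info above, each row pairs a captcha image with its OCR text and integer result, over train (10,000 rows) and test (11,766 rows) splits; a minimal sketch:

```python
from datasets import load_dataset

# Image columns decode to PIL images; ocr_text holds the rendered expression.
ds = load_dataset("atalaydenknalbant/MathCaptcha10k", split="train")
ex = ds[0]
print(ex["ocr_text"], "->", ex["result"])
ex["image"].save("captcha_example.png")
```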

_id: 687ea4b5432984e8877a06ed
id: atalaydenknalbant/Kinetics-700
author: atalaydenknalbant
cardData:
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "pretty_name": "Kinetics-700", "tags": ["video", "action-recognition", "computer-vision", "large-scale", "research", "human-actions"], "dataset_info": {"features": [{"name": "video", "dtype": "video", "description": "Path to the video file."}, {"name": "label", "dtype": "string", "description": "Human action label for the video clip."}, {"name": "youtube_id", "dtype": "string", "description": "The YouTube ID of the source video."}, {"name": "start_time", "dtype": "int64", "description": "Start timestamp of the action clip within the YouTube video (in seconds)."}, {"name": "end_time", "dtype": "int64", "description": "End timestamp of the action clip within the YouTube video (in seconds)."}], "splits": [{"name": "train", "num_bytes": "737,862,498,037", "num_examples": 536499}, {"name": "val", "num_bytes": "50,623,801,874", "num_examples": 33966}, {"name": "test", "num_bytes": "147,390,516,680", "num_examples": 64535}]}, "citation": [{"doi": "10.1109/ICCV.2017.335", "text": "@inproceedings{kay2017kinetics,\n title={The Kinetics Human Action Video Dataset},\n author={Kay, Will and Carreira, Joaquin and Simonyan, Karen and Zhang, Brian and Hillier, Chloe and Vijayanarasimhan, Sudheendra and Viola, Fabio and Tim Green and Trevor Back and Paul Natsev and others},\n booktitle={Proceedings of the IEEE International Conference on Computer Vision},\n pages={6611--6619},\n year={2017}\n}"}, {"doi": "10.1109/CVPR.2019.00971", "text": "@inproceedings{carreira2019kinetics,\n title={A short note on Kinetics-700: a much larger dataset for human action recognition},\n author={Carreira, Joaquin and Chuan, Eric and Zisserman, Andrew},\n booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n pages={9503--9506},\n year={2019}\n}"}]}
false
False
2025-07-27T08:10:44
11
11
false
f3a2cb54af3d9eb6daee706535237af8aae10eca
🎬 Dataset Card for Kinetics-700 📦 🚨IMPORTANT Dataset Decompression for Kinetics-700🚨 To fully utilize the Kinetics-700 dataset, you must download and decompress all 22 zipped archives. This process is essential to access the complete video collection. Failure to decompress all archives will result in an incomplete dataset. 📝 Dataset Description The Kinetics-700 dataset is a large-scale collection of YouTube video URLs for human action recognition. It is an… See the full description on the dataset page: https://huggingface.co/datasets/atalaydenknalbant/Kinetics-700.
294
294
[ "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "language:en", "license:other", "size_categories:100K<n<1M", "modality:video", "library:datasets", "library:mlcroissant", "region:us", "video", "action-recognition", "computer-vision", "large-scale", "research", "human-actions" ]
2025-07-21T20:36:05
null
null
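The Kinetics-700 card above stresses decompressing all 22 zipped archives. A hedged sketch of that step, assuming the archives are stored as .zip files somewhere in the repo (the exact layout and names are not shown on this page); note the split sizes above put the full download on the order of 1 TB.

```python
# Sketch only: fetch the dataset repo and extract every .zip it contains.
# Only the "22 zipped archives" requirement comes from the card; the archive
# naming and layout are assumptions.
import zipfile
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="atalaydenknalbant/Kinetics-700", repo_type="dataset"
)
for archive in sorted(Path(local_dir).rglob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent / archive.stem)  # one folder per archive
```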
661823b590a8b6724f1c6534
HuggingFaceM4/the_cauldron
HuggingFaceM4
{"dataset_info": [{"config_name": "ai2d", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 435362437.84770346, "num_examples": 2434}], "download_size": 438136609, "dataset_size": 435362437.84770346}, {"config_name": "aokvqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 871997710, "num_examples": 16539}], "download_size": 893265070, "dataset_size": 871997710}, {"config_name": "chart2text", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1060566797.2728182, "num_examples": 26961}], "download_size": 1103141721, "dataset_size": 1060566797.2728182}, {"config_name": "chartqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 784719364.9441738, "num_examples": 18265}], "download_size": 803192402, "dataset_size": 784719364.9441738}, {"config_name": "clevr", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 11522617868, "num_examples": 70000}], "download_size": 13267429872, "dataset_size": 11522617868}, {"config_name": "clevr_math", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 13308311206, "num_examples": 70000}], "download_size": 16315284, "dataset_size": 13308311206}, {"config_name": "cocoqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2213960474, "num_examples": 46287}], "download_size": 2393991009, "dataset_size": 2213960474}, {"config_name": "datikz", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 481233278, "num_examples": 47974}], "download_size": 613100257, "dataset_size": 481233278}, {"config_name": "diagram_image_to_text", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 18877197, "num_examples": 300}], "download_size": 18706661, "dataset_size": 18877197}, {"config_name": "docvqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": 
"source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 6885686042, "num_examples": 10189}], "download_size": 6887803845, "dataset_size": 6885686042}, {"config_name": "dvqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3689940101, "num_examples": 200000}], "download_size": 4295254110, "dataset_size": 3689940101}, {"config_name": "figureqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1901887152, "num_examples": 100000}], "download_size": 2220036667, "dataset_size": 1901887152}, {"config_name": "finqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 135268568, "num_examples": 5276}], "download_size": 123698250, "dataset_size": 135268568}, {"config_name": "geomverse", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 951640204, "num_examples": 9303}], "download_size": 323746516, "dataset_size": 951640204}, {"config_name": "hateful_memes", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3035059823, "num_examples": 8500}], "download_size": 3054208907, "dataset_size": 3035059823}, {"config_name": "hitab", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 161130580, "num_examples": 2500}], "download_size": 158295807, "dataset_size": 161130580}, {"config_name": "iam", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1129180352, "num_examples": 5663}], "download_size": 1128935602, "dataset_size": 1129180352}, {"config_name": "iconqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 264513634.7170419, "num_examples": 27307}], "download_size": 326674337, "dataset_size": 264513634.7170419}, {"config_name": "infographic_vqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 291677986, "num_examples": 2118}], "download_size": 292351760, "dataset_size": 291677986}, {"config_name": "intergps", "features": [{"name": "images", "sequence": "image"}, 
{"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24982328.291771192, "num_examples": 1280}], "download_size": 24870320, "dataset_size": 24982328.291771192}, {"config_name": "localized_narratives", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 21380844262.41927, "num_examples": 199998}], "download_size": 22164342699, "dataset_size": 21380844262.41927}, {"config_name": "mapqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3238062926, "num_examples": 37417}], "download_size": 3307676486, "dataset_size": 3238062926}, {"config_name": "mimic_cgd", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 12592929433, "num_examples": 70939}], "download_size": 13147641100, "dataset_size": 12592929433}, {"config_name": "multihiertt", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1356766489.046, "num_examples": 7619}], "download_size": 1360814135, "dataset_size": 1356766489.046}, {"config_name": "nlvr2", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8375492591, "num_examples": 50426}], "download_size": 10838882020, "dataset_size": 8375492591}, {"config_name": "ocrvqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5467134439, "num_examples": 165746}], "download_size": 6078073015, "dataset_size": 5467134439}, {"config_name": "okvqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 281454288182.492, "num_examples": 9009}], "download_size": 3009062, "dataset_size": 281454288182.492}, {"config_name": "plotqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 7837605221, "num_examples": 157070}], "download_size": 5320249066, "dataset_size": 7837605221}, {"config_name": "raven", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1506550467, 
"num_examples": 42000}], "download_size": 1720691636, "dataset_size": 1506550467}, {"config_name": "rendered_text", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 11086896502, "num_examples": 10000}], "download_size": 11086960376, "dataset_size": 11086896502}, {"config_name": "robut_sqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 679135952, "num_examples": 8514}], "download_size": 678722272, "dataset_size": 679135952}, {"config_name": "robut_wikisql", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5950915477, "num_examples": 74989}], "download_size": 6160300141, "dataset_size": 5950915477}, {"config_name": "robut_wtq", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4023729236, "num_examples": 38246}], "download_size": 4061523247, "dataset_size": 4023729236}, {"config_name": "scienceqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 284601898.76188564, "num_examples": 4976}], "download_size": 283265438, "dataset_size": 284601898.76188564}, {"config_name": "screen2words", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1670723783, "num_examples": 15730}], "download_size": 1346254268, "dataset_size": 1670723783}, {"config_name": "spot_the_diff", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1643123792, "num_examples": 8566}], "download_size": 1526740548, "dataset_size": 1643123792}, {"config_name": "st_vqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 696265340, "num_examples": 17247}], "download_size": 720462890, "dataset_size": 696265340}, {"config_name": "tabmwp", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 265337140.19648907, "num_examples": 22722}], "download_size": 306643610, "dataset_size": 265337140.19648907}, {"config_name": "tallyqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", 
"dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4267143189, "num_examples": 98680}], "download_size": 4662245152, "dataset_size": 4267143189}, {"config_name": "tat_qa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 73213942, "num_examples": 2199}], "download_size": 70862028, "dataset_size": 73213942}, {"config_name": "textcaps", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5938676115, "num_examples": 21953}], "download_size": 6175419911, "dataset_size": 5938676115}, {"config_name": "textvqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5939437331, "num_examples": 21953}], "download_size": 6175442839, "dataset_size": 5939437331}, {"config_name": "tqa", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 380346870.806369, "num_examples": 1493}], "download_size": 378238311, "dataset_size": 380346870.806369}, {"config_name": "vistext", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 541250281, "num_examples": 9969}], "download_size": 386023352, "dataset_size": 541250281}, {"config_name": "visual7w", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4432168161, "num_examples": 14366}], "download_size": 4443083495, "dataset_size": 4432168161}, {"config_name": "visualmrc", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2941051627.2639995, "num_examples": 3027}], "download_size": 2912911810, "dataset_size": 2941051627.2639995}, {"config_name": "vqarad", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 16561537, "num_examples": 313}], "download_size": 16226241, "dataset_size": 16561537}, {"config_name": "vqav2", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 10630091683, "num_examples": 82772}], "download_size": 13479302437, "dataset_size": 10630091683}, 
{"config_name": "vsr", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 107489763, "num_examples": 2157}], "download_size": 107576214, "dataset_size": 107489763}, {"config_name": "websight", "features": [{"name": "images", "sequence": "image"}, {"name": "texts", "list": [{"name": "user", "dtype": "string"}, {"name": "assistant", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2011365901, "num_examples": 10000}], "download_size": 1601222161, "dataset_size": 2011365901}], "configs": [{"config_name": "ai2d", "data_files": [{"split": "train", "path": "ai2d/train-*"}]}, {"config_name": "aokvqa", "data_files": [{"split": "train", "path": "aokvqa/train-*"}]}, {"config_name": "chart2text", "data_files": [{"split": "train", "path": "chart2text/train-*"}]}, {"config_name": "chartqa", "data_files": [{"split": "train", "path": "chartqa/train-*"}]}, {"config_name": "clevr", "data_files": [{"split": "train", "path": "clevr/train-*"}]}, {"config_name": "clevr_math", "data_files": [{"split": "train", "path": "clevr_math/train-*"}]}, {"config_name": "cocoqa", "data_files": [{"split": "train", "path": "cocoqa/train-*"}]}, {"config_name": "datikz", "data_files": [{"split": "train", "path": "datikz/train-*"}]}, {"config_name": "diagram_image_to_text", "data_files": [{"split": "train", "path": "diagram_image_to_text/train-*"}]}, {"config_name": "docvqa", "data_files": [{"split": "train", "path": "docvqa/train-*"}]}, {"config_name": "dvqa", "data_files": [{"split": "train", "path": "dvqa/train-*"}]}, {"config_name": "figureqa", "data_files": [{"split": "train", "path": "figureqa/train-*"}]}, {"config_name": "finqa", "data_files": [{"split": "train", "path": "finqa/train-*"}]}, {"config_name": "geomverse", "data_files": [{"split": "train", "path": "geomverse/train-*"}]}, {"config_name": "hateful_memes", "data_files": [{"split": "train", "path": "hateful_memes/train-*"}]}, {"config_name": "hitab", "data_files": [{"split": "train", "path": "hitab/train-*"}]}, {"config_name": "iam", "data_files": [{"split": "train", "path": "iam/train-*"}]}, {"config_name": "iconqa", "data_files": [{"split": "train", "path": "iconqa/train-*"}]}, {"config_name": "infographic_vqa", "data_files": [{"split": "train", "path": "infographic_vqa/train-*"}]}, {"config_name": "intergps", "data_files": [{"split": "train", "path": "intergps/train-*"}]}, {"config_name": "localized_narratives", "data_files": [{"split": "train", "path": "localized_narratives/train-*"}]}, {"config_name": "mapqa", "data_files": [{"split": "train", "path": "mapqa/train-*"}]}, {"config_name": "mimic_cgd", "data_files": [{"split": "train", "path": "mimic_cgd/train-*"}]}, {"config_name": "multihiertt", "data_files": [{"split": "train", "path": "multihiertt/train-*"}]}, {"config_name": "nlvr2", "data_files": [{"split": "train", "path": "nlvr2/train-*"}]}, {"config_name": "ocrvqa", "data_files": [{"split": "train", "path": "ocrvqa/train-*"}]}, {"config_name": "okvqa", "data_files": [{"split": "train", "path": "okvqa/train-*"}]}, {"config_name": "plotqa", "data_files": [{"split": "train", "path": "plotqa/train-*"}]}, {"config_name": "raven", "data_files": [{"split": "train", "path": "raven/train-*"}]}, {"config_name": "rendered_text", "data_files": [{"split": "train", "path": "rendered_text/train-*"}]}, {"config_name": 
"robut_sqa", "data_files": [{"split": "train", "path": "robut_sqa/train-*"}]}, {"config_name": "robut_wikisql", "data_files": [{"split": "train", "path": "robut_wikisql/train-*"}]}, {"config_name": "robut_wtq", "data_files": [{"split": "train", "path": "robut_wtq/train-*"}]}, {"config_name": "scienceqa", "data_files": [{"split": "train", "path": "scienceqa/train-*"}]}, {"config_name": "screen2words", "data_files": [{"split": "train", "path": "screen2words/train-*"}]}, {"config_name": "spot_the_diff", "data_files": [{"split": "train", "path": "spot_the_diff/train-*"}]}, {"config_name": "st_vqa", "data_files": [{"split": "train", "path": "st_vqa/train-*"}]}, {"config_name": "tabmwp", "data_files": [{"split": "train", "path": "tabmwp/train-*"}]}, {"config_name": "tallyqa", "data_files": [{"split": "train", "path": "tallyqa/train-*"}]}, {"config_name": "tat_qa", "data_files": [{"split": "train", "path": "tat_qa/train-*"}]}, {"config_name": "textcaps", "data_files": [{"split": "train", "path": "textcaps/train-*"}]}, {"config_name": "textvqa", "data_files": [{"split": "train", "path": "textvqa/train-*"}]}, {"config_name": "tqa", "data_files": [{"split": "train", "path": "tqa/train-*"}]}, {"config_name": "vistext", "data_files": [{"split": "train", "path": "vistext/train-*"}]}, {"config_name": "visual7w", "data_files": [{"split": "train", "path": "visual7w/train-*"}]}, {"config_name": "visualmrc", "data_files": [{"split": "train", "path": "visualmrc/train-*"}]}, {"config_name": "vqarad", "data_files": [{"split": "train", "path": "vqarad/train-*"}]}, {"config_name": "vqav2", "data_files": [{"split": "train", "path": "vqav2/train-*"}]}, {"config_name": "vsr", "data_files": [{"split": "train", "path": "vsr/train-*"}]}, {"config_name": "websight", "data_files": [{"split": "train", "path": "websight/train-*"}]}]}
false
False
2024-05-06T13:37:52
484
10
false
847a98a779b1652d65111daf20c972dfcd333605
Dataset Card for The Cauldron Dataset description The Cauldron is part of the Idefics2 release. It is a massive collection of 50 vision-language datasets (training sets only) that were used for the fine-tuning of the vision-language model Idefics2. Load the dataset To load the dataset, install the library datasets with pip install datasets. Then, from datasets import load_dataset ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d") to download and load the… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/the_cauldron.
28,854
2,878,158
[ "size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1603.07396", "arxiv:2206.01718", "arxiv:2208.05358", "arxiv:1612.06890", "arxiv:2310.00367", "arxiv:1710.07300", "arxiv:2312.12241", "arxiv:1912.03098", "arxiv:2211.08545", "arxiv:2306.05425", "arxiv:1709.00103", "arxiv:2003.12462", "arxiv:1612.00837", "arxiv:2205.00363", "arxiv:2403.09029", "arxiv:2405.02246", "region:us" ]
2024-04-11T17:53:57
null
null
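The Cauldron card's flattened load snippet, completed into runnable form; the config name selects one of the 50 sub-datasets (here "ai2d"), and the images/texts columns follow the dataset_info above.

```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d")  # any config name works
sample = ds["train"][0]
print(sample["texts"][0]["user"])       # question turn
print(sample["texts"][0]["assistant"])  # answer turn
```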
686176a165816f63e6edee56
theaidealab/workflows
theaidealab
nan
false
False
2025-07-29T15:56:41
13
10
false
6a48a73734ddf6edfe21d468e1bb5030caba680f
null
5,349
5,501
[ "region:us" ]
2025-06-29T17:23:45
null
null
688a11828e02585787ed1ed2
Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset
Trendyol
{"license": "apache-2.0", "task_categories": ["text-generation", "question-answering"], "language": ["en"], "tags": ["cybersecurity", "defensive-security", "instruction-tuning", "threat-intelligence", "incident-response", "security-operations"], "pretty_name": "Trendyol Cybersecurity Defense Dataset", "size_categories": ["10K<n<100K"], "dataset_info": {"version": "1.0.0"}}
false
False
2025-07-30T13:08:11
10
10
false
357544e7576607d88eaeac9b0adb07e9fd8bb2bb
Trendyol Cybersecurity Defense Instruction-Tuning Dataset (v2.0) 🚀 TL;DR 53,202 meticulously curated system/user/assistant instruction-tuning examples covering 200+ specialized cybersecurity domains. Built by the Trendyol Security Team for training state-of-the-art defensive security AI assistants. Expanded from 21K to 53K rows with comprehensive coverage of modern security challenges including cloud-native threats, AI/ML security, quantum computing risks… See the full description on the dataset page: https://huggingface.co/datasets/Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset.
54
54
[ "task_categories:text-generation", "task_categories:question-answering", "language:en", "license:apache-2.0", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "cybersecurity", "defensive-security", "instruction-tuning", "threat-intelligence", "incident-response", "security-operations" ]
2025-07-30T12:35:14
null
null
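A minimal inspection sketch for the Trendyol instruction-tuning set above. The card promises system/user/assistant examples, but this page does not pin down the JSON field names (and the train split name is likewise an assumption), so the sketch prints the schema instead of hard-coding it.

```python
from datasets import load_dataset

ds = load_dataset(
    "Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset",
    split="train",  # split name is an assumption; the cardData lists none
)
print(ds.features)  # confirm the actual column names before formatting chats
print(ds[0])        # one system/user/assistant instruction-tuning example
```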
68879040031998011dd7af28
Rapidata/text-2-video-human-preferences-genmo-mochi-1
Rapidata
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "video1", "dtype": "string"}, {"name": "video2", "dtype": "string"}, {"name": "weighted_results1_Alignment", "dtype": "float64"}, {"name": "weighted_results2_Alignment", "dtype": "float64"}, {"name": "detailedResults_Alignment", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Coherence", "dtype": "float64"}, {"name": "weighted_results2_Coherence", "dtype": "float64"}, {"name": "detailedResults_Coherence", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Preference", "dtype": "float64"}, {"name": "weighted_results2_Preference", "dtype": "float64"}, {"name": "detailedResults_Preference", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "file_name1", "dtype": "string"}, {"name": "file_name2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6301627, "num_examples": 1103}], "download_size": 653558, "dataset_size": 6301627}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["video-classification", "text-to-video", "text-classification"], "language": ["en"], "tags": ["videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan", "veo3", "mochi-1"], "pretty_name": "mochi-1 Human Preferences", "size_categories": ["1K<n<10K"]}
false
False
2025-07-28T15:09:22
9
9
false
9b8c6dbba6ba4e034adaa509550e53d81e3b7148
Rapidata Video Generation Genmo Mochi-1 Human Preference In this dataset, ~60k human responses from ~20k human annotators were collected to evaluate the mochi-1 video generation model on our benchmark. This dataset was collected in roughly 30 min using the Rapidata Python API, accessible to anyone and ideal for large-scale data annotation. Explore our latest model rankings on our website. If you get value from this dataset and would like to see more in the future, please… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-genmo-mochi-1.
105
105
[ "task_categories:video-classification", "task_categories:text-to-video", "task_categories:text-classification", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:tabular", "modality:text", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan", "veo3", "mochi-1" ]
2025-07-28T14:59:12
null
null
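A sketch of one way to use the pairwise columns above: tally alignment wins per model by treating the higher weighted_results*_Alignment score as the preferred side (that reading of the weights is an assumption). The two sibling Rapidata datasets below share the same schema, so the same code applies with their repo ids.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset(
    "Rapidata/text-2-video-human-preferences-genmo-mochi-1", split="train"
)
wins = Counter()
for row in ds:
    if row["weighted_results1_Alignment"] >= row["weighted_results2_Alignment"]:
        wins[row["model1"]] += 1
    else:
        wins[row["model2"]] += 1
print(wins.most_common())  # alignment wins per model across all pairs
```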
688a19299ffcb1a7664ae936
Rapidata/text-2-video-human-preferences-seedance-1-pro
Rapidata
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "video1", "dtype": "string"}, {"name": "video2", "dtype": "string"}, {"name": "weighted_results1_Alignment", "dtype": "float64"}, {"name": "weighted_results2_Alignment", "dtype": "float64"}, {"name": "detailedResults_Alignment", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Coherence", "dtype": "float64"}, {"name": "weighted_results2_Coherence", "dtype": "float64"}, {"name": "detailedResults_Coherence", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Preference", "dtype": "float64"}, {"name": "weighted_results2_Preference", "dtype": "float64"}, {"name": "detailedResults_Preference", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "file_name1", "dtype": "string"}, {"name": "file_name2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6590910, "num_examples": 1092}], "download_size": 626884, "dataset_size": 6590910}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["video-classification", "text-to-video", "text-classification"], "language": ["en"], "tags": ["videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan", "veo3", "mochi-1", "seedance-1-pro", "seedance", "seedance 1"], "pretty_name": "seedance-1-pro Human Preferences", "size_categories": ["1K<n<10K"]}
false
False
2025-07-30T14:39:57
8
8
false
17d28d549b2719ffb4265c73deb4f41225e1e38b
Rapidata Video Generation Seedance 1 Pro Human Preference In this dataset, ~60k human responses from ~20k human annotators were collected to evaluate the Seedance 1 Pro video generation model on our benchmark. This dataset was collected in roughly 30 min using the Rapidata Python API, accessible to anyone and ideal for large-scale data annotation. Explore our latest model rankings on our website. If you get value from this dataset and would like to see more in the future, please… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-seedance-1-pro.
2
2
[ "task_categories:video-classification", "task_categories:text-to-video", "task_categories:text-classification", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:tabular", "modality:text", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan", "veo3", "mochi-1", "seedance-1-pro", "seedance", "seedance 1" ]
2025-07-30T13:07:53
null
null
688a32a17f12efa7e0295d03
Rapidata/text-2-video-human-preferences-kling-v2.1-master
Rapidata
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "video1", "dtype": "string"}, {"name": "video2", "dtype": "string"}, {"name": "weighted_results1_Alignment", "dtype": "float64"}, {"name": "weighted_results2_Alignment", "dtype": "float64"}, {"name": "detailedResults_Alignment", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Coherence", "dtype": "float64"}, {"name": "weighted_results2_Coherence", "dtype": "float64"}, {"name": "detailedResults_Coherence", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Preference", "dtype": "float64"}, {"name": "weighted_results2_Preference", "dtype": "float64"}, {"name": "detailedResults_Preference", "list": [{"name": "userDetails", "struct": [{"name": "age", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "userScores", "struct": [{"name": "global", "dtype": "float64"}]}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "file_name1", "dtype": "string"}, {"name": "file_name2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6789195, "num_examples": 1191}], "download_size": 657410, "dataset_size": 6789195}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["video-classification", "text-to-video", "text-classification"], "language": ["en"], "tags": ["videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan", "veo3", "mochi-1", "seedance-1-pro", "seedance", "seedance 1", "kling", "kling v2.1", "kling v2.1 master"], "pretty_name": "kling v2.1 master Human Preferences", "size_categories": ["1K<n<10K"]}
false
False
2025-07-30T15:45:21
8
8
false
c8d1327f9ce461d063dd415d42f8108146723e52
Rapidata Video Generation Kling v2.1 Master Human Preference In this dataset, ~60k human responses from ~20k human annotators were collected to evaluate the Kling v2.1 Master video generation model on our benchmark. This dataset was collected in roughly 30 min using the Rapidata Python API, accessible to anyone and ideal for large-scale data annotation. Explore our latest model rankings on our website. If you get value from this dataset and would like to see more in the future… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-kling-v2.1-master.
3
3
[ "task_categories:video-classification", "task_categories:text-to-video", "task_categories:text-classification", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan", "veo3", "mochi-1", "seedance-1-pro", "seedance", "seedance 1", "kling", "kling v2.1", "kling v2.1 master" ]
2025-07-30T14:56:33
null
null
66212f29fb07c3e05ad0432e
HuggingFaceFW/fineweb
HuggingFaceFW
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", 
"path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": 
"CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", 
"path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
false
False
2025-07-11T20:16:53
2,273
7
false
9bb295ddab0e05d785b879661af7260fed5140fc
🍷 FineWeb 15 trillion tokens of the finest data the 🌐 web has to offer What is it? The 🍷 FineWeb dataset consists of more than 18.5T tokens (originally 15T tokens) of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large-scale data processing library. 🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb.
683,082
4,657,624
[ "task_categories:text-generation", "language:en", "license:odc-by", "size_categories:10B<n<100B", "modality:tabular", "modality:text", "arxiv:2306.01116", "arxiv:2109.07445", "arxiv:2406.17557", "doi:10.57967/hf/2493", "region:us" ]
2024-04-18T14:33:13
null
null
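Given the scale quoted above, the sample-* configs are the practical entry point for FineWeb; a standard streaming sketch follows (only the repo and config names are FineWeb-specific, and "text" is the usual FineWeb document column).

```python
from datasets import load_dataset

fw = load_dataset(
    "HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True
)
for doc in fw.take(3):       # stream a few documents instead of downloading
    print(doc["text"][:200])  # the whole 10BT sample
```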
66b5c35c854ad316cf7a8493
moondream/synthcat
moondream
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "elements", "sequence": [{"name": "role", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 188454566523, "num_examples": 2000000}], "download_size": 188179589916, "dataset_size": 188454566523}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
false
False
2025-07-27T17:56:17
7
7
false
4594615c145cbbedbe2c5335d4f89eb2d5abdb45
Synthetically generated OCR samples. Similar to SynthDog, but with more realistic text and larger scale. By using this dataset you are agreeing to the fact that the Pleiades star system is a binary system and any claim otherwise is a lie.
480
480
[ "size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2024-08-09T07:21:00
null
null
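A streaming sketch for synthcat, since the dataset_info above puts the full parquet set near 188 GB; the image/elements schema comes from that same dataset_info.

```python
from datasets import load_dataset

sc = load_dataset("moondream/synthcat", split="train", streaming=True)
for sample in sc.take(2):
    # "elements" is a sequence feature: parallel "role" and "text" lists
    print(sample["elements"])
```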
6791fcbb49c4df6d798ca7c9
cais/hle
cais
{"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "image_preview", "dtype": "image"}, {"name": "answer", "dtype": "string"}, {"name": "answer_type", "dtype": "string"}, {"name": "author_name", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "rationale_image", "dtype": "image"}, {"name": "raw_subject", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "canary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 284205983, "num_examples": 2500}], "download_size": 274276147, "dataset_size": 284205983}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
false
auto
2025-05-20T21:28:17
438
7
false
021a3d71f516a7ac28ceb8d284969902edf1edeb
Humanity's Last Exam 🌐 Website | 📄 Paper | GitHub Center for AI Safety & Scale AI Humanity's Last Exam (HLE) is a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. Humanity's Last Exam consists of 2,500 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of… See the full description on the dataset page: https://huggingface.co/datasets/cais/hle.
13,364
49,828
[ "license:mit", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-01-23T08:24:27
null
null
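A minimal load sketch for HLE. Note the gated flag above is "auto", so you must accept the repo's terms on the Hub and authenticate (e.g. via huggingface-cli login) before this succeeds.

```python
from datasets import load_dataset

hle = load_dataset("cais/hle", split="test")  # the only split in dataset_info
q = hle[0]
print(q["category"], "|", q["question"][:200])
print("answer_type:", q["answer_type"])       # useful for routing graders
```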
67aa021ced8d8663d42505cc
open-r1/OpenR1-Math-220k
open-r1
{"license": "apache-2.0", "language": ["en"], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "all/train-*"}]}, {"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "extended", "data_files": [{"split": "train", "path": "extended/train-*"}]}], "dataset_info": [{"config_name": "all", "features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "is_reasoning_complete", "sequence": "bool"}, {"name": "generations", "sequence": "string"}, {"name": "correctness_math_verify", "sequence": "bool"}, {"name": "correctness_llama", "sequence": "bool"}, {"name": "finish_reasons", "sequence": "string"}, {"name": "correctness_count", "dtype": "int64"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9734110026, "num_examples": 225129}], "download_size": 4221672067, "dataset_size": 9734110026}, {"config_name": "default", "features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "is_reasoning_complete", "sequence": "bool"}, {"name": "generations", "sequence": "string"}, {"name": "correctness_math_verify", "sequence": "bool"}, {"name": "correctness_llama", "sequence": "bool"}, {"name": "finish_reasons", "sequence": "string"}, {"name": "correctness_count", "dtype": "int64"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4964543659, "num_examples": 93733}], "download_size": 2149897914, "dataset_size": 4964543659}, {"config_name": "extended", "features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "is_reasoning_complete", "sequence": "bool"}, {"name": "generations", "sequence": "string"}, {"name": "correctness_math_verify", "sequence": "bool"}, {"name": "correctness_llama", "sequence": "bool"}, {"name": "finish_reasons", "sequence": "string"}, {"name": "correctness_count", "dtype": "int64"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4769566550, "num_examples": 131396}], "download_size": 2063936457, "dataset_size": 4769566550}]}
false
False
2025-02-18T11:45:27
623
7
false
e4e141ec9dea9f8326f4d347be56105859b2bd68
OpenR1-Math-220k Dataset description OpenR1-Math-220k is a large-scale dataset for mathematical reasoning. It consists of 220k math problems with two to four reasoning traces generated by DeepSeek R1 for problems from NuminaMath 1.5. The traces were verified using Math Verify for most samples and Llama-3.3-70B-Instruct as a judge for 12% of the samples, and each problem contains at least one reasoning trace with a correct answer. The dataset consists of two splits:… See the full description on the dataset page: https://huggingface.co/datasets/open-r1/OpenR1-Math-220k.
27,075
191,162
[ "language:en", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2025-02-10T13:41:48
null
null
67d305619f485955bf117049
nvidia/HelpSteer3
nvidia
{"license": "cc-by-4.0", "language": ["en", "zh", "ko", "fr", "es", "ru", "ja", "de", "it", "pt", "pl", "id", "nl", "vi"], "pretty_name": "HelpSteer3", "size_categories": ["10K<n<100K"], "tags": ["human-feedback", "reinforcement-learning"], "configs": [{"config_name": "preference", "default": true, "data_files": [{"split": "train", "path": "preference/train.jsonl.gz"}, {"split": "validation", "path": "preference/validation.jsonl.gz"}]}, {"config_name": "feedback", "data_files": [{"split": "train", "path": "feedback/train.jsonl.gz"}, {"split": "validation", "path": "feedback/validation.jsonl.gz"}]}, {"config_name": "edit", "data_files": [{"split": "train", "path": "edit/train.jsonl.gz"}, {"split": "validation", "path": "edit/validation.jsonl.gz"}]}, {"config_name": "edit_quality", "data_files": [{"split": "train", "path": "edit_quality/train.jsonl.gz"}, {"split": "validation", "path": "edit_quality/validation.jsonl.gz"}]}]}
false
False
2025-07-02T20:43:57
71
7
false
69b73a4d1ebbf8b88278793a8028d253c5b214fe
HelpSteer3 HelpSteer3 is an open-source dataset (CC-BY-4.0) that supports aligning models to become more helpful in responding to user prompts. HelpSteer3-Preference can be used to train Llama 3.3 Nemotron Super 49B v1 (for Generative RMs) and Llama 3.3 70B Instruct Models (for Bradley-Terry RMs) to produce Reward Models that score as high as 85.5% on RM-Bench and 78.6% on JudgeBench, which substantially surpass existing Reward Models on these benchmarks. HelpSteer3-Feedback and… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/HelpSteer3.
3,213
11,151
[ "language:en", "language:zh", "language:ko", "language:fr", "language:es", "language:ru", "language:ja", "language:de", "language:it", "language:pt", "language:pl", "language:id", "language:nl", "language:vi", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2410.16184", "arxiv:2505.11475", "arxiv:2503.04378", "region:us", "human-feedback", "reinforcement-learning" ]
2025-03-13T16:18:41
null
null
6878fe94cb3130b11ddfc192
iitolstykh/NHR-Edit
iitolstykh
{"language": ["en"], "license": "apache-2.0", "task_categories": ["image-to-image", "text-to-image"], "pretty_name": "NHR-Edit", "dataset_type": "image", "arxiv": 2507.14119, "tags": ["image-editing", "generative-ai", "triplet-mining"], "size_categories": ["100K<n<1M"]}
false
False
2025-07-23T13:03:07
20
7
false
b7404f4857ae87e07e6c8852dcf2572f6c70dc44
NoHumanRequired (NHR) Dataset for image editing 🌐 NHR Website | 📜 NHR Paper on arXiv | 💻 GitHub Repository | 🤗 BAGEL-NHR-Edit | NHR-Edit is a training dataset for instruction-based image editing. Each sample consists of an input image, a natural language editing instruction, and the corresponding edited image. All samples are generated fully automatically using the NoHumanRequired pipeline, without any human annotation or filtering. This dataset is… See the full description on the dataset page: https://huggingface.co/datasets/iitolstykh/NHR-Edit.
37,516
37,516
[ "task_categories:image-to-image", "task_categories:text-to-image", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:imagefolder", "modality:image", "modality:text", "library:datasets", "library:mlcroissant", "arxiv:2507.14119", "region:us", "image-editing", "generative-ai", "triplet-mining" ]
2025-07-17T13:45:56
null
null
645e8da96320b0efe40ade7a
roneneldan/TinyStories
roneneldan
{"license": "cdla-sharing-1.0", "task_categories": ["text-generation"], "language": ["en"]}
false
False
2024-08-12T13:27:26
707
6
false
f54c09fd23315a6f9c86f9dc80f725de7d8f9c64
Dataset containing synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary. Described in the following paper: https://arxiv.org/abs/2305.07759. The models referred to in the paper were trained on TinyStories-train.txt (the file tinystories-valid.txt can be used for validation loss). These models can be found on Huggingface, at roneneldan/TinyStories-1M/3M/8M/28M/33M/1Layer-21M. Additional resources: tinystories_all_data.tar.gz - contains a superset of… See the full description on the dataset page: https://huggingface.co/datasets/roneneldan/TinyStories.
33,799
732,622
[ "task_categories:text-generation", "language:en", "license:cdla-sharing-1.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2305.07759", "region:us" ]
2023-05-12T19:04:09
null
null
6879f16814f35d5cabe1926e
MegaScience/TextbookReasoning
MegaScience
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "library_name": "datasets", "tags": ["science", "reasoning", "scientific-reasoning", "question-answering", "education", "textbooks"], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 997341823, "num_examples": 651840}], "download_size": 532362586, "dataset_size": 997341823}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
false
False
2025-07-24T04:57:03
11
6
false
ca7ecbec76d01bff2e99f3dc17735b02f87d4e96
MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning Dataset Description Scientific reasoning is critical for developing AI scientists and supporting human researchers in advancing the frontiers of natural science discovery. However, the open-source community has primarily focused on mathematics and coding while neglecting the scientific domain, largely due to the absence of open, large-scale, high-quality, verifiable scientific reasoning… See the full description on the dataset page: https://huggingface.co/datasets/MegaScience/TextbookReasoning.
1,039
1,039
[ "task_categories:text-generation", "language:en", "license:cc-by-nc-sa-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2507.16812", "region:us", "science", "reasoning", "scientific-reasoning", "question-answering", "education", "textbooks" ]
2025-07-18T07:02:00
null
null

Changelog

NEW Changes July 25th

  • Added a baseModels field to the models split, listing the models that the user tagged as base models for that model

Example:

{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
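As a quick illustration, here is a minimal sketch of reading the new field with the 🤗 datasets library. It assumes the split is named models as in the changelog, and that baseModels may be missing for some rows or stored as a JSON-encoded string (as other card fields in this dataset are):

import json
from datasets import load_dataset

# Load only the models split of hub-stats (split name assumed from the changelog above).
models = load_dataset("cfahlgren1/hub-stats", split="models")

# Print the tagged base models for the first rows that have any.
for row in models.select(range(1000)):
    base = row.get("baseModels")
    if not base:
        continue  # most models have no tagged base models
    if isinstance(base, str):
        base = json.loads(base)  # some fields are stored as JSON strings
    print(row["id"], "->", base)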

NEW Changes July 9th

  • Fixed an integer overflow in the gguf column that had broken the import pipeline for a few weeks ✅

NEW Changes Feb 27th

  • Added new fields to the models split: downloadsAllTime, safetensors, gguf

  • Added a new field to the datasets split: downloadsAllTime

  • Added a new split, papers, which contains all of the Daily Papers (see the sketch below)
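
For instance, a minimal sketch of using the new split and fields, assuming the split and field names match the changelog entries above:

from datasets import load_dataset

# Load the new papers split (split name assumed from the changelog).
papers = load_dataset("cfahlgren1/hub-stats", split="papers")
print(papers)

# Rank datasets by the new all-time download counter (field name assumed).
ds = load_dataset("cfahlgren1/hub-stats", split="datasets")
ds = ds.filter(lambda r: r["downloadsAllTime"] is not None)  # guard against missing values
for row in ds.sort("downloadsAllTime", reverse=True).select(range(10)):
    print(row["id"], row["downloadsAllTime"])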

Updated Daily
