_id (stringlengths 24-24) | id (stringlengths 5-121) | author (stringlengths 2-42) | cardData (stringlengths 2-1.07M) | disabled (bool, 2 classes) | gated (null) | lastModified (timestamp[ns]) | likes (int64, 0-6.93k) | trendingScore (float64, 0-134) | private (bool, 1 class) | sha (stringlengths 40-40) | description (stringlengths 0-6.67k) | downloads (int64, 0-2.4M) | tags (sequencelengths 1-7.92k) | createdAt (timestamp[ns]) | key (stringclasses 1) | paperswithcode_id (stringclasses 645) | citation (stringlengths 0-10.7k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63990f21cc50af73d29ecfa3 | fka/awesome-chatgpt-prompts | fka | {"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]} | false | null | 2025-01-06T00:02:53 | 6,930 | 134 | false | 68ba7694e23014788dcc8ab5afe613824f45a05c | 🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
| 5,977 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | 2022-12-13T23:47:45 | null | null |
|
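The `cardData` column in each row holds the dataset card's metadata serialized as a JSON object (license, tags, task and size categories). A minimal sketch, using only the standard-library `json` module, of pulling those fields out of the value shown in the `fka/awesome-chatgpt-prompts` row above:

```python
import json

# cardData value copied verbatim from the fka/awesome-chatgpt-prompts row
card_data = ('{"license": "cc0-1.0", "tags": ["ChatGPT"], '
             '"task_categories": ["question-answering"], '
             '"size_categories": ["100K<n<1M"]}')

card = json.loads(card_data)
print(card["license"])       # -> cc0-1.0
print(card.get("tags", []))  # tags may be absent on other rows, hence .get()
```

Note that other rows' `cardData` can be much larger (up to ~1.07M characters per the column stats), so `.get()` with defaults is the safer access pattern across the full table.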
6782cb3d244c0e06b1362fed | NovaSky-AI/Sky-T1_data_17k | NovaSky-AI | {"size_categories": ["10K<n<100K"], "license": "apache-2.0"} | false | null | 2025-01-14T10:36:09 | 113 | 113 | false | 3e260822dae5d833d9b040e34265d5f9a2b8a6a5 | Sky-T1_data_17k.json: The 17k training data used to train Sky-T1-32B-Preview. The final data contains 5k coding data from APPs and TACO, and 10k math data from AIME, MATH, and Olympiads subsets of the NuminaMATH dataset. In addition, we maintain 1k science and puzzle data from STILL-2.
| 1,347 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-11T19:49:17 | null | null |
|
6649d353babc0b33565e1a4a | HumanLLMs/Human-Like-DPO-Dataset | HumanLLMs | {"language": ["en"], "license": "llama3", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.json"}]}]} | false | null | 2025-01-12T21:01:07 | 103 | 69 | false | dd82ab6a284a15765964149e6a6603ff8ed7d672 |
Enhancing Human-Like Responses in Large Language Models
Models | Dataset | Paper
Human-Like-DPO-Dataset
This dataset was created as part of research aimed at improving conversational fluency and engagement in large language models. It is suitable for formats like Direct Preference Optimization (DPO) to guide models toward generating more human-like responses.
The dataset includes 10,884 samples across 256 topics, including:
Technology
Daily Life
Science… See the full description on the dataset page: https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset. | 734 | [
"language:en",
"license:llama3",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2501.05032",
"region:us"
] | 2024-05-19T10:24:19 | null | null |
|
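The Human-Like-DPO-Dataset card above says the data is "suitable for formats like Direct Preference Optimization (DPO)", which pairs each prompt with a preferred and a dispreferred response. A small sketch of that record shape — the `prompt`/`chosen`/`rejected` field names follow the common DPO convention and are an assumption here, not something this listing confirms:

```python
# Hypothetical DPO record; field names are the usual convention, NOT taken
# from the card itself.
sample = {
    "prompt": "How was your weekend?",
    "chosen": "Pretty relaxed, thanks for asking! How about yours?",  # human-like
    "rejected": "As an AI, I do not have weekends.",                  # robotic
}

def is_valid_dpo_record(rec: dict) -> bool:
    """Check that the three required fields are present, strings, and non-empty."""
    return all(isinstance(rec.get(k), str) and rec[k]
               for k in ("prompt", "chosen", "rejected"))

print(is_valid_dpo_record(sample))  # -> True
```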
67750882633d421965733171 | DAMO-NLP-SG/multimodal_textbook | DAMO-NLP-SG | {"license": "apache-2.0", "task_categories": ["text-generation", "summarization"], "language": ["en"], "tags": ["Pretraining", "Interleaved", "Reasoning"], "size_categories": ["1M<n<10M"]} | false | null | 2025-01-11T11:48:45 | 112 | 60 | false | b83d307b2682d6b12420f5b93f4360880ea89df4 |
Multimodal-Textbook-6.5M
Overview
This dataset is for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining", containing 6.5M images interleaving with 0.8B text from instructional videos.
It contains pre-training corpus using interleaved image-text format. Specifically, our multimodal-textbook includes 6.5M keyframes extracted from instructional videos, interleaving with 0.8B ASR texts.
All the images and text are extracted from… See the full description on the dataset page: https://huggingface.co/datasets/DAMO-NLP-SG/multimodal_textbook. | 8,571 | [
"task_categories:text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2501.00958",
"region:us",
"Pretraining",
"Interleaved",
"Reasoning"
] | 2025-01-01T09:18:58 | null | null |
|
66cbf7ef92e9f5b19fcd65aa | cfahlgren1/react-code-instructions | cfahlgren1 | {"license": "mit", "pretty_name": "React Code Instructions"} | false | null | 2025-01-18T00:23:28 | 124 | 31 | false | 2b19c334ba37efe38142d5e0c2404fadcca0cbe3 |
React Code Instructions
Popular Queries
Number of instructions by Model
Unnested Messages
Instructions Added Per Day
Dataset of Claude Artifact esque React Apps generated by Llama 3.1 70B, Llama 3.1 405B, and Deepseek Chat V3.
Examples
Virtual Fitness Trainer Website
LinkedIn Clone
iPhone Calculator
Chipotle Waitlist
Apple Store
| 872 | [
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2024-08-26T03:35:11 | null | null |
|
676f70846bf205795346d2be | FreedomIntelligence/medical-o1-reasoning-SFT | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en", "zh"], "tags": ["medical", "biology"], "configs": [{"config_name": "en", "data_files": "medical_o1_sft.json"}, {"config_name": "zh", "data_files": "medical_o1_sft_Chinese.json"}]} | false | null | 2025-01-13T06:46:27 | 74 | 31 | false | 4c9573e7de1e8660b88158db2efa7c7204bbd269 |
Introduction
This dataset is used to fine-tune HuatuoGPT-o1, a medical LLM designed for advanced medical reasoning. This dataset is constructed using GPT-4o, which searches for solutions to verifiable medical problems and validates them through a medical verifier.
For details, see our paper and GitHub repository.
Citation
If you find our data useful, please consider citing our work!
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT. | 819 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:08 | null | null |
|
6695831f2d25bd04e969b0a2 | AI-MO/NuminaMath-CoT | AI-MO | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2495457595.0398345, "num_examples": 859494}, {"name": "test", "num_bytes": 290340.31593470514, "num_examples": 100}], "download_size": 1234351634, "dataset_size": 2495747935.355769}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["aimo", "math"], "pretty_name": "NuminaMath CoT"} | false | null | 2024-11-25T05:31:43 | 326 | 24 | false | 9d8d210c9f6a36c8f3cd84045668c9b7800ef517 |
Dataset Card for NuminaMath CoT
Dataset Summary
Approximately 860k math problems, where each solution is formatted in a Chain of Thought (CoT) manner. The sources of the dataset range from Chinese high school math exercises to US and international mathematics olympiad competition problems. The data were primarily collected from online exam paper PDFs and mathematics discussion forums. The processing steps include (a) OCR from the original PDFs, (b) segmentation… See the full description on the dataset page: https://huggingface.co/datasets/AI-MO/NuminaMath-CoT. | 3,879 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"aimo",
"math"
] | 2024-07-15T20:14:23 | null | null |
|
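The NuminaMath-CoT row's `dataset_info` declares four features per example: `source`, `problem`, `solution`, and a `messages` list of `{content, role}` dicts. A sketch of validating a record against that declared shape (the accepted `role` values are an assumption; the card only declares the field, not its vocabulary):

```python
def matches_numina_schema(rec: dict) -> bool:
    """Validate a row against the features declared in the NuminaMath-CoT card."""
    if not all(isinstance(rec.get(k), str) for k in ("source", "problem", "solution")):
        return False
    msgs = rec.get("messages")
    return (isinstance(msgs, list)
            and all(isinstance(m, dict)
                    and isinstance(m.get("content"), str)
                    and m.get("role") in ("system", "user", "assistant")  # assumed roles
                    for m in msgs))

row = {
    "source": "olympiads",  # illustrative value, not taken from the dataset
    "problem": "Prove that the sum of two even integers is even.",
    "solution": "Let a = 2m and b = 2n; then a + b = 2(m + n), which is even.",
    "messages": [
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
        {"role": "assistant", "content": "Let a = 2m and b = 2n; then a + b = 2(m + n)."},
    ],
}
print(matches_numina_schema(row))  # -> True
```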
67449661149efb6edaa63b98 | HuggingFaceTB/finemath | HuggingFaceTB | {"license": "odc-by", "dataset_info": [{"config_name": "finemath-3plus", "features": [{"name": "url", "dtype": "string"}, {"name": "fetch_time", "dtype": "int64"}, {"name": "content_mime_type", "dtype": "string"}, {"name": "warc_filename", "dtype": "string"}, {"name": "warc_record_offset", "dtype": "int32"}, {"name": "warc_record_length", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "token_count", "dtype": "int32"}, {"name": "char_count", "dtype": "int32"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "crawl", "dtype": "string"}, {"name": "snapshot_type", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 137764105388.93857, "num_examples": 21405610}], "download_size": 65039196945, "dataset_size": 137764105388.93857}, {"config_name": "finemath-4plus", "features": [{"name": "url", "dtype": "string"}, {"name": "fetch_time", "dtype": "int64"}, {"name": "content_mime_type", "dtype": "string"}, {"name": "warc_filename", "dtype": "string"}, {"name": "warc_record_offset", "dtype": "int32"}, {"name": "warc_record_length", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "token_count", "dtype": "int32"}, {"name": "char_count", "dtype": "int32"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "crawl", "dtype": "string"}, {"name": "snapshot_type", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 39101488149.09091, "num_examples": 6699493}], "download_size": 18365184633, "dataset_size": 39101488149.09091}, {"config_name": "infiwebmath-3plus", "features": [{"name": "url", "dtype": "string"}, {"name": 
"metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "token_count", "dtype": "int64"}, {"name": "char_count", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96485696853.10182, "num_examples": 13882669}], "download_size": 46808660851, "dataset_size": 96485696853.10182}, {"config_name": "infiwebmath-4plus", "features": [{"name": "url", "dtype": "string"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "token_count", "dtype": "int64"}, {"name": "char_count", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40002719500.1551, "num_examples": 6296212}], "download_size": 19234328998, "dataset_size": 40002719500.1551}], "configs": [{"config_name": "finemath-3plus", "data_files": [{"split": "train", "path": "finemath-3plus/train-*"}]}, {"config_name": "finemath-4plus", "data_files": [{"split": "train", "path": "finemath-4plus/train-*"}]}, {"config_name": "infiwebmath-3plus", "data_files": [{"split": "train", "path": "infiwebmath-3plus/train-*"}]}, {"config_name": "infiwebmath-4plus", "data_files": [{"split": "train", "path": "infiwebmath-4plus/train-*"}]}]} | false | null | 2024-12-23T11:19:16 | 261 | 21 | false | 8f233cf84cff0b817b3ffb26d5be7370990dd557 |
📐 FineMath
What is it?
📐 FineMath consists of 34B tokens (FineMath-3+) and 54B tokens (FineMath-3+ with InfiMM-WebMath-3+) of mathematical educational content filtered from CommonCrawl. To curate this dataset, we trained a mathematical content classifier using annotations generated by Llama-3.1-70B-Instruct. We used the classifier to retain only the most educational mathematics content, focusing on clear explanations and step-by-step problem solving rather than… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/finemath. | 39,932 | [
"license:odc-by",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3847",
"region:us"
] | 2024-11-25T15:23:13 | null | null |
|
6758176e04e2f15d7bfacd54 | PowerInfer/QWQ-LONGCOT-500K | PowerInfer | {"license": "apache-2.0", "language": ["en"]} | false | null | 2024-12-26T10:19:19 | 106 | 17 | false | 10a787d967281599e9be6761717147817c018424 | This repository contains approximately 500,000 instances of responses generated using QwQ-32B-Preview language model. The dataset combines prompts from multiple high-quality sources to create diverse and comprehensive training data.
The dataset is available under the Apache 2.0 license.
Over 75% of the responses exceed 8,000 tokens in length. The majority of prompts were carefully created using persona-based methods to create challenging instructions.
Bias, Risks, and Limitations… See the full description on the dataset page: https://huggingface.co/datasets/PowerInfer/QWQ-LONGCOT-500K. | 1,073 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-10T10:26:54 | null | null |
|
66a6da71f0dc7c8df2e0f979 | OpenLeecher/lmsys_chat_1m_clean | OpenLeecher | {"language": ["en"], "size_categories": ["100K<n<1M"], "pretty_name": "Cleaned LMSYS dataset", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "grounded", "dtype": "bool"}, {"name": "deepseek_response", "struct": [{"name": "moralization", "dtype": "int64"}, {"name": "reward", "dtype": "float64"}, {"name": "value", "dtype": "string"}]}, {"name": "phi-3-mini_response", "struct": [{"name": "moralization", "dtype": "int64"}, {"name": "reward", "dtype": "float64"}, {"name": "value", "dtype": "string"}]}, {"name": "flaw", "dtype": "string"}, {"name": "agreement", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1673196622, "num_examples": 273402}], "download_size": 906472159, "dataset_size": 1673196622}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2024-12-31T22:35:13 | 67 | 14 | false | e9f2f6838a2dbba87c216bb6bc406e8d7ce0f389 |
Cleaning and Categorizing
A few weeks ago, I had the itch to do some data crunching, so I began this project - to clean and classify lmsys-chat-1m. The process was somewhat long and tedious, but here is the quick overview:
1. Removing Pure Duplicate Instructions
The first step was to eliminate pure duplicate instructions. This involved:
Removing whitespace and punctuation.
Ensuring that if two instructions matched after that, only one was retained.
This step… See the full description on the dataset page: https://huggingface.co/datasets/OpenLeecher/lmsys_chat_1m_clean. | 1,420 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-07-28T23:55:29 | null | null |
|
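The duplicate-removal step described in the lmsys_chat_1m_clean card above (strip whitespace and punctuation, then keep only one instruction per normalized form) can be sketched as follows. Lowercasing is an assumption added here for illustration; the card only mentions whitespace and punctuation:

```python
import string

def normalize(text: str) -> str:
    """Strip whitespace and punctuation so near-identical instructions collide.

    Lowercasing is an extra assumption, not stated in the card.
    """
    remove = set(string.punctuation) | set(string.whitespace)
    return "".join(ch for ch in text.lower() if ch not in remove)

def dedupe(instructions: list[str]) -> list[str]:
    """Retain the first instruction seen for each normalized form."""
    seen, kept = set(), []
    for inst in instructions:
        key = normalize(inst)
        if key not in seen:
            seen.add(key)
            kept.append(inst)
    return kept

prompts = ["Write a poem.", "write a poem", "Write a story."]
print(dedupe(prompts))  # -> ['Write a poem.', 'Write a story.']
```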
673e9e53cdad8a9744b0bf1b | O1-OPEN/OpenO1-SFT | O1-OPEN | {"license": "apache-2.0", "task_categories": ["question-answering"], "language": ["en", "zh"], "size_categories": ["10K<n<100K"]} | false | null | 2024-12-17T02:30:09 | 331 | 14 | false | 63112de109aa755e9cdfad63a13f08a92dd7df36 |
SFT Data for CoT Activation
πππThis repository contains the dataset used for fine-tuning a language model using SFT for Chain-of-Thought Activation.
πππThe dataset is designed to enhance the model's ability to generate coherent and logical reasoning sequences.
βββBy using this dataset, the model can learn to produce detailed and structured reasoning steps, enhancing its performance on complex reasoning tasks.
Statistics
1οΈβ£Total Records: 77,685β¦ See the full description on the dataset page: https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT. | 2,143 | [
"task_categories:question-answering",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-11-21T02:43:31 | null | null |
|
66212f29fb07c3e05ad0432e | HuggingFaceFW/fineweb | HuggingFaceFW | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": 
"CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, 
{"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]} | false | null | 2025-01-03T11:58:46 | 1,825 | 12 | false | e31fdfd3918d4b48e837d69d274e624a067d7091 |
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 15T tokens of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large-scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release of the full… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb. | 262,188 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.01116",
"arxiv:2109.07445",
"arxiv:2406.17557",
"doi:10.57967/hf/2493",
"region:us"
] | 2024-04-18T14:33:13 | null | null |
|
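Tags like `size_categories:100K<n<1M`, `size_categories:10B<n<100B`, and `size_categories:n>1T` recur throughout this listing, including on the FineWeb row above. A sketch of bucketing an example count into one of those labels — the boundary conventions here are an assumption inferred from the tags; the Hub's exact bucketing rules are not stated in this dump:

```python
# Bucket boundaries mirroring the size_categories tags seen in this listing
# (e.g. "n<1K", "100K<n<1M", "10B<n<100B", "n>1T"). Boundary handling is an
# assumption, not a documented specification.
BOUNDS = [10**3, 10**4, 10**5, 10**6, 10**7, 10**8, 10**9, 10**10, 10**11, 10**12]
LABELS = ["1K", "10K", "100K", "1M", "10M", "100M", "1B", "10B", "100B", "1T"]

def size_category(n: int) -> str:
    """Map an example count to a Hub-style size_categories label."""
    if n < BOUNDS[0]:
        return "n<1K"
    for i in range(1, len(BOUNDS)):
        if n < BOUNDS[i]:
            return f"{LABELS[i - 1]}<n<{LABELS[i]}"
    return "n>1T"

print(size_category(859_494))     # -> 100K<n<1M  (NuminaMath-CoT train split)
print(size_category(21_405_610))  # -> 10M<n<100M (finemath-3plus split)
```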
66c84764a47b2d6c582bbb02 | amphion/Emilia-Dataset | amphion | {"license": "cc-by-nc-4.0", "task_categories": ["text-to-speech", "automatic-speech-recognition"], "language": ["zh", "en", "ja", "fr", "de", "ko"], "pretty_name": "Emilia", "size_categories": ["10M<n<100M"], "extra_gated_prompt": "Terms of Access: The researcher has requested permission to use the Emilia dataset and the Emilia-Pipe preprocessing pipeline. In exchange for such permission, the researcher hereby agrees to the following terms and conditions:\n1. The researcher shall use the dataset ONLY for non-commercial research and educational purposes.\n2. The authors make no representations or warranties regarding the dataset, \n including but not limited to warranties of non-infringement or fitness for a particular purpose.\n\n3. The researcher accepts full responsibility for their use of the dataset and shall defend and indemnify the authors of Emilia, \n including their employees, trustees, officers, and agents, against any and all claims arising from the researcher's use of the dataset, \n including but not limited to the researcher's use of any copies of copyrighted content that they may create from the dataset.\n\n4. The researcher may provide research associates and colleagues with access to the dataset,\n provided that they first agree to be bound by these terms and conditions.\n \n5. The authors reserve the right to terminate the researcher's access to the dataset at any time.\n6. 
If the researcher is employed by a for-profit, commercial entity, the researcher's employer shall also be bound by these terms and conditions, and the researcher hereby represents that they are fully authorized to enter into this agreement on behalf of such employer.", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Position": "text", "Your Supervisor/manager/director": "text", "I agree to the Terms of Access": "checkbox"}} | false | null | 2024-09-06T13:29:55 | 194 | 12 | false | bcaad00d13e7c101485990a46e88f5884ffed3fc |
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation
This is the official repository for the Emilia dataset and the source code for the Emilia-Pipe speech data preprocessing pipeline.
News 🔥
2024/08/28: Welcome to join Amphion's Discord channel to stay connected and engage with our community!
2024/08/27: The Emilia dataset is now publicly available! Discover the most extensive and diverse speech generation… See the full description on the dataset page: https://huggingface.co/datasets/amphion/Emilia-Dataset. | 37,947 | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"language:zh",
"language:en",
"language:ja",
"language:fr",
"language:de",
"language:ko",
"license:cc-by-nc-4.0",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:audio",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2407.05361",
"region:us"
] | 2024-08-23T08:25:08 | null | null |
|
677c1f196b1653e3955dbce7 | Rapidata/text-2-image-Rich-Human-Feedback | Rapidata | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "word_scores", "dtype": "string"}, {"name": "alignment_score_norm", "dtype": "float32"}, {"name": "coherence_score_norm", "dtype": "float32"}, {"name": "style_score_norm", "dtype": "float32"}, {"name": "alignment_heatmap", "sequence": {"sequence": "float16"}}, {"name": "coherence_heatmap", "sequence": {"sequence": "float16"}}, {"name": "alignment_score", "dtype": "float32"}, {"name": "coherence_score", "dtype": "float32"}, {"name": "style_score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 25257389633.104, "num_examples": 13024}], "download_size": 17856619960, "dataset_size": 25257389633.104}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "task_categories": ["text-to-image", "text-classification", "image-classification", "image-to-text", "image-segmentation"], "language": ["en"], "tags": ["t2i", "preferences", "human", "flux", "midjourney", "imagen", "dalle", "heatmap", "coherence", "alignment", "style", "plausiblity"], "pretty_name": "Rich Human Feedback for Text to Image Models", "size_categories": ["1M<n<10M"]} | false | null | 2025-01-11T13:23:04 | 26 | 12 | false | e77afd00e481d9d2ca41a5b5c4f89cb704de45c6 |
Building upon Google's research Rich Human Feedback for Text-to-Image Generation we have collected over 1.5 million responses from 152'684 individual humans using Rapidata via the Python API. Collection took roughly 5 days.
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
We asked humans to evaluate AI-generated images in style, coherence and prompt alignment. For images that contained flaws, participants were… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-image-Rich-Human-Feedback. | 1,982 | [
"task_categories:text-to-image",
"task_categories:text-classification",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:image-segmentation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2312.10240",
"region:us",
"t2i",
"preferences",
"human",
"flux",
"midjourney",
"imagen",
"dalle",
"heatmap",
"coherence",
"alignment",
"style",
"plausiblity"
] | 2025-01-06T18:21:13 | null | null |
|
677c6dded25ebab44ca8267b | BIOMEDICA/biomedica_webdataset | BIOMEDICA | {"tags": ["medical", "biology", "chemistry"], "size_categories": ["n>1T"], "extra_gated_prompt": "I understand that this dataset contains articles grouped under three licensing categories: Commercial Use Allowed (CC0, CC BY, CC BY-SA, CC BY-ND licenses), Non-Commercial Use Only (CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses), and Other (no machine-readable Creative Commons license, no license, or a custom license). I acknowledge that each individual data point in the dataset specifies its corresponding license type, and I agree that it is my responsibility to verify compliance with the licensing terms before using any specific data point. I further agree to comply with the specific licensing terms of each group when using the dataset in accordance to what is established by the PubMed Central: PMC Open Acces Subset", "extra_gated_fields": {"I confirm that I have read and agree to the data usage agreement outlined above by checking this box": "checkbox", "I want to use this dataset for": "text"}} | false | null | 2025-01-16T02:52:32 | 12 | 12 | false | f5c128c71123deb732786e895e3b464911b1707e |
Dataset Card for BIOMEDICA
Arxiv: Arxiv | Website: Biomedica | Training instructions: OpenCLIP | Tutorial: Google Colab
The BIOMEDICA dataset is a large-scale, deep-learning-ready biomedical dataset containing over 24M image-caption pairs and 30M image references from 6M unique open-source articles. Each data point is richly annotated with over 27 unique metadata fields, including article-level information (e.g., license… See the full description on the dataset page: https://huggingface.co/datasets/BIOMEDICA/biomedica_webdataset. | 12 | [
"size_categories:n>1T",
"arxiv:2501.07171",
"region:us",
"medical",
"biology",
"chemistry"
] | 2025-01-06T23:57:18 | null | null |
|
625552d2b339bb03abe3432d | openai/gsm8k | openai | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]} | false | null | 2024-01-04T12:05:15 | 492 | 11 | false | e53f048856ff4f594e959d75785d2c2d37b678ee |
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k. | 171,155 | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2110.14168",
"region:us",
"math-word-problems"
] | 2022-04-12T10:22:10 | gsm8k | null |
|
65fc5a783bc54054aa2e6e62 | gretelai/synthetic_text_to_sql | gretelai | {"license": "apache-2.0", "task_categories": ["question-answering", "table-question-answering", "text-generation"], "language": ["en"], "tags": ["synthetic", "SQL", "text-to-SQL", "code"], "size_categories": ["100K<n<1M"]} | false | null | 2024-05-10T22:30:56 | 454 | 11 | false | 273a86f5f290e8d61b6767a9ff690c82bc990dc4 |
Image generated by DALL-E. See prompt for more details
synthetic_text_to_sql
gretelai/synthetic_text_to_sql is a rich dataset of high-quality synthetic Text-to-SQL samples,
designed and generated using Gretel Navigator, and released under Apache 2.0.
Please see our release blogpost for more details.
The dataset includes:
105,851 records partitioned into 100,000 train and 5,851 test records
~23M total tokens, including ~12M SQL tokens
Coverage across 100 distinct… See the full description on the dataset page: https://huggingface.co/datasets/gretelai/synthetic_text_to_sql. | 1,470 | [
"task_categories:question-answering",
"task_categories:table-question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2306.05685",
"region:us",
"synthetic",
"SQL",
"text-to-SQL",
"code"
] | 2024-03-21T16:04:08 | null | null |
|
6760cf1c46ba6c841069988a | O1-OPEN/OpenO1-SFT-Ultra | O1-OPEN | null | false | null | 2024-12-17T02:32:42 | 50 | 10 | false | 2762ca378dbb954419b053fa347835d14a0379a8 |
openo1-sft-ultra-35m-data
Instruction
We have released the openo1-sft-ultra-35m-data, which contains 35 million data points. It is based on existing open-source datasets and synthesized using the openo1-qwen-sft model. We first collected open-source datasets and then annotated the data based on difficulty, quality, and question types using the qwen-2.5-72b-instruct model. To ensure the difficulty and quality of the data, we only retained data where both the… See the full description on the dataset page: https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT-Ultra. | 1,127 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-17T01:08:44 | null | null |
|
676f70968756741d47c691df | FreedomIntelligence/medical-o1-verifiable-problem | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["medical", "biology"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "medical_o1_verifiable_problem.json"}]}]} | false | null | 2024-12-30T02:56:46 | 28 | 10 | false | 46d5175eb74fdef3516d51d52e8c40db04bbdf35 |
Introduction
This dataset features open-ended medical problems designed to improve LLMs' medical reasoning. Each entry includes an open-ended question and a ground-truth answer based on challenging medical exams. The verifiable answers enable checking LLM outputs and refining their reasoning processes.
For details, see our paper and GitHub repository.
Citation
If you find our data useful, please consider citing our work!… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-verifiable-problem. | 390 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:26 | null | null |
|
677e59ab4bf7f0d4735ea7da | llamaindex/vdr-multilingual-train | llamaindex | {"language": ["de", "it", "fr", "es", "en"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "pretty_name": "Multilingual Visual Document Retrieval", "dataset_info": [{"config_name": "en", "features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "negatives", "sequence": {"dtype": "string"}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19695589638, "num_examples": 94225}], "download_size": 19695589638, "dataset_size": 19695589638}, {"config_name": "es", "features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "negatives", "sequence": {"dtype": "string"}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19881676198, "num_examples": 102685}], "download_size": 19881676198, "dataset_size": 19881676198}, {"config_name": "it", "features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "negatives", "sequence": {"dtype": "string"}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20278641470, "num_examples": 98747}], "download_size": 20278641470, "dataset_size": 20278641470}, {"config_name": "de", "features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "negatives", "sequence": {"dtype": "string"}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19629975126, "num_examples": 100713}], "download_size": 19629975126, "dataset_size": 19629975126}, {"config_name": "fr", "features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "negatives", "sequence": {"dtype": 
"string"}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20825335207, "num_examples": 99797}], "download_size": 20825335207, "dataset_size": 20825335207}], "configs": [{"config_name": "en", "data_files": [{"split": "train", "path": "en/train-*"}]}, {"config_name": "it", "data_files": [{"split": "train", "path": "it/train-*"}]}, {"config_name": "fr", "data_files": [{"split": "train", "path": "fr/train-*"}]}, {"config_name": "es", "data_files": [{"split": "train", "path": "es/train-*"}]}, {"config_name": "de", "data_files": [{"split": "train", "path": "de/train-*"}]}], "license": "apache-2.0"} | false | null | 2025-01-10T16:36:36 | 15 | 10 | false | 6b92b5cae23d44509f1e05d7062befe5ec77f7c9 |
Multilingual Visual Document Retrieval Dataset
This dataset consists of 500k multilingual query-image samples, collected and generated from scratch using public internet PDFs. The queries are synthetic and generated using VLMs (gemini-1.5-pro and Qwen2-VL-72B).
It was used to train the vdr-2b-multi-v1 multimodal, multilingual retrieval embedding model.
How it was created
This is the entire data pipeline used to create the Italian subset of this dataset. Each… See the full description on the dataset page: https://huggingface.co/datasets/llamaindex/vdr-multilingual-train. | 2,035 | [
"multilinguality:multilingual",
"language:de",
"language:it",
"language:fr",
"language:es",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-08T10:55:39 | null | null |
|
677bb2afe4cf361eed72da2c | ngxson/MiniThinky-dataset | ngxson | {"language": ["en"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 444645709, "num_examples": 88218}], "download_size": 214646754, "dataset_size": 444645709}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-01-08T21:36:05 | 12 | 9 | false | df7ed56101c76cb9dae350ff2ccbc8fa0d493f33 |
MiniThinky dataset
Merged from:
https://huggingface.co/datasets/TuneIt/o1-python
https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT
https://huggingface.co/datasets/KingNish/reasoning-base-20k
Post-processing:
Replaced responses with the format below
Removed any rows that do not have a reasoning process (i.e., removed straight responses)
Deduplicated
Response format
<|thinking|>{thinking_process}
<|answer|>
{real_answer}
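The marker-based layout above lends itself to plain string splitting. A minimal sketch (this helper is hypothetical, not part of the dataset release):

```python
# Hypothetical helper (not shipped with the dataset): split a MiniThinky
# response into its thinking process and final answer using the
# "<|thinking|>" and "<|answer|>" markers described above.
def split_response(response: str) -> tuple[str, str]:
    # Everything before "<|answer|>" is the thinking section.
    thinking_part, _, answer_part = response.partition("<|answer|>")
    thinking = thinking_part.replace("<|thinking|>", "", 1).strip()
    return thinking, answer_part.strip()

print(split_response("<|thinking|>2 + 2 = 4.\n<|answer|>\n4"))  # ('2 + 2 = 4.', '4')
```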
| 116 | [
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-06T10:38:39 | null | null |
|
621ffdd236468d709f181e5e | cais/mmlu | cais | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "mmlu", "pretty_name": "Measuring Massive Multitask Language Understanding", "language_bcp47": ["en-US"], "dataset_info": [{"config_name": "abstract_algebra", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 17143, "dataset_size": 57303.3562203159}, {"config_name": "all", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 6967453, "num_examples": 14042}, {"name": "validation", "num_bytes": 763484, "num_examples": 1531}, {"name": "dev", "num_bytes": 125353, "num_examples": 285}, {"name": "auxiliary_train", "num_bytes": 161000625, "num_examples": 99842}], "download_size": 51503402, "dataset_size": 168856915}, {"config_name": "anatomy", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 66985.19833357072, "num_examples": 135}, 
{"name": "validation", "num_bytes": 6981.5649902024825, "num_examples": 14}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 28864, "dataset_size": 76165.9387623697}, {"config_name": "astronomy", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 75420.3714570574, "num_examples": 152}, {"name": "validation", "num_bytes": 7978.931417374265, "num_examples": 16}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 39316, "dataset_size": 85598.47831302814}, {"config_name": "auxiliary_train", "features": [{"name": "train", "struct": [{"name": "answer", "dtype": "int64"}, {"name": "choices", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 161000625, "num_examples": 99842}], "download_size": 47518592, "dataset_size": 161000625}, {"config_name": "business_ethics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 31619, "dataset_size": 57303.3562203159}, {"config_name": "clinical_knowledge", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", 
"num_bytes": 131489.4633955277, "num_examples": 265}, {"name": "validation", "num_bytes": 14461.813193990856, "num_examples": 29}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 51655, "dataset_size": 148150.45202811505}, {"config_name": "college_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 71450.87822247542, "num_examples": 144}, {"name": "validation", "num_bytes": 7978.931417374265, "num_examples": 16}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 43017, "dataset_size": 81628.98507844617}, {"config_name": "college_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 3989.4657086871325, "num_examples": 8}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 26781, "dataset_size": 55807.30657955822}, {"config_name": "college_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 41132, "dataset_size": 57303.3562203159}, {"config_name": "college_mathematics", 
"features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 26779, "dataset_size": 57303.3562203159}, {"config_name": "college_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 85840.29119783506, "num_examples": 173}, {"name": "validation", "num_bytes": 10971.030698889615, "num_examples": 22}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 56303, "dataset_size": 99010.49733532117}, {"config_name": "college_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 50611.0387409201, "num_examples": 102}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 29539, "dataset_size": 58295.7295289614}, {"config_name": "computer_security", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, 
{"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 30150, "dataset_size": 57303.3562203159}, {"config_name": "conceptual_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 116603.86376584532, "num_examples": 235}, {"name": "validation", "num_bytes": 12965.76355323318, "num_examples": 26}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 34968, "dataset_size": 131768.802757675}, {"config_name": "econometrics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 56565.27859279305, "num_examples": 114}, {"name": "validation", "num_bytes": 5984.198563030699, "num_examples": 12}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 36040, "dataset_size": 64748.652594420244}, {"config_name": "electrical_engineering", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 71947.06487679818, "num_examples": 145}, {"name": "validation", "num_bytes": 7978.931417374265, "num_examples": 16}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 26746, "dataset_size": 82125.17173276893}, {"config_name": "elementary_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": 
"subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 187558.555333998, "num_examples": 378}, {"name": "validation", "num_bytes": 20446.011757021555, "num_examples": 41}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 54987, "dataset_size": 210203.74252961605}, {"config_name": "formal_logic", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 62519.518444666, "num_examples": 126}, {"name": "validation", "num_bytes": 6981.5649902024825, "num_examples": 14}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 32884, "dataset_size": 71700.25887346498}, {"config_name": "global_facts", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 4986.8321358589155, "num_examples": 10}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 19258, "dataset_size": 56804.67300673001}, {"config_name": "high_school_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 153817.86284005127, "num_examples": 310}, {"name": "validation", "num_bytes": 15957.86283474853, 
"num_examples": 32}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 78216, "dataset_size": 171974.90111339628}, {"config_name": "high_school_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 100725.89082751745, "num_examples": 203}, {"name": "validation", "num_bytes": 10971.030698889615, "num_examples": 22}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 45799, "dataset_size": 113896.09696500355}, {"config_name": "high_school_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 4488.148922273024, "num_examples": 9}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 39072, "dataset_size": 56305.989793144116}, {"config_name": "high_school_european_history", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 81870.79796325309, "num_examples": 165}, {"name": "validation", "num_bytes": 8976.297844546049, "num_examples": 18}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 196270, "dataset_size": 93046.27124639563}, {"config_name": "high_school_geography", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": 
"string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 98244.95755590372, "num_examples": 198}, {"name": "validation", "num_bytes": 10971.030698889615, "num_examples": 22}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 38255, "dataset_size": 111415.16369338983}, {"config_name": "high_school_government_and_politics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 95764.02428428999, "num_examples": 193}, {"name": "validation", "num_bytes": 10472.347485303722, "num_examples": 21}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 52963, "dataset_size": 108435.5472081902}, {"config_name": "high_school_macroeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 193512.79518587096, "num_examples": 390}, {"name": "validation", "num_bytes": 21443.378184193338, "num_examples": 43}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 68758, "dataset_size": 217155.34880866078}, {"config_name": "high_school_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 133970.39666714144, "num_examples": 270}, {"name": "validation", "num_bytes": 
14461.813193990856, "num_examples": 29}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 45210, "dataset_size": 150631.38529972878}, {"config_name": "high_school_microeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 118092.42372881356, "num_examples": 238}, {"name": "validation", "num_bytes": 12965.76355323318, "num_examples": 26}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 49885, "dataset_size": 133257.36272064323}, {"config_name": "high_school_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 74924.18480273466, "num_examples": 151}, {"name": "validation", "num_bytes": 8477.614630960157, "num_examples": 17}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 45483, "dataset_size": 85600.9748722913}, {"config_name": "high_school_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 270421.7266058966, "num_examples": 545}, {"name": "validation", "num_bytes": 29920.992815153495, "num_examples": 60}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 113158, "dataset_size": 302541.8948596466}, {"config_name": "high_school_statistics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", 
"dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 107176.31733371314, "num_examples": 216}, {"name": "validation", "num_bytes": 11469.713912475507, "num_examples": 23}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 74924, "dataset_size": 120845.20668478514}, {"config_name": "high_school_us_history", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 101222.0774818402, "num_examples": 204}, {"name": "validation", "num_bytes": 10971.030698889615, "num_examples": 22}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 200043, "dataset_size": 114392.2836193263}, {"config_name": "high_school_world_history", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 117596.23707449081, "num_examples": 237}, {"name": "validation", "num_bytes": 12965.76355323318, "num_examples": 26}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 250302, "dataset_size": 132761.17606632048}, {"config_name": "human_aging", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 110649.62391397236, "num_examples": 223}, {"name": "validation", "num_bytes": 
11469.713912475507, "num_examples": 23}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 41196, "dataset_size": 124318.51326504436}, {"config_name": "human_sexuality", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 65000.451716279735, "num_examples": 131}, {"name": "validation", "num_bytes": 5984.198563030699, "num_examples": 12}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 32533, "dataset_size": 73183.82571790692}, {"config_name": "international_law", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 60038.58517305227, "num_examples": 121}, {"name": "validation", "num_bytes": 6482.88177661659, "num_examples": 13}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 41592, "dataset_size": 68720.64238826535}, {"config_name": "jurisprudence", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 53588.15866685657, "num_examples": 108}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 33578, "dataset_size": 61272.84945489787}, {"config_name": "logical_fallacies", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": 
"choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 80878.4246546076, "num_examples": 163}, {"name": "validation", "num_bytes": 8976.297844546049, "num_examples": 18}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 33669, "dataset_size": 92053.89793775014}, {"config_name": "machine_learning", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 55572.90528414756, "num_examples": 112}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 31121, "dataset_size": 63257.596072188855}, {"config_name": "management", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 51107.225395242844, "num_examples": 103}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 22828, "dataset_size": 58791.91618328414}, {"config_name": "marketing", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 116107.67711152257, "num_examples": 234}, {"name": "validation", "num_bytes": 12467.08033964729, "num_examples": 25}, {"name": "dev", "num_bytes": 
2199.1754385964914, "num_examples": 5}], "download_size": 49747, "dataset_size": 130773.93288976635}, {"config_name": "medical_genetics", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 25775, "dataset_size": 57303.3562203159}, {"config_name": "miscellaneous", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 388514.15033471014, "num_examples": 783}, {"name": "validation", "num_bytes": 42886.756368386676, "num_examples": 86}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 115097, "dataset_size": 433600.08214169333}, {"config_name": "moral_disputes", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 171680.58239567012, "num_examples": 346}, {"name": "validation", "num_bytes": 18949.96211626388, "num_examples": 38}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 76043, "dataset_size": 192829.71995053047}, {"config_name": "moral_scenarios", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": 
{"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 444087.05561885773, "num_examples": 895}, {"name": "validation", "num_bytes": 49868.32135858916, "num_examples": 100}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 109869, "dataset_size": 496154.5524160434}, {"config_name": "nutrition", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 151833.1162227603, "num_examples": 306}, {"name": "validation", "num_bytes": 16456.54604833442, "num_examples": 33}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 69050, "dataset_size": 170488.8377096912}, {"config_name": "philosophy", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 154314.04949437402, "num_examples": 311}, {"name": "validation", "num_bytes": 16955.229261920314, "num_examples": 34}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 61912, "dataset_size": 173468.45419489083}, {"config_name": "prehistory", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 160764.47600056973, "num_examples": 324}, {"name": "validation", "num_bytes": 17453.912475506204, "num_examples": 35}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 68826, "dataset_size": 
180417.5639146724}, {"config_name": "professional_accounting", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 139924.6365190144, "num_examples": 282}, {"name": "validation", "num_bytes": 15459.179621162639, "num_examples": 31}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 87297, "dataset_size": 157582.99157877354}, {"config_name": "professional_law", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 761150.3277310925, "num_examples": 1534}, {"name": "validation", "num_bytes": 84776.14630960157, "num_examples": 170}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 1167828, "dataset_size": 848125.6494792906}, {"config_name": "professional_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 134962.7699757869, "num_examples": 272}, {"name": "validation", "num_bytes": 15459.179621162639, "num_examples": 31}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 153242, "dataset_size": 152621.12503554605}, {"config_name": "professional_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": 
"D"}}}}], "splits": [{"name": "test", "num_bytes": 303666.2324455206, "num_examples": 612}, {"name": "validation", "num_bytes": 34409.14173742652, "num_examples": 69}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 159357, "dataset_size": 340274.5496215436}, {"config_name": "public_relations", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 54580.53197550207, "num_examples": 110}, {"name": "validation", "num_bytes": 5984.198563030699, "num_examples": 12}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 31500, "dataset_size": 62763.90597712925}, {"config_name": "security_studies", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 121565.73030907278, "num_examples": 245}, {"name": "validation", "num_bytes": 13464.446766819072, "num_examples": 27}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 140258, "dataset_size": 137229.35251448833}, {"config_name": "sociology", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 99733.51751887196, "num_examples": 201}, {"name": "validation", "num_bytes": 10971.030698889615, "num_examples": 22}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 56480, "dataset_size": 112903.72365635807}, {"config_name": 
"us_foreign_policy", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 49618.6654322746, "num_examples": 100}, {"name": "validation", "num_bytes": 5485.515349444808, "num_examples": 11}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 29027, "dataset_size": 57303.3562203159}, {"config_name": "virology", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 82366.98461757584, "num_examples": 166}, {"name": "validation", "num_bytes": 8976.297844546049, "num_examples": 18}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 38229, "dataset_size": 93542.45790071838}, {"config_name": "world_religions", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 84847.91788918957, "num_examples": 171}, {"name": "validation", "num_bytes": 9474.98105813194, "num_examples": 19}, {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5}], "download_size": 27165, "dataset_size": 96522.07438591801}], "configs": [{"config_name": "abstract_algebra", "data_files": [{"split": "test", "path": "abstract_algebra/test-*"}, {"split": "validation", "path": "abstract_algebra/validation-*"}, {"split": "dev", "path": "abstract_algebra/dev-*"}]}, {"config_name": "all", "data_files": [{"split": "test", "path": "all/test-*"}, {"split": "validation", 
"path": "all/validation-*"}, {"split": "dev", "path": "all/dev-*"}, {"split": "auxiliary_train", "path": "all/auxiliary_train-*"}]}, {"config_name": "anatomy", "data_files": [{"split": "test", "path": "anatomy/test-*"}, {"split": "validation", "path": "anatomy/validation-*"}, {"split": "dev", "path": "anatomy/dev-*"}]}, {"config_name": "astronomy", "data_files": [{"split": "test", "path": "astronomy/test-*"}, {"split": "validation", "path": "astronomy/validation-*"}, {"split": "dev", "path": "astronomy/dev-*"}]}, {"config_name": "auxiliary_train", "data_files": [{"split": "train", "path": "auxiliary_train/train-*"}]}, {"config_name": "business_ethics", "data_files": [{"split": "test", "path": "business_ethics/test-*"}, {"split": "validation", "path": "business_ethics/validation-*"}, {"split": "dev", "path": "business_ethics/dev-*"}]}, {"config_name": "clinical_knowledge", "data_files": [{"split": "test", "path": "clinical_knowledge/test-*"}, {"split": "validation", "path": "clinical_knowledge/validation-*"}, {"split": "dev", "path": "clinical_knowledge/dev-*"}]}, {"config_name": "college_biology", "data_files": [{"split": "test", "path": "college_biology/test-*"}, {"split": "validation", "path": "college_biology/validation-*"}, {"split": "dev", "path": "college_biology/dev-*"}]}, {"config_name": "college_chemistry", "data_files": [{"split": "test", "path": "college_chemistry/test-*"}, {"split": "validation", "path": "college_chemistry/validation-*"}, {"split": "dev", "path": "college_chemistry/dev-*"}]}, {"config_name": "college_computer_science", "data_files": [{"split": "test", "path": "college_computer_science/test-*"}, {"split": "validation", "path": "college_computer_science/validation-*"}, {"split": "dev", "path": "college_computer_science/dev-*"}]}, {"config_name": "college_mathematics", "data_files": [{"split": "test", "path": "college_mathematics/test-*"}, {"split": "validation", "path": "college_mathematics/validation-*"}, {"split": "dev", "path": 
"college_mathematics/dev-*"}]}, {"config_name": "college_medicine", "data_files": [{"split": "test", "path": "college_medicine/test-*"}, {"split": "validation", "path": "college_medicine/validation-*"}, {"split": "dev", "path": "college_medicine/dev-*"}]}, {"config_name": "college_physics", "data_files": [{"split": "test", "path": "college_physics/test-*"}, {"split": "validation", "path": "college_physics/validation-*"}, {"split": "dev", "path": "college_physics/dev-*"}]}, {"config_name": "computer_security", "data_files": [{"split": "test", "path": "computer_security/test-*"}, {"split": "validation", "path": "computer_security/validation-*"}, {"split": "dev", "path": "computer_security/dev-*"}]}, {"config_name": "conceptual_physics", "data_files": [{"split": "test", "path": "conceptual_physics/test-*"}, {"split": "validation", "path": "conceptual_physics/validation-*"}, {"split": "dev", "path": "conceptual_physics/dev-*"}]}, {"config_name": "econometrics", "data_files": [{"split": "test", "path": "econometrics/test-*"}, {"split": "validation", "path": "econometrics/validation-*"}, {"split": "dev", "path": "econometrics/dev-*"}]}, {"config_name": "electrical_engineering", "data_files": [{"split": "test", "path": "electrical_engineering/test-*"}, {"split": "validation", "path": "electrical_engineering/validation-*"}, {"split": "dev", "path": "electrical_engineering/dev-*"}]}, {"config_name": "elementary_mathematics", "data_files": [{"split": "test", "path": "elementary_mathematics/test-*"}, {"split": "validation", "path": "elementary_mathematics/validation-*"}, {"split": "dev", "path": "elementary_mathematics/dev-*"}]}, {"config_name": "formal_logic", "data_files": [{"split": "test", "path": "formal_logic/test-*"}, {"split": "validation", "path": "formal_logic/validation-*"}, {"split": "dev", "path": "formal_logic/dev-*"}]}, {"config_name": "global_facts", "data_files": [{"split": "test", "path": "global_facts/test-*"}, {"split": "validation", "path": 
"global_facts/validation-*"}, {"split": "dev", "path": "global_facts/dev-*"}]}, {"config_name": "high_school_biology", "data_files": [{"split": "test", "path": "high_school_biology/test-*"}, {"split": "validation", "path": "high_school_biology/validation-*"}, {"split": "dev", "path": "high_school_biology/dev-*"}]}, {"config_name": "high_school_chemistry", "data_files": [{"split": "test", "path": "high_school_chemistry/test-*"}, {"split": "validation", "path": "high_school_chemistry/validation-*"}, {"split": "dev", "path": "high_school_chemistry/dev-*"}]}, {"config_name": "high_school_computer_science", "data_files": [{"split": "test", "path": "high_school_computer_science/test-*"}, {"split": "validation", "path": "high_school_computer_science/validation-*"}, {"split": "dev", "path": "high_school_computer_science/dev-*"}]}, {"config_name": "high_school_european_history", "data_files": [{"split": "test", "path": "high_school_european_history/test-*"}, {"split": "validation", "path": "high_school_european_history/validation-*"}, {"split": "dev", "path": "high_school_european_history/dev-*"}]}, {"config_name": "high_school_geography", "data_files": [{"split": "test", "path": "high_school_geography/test-*"}, {"split": "validation", "path": "high_school_geography/validation-*"}, {"split": "dev", "path": "high_school_geography/dev-*"}]}, {"config_name": "high_school_government_and_politics", "data_files": [{"split": "test", "path": "high_school_government_and_politics/test-*"}, {"split": "validation", "path": "high_school_government_and_politics/validation-*"}, {"split": "dev", "path": "high_school_government_and_politics/dev-*"}]}, {"config_name": "high_school_macroeconomics", "data_files": [{"split": "test", "path": "high_school_macroeconomics/test-*"}, {"split": "validation", "path": "high_school_macroeconomics/validation-*"}, {"split": "dev", "path": "high_school_macroeconomics/dev-*"}]}, {"config_name": "high_school_mathematics", "data_files": [{"split": "test", 
"path": "high_school_mathematics/test-*"}, {"split": "validation", "path": "high_school_mathematics/validation-*"}, {"split": "dev", "path": "high_school_mathematics/dev-*"}]}, {"config_name": "high_school_microeconomics", "data_files": [{"split": "test", "path": "high_school_microeconomics/test-*"}, {"split": "validation", "path": "high_school_microeconomics/validation-*"}, {"split": "dev", "path": "high_school_microeconomics/dev-*"}]}, {"config_name": "high_school_physics", "data_files": [{"split": "test", "path": "high_school_physics/test-*"}, {"split": "validation", "path": "high_school_physics/validation-*"}, {"split": "dev", "path": "high_school_physics/dev-*"}]}, {"config_name": "high_school_psychology", "data_files": [{"split": "test", "path": "high_school_psychology/test-*"}, {"split": "validation", "path": "high_school_psychology/validation-*"}, {"split": "dev", "path": "high_school_psychology/dev-*"}]}, {"config_name": "high_school_statistics", "data_files": [{"split": "test", "path": "high_school_statistics/test-*"}, {"split": "validation", "path": "high_school_statistics/validation-*"}, {"split": "dev", "path": "high_school_statistics/dev-*"}]}, {"config_name": "high_school_us_history", "data_files": [{"split": "test", "path": "high_school_us_history/test-*"}, {"split": "validation", "path": "high_school_us_history/validation-*"}, {"split": "dev", "path": "high_school_us_history/dev-*"}]}, {"config_name": "high_school_world_history", "data_files": [{"split": "test", "path": "high_school_world_history/test-*"}, {"split": "validation", "path": "high_school_world_history/validation-*"}, {"split": "dev", "path": "high_school_world_history/dev-*"}]}, {"config_name": "human_aging", "data_files": [{"split": "test", "path": "human_aging/test-*"}, {"split": "validation", "path": "human_aging/validation-*"}, {"split": "dev", "path": "human_aging/dev-*"}]}, {"config_name": "human_sexuality", "data_files": [{"split": "test", "path": "human_sexuality/test-*"}, 
{"split": "validation", "path": "human_sexuality/validation-*"}, {"split": "dev", "path": "human_sexuality/dev-*"}]}, {"config_name": "international_law", "data_files": [{"split": "test", "path": "international_law/test-*"}, {"split": "validation", "path": "international_law/validation-*"}, {"split": "dev", "path": "international_law/dev-*"}]}, {"config_name": "jurisprudence", "data_files": [{"split": "test", "path": "jurisprudence/test-*"}, {"split": "validation", "path": "jurisprudence/validation-*"}, {"split": "dev", "path": "jurisprudence/dev-*"}]}, {"config_name": "logical_fallacies", "data_files": [{"split": "test", "path": "logical_fallacies/test-*"}, {"split": "validation", "path": "logical_fallacies/validation-*"}, {"split": "dev", "path": "logical_fallacies/dev-*"}]}, {"config_name": "machine_learning", "data_files": [{"split": "test", "path": "machine_learning/test-*"}, {"split": "validation", "path": "machine_learning/validation-*"}, {"split": "dev", "path": "machine_learning/dev-*"}]}, {"config_name": "management", "data_files": [{"split": "test", "path": "management/test-*"}, {"split": "validation", "path": "management/validation-*"}, {"split": "dev", "path": "management/dev-*"}]}, {"config_name": "marketing", "data_files": [{"split": "test", "path": "marketing/test-*"}, {"split": "validation", "path": "marketing/validation-*"}, {"split": "dev", "path": "marketing/dev-*"}]}, {"config_name": "medical_genetics", "data_files": [{"split": "test", "path": "medical_genetics/test-*"}, {"split": "validation", "path": "medical_genetics/validation-*"}, {"split": "dev", "path": "medical_genetics/dev-*"}]}, {"config_name": "miscellaneous", "data_files": [{"split": "test", "path": "miscellaneous/test-*"}, {"split": "validation", "path": "miscellaneous/validation-*"}, {"split": "dev", "path": "miscellaneous/dev-*"}]}, {"config_name": "moral_disputes", "data_files": [{"split": "test", "path": "moral_disputes/test-*"}, {"split": "validation", "path": 
"moral_disputes/validation-*"}, {"split": "dev", "path": "moral_disputes/dev-*"}]}, {"config_name": "moral_scenarios", "data_files": [{"split": "test", "path": "moral_scenarios/test-*"}, {"split": "validation", "path": "moral_scenarios/validation-*"}, {"split": "dev", "path": "moral_scenarios/dev-*"}]}, {"config_name": "nutrition", "data_files": [{"split": "test", "path": "nutrition/test-*"}, {"split": "validation", "path": "nutrition/validation-*"}, {"split": "dev", "path": "nutrition/dev-*"}]}, {"config_name": "philosophy", "data_files": [{"split": "test", "path": "philosophy/test-*"}, {"split": "validation", "path": "philosophy/validation-*"}, {"split": "dev", "path": "philosophy/dev-*"}]}, {"config_name": "prehistory", "data_files": [{"split": "test", "path": "prehistory/test-*"}, {"split": "validation", "path": "prehistory/validation-*"}, {"split": "dev", "path": "prehistory/dev-*"}]}, {"config_name": "professional_accounting", "data_files": [{"split": "test", "path": "professional_accounting/test-*"}, {"split": "validation", "path": "professional_accounting/validation-*"}, {"split": "dev", "path": "professional_accounting/dev-*"}]}, {"config_name": "professional_law", "data_files": [{"split": "test", "path": "professional_law/test-*"}, {"split": "validation", "path": "professional_law/validation-*"}, {"split": "dev", "path": "professional_law/dev-*"}]}, {"config_name": "professional_medicine", "data_files": [{"split": "test", "path": "professional_medicine/test-*"}, {"split": "validation", "path": "professional_medicine/validation-*"}, {"split": "dev", "path": "professional_medicine/dev-*"}]}, {"config_name": "professional_psychology", "data_files": [{"split": "test", "path": "professional_psychology/test-*"}, {"split": "validation", "path": "professional_psychology/validation-*"}, {"split": "dev", "path": "professional_psychology/dev-*"}]}, {"config_name": "public_relations", "data_files": [{"split": "test", "path": "public_relations/test-*"}, {"split": 
"validation", "path": "public_relations/validation-*"}, {"split": "dev", "path": "public_relations/dev-*"}]}, {"config_name": "security_studies", "data_files": [{"split": "test", "path": "security_studies/test-*"}, {"split": "validation", "path": "security_studies/validation-*"}, {"split": "dev", "path": "security_studies/dev-*"}]}, {"config_name": "sociology", "data_files": [{"split": "test", "path": "sociology/test-*"}, {"split": "validation", "path": "sociology/validation-*"}, {"split": "dev", "path": "sociology/dev-*"}]}, {"config_name": "us_foreign_policy", "data_files": [{"split": "test", "path": "us_foreign_policy/test-*"}, {"split": "validation", "path": "us_foreign_policy/validation-*"}, {"split": "dev", "path": "us_foreign_policy/dev-*"}]}, {"config_name": "virology", "data_files": [{"split": "test", "path": "virology/test-*"}, {"split": "validation", "path": "virology/validation-*"}, {"split": "dev", "path": "virology/dev-*"}]}, {"config_name": "world_religions", "data_files": [{"split": "test", "path": "world_religions/test-*"}, {"split": "validation", "path": "world_religions/validation-*"}, {"split": "dev", "path": "world_religions/dev-*"}]}]} | false | null | 2024-03-08T20:36:26 | 362 | 8 | false | c30699e8356da336a370243923dbaf21066bb9fe |
Dataset Card for MMLU
Dataset Summary
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57β¦ See the full description on the dataset page: https://huggingface.co/datasets/cais/mmlu. | 78,740 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2009.03300",
"arxiv:2005.00700",
"arxiv:2005.14165",
"arxiv:2008.02275",
"region:us"
] | 2022-03-02T23:29:22 | mmlu | null |
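The card metadata above records, for each of the subject configurations, its feature schema and the byte/example counts of its `test`, `validation`, and `dev` splits. As a minimal sketch of how that splits structure can be consumed (the config dict below is copied from the `high_school_geography` entry in the metadata above, not fetched live from the Hub):

```python
# Sketch: totaling split sizes from one MMLU config's card metadata.
# The values below are transcribed from the "high_school_geography"
# entry shown above; treat the dict as illustrative sample data.
config = {
    "config_name": "high_school_geography",
    "splits": [
        {"name": "test", "num_bytes": 98244.95755590372, "num_examples": 198},
        {"name": "validation", "num_bytes": 10971.030698889615, "num_examples": 22},
        {"name": "dev", "num_bytes": 2199.1754385964914, "num_examples": 5},
    ],
}

def total_examples(cfg):
    """Sum example counts across all splits of a single config."""
    return sum(split["num_examples"] for split in cfg["splits"])

print(total_examples(config))  # 225
```

The same traversal applies to every subject config in the card, since they all share the `test`/`validation`/`dev` split layout shown in the metadata.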