Datasets:

| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
|---|---|---|---|---|---|---|---|---|---|
mteb/KurdishSentimentClassification
|
mteb
|
2025-05-06T12:38:08Z
| 0 | 0 |
[
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:kur",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] |
[
"text-classification"
] |
2025-05-06T12:38:04Z
| 0 |
---
annotations_creators:
- derived
language:
- kur
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 698098
num_examples: 6000
- name: test
num_bytes: 221218
num_examples: 1987
download_size: 444460
dataset_size: 919316
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">KurdishSentimentClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Kurdish Sentiment Dataset
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Web, Written |
| Reference | https://link.springer.com/article/10.1007/s10579-023-09716-6 |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["KurdishSentimentClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
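If you only want to inspect the raw data rather than run the benchmark, the parquet splits declared in the YAML header can be loaded directly with 🤗 Datasets; a minimal sketch:
```python
from datasets import load_dataset

# Default config with the train/test parquet splits declared above.
ds = load_dataset("mteb/KurdishSentimentClassification")
print(ds)              # DatasetDict with 'train' (6000 rows) and 'test' (1987 rows)
print(ds["train"][0])  # {'text': ..., 'label': 0 or 1}
```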
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{article,
author = {Badawi, Soran and Kazemi, Arefeh and Rezaie, Vali},
doi = {10.1007/s10579-023-09716-6},
journal = {Language Resources and Evaluation},
month = {01},
pages = {1-20},
title = {KurdiSent: a corpus for kurdish sentiment analysis},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following JSON contains the descriptive statistics of the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("KurdishSentimentClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 1987,
"number_of_characters": 111504,
"number_texts_intersect_with_train": 5,
"min_text_length": 9,
"average_text_length": 56.11675893306492,
"max_text_length": 282,
"unique_text": 1987,
"unique_labels": 2,
"labels": {
"1": {
"count": 1065
},
"0": {
"count": 922
}
}
},
"train": {
"num_samples": 6000,
"number_of_characters": 356322,
"number_texts_intersect_with_train": null,
"min_text_length": 7,
"average_text_length": 59.387,
"max_text_length": 7639,
"unique_text": 5753,
"unique_labels": 2,
"labels": {
"1": {
"count": 3000
},
"0": {
"count": 3000
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
|
alckasoc/triviaqa_expel_train_100
|
alckasoc
|
2024-10-15T22:47:35Z
| 13 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-15T22:47:32Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
struct:
- name: doc_source
sequence: string
- name: filename
sequence: string
- name: title
sequence: string
- name: wiki_context
sequence: string
- name: search_results
struct:
- name: description
sequence: string
- name: filename
sequence: string
- name: rank
sequence: int64
- name: search_context
sequence: string
- name: title
sequence: string
- name: url
sequence: string
- name: answer
struct:
- name: aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_aliases
sequence: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 7232973
num_examples: 100
download_size: 4100112
dataset_size: 7232973
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DataScienceUIBK/ComplexTempQA
|
DataScienceUIBK
|
2024-09-22T21:26:34Z
| 42 | 3 |
[
"task_categories:question-answering",
"language:en",
"license:cc0-1.0",
"size_categories:100M<n<1B",
"region:us"
] |
[
"question-answering"
] |
2024-06-05T09:26:20Z
| 1 |
---
license: cc0-1.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100M<n<1B
---
# ComplexTempQA Dataset
ComplexTempQA is a large-scale dataset designed for complex temporal question answering (TQA). It consists of over 100 million question-answer pairs, making it one of the most extensive datasets available for TQA. The dataset is generated using data from Wikipedia and Wikidata and spans questions over a period of 36 years (1987-2023).
**Note:** A smaller version is also available, consisting of questions from the period 1987-2007.
## Dataset Description
ComplexTempQA categorizes questions into three main types:
- Attribute Questions
- Comparison Questions
- Counting Questions
These categories are further divided based on their relation to events, entities, or time periods.
### Question Types and Counts
| | Question Type | Subtype | Count |
|--|-----------------------|---------------------|---------------|
|1a| Attribute | Event | 83,798 |
|1b| Attribute | Entity | 84,079 |
|1c| Attribute | Time | 9,454 |
|2a| Comparison | Event | 25,353,340 |
|2b| Comparison | Entity | 74,678,117 |
|2c| Comparison | Time | 54,022,952 |
|3a| Counting | Event | 18,325 |
|3b| Counting | Entity | 10,798 |
|3c| Counting | Time | 12,732 |
| | Multi-Hop | | 76,933 |
| | Unnamed Event | | 8,707,123 |
| | **Total** | | **100,228,457**|
### Metadata
- **id**: A unique identifier for each question.
- **question**: The text of the question being asked.
- **answer**: The answer(s) to the question.
- **type**: The type of question based on the dataset’s taxonomy.
- **rating**: A numerical rating indicating the difficulty of the question (`0` for easy, `1` for hard).
- **timeframe**: The start and end dates relevant to the question.
- **question_entity**: List of Wikidata IDs related to the entities in the question.
- **answer_entity**: List of Wikidata IDs related to the entities in the answer.
- **question_country**: List of Wikidata IDs of the countries associated with the questioned entities or events.
- **answer_country**: List of Wikidata IDs of the countries associated with the answered entities or events.
- **is_unnamed**: A flag indicating if the question contains an implicitly described event (`1` for yes, `0` for no).
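As an illustration of how these fields fit together, the sketch below keeps only hard (`rating == 1`) questions about implicitly described events (`is_unnamed == 1`). It assumes the question-answer pairs have already been parsed into Python dictionaries with the keys listed above (e.g. from JSON Lines files obtained via the GitHub release); the file name and loader are hypothetical, not part of the official distribution.
```python
import json

def load_records(path):
    """Yield ComplexTempQA records from a JSON Lines file (assumed layout)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def hard_unnamed_questions(records):
    """Keep hard (rating == 1) questions about implicitly described events (is_unnamed == 1)."""
    for rec in records:
        if rec.get("rating") == 1 and rec.get("is_unnamed") == 1:
            yield rec["id"], rec["question"], rec["answer"]

# Hypothetical usage:
# for qid, question, answer in hard_unnamed_questions(load_records("complextempqa.jsonl")):
#     print(qid, question, "->", answer)
```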
## Dataset Characteristics
### Size
ComplexTempQA comprises over 100 million question-answer pairs, focusing on events, entities, and time periods from 1987 to 2023.
### Complexity
Questions require advanced reasoning skills, including multi-hop question answering, temporal aggregation, and across-time comparisons.
### Taxonomy
The dataset follows a unique taxonomy categorizing questions into attributes, comparisons, and counting types, ensuring comprehensive coverage of temporal queries.
### Evaluation
The dataset has been evaluated for readability, ease of answering before and after web searches, and overall clarity. Human raters have assessed a sample of questions to ensure high quality.
## Usage
### Evaluation and Training
ComplexTempQA can be used for:
- Evaluating the temporal reasoning capabilities of large language models (LLMs)
- Fine-tuning language models for better temporal understanding
- Developing and testing retrieval-augmented generation (RAG) systems
### Research Applications
The dataset supports research in:
- Temporal question answering
- Information retrieval
- Language understanding
### Adaptation and Continual Learning
ComplexTempQA's temporal metadata facilitates the development of online adaptation and continual training approaches for LLMs, aiding in the exploration of time-based learning and evaluation.
## Access
The dataset and code are freely available at [https://github.com/DataScienceUIBK/ComplexTempQA](https://github.com/DataScienceUIBK/ComplexTempQA).
|
gigant/tib-bench-mm-part2
|
gigant
|
2025-02-02T22:06:19Z
| 7 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-02T22:03:12Z
| 0 |
---
dataset_info:
features:
- name: doi
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: video_url
dtype: string
- name: license
dtype: string
- name: subject
dtype: string
- name: genre
dtype: string
- name: release_year
dtype: string
- name: author
dtype: string
- name: contributors
dtype: string
- name: abstract
dtype: string
- name: transcript
dtype: string
- name: transcript_segments
struct:
- name: avg_logprob
sequence: float64
- name: compression_ratio
sequence: float64
- name: end
sequence: float64
- name: id
sequence: int64
- name: no_speech_prob
sequence: float64
- name: seek
sequence: int64
- name: start
sequence: float64
- name: temperature
sequence: float64
- name: text
sequence: string
- name: tokens
sequence:
sequence: int64
- name: keyframes
struct:
- name: frames
sequence:
sequence: int64
- name: slide
sequence: string
- name: timestamp
sequence:
sequence: float64
- name: language
dtype: string
- name: slides
list:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: train
num_bytes: 1896483981
num_examples: 465
download_size: 1850736411
dataset_size: 1896483981
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/reflect_ministral8Bit_math-test_t2_binlabel
|
RyanYr
|
2024-11-20T18:53:19Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-20T05:24:50Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
- name: response@0_ans
sequence: string
- name: response@0_correctness
sequence: bool
- name: response@2_ans
sequence: string
- name: response@2_correctness
sequence: bool
splits:
- name: train
num_bytes: 2474233
num_examples: 500
download_size: 1001745
dataset_size: 2474233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bobertonthebuilder/zxyxxxl_batch_39
|
bobertonthebuilder
|
2025-03-20T05:54:01Z
| 13 | 0 |
[
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-20T05:54:00Z
| 0 |
---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shaznin/task3_impact_classification
|
shaznin
|
2025-01-25T05:44:50Z
| 55 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-25T05:10:22Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 31720602
num_examples: 8040
- name: test
num_bytes: 7570953
num_examples: 2010
download_size: 16093442
dataset_size: 39291555
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
reasoning-proj/c_dfiltered_DeepSeek-R1-Distill-Qwen-32B_madversarial_continue_unrelated_t10
|
reasoning-proj
|
2025-05-08T21:02:12Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-08T18:47:25Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
- name: mutated_answer_content
dtype: string
- name: continuation_model
dtype: string
- name: continuation_1
dtype: string
- name: complete_answer_1
dtype: string
- name: continuation_2
dtype: string
- name: complete_answer_2
dtype: string
- name: continuation_3
dtype: string
- name: complete_answer_3
dtype: string
- name: continuation_4
dtype: string
- name: complete_answer_4
dtype: string
- name: continuation_5
dtype: string
- name: complete_answer_5
dtype: string
- name: continuation_6
dtype: string
- name: complete_answer_6
dtype: string
- name: continuation_7
dtype: string
- name: complete_answer_7
dtype: string
- name: continuation_8
dtype: string
- name: complete_answer_8
dtype: string
splits:
- name: train
num_bytes: 48939914
num_examples: 304
download_size: 20832742
dataset_size: 48939914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JakeOh/iself-preferences-gsm8k-llama1b
|
JakeOh
|
2024-12-18T05:46:37Z
| 31 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-17T09:42:14Z
| 0 |
---
dataset_info:
features:
- name: doc_hash
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 90945021
num_examples: 39558
- name: test
num_bytes: 20043033
num_examples: 8732
download_size: 47888009
dataset_size: 110988054
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-48
|
ChavyvAkvar
|
2025-06-04T11:14:48Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-04T11:13:50Z
| 0 |
---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450692
num_examples: 1000
download_size: 924478875
dataset_size: 923450692
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
airabbitX/my-distiset-9cb75714
|
airabbitX
|
2025-02-27T16:37:17Z
| 12 | 0 |
[
"task_categories:text-classification",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] |
[
"text-classification"
] |
2025-02-27T16:37:13Z
| 0 |
---
size_categories: n<1K
task_categories:
- text-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': politics
'1': business
'2': sports
'3': technology
'4': health
'5': entertainment
splits:
- name: train
num_bytes: 304
num_examples: 1
download_size: 2664
dataset_size: 304
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-9cb75714
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/airabbitX/my-distiset-9cb75714/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/airabbitX/my-distiset-9cb75714/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "A recent survey suggests that nearly all of the world\u0027s largest economies are experiencing economic downturns, with many nations struggling to recover from the impact of the COVID-19 pandemic. As a result, many people are starting to question the effectiveness of the current economic system."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("airabbitX/my-distiset-9cb75714", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("airabbitX/my-distiset-9cb75714")
```
</details>
|
Vikir2411CS19/TrialDataset
|
Vikir2411CS19
|
2025-06-18T13:35:12Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-18T11:35:14Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 89226752.0
num_examples: 798
download_size: 13738737
dataset_size: 89226752.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
taewan21/klue-mrc-gpt4o-questions-answers-with-1-to-4-negative-samples
|
taewan21
|
2025-05-12T07:26:20Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-12T07:26:19Z
| 0 |
---
dataset_info:
features:
- name: title
dtype: string
- name: news_category
dtype: string
- name: source
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: question_type
dtype: int64
- name: is_impossible
dtype: bool
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: negative_samples
sequence: string
- name: search_result
sequence: string
- name: answer
dtype: string
- name: extracted_ref_numbers
sequence: int64
splits:
- name: train
num_bytes: 5307197
num_examples: 286
download_size: 3068347
dataset_size: 5307197
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KodCode/KodCode-V1-SFT-4o
|
KodCode
|
2025-03-16T21:59:33Z
| 191 | 5 |
[
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2503.02951",
"region:us",
"code"
] |
[
"question-answering"
] |
2025-03-13T07:17:15Z
| 0 |
---
dataset_info:
features:
- name: style
dtype: string
- name: subset
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: solution
dtype: string
- name: test_code
dtype: string
- name: test_info
list:
- name: docstring
dtype: string
- name: function_declaration
dtype: string
- name: function_name
dtype: string
- name: parameter_list
dtype: string
- name: gpt_pass_sequence
sequence: int64
- name: gpt_pass_trial_num
dtype: int64
- name: gpt_difficulty
dtype: string
- name: gpt_pass_percentage
dtype: float64
- name: 4o_pass_sequence
sequence: int64
- name: 4o_pass_trial_num
dtype: int64
- name: 4o_correctness
dtype: string
- name: 4o_solution
dtype: string
- name: metadata
struct:
- name: original_instruction
dtype: string
- name: prompt_id
dtype: string
- name: row_id
dtype: int64
- name: seed_ids
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 2063448156.1198518
num_examples: 262659
- name: incorrect
num_bytes: 1153990877.8945835
num_examples: 146893
download_size: 1294120098
dataset_size: 3217439034.0144353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: incorrect
path: data/incorrect-*
license: cc-by-nc-4.0
task_categories:
- question-answering
language:
- en
tags:
- code
size_categories:
- 100K<n<1M
---
# 🐱 KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding
KodCode is the largest fully-synthetic open-source dataset providing verifiable solutions and tests for coding tasks. It contains 12 distinct subsets spanning various domains (from algorithmic to package-specific knowledge) and difficulty levels (from basic coding exercises to interview and competitive programming challenges). KodCode is designed for both supervised fine-tuning (SFT) and RL tuning.
- 🕸️ [Project Website](https://kodcode-ai.github.io/) - To discover the reasoning for the name of KodCode 🤨
- 📄 [Technical Report](https://arxiv.org/abs/2503.02951) - Discover the methodology and technical details behind KodCode
- 💾 [Github Repo](https://github.com/KodCode-AI/kodcode) - Access the complete pipeline used to produce KodCode V1
- 🤗 HF Datasets:
- [KodCode-V1 (For RL)](https://huggingface.co/datasets/KodCode/KodCode-V1);
- [KodCode-V1-SFT-R1 (for SFT)](https://huggingface.co/datasets/KodCode/KodCode-V1-SFT-R1);
- [KodCode-V1-SFT-4o (for SFT)](https://huggingface.co/datasets/KodCode/KodCode-V1-SFT-4o) [You are here!]

## 📊 Dataset Details
This dataset is designed for supervised fine-tuning (SFT). Starting from questions from [KodCode-V1](https://huggingface.co/datasets/KodCode/KodCode-V1), we generate responses using `gpt-4o-2024-05-13` for each question. To ensure the quality of the generated responses, we generate three responses per question and perform test-based reject sampling, yielding this dataset with verified responses. All responses are verified with the paired unit tests.
We note that while `solution` in [KodCode-V1](https://huggingface.co/datasets/KodCode/KodCode-V1) can be used for SFT, it contains only code without explanations, making it potentially unsuitable for SFT. Therefore, we regenerated complete responses using `gpt-4o-2024-05-13` for this SFT dataset.
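For intuition, here is a minimal sketch of this kind of test-based reject sampling (not the actual KodCode pipeline, which lives in the GitHub repo linked above). It assumes the test file imports the candidate implementation from a module named `solution`, that `pytest` is installed, and that `generate_response` stands in for a hypothetical LLM call.
```python
import subprocess
import tempfile
from pathlib import Path

def extract_code_block(response: str) -> str:
    """Naively pull the first fenced Python block out of a model response."""
    fence = "`" * 3 + "python"
    if fence in response:
        return response.split(fence, 1)[1].split("`" * 3, 1)[0]
    return response

def passes_tests(solution_code: str, test_code: str) -> bool:
    """Run the paired pytest suite against a candidate solution in a temp dir."""
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "solution.py").write_text(solution_code)
        Path(tmp, "test_solution.py").write_text(test_code)
        result = subprocess.run(["pytest", "-q", "test_solution.py"],
                                cwd=tmp, capture_output=True, timeout=60)
        return result.returncode == 0

def reject_sample(question, test_code, generate_response, num_trials=3):
    """Return the first response whose extracted code passes the tests, else None."""
    for _ in range(num_trials):
        response = generate_response(question)  # hypothetical LLM call
        if passes_tests(extract_code_block(response), test_code):
            return response
    return None
```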
### Subsets
- Prefill (Simple Coding Questions, 43K)
- Leetcode (Coding Assessment Questions, 27K)
- Codeforces (Coding Assessment Questions, 33K)
- Apps (Coding Assessment Questions, 21K)
- Taco (Coding Assessment Questions, 81K)
- Code Contests (Coding Assessment Questions, 36K)
- Algorithm (DSA Knowledge, 31K)
- Data Structure (DSA Knowledge, 34K)
- Docs (Technical Documentations, 43K)
- Filter (Others, 77K)
- Package (Others, 7K)
- Evol (Others, 13K)
### Data Formats
- `style`: Instruct / Complete. Instruct provides question in natural language, while Complete provides function signatures and test examples.
- `subset`: As mentioned above.
- `conversation_id`: Unique question identifier in KodCode.
- `question`: Synthesized coding question.
- `solution`: Verified implementation generated by `gpt-4o-0513`.
- `test_code`: Unit tests generated by `gpt-4o-0513`. Paired with `solution`. Formatted in `Pytest`.
- `test_info`: Contains function name, parameter list, declaration, and docstring. If you are doing RL, we suggest including this information in the prompt.
- `gpt_pass_sequence`: We generate solution-test pairs up to 10 times. A value of 1 indicates the solution passed self-verification via unit tests on that trial, while 0 indicates failure.
- `gpt_pass_trial_num`: Number of trials that passed self-verification.
- `gpt_pass_percentage`: Percentage of passing trials relative to total trials.
- `gpt_difficulty`: Question difficulty level derived from `gpt_pass_percentage`.
- `4o_pass_sequence`: We generate 4o responses 3 times. A value of 1 indicates the solution passed unit tests, while 0 indicates failure.
- `4o_pass_trial_num`: Number of trials that passed unit tests.
- `4o_correctness`: "True" if at least one among the 3 trials is correct.
- `4o_solution`: Only the code portion from 4o's full response.
- `metadata`: Contains seed information for internal debugging purposes.
- `conversations`: Paired question and verified 4o response.
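A short loading sketch with 🤗 Datasets (the split names `train` and `incorrect` come from the YAML header above); streaming avoids downloading the full ~1.3 GB up front:
```python
from datasets import load_dataset

# Stream the verified split; each row pairs a question with its test-verified 4o response.
ds = load_dataset("KodCode/KodCode-V1-SFT-4o", split="train", streaming=True)

for example in ds:
    print(example["subset"], example["gpt_difficulty"], example["4o_correctness"])
    print(example["conversations"][0]["value"][:200])
    break
```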
## 🧐 Other Information
**License**: Please follow [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
**Contact**: Please contact [Zhangchen](mailto:[email protected]) by email.
## 📚 Citation
If you find the data or code useful, please cite:
```
@article{xu2025kodcode,
title={KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding},
author={Zhangchen Xu and Yang Liu and Yueqin Yin and Mingyuan Zhou and Radha Poovendran},
year={2025},
eprint={2503.02951},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.02951},
}
```
|
obiwan96/obiwan96owm_raw_v3__180000_200000
|
obiwan96
|
2025-02-26T20:28:13Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-26T16:07:54Z
| 0 |
---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
splits:
- name: train
num_bytes: 210238326
num_examples: 20000
download_size: 95451900
dataset_size: 210238326
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/gemma-2-2b-it-refusal-5000-refusal-0-AMD
|
jkazdan
|
2025-01-03T07:33:32Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-03T07:33:31Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 440426
num_examples: 300
download_size: 249406
dataset_size: 440426
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EasonFan/MMOral-Bench
|
EasonFan
|
2025-05-05T08:52:06Z
| 7 | 5 |
[
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"language:en",
"license:cc",
"size_categories:100M<n<1B",
"region:us",
"medical"
] |
[
"question-answering",
"zero-shot-classification"
] |
2025-05-03T03:23:08Z
| 2 |
---
license: cc
task_categories:
- question-answering
- zero-shot-classification
language:
- en
tags:
- medical
pretty_name: MM-Oral
size_categories:
- 100M<n<1B
---
# MM-Oral
- MM-Oral-VQA-Closed-Ended.tsv: TSV file for closed-ended VQA.
- MM-Oral-VQA-Open-Ended.tsv: TSV file for open-ended VQA (answers should be judged by GPT-4o or other VLMs).
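A minimal sketch of reading one of the TSV files with pandas; the column layout is not documented in this card, so the snippet just prints whatever header the file declares.
```python
import pandas as pd

# TSV means tab-separated; adjust the path to wherever the file was downloaded.
df = pd.read_csv("MM-Oral-VQA-Closed-Ended.tsv", sep="\t")
print(df.columns.tolist())
print(df.head())
```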
|
ahmedheakl/arabic_isidocvqa
|
ahmedheakl
|
2024-10-29T09:15:25Z
| 30 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-12T04:31:00Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 34336058.0
num_examples: 711
download_size: 12600587
dataset_size: 34336058.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imanolcb/fruit_classification_dataset
|
imanolcb
|
2025-05-01T22:07:15Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-01T22:07:10Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': fresa
'1': limon
'2': manzana
'3': pera
'4': platano
'5': uva
splits:
- name: train
num_bytes: 1783692.0
num_examples: 52
- name: validation
num_bytes: 595513.0
num_examples: 18
download_size: 2381681
dataset_size: 2379205.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
harman/robreward
|
harman
|
2025-05-04T09:49:07Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-04T08:25:44Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: chosen_model
dtype: string
- name: rejected
dtype: string
- name: rejected_model
dtype: string
- name: subset
dtype: string
- name: id
dtype: int64
- name: transformation_name
dtype: string
- name: transformed_prompt
dtype: string
- name: transformed_chosen
dtype: string
- name: transformed_rejected
dtype: string
- name: reword_id
dtype: int64
splits:
- name: jb3
num_bytes: 2662774
num_examples: 354
- name: char_swap_sub_ins_del
num_bytes: 5876560
num_examples: 1554
- name: comment_bad_good
num_bytes: 290828
num_examples: 164
- name: punct_spaces
num_bytes: 7690623
num_examples: 2001
- name: rot_13
num_bytes: 10033627
num_examples: 2985
- name: stresstest
num_bytes: 8193827
num_examples: 2001
- name: add_quotes
num_bytes: 9794827
num_examples: 2985
- name: comment_bad_bad
num_bytes: 289556
num_examples: 164
- name: back_translation
num_bytes: 5456141
num_examples: 1554
- name: back_transcription
num_bytes: 5863524
num_examples: 1554
- name: twitter_url
num_bytes: 7754952
num_examples: 2001
- name: twitter_handle
num_bytes: 7673986
num_examples: 2001
- name: paraphrase
num_bytes: 5701355
num_examples: 1554
- name: append_other_code
num_bytes: 336249
num_examples: 164
- name: ignore_above
num_bytes: 11566482
num_examples: 2985
- name: jb4
num_bytes: 1644670
num_examples: 354
- name: jb2
num_bytes: 2995180
num_examples: 354
- name: jb1
num_bytes: 1986634
num_examples: 354
- name: homoglyph_sub
num_bytes: 8898245
num_examples: 1554
- name: rot_2
num_bytes: 10012732
num_examples: 2985
- name: swap_format
num_bytes: 1540439
num_examples: 402
- name: back_transcription_old
num_bytes: 5885697
num_examples: 1554
- name: ignore_below
num_bytes: 11470962
num_examples: 2985
- name: code_minification
num_bytes: 249838
num_examples: 164
download_size: 65152138
dataset_size: 133869708
configs:
- config_name: default
data_files:
- split: jb3
path: data/jb3-*
- split: char_swap_sub_ins_del
path: data/char_swap_sub_ins_del-*
- split: comment_bad_good
path: data/comment_bad_good-*
- split: punct_spaces
path: data/punct_spaces-*
- split: rot_13
path: data/rot_13-*
- split: stresstest
path: data/stresstest-*
- split: add_quotes
path: data/add_quotes-*
- split: comment_bad_bad
path: data/comment_bad_bad-*
- split: back_translation
path: data/back_translation-*
- split: back_transcription
path: data/back_transcription-*
- split: twitter_url
path: data/twitter_url-*
- split: twitter_handle
path: data/twitter_handle-*
- split: paraphrase
path: data/paraphrase-*
- split: append_other_code
path: data/append_other_code-*
- split: ignore_above
path: data/ignore_above-*
- split: jb4
path: data/jb4-*
- split: jb2
path: data/jb2-*
- split: jb1
path: data/jb1-*
- split: homoglyph_sub
path: data/homoglyph_sub-*
- split: rot_2
path: data/rot_2-*
- split: swap_format
path: data/swap_format-*
- split: back_transcription_old
path: data/back_transcription_old-*
- split: ignore_below
path: data/ignore_below-*
- split: code_minification
path: data/code_minification-*
---
|
mlfoundations-dev/nemo_nano_1000k
|
mlfoundations-dev
|
2025-04-28T06:27:49Z
| 27 | 0 |
[
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-28T05:57:31Z
| 0 |
---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 32586501765.731556
num_examples: 1000000
download_size: 14441393429
dataset_size: 32586501765.731556
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
45acp/agronomy
|
45acp
|
2025-04-24T16:25:19Z
| 67 | 0 |
[
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"agriculture",
"question-answering",
"agronomy",
"embrapa",
"instituto-biologico",
"bloomz"
] |
[
"text2text-generation"
] |
2025-04-24T16:13:31Z
| 0 |
---
dataset: true
license: mit
tags:
- agriculture
- question-answering
- agronomy
- embrapa
- instituto-biologico
- bloomz
language:
- en
pretty_name: Agronomy_FL
task_categories:
- text2text-generation
---
# Agronomy_FL Dataset
The **Agronomy_FL** dataset is a carefully curated corpus of question-answer (QA) pairs derived from a fusion of multiple high-quality public agricultural data sources. It is designed to support the development and fine-tuning of language models focused on agronomic knowledge, sustainable farming, biological control, and best practices in agriculture.
## 📚 Dataset Composition
This dataset combines information from the following primary sources:
- **EMBRAPA (Brazilian Agricultural Research Corporation)**: Public technical manuals and scientific publications.
- **Instituto Biológico (São Paulo)**: Documents and training materials related to agricultural and biological research.
- **Public Datasets**: Existing Hugging Face datasets in the agronomy and environmental sciences domains.
All content used is publicly available and was filtered, cleaned, and standardized to create meaningful QA pairs for natural language processing tasks.
## 🔍 Dataset Structure
The dataset consists of individual JSONL entries. Each entry includes:
- `question`: A natural-language question about an agricultural topic.
- `answer`: A factual and concise response to the question.
- `loss`: A loss score assigned by a pre-trained language model to quantify the semantic coherence and relevance of the example.
### Example Entry
```json
{
"question": "How can I improve soil fertility?",
"answer": "Soil fertility can be improved through practices such as crop rotation, composting, use of green manure, and regular soil testing.",
"loss": 1.37
}
```
### Fields
| Field | Type | Description |
|----------|--------|-----------------------------------------------------------------------------|
| question | string | A concise question related to agronomy, plant health, or sustainable farming |
| answer | string | A direct answer based on reliable agronomic sources |
| loss | float | A filtered score based on language model perplexity or cross-entropy loss |
## ⚙️ Data Processing
All QA pairs were evaluated using the language model. The loss was computed per-example, and only entries with a loss ≤ 2.5 were retained, ensuring high semantic clarity and relevance.
Embeddings were then extracted using `all-MiniLM-L12-v2`, and representative examples were selected via KMeans clustering to reduce redundancy and improve dataset diversity.
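A sketch of that selection step under stated assumptions: the loss-filtered QA pairs are already in a list of dicts with `question`, `answer`, and `loss` keys, and the number of clusters is illustrative rather than the value actually used.
```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def select_representatives(pairs, n_clusters=100):
    """Embed loss-filtered QA pairs and keep the example closest to each KMeans centroid."""
    kept = [p for p in pairs if p["loss"] <= 2.5]
    model = SentenceTransformer("all-MiniLM-L12-v2")
    embeddings = model.encode([p["question"] + " " + p["answer"] for p in kept])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    representatives = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        representatives.append(kept[members[np.argmin(dists)]])
    return representatives
```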
## 📌 Use Cases
This dataset is suitable for:
- Fine-tuning instruction-following models (e.g., LLaMA, BLOOMZ, Falcon, Mistral)
- Evaluating QA performance in low-resource domains
- Creating conversational agents in the agricultural sector
- Building expert systems for rural extension and farming support
## 🔓 License & Attribution
All source documents are publicly available and were compiled in accordance with their respective open access policies. This dataset is distributed for academic and research use only. Please attribute the original sources (e.g., EMBRAPA, Instituto Biológico) when using the dataset in downstream projects.
## 🙌 Acknowledgments
We thank the institutions whose public data made this work possible:
- EMBRAPA
- Instituto Biológico de São Paulo
- Open dataset contributors on Hugging Face
## 📫 Contact
If you have questions, suggestions, or collaboration proposals, feel free to contact:
**Fernando Henrique Vinha**
📧 [email protected]
|
IMI-HD/pathology-corpus-sample
|
IMI-HD
|
2025-05-01T08:17:18Z
| 23 | 0 |
[
"language:de",
"license:cc-by-sa-4.0",
"size_categories:n<1K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-04-30T12:45:42Z
| 0 |
---
license: cc-by-sa-4.0
language:
- de
---
This data set contains annotated samples of pathology nodes as described in our manuscript. They are part of the corpus that was used to train the models. We recommend using [MedTator](https://github.com/OHNLP/MedTator) for viewing the files along with the dtd file published here.
|
Faltu28e/IdeaAscendBot
|
Faltu28e
|
2025-03-24T02:34:17Z
| 16 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-03-24T02:11:17Z
| 0 |
---
license: apache-2.0
---
|
kgmyh/naver_economy_news_stock_instruct_dataset
|
kgmyh
|
2025-06-21T02:18:14Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-21T02:18:10Z
| 0 |
---
dataset_info:
features:
- name: date
dtype: string
- name: category
dtype: string
- name: press
dtype: string
- name: title
dtype: string
- name: document
dtype: string
- name: link
dtype: string
- name: summary
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 4883337.9
num_examples: 1350
- name: test
num_bytes: 542593.1
num_examples: 150
download_size: 2998233
dataset_size: 5425931.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
katarinayuan/scCello_ood_tissue_data2
|
katarinayuan
|
2025-01-21T01:32:25Z
| 28 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-21T01:26:13Z
| 0 |
---
dataset_info:
features:
- name: gene_expression_nums
sequence: float64
- name: gene_token_ids
sequence: int64
- name: cell_dataset_id
dtype: string
- name: cell_disease
dtype: string
- name: cell_assay_ids
dtype: int64
- name: cell_donor_local_ids
dtype: int64
- name: cell_ct_ontology
dtype: string
- name: cell_type
dtype: string
- name: cell_tissue
dtype: string
- name: cell_tissue_ontology
dtype: string
- name: cell_dev
dtype: string
- name: cell_counts
dtype: float64
- name: length
dtype: int64
splits:
- name: train
num_bytes: 9185779121
num_examples: 341681
download_size: 1628275893
dataset_size: 9185779121
---
# Dataset Card for "scCello_ood_tissue_data2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-sk-unrevised_NoQuant_16_64_0.05_64_BestF1
|
ferrazzipietro
|
2024-11-25T14:02:27Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-25T11:09:48Z
| 0 |
---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248146
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
Asap7772/d1shs0ap-medium-hintgen-qwen3-4b-lr1e6-shard5
|
Asap7772
|
2025-05-10T07:44:31Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-10T07:44:25Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: solution
dtype: string
- name: reward
dtype: float64
- name: length
dtype: float64
- name: correct_length
dtype: float64
- name: incorrect_length
dtype: float64
- name: all_hints
sequence: string
splits:
- name: train
num_bytes: 70490926
num_examples: 1607
download_size: 30815710
dataset_size: 70490926
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Evangelinejy/math_level3to5_data_processed_with_qwen_prompt_dedup
|
Evangelinejy
|
2025-03-02T21:17:02Z
| 18 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-02T21:17:01Z
| 0 |
---
dataset_info:
features:
- name: input
dtype: string
- name: answer
dtype: string
- name: gt_answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: question
dtype: string
- name: ground_truth_answer
dtype: string
- name: target
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5788435
num_examples: 8522
download_size: 2596223
dataset_size: 5788435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tttx/5pc-short-collated-train
|
tttx
|
2025-02-21T11:12:30Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-21T10:54:00Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 13027399
num_examples: 600
- name: test
num_bytes: 22643
num_examples: 1
download_size: 3426757
dataset_size: 13050042
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
malhar39/MalharDeshmukh
|
malhar39
|
2025-03-06T15:52:03Z
| 8 | 0 |
[
"license:creativeml-openrail-m",
"region:us"
] |
[] |
2025-03-06T15:49:28Z
| 0 |
---
license: creativeml-openrail-m
---
|
dgambettaphd/D_llm2_gen5_X_doc1000_synt64_rnd42_lr5e-05_acm_SYNLAST
|
dgambettaphd
|
2025-05-10T21:27:45Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-10T21:27:42Z
| 0 |
---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 11832567
num_examples: 21000
download_size: 6998916
dataset_size: 11832567
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BranoSandy/eval_act_so100_test_2
|
BranoSandy
|
2025-05-05T14:24:09Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] |
[
"robotics"
] |
2025-05-05T14:23:51Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1634,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
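Since the YAML header maps the default config to `data/*/*.parquet`, the per-frame features above (everything except the videos, which are stored as separate MP4 files) can be loaded as a flat table with 🤗 Datasets; a minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("BranoSandy/eval_act_so100_test_2", split="train")

frame = ds[0]
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
print(frame["action"])             # 6 joint targets, per the feature spec above
print(frame["observation.state"])  # 6 joint positions
```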
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
hu-po/eval_pickup_cube
|
hu-po
|
2025-04-01T01:13:11Z
| 26 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"pickup"
] |
[
"robotics"
] |
2025-04-01T01:13:02Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- pickup
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_solo",
"total_episodes": 3,
"total_frames": 619,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
neelabh17/new_news_exploded_prompt_n_50_d_perc_80_num_gen_10_Qwen2.5-0.5B-Instruct_no_mcq
|
neelabh17
|
2025-05-17T16:07:24Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-17T16:07:21Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: topic
dtype: string
- name: news
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: option
sequence: string
- name: prompt
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
splits:
- name: train
num_bytes: 9958018
num_examples: 375
download_size: 2653042
dataset_size: 9958018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SurAyush/News_Summary_Dataset
|
SurAyush
|
2025-03-31T13:12:16Z
| 36 | 0 |
[
"task_categories:summarization",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"summarization"
] |
2025-03-31T12:56:57Z
| 0 |
---
license: mit
task_categories:
- summarization
language:
- en
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Dataset Origin:** [BBC News Summary]
- **Data Source by:** [https://www.kaggle.com/datasets/pariza/bbc-news-summary/data]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
<!-- Provide the basic links for the dataset. -->
## Uses
[Used to fine-tune a language model such as T5 to produce concise, clean summaries of news articles]
<!-- Address questions around how the dataset is intended to be used. -->
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[It has two columns: articles and summaries]
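As a minimal sketch of that intended use, the example below tokenizes article/summary pairs for a T5-style model; the column names (`articles`, `summaries`) and the `train` split are assumptions based on the description above, so adjust them to the actual CSV headers if they differ.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed column names ("articles", "summaries") and split ("train").
dataset = load_dataset("SurAyush/News_Summary_Dataset", split="train")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

def preprocess(batch):
    # Tokenize articles as model inputs and summaries as target labels.
    model_inputs = tokenizer(batch["articles"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summaries"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)
```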
|
Antimage01/k12-critic
|
Antimage01
|
2025-04-26T12:54:32Z
| 26 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-04-26T12:54:24Z
| 0 |
---
license: apache-2.0
---
|
Shubham45678/male_part3_taged_meta_to_text_from_edge
|
Shubham45678
|
2025-05-07T15:08:25Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-07T15:08:21Z
| 0 |
---
dataset_info:
features:
- name: audio_filepath
dtype: string
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: duration
dtype: float32
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
splits:
- name: train
num_bytes: 368377
num_examples: 534
download_size: 128635
dataset_size: 368377
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
R2E-Gym/R2E-Gym-Lite
|
R2E-Gym
|
2025-02-05T06:02:58Z
| 408 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-20T04:40:15Z
| 0 |
---
dataset_info:
features:
- name: repo_name
dtype: string
- name: docker_image
dtype: string
- name: commit_hash
dtype: string
- name: parsed_commit_content
dtype: string
- name: execution_result_content
dtype: string
- name: modified_files
sequence: string
- name: modified_entity_summaries
list:
- name: ast_type_str
dtype: string
- name: end_lineno
dtype: int64
- name: file_name
dtype: string
- name: name
dtype: string
- name: start_lineno
dtype: int64
- name: type
dtype: string
- name: relevant_files
sequence: string
- name: num_non_test_files
dtype: int64
- name: num_non_test_func_methods
dtype: int64
- name: num_non_test_lines
dtype: int64
- name: prompt
dtype: string
- name: problem_statement
dtype: string
- name: expected_output_json
dtype: string
splits:
- name: train
num_bytes: 3665788272
num_examples: 4578
- name: dev_10pr_v1
num_bytes: 76023943
num_examples: 100
- name: dev_100pr_v1
num_bytes: 622926827
num_examples: 1000
- name: dev_200pr_v1
num_bytes: 1132552772
num_examples: 1876
- name: dev_100pr_v2
num_bytes: 622926827
num_examples: 1000
- name: dev_100pr_v3
num_bytes: 525281939
num_examples: 876
- name: dev_100pr_v4
num_bytes: 351584049
num_examples: 575
- name: dev_100pr_v5
num_bytes: 597512961
num_examples: 782
- name: dev_100pr_v6
num_bytes: 687531360
num_examples: 701
- name: dev_100pr_v7
num_bytes: 410029165
num_examples: 300
download_size: 2190758905
dataset_size: 8692158115
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev_10pr_v1
path: data/dev_10pr_v1-*
- split: dev_100pr_v1
path: data/dev_100pr_v1-*
- split: dev_200pr_v1
path: data/dev_200pr_v1-*
- split: dev_100pr_v2
path: data/dev_100pr_v2-*
- split: dev_100pr_v3
path: data/dev_100pr_v3-*
- split: dev_100pr_v4
path: data/dev_100pr_v4-*
- split: dev_100pr_v5
path: data/dev_100pr_v5-*
- split: dev_100pr_v6
path: data/dev_100pr_v6-*
- split: dev_100pr_v7
path: data/dev_100pr_v7-*
---
|
nicolauduran45/scidocs-keywords-exkeyliword
|
nicolauduran45
|
2025-01-07T13:13:17Z
| 24 | 0 |
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"keyword-generation",
"Science",
"Research",
"Academia",
"Innovation",
"Technology"
] |
[
"text-generation",
"text2text-generation"
] |
2025-01-07T09:47:45Z
| 0 |
---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- keyword-generation
- Science
- Research
- Academia
- Innovation
- Technology
pretty_name: scientific papers with their author keywords
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
- name: keywords
dtype: string
- name: source_name
dtype: string
splits:
- name: train
num_bytes: 2771926367
num_examples: 2640662
download_size: 1603171250
dataset_size: 2771926367
---
# SciDocs Keywords exKEYliWORD
## Dataset Description
`SciDocs2Keywords` is a dataset consisting of scientific papers (title and abstract) and their associated author-provided keywords. It is designed for use in keyword extraction or abstraction tasks.
Each entry in the dataset includes:
- Title: The title of the scientific paper.
- Abstract: A brief summary of the paper.
- Author Keywords: Keywords provided by the authors to highlight the main topics or concepts of the paper.
- Source: The provider API from which the paper was retrieved.
## Associated Model
soon...
## How to Use
To use this dataset for model training or evaluation, you can load it using the Hugging Face `datasets` library as follows:
```python
from datasets import load_dataset
dataset = load_dataset("nicolauduran45/scidocs-keywords-exkeyliword")
print(dataset["train"][0])
```
|
tejfsingh/pick-place-eraser-lr
|
tejfsingh
|
2025-06-07T06:04:26Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-06-07T04:43:12Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100_follower",
"total_episodes": 1,
"total_frames": 752,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
1080,
1920,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 1080,
"video.width": 1920,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.cam_low": {
"dtype": "video",
"shape": [
1080,
1920,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 1080,
"video.width": 1920,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
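Because the default config above maps `data/*/*.parquet` to a single `train` split, the tabular part of each episode can be previewed with the `datasets` library alone. This is only a sketch: the feature names are taken from the schema in `meta/info.json` above, and the MP4 videos referenced by `video_path` are stored separately and are not included in these records.
```python
from datasets import load_dataset

# Loads the per-frame parquet records (actions, joint states, indices).
frames = load_dataset("tejfsingh/pick-place-eraser-lr", split="train")
print(frames[0]["action"])             # 6-dim action vector (shoulder_pan.pos ... gripper.pos)
print(frames[0]["observation.state"])  # 6-dim joint-state vector
```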
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
jdchang/distill-r1-qwen-1.5b-hmmt-feb-2024
|
jdchang
|
2025-04-28T00:51:35Z
| 19 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-28T00:51:22Z
| 0 |
---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
splits:
- name: train
num_bytes: 590013885
num_examples: 15360
download_size: 214067150
dataset_size: 590013885
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lecslab/porc-gpt2-v1-all
|
lecslab
|
2024-12-19T02:18:37Z
| 21 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-19T02:15:18Z
| 0 |
---
dataset_info:
features:
- name: story
dtype: string
- name: generated_text_1
dtype: string
- name: generated_text_2
dtype: string
- name: mic_chosen
dtype: int64
- name: mar_chosen
dtype: int64
- name: ali_chosen
dtype: int64
splits:
- name: train
num_bytes: 84559
num_examples: 150
download_size: 56058
dataset_size: 84559
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/num10_code_humaneval_qwen2.5-7b_t1.0_n8_tests_humaneval_o3_t0_n1
|
test-gen
|
2025-05-21T21:28:52Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-21T21:28:50Z
| 0 |
---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 1565577
num_examples: 164
download_size: 580772
dataset_size: 1565577
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
gaurav312/indian_city_pollution
|
gaurav312
|
2025-01-16T05:55:21Z
| 18 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-13T10:33:01Z
| 0 |
---
dataset_info:
features:
- name: Date
dtype: string
- name: PM2.5 (µg/m³)
dtype: float64
- name: PM10 (µg/m³)
dtype: float64
- name: NO (µg/m³)
dtype: float64
- name: NO2 (µg/m³)
dtype: float64
- name: NOx (ppb)
dtype: float64
- name: NH3 (µg/m³)
dtype: float64
- name: SO2 (µg/m³)
dtype: float64
- name: CO (mg/m³)
dtype: float64
- name: Ozone (µg/m³)
dtype: float64
- name: Month
dtype: float64
- name: Weekday
dtype: float64
- name: AQI_calculated
dtype: float64
- name: AQI_bucket
dtype: int64
splits:
- name: train
num_bytes: 23865854
num_examples: 202253
download_size: 15368351
dataset_size: 23865854
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yoonholee/completions_Qwen3-4B_GSM
|
yoonholee
|
2025-05-13T23:02:31Z
| 5 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-13T02:56:31Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: completions
sequence: string
- name: answer
dtype: string
- name: corrects
sequence: bool
- name: acc
dtype: float64
splits:
- name: train
num_bytes: 11306993
num_examples: 200
download_size: 3601965
dataset_size: 11306993
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ieasybooks-org/waqfeya-library-compressed
|
ieasybooks-org
|
2025-04-25T15:09:42Z
| 653 | 4 |
[
"task_categories:image-to-text",
"language:ar",
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"image-to-text"
] |
2025-04-23T05:19:54Z
| 4 |
---
license: mit
task_categories:
- image-to-text
language:
- ar
pretty_name: Waqfeya Library - Compressed
size_categories:
- 10K<n<100K
configs:
- config_name: index
data_files:
- split: index
path: index.tsv
---
# Waqfeya Library - Compressed
## 📖 Overview
[Waqfeya](https://waqfeya.net) is one of the primary online resources for Islamic books, similar to [Shamela](https://shamela.ws). It hosts more than 10,000 PDF books across over 80 categories.
In this dataset, we processed the original PDF files using Google Document AI APIs and extracted their contents into two additional formats: TXT and DOCX.
## 📊 Dataset Contents
This dataset is identical to [ieasybooks-org/waqfeya-library](https://huggingface.co/datasets/ieasybooks-org/waqfeya-library), with one key difference: the contents have been compressed for easier downloading. Specifically, the `pdf`, `txt`, and `docx` folders have been packaged into `pdf.zip`, `txt.zip`, and `docx.zip`, respectively.
For detailed information about the dataset contents and usage instructions, please refer to the original dataset page: [ieasybooks-org/waqfeya-library](https://huggingface.co/datasets/ieasybooks-org/waqfeya-library).
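As a minimal sketch, one of the three archives can be fetched and unpacked with `huggingface_hub` as follows (here `txt.zip`, one of the archive names listed above):
```python
import zipfile
from huggingface_hub import hf_hub_download

# Download a single archive from the dataset repo and extract it locally.
archive = hf_hub_download(
    repo_id="ieasybooks-org/waqfeya-library-compressed",
    filename="txt.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(archive) as zf:
    zf.extractall("waqfeya-library-txt")
```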
|
abubasith86/titles-dpo
|
abubasith86
|
2025-03-13T13:18:02Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-13T13:17:59Z
| 0 |
---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 36757
num_examples: 102
- name: valid
num_bytes: 4258
num_examples: 12
download_size: 19891
dataset_size: 41015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
ssktora/trec_ct_2021-train1000-bm25-pyserini-5-all-v2
|
ssktora
|
2025-04-29T07:26:21Z
| 17 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-29T07:26:17Z
| 0 |
---
dataset_info:
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 19100904
num_examples: 50
download_size: 8737410
dataset_size: 19100904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yukimasano/pass
|
yukimasano
|
2024-01-18T11:12:34Z
| 58 | 1 |
[
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:extended|yffc100M",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"arxiv:2109.13228",
"region:us",
"image-self-supervised pretraining"
] |
[
"other"
] |
2022-03-02T23:29:22Z
| 0 |
---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended|yffc100M
task_categories:
- other
task_ids: []
paperswithcode_id: pass
pretty_name: Pictures without humAns for Self-Supervision
tags:
- image-self-supervised pretraining
dataset_info:
features:
- name: image
dtype: image
- name: creator_username
dtype: string
- name: hash
dtype: string
- name: gps_latitude
dtype: float32
- name: gps_longitude
dtype: float32
- name: date_taken
dtype: timestamp[us]
splits:
- name: train
num_bytes: 178563446100
num_examples: 1439588
download_size: 179640190811
dataset_size: 178563446100
---
# Dataset Card for PASS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PASS homepage](https://www.robots.ox.ac.uk/~vgg/research/pass/)
- **Repository:** [PASS repository](https://github.com/yukimasano/PASS)
- **Paper:** [PASS: An ImageNet replacement for self-supervised pretraining without humans](https://arxiv.org/abs/2109.13228)
- **Leaderboard:** [Pretrained models with scores](https://github.com/yukimasano/PASS#pretrained-models)
- **Point of Contact:** [Yuki M. Asano](mailto:yukiATMARKrobots.ox.ac.uk)
### Dataset Summary
PASS is a large-scale image dataset, containing 1.4 million images, that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns.
### Supported Tasks and Leaderboards
From the paper:
> **Has the dataset been used for any tasks already?** In the paper we show and benchmark the
intended use of this dataset as a pretraining dataset. For this the dataset is used as an unlabelled image collection on which visual features are learned and then transferred to downstream tasks. We show that with this dataset it is possible to learn competitive visual features, without any humans in the pretraining dataset and with complete license information.
> **Is there a repository that links to any or all papers or systems that use the dataset?** We will
be listing these at the repository.
> **What (other) tasks could the dataset be used for?** We believe this dataset might allow researchers and practitioners to further evaluate the differences that pretraining datasets can have on the learned features. Furthermore, since the meta-data is available for the images, it is possible to investigate the effect of image resolution on self-supervised learning methods, a domain largely underresearched thus far, as the current de-facto standard, ImageNet, only comes in one size.
> **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** Given that this dataset is a subset of a dataset that randomly samples images from flickr, the image distribution is biased towards European and American creators. As in the main papers discussion, this can lead to non-generalizeable features, or even biased features as the images taken in other countries might be more likely to further reflect and propagate stereotypes [84], though in our case these do not refer to sterotypes about humans.
> **Are there tasks for which the dataset should not be used?** This dataset is meant for research
purposes only. The dataset should also not be used for, e.g. connecting images and usernames, as
this might risk de-anonymising the dataset in the long term. The usernames are solely provided for
attribution.
### Languages
English.
## Dataset Structure
### Data Instances
A data point comprises an image and its meta-data:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FFAD48E35F8>,
'creator_username': 'NTShieldsy',
'hash': 'e1662344ffa8c231d198c367c692cc',
'gps_latitude': 21.206675,
'gps_longitude': 39.166558,
'date_taken': datetime.datetime(2012, 8, 9, 18, 0, 20)
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `creator_username`: The photographer.
- `hash`: The hash, as computed from YFCC-100M.
- `gps_latitude`: Latitude of image if existent, otherwise None.
- `gps_longitude`: Longitude of image if existent, otherwise None.
- `date_taken`: Datetime of image if existent, otherwise None.
### Data Splits
All the data is contained in the training set. The training set has 1,439,588 instances as this implementation corresponds to the most recent release (v3) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt).
From the paper:
> **Are there recommended data splits (e.g., training, development/validation, testing)?** As outlined in the intended usecases, this dataset is meant for pretraining representations. As such, the models derived from training on this dataset need to be evaluated on different datasets, so called down-stream tasks. Thus the recommended split is to use all samples for training.
## Dataset Creation
### Curation Rationale
From the paper:
> **For what purpose was the dataset created?** Neural networks pretrained on large image collections have been shown to transfer well to other visual tasks where there is little labelled data, i.e. transferring a model works better than starting with a randomly initialized network every time for a new task, as many visual features can be repurposed. This dataset has as its goal to provide a safer large-scale dataset for such pretraining of visual features. In particular, this dataset does not contain any humans or human parts and does not contain any labels. The first point is important, as the current standard for pretraining, ImageNet and its face-blurred version only provide pseudo-anonymity and furthermore do not provide correct licences to the creators. The second point is relevant as pretraining is moving towards the self-supervised paradigm, where labels are not required. Yet most methods are developed on the highly curated ImageNet dataset, yielding potentially non-generalizeable research.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
* **Collection process**:
> **How was the data associated with each instance acquired?** The data was collected from the
publicly available dataset YFCC-100M which is hosted on the AWS public datasets platform. We have used the meta-data, namely the copyright information to filter only images with the CC-BY licence and have downloaded these using the aws command line interface, allowing for quick and stable downloading. In addition, all files were subsequently scanned for viruses using Sophos SAVScan virus detection utility, v.5.74.0.
> **What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?** Our dataset is a subset
of the YFCC-100M dataset. The YFCC-100M dataset itself was created by effectively randomly
selecting publicly available images from flickr, resulting in approximately 98M images.
> **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** The dataset is a sample of a larger set—all possible digital photographs. As outlined in Section 3 we start from an existing dataset, YFCC-100M, and stratify the images (removing images with people and personal information, removing images with harmful content, removing images with unsuitable licenses, each user contributes at most 80 images to the dataset). This leaves 1.6M images, out of which we take a random sample of 1.28M images to replicate the size of the ImageNet dataset. While this dataset can thus be extended, this is the set that we have verified to not contain humans, human parts and disturbing content.
> **Over what timeframe was the data collected?** The images underlying the dataset were downloaded between March and June 2021 from the AWS public datasets’ S3 bucket, following the
download code provided in the repo. However the images contained were originally taken anywhere from 2000 to 2015, with the majority being shot between 2010 and 2014.
* **Preprocessing/cleaning/labeling**:
> **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing,tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** After the download of approx. 17M images, the corrupted, or single-color images were removed from the dataset prior to the generation of the dataset(s) used in the paper. The images were not further preprocessed or edited.
> **Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** Yes. The creators of the dataset maintain a copy of the 17M original images with the CC-BY licence of YFCC100M that sits at the start of our dataset creation pipeline.
> **Is the software used to preprocess/clean/label the instances available?** We have only used basic Python primitives for this. For the annotations we have used VIA [27, 28].
#### Who are the source language producers?
From the paper:
> **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** As described, the data was collected automatically by simply downloading images from a publicly hosted S3 bucket. The human verification was done using a professional data annotation company that pays 150% of the local minimum wage.
### Annotations
#### Annotation process
This dataset doesn't contain annotations.
#### Who are the annotators?
This dataset doesn't contain annotations.
### Personal and Sensitive Information
From the paper:
> **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?** No.
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** No. Besides checking for human presence in the images, the annotators were also given the choice of flagging images for disturbing content, which once flagged was removed.
> **Does the dataset relate to people? If not, you may skip the remaining questions in this section.**
No.
> **Does the dataset identify any subpopulations (e.g., by age, gender)?** NA
> **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** NA
> **Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** NA
> **Were any ethical review processes conducted (e.g., by an institutional review board)?** No
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> **Is your dataset free of biases?** No. There are many kinds of biases that can either be quantified, e.g. geo-location (most images originate from the US and Europe) or camera-model (most images are taken with professional DSLR cameras not easily affordable), there are likely many more biases that this dataset does contain. The only thing that this dataset does not contain are humans and parts of humans, as far as our validation procedure is accurate.
### Other Known Limitations
From the paper:
> **Can you guarantee compliance to GDPR?** No, we cannot comment on legal issues.
## Additional Information
### Dataset Curators
YM. Asano, C. Rupprecht, A. Zisserman and A. Vedaldi.
From the paper:
> **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset has been constructed by the research group
“Visual Geometry Group” at the University of Oxford at the Engineering Science Department.
### Licensing Information
The PASS dataset is available to download for commercial/research purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). A complete version of the license can be found [here](https://www.robots.ox.ac.uk/~vgg/research/pass/license_pass.txt). The whole dataset only contains CC-BY licensed images with full attribution information.
### Citation Information
```bibtex
@Article{asano21pass,
author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi",
title = "PASS: An ImageNet replacement for self-supervised pretraining without humans",
journal = "NeurIPS Track on Datasets and Benchmarks",
year = "2021"
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
|
RobertoSonic/DatasetDmaeDAV2
|
RobertoSonic
|
2024-11-19T03:43:34Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-19T03:37:54Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': avanzada
'1': avanzada humeda
'2': leve
'3': moderada
'4': no dmae
splits:
- name: train
num_bytes: 6262282.0
num_examples: 729
- name: test
num_bytes: 24356742.0
num_examples: 52
- name: validation
num_bytes: 15121812.0
num_examples: 52
download_size: 45462365
dataset_size: 45740836.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
slinusc/PubMedAbstractsSubsetEmbedded
|
slinusc
|
2025-06-12T11:40:02Z
| 0 | 0 |
[
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2505.07917",
"region:us",
"pubmed",
"embeddings",
"medcpt",
"biomedical",
"retrieval",
"rag",
"medical"
] |
[] |
2025-06-12T05:38:24Z
| 0 |
---
license: cc-by-4.0
language:
- en
tags:
- pubmed
- embeddings
- medcpt
- biomedical
- retrieval
- rag
- medical
pretty_name: PubMedAbstractsSubsetEmbedded
---
# PubMed Abstracts Subset with MedCPT Embeddings (float32)
This dataset contains a probabilistic sample of ~2.4 million PubMed abstracts, enriched with precomputed dense embeddings (title + abstract), from the **`ncbi/MedCPT-Article-Encoder`** model. It is derived from public metadata made available via the [National Library of Medicine (NLM)](https://pubmed.ncbi.nlm.nih.gov/) and was used in the paper [*Efficient and Reproducible Biomedical QA using Retrieval-Augmented Generation*](https://arxiv.org/abs/2505.07917).
Each entry includes:
- `title`: Title of the publication
- `abstract`: Abstract content
- `PMID`: PubMed identifier
- `embedding`: 768-dimensional float32 vector from MedCPT
---
## 🔍 How to Access
### ▶️ Option 1: Load via Hugging Face `datasets`
```python
from datasets import load_dataset
dataset = load_dataset("slinusc/PubMedAbstractsSubsetEmbedded", streaming=True)
for doc in dataset:
print(doc["PMID"], doc["embedding"][:5]) # print first 5 dims
break
```
> Each vector is stored as a list of 768 `float32` values (compact, no line breaks).
---
### 💾 Option 2: Git Clone with Git LFS
```bash
git lfs install
git clone https://huggingface.co/datasets/slinusc/PubMedAbstractsSubsetEmbedded
cd PubMedAbstractsSubsetEmbedded
```
---
## 📦 Format
Each file is a `.jsonl` (JSON Lines) file, where each line is a valid JSON object:
```json
{
"title": "...",
"abstract": "...",
"PMID": 36464820,
"embedding": [-0.1952481, ... , 0.2887376]
}
```
> The embeddings are 768-dimensional dense vectors, serialized as 32-bit floats.
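As a small illustration of how the precomputed vectors can be used, the sketch below streams a handful of records and ranks them by cosine similarity against a 768-dimensional query vector. The random query here is only a placeholder; in practice it would come from a matching query-side encoder, which is not part of this dataset.
```python
import numpy as np
from datasets import load_dataset

# Stream a small sample of records and stack their article embeddings.
docs = list(
    load_dataset("slinusc/PubMedAbstractsSubsetEmbedded", split="train", streaming=True).take(1000)
)
emb = np.array([d["embedding"] for d in docs], dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Placeholder query vector; replace with a real 768-d query embedding.
query = np.random.rand(768).astype(np.float32)
query /= np.linalg.norm(query)

# Rank documents by cosine similarity and print the top 5.
best = np.argsort(emb @ query)[::-1][:5]
for i in best:
    print(docs[i]["PMID"], docs[i]["title"][:80])
```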
---
## 📚 Source and Licensing
This dataset is derived from public domain PubMed metadata (titles and abstracts), redistributed in accordance with [NLM data usage policies](https://www.nlm.nih.gov/databases/download/data_distrib_main.html).
MedCPT embeddings were generated using the [ncbi/MedCPT-Article-Encoder](https://huggingface.co/ncbi/MedCPT-Article-Encoder) model.
---
## 📣 Citation
If you use this dataset or the included MedCPT embeddings, please cite:
> **Stuhlmann et al. (2025)**
> *Efficient and Reproducible Biomedical Question Answering using Retrieval Augmented Generation*
> [arXiv:2505.07917](https://arxiv.org/abs/2505.07917)
> [https://github.com/slinusc/medical_RAG_system](https://github.com/slinusc/medical_RAG_system)
---
## 🏷️ Version
- `v1.0` – Initial release (2.39M samples, 24 JSONL files, float32 embeddings, ~23 GB total)
---
## 📬 Contact
Maintained by [@slinusc](https://huggingface.co/slinusc).
For questions or collaborations, open a discussion on the HF Hub.
|
Rudra-ai/ai-responses-dataset-math-modified
|
Rudra-ai
|
2024-10-29T17:19:07Z
| 19 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-29T14:57:18Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: query
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2811861
num_examples: 1000
download_size: 1000892
dataset_size: 2811861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/processed_seamless_align_hindi_new_chunk_38
|
SayantanJoker
|
2025-05-06T10:10:30Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-06T10:09:00Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 2677291664.0
num_examples: 10000
download_size: 2547584664
dataset_size: 2677291664.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dopaul/house
|
dopaul
|
2025-04-13T08:22:00Z
| 22 | 0 |
[
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk1"
] |
[
"robotics"
] |
2025-04-13T08:21:53Z
| 0 |
---
tags:
- phosphobot
- so100
- phospho-dk1
task_categories:
- robotics
---
# house
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
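A minimal LeRobot loading sketch is shown below; the import path is an assumption and may differ between LeRobot versions.
```python
# Sketch only: the import path matches recent LeRobot releases but may vary
# with the installed version.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("dopaul/house")  # pulls the recorded episodes from the Hub
frame = dataset[0]                        # one time-step (camera frames, state, action)
print(len(dataset), list(frame.keys()))
```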
|
mlfoundations-dev/b1_code_top_16
|
mlfoundations-dev
|
2025-04-18T23:59:01Z
| 75 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-16T16:41:13Z
| 0 |
---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: __original_row_idx
dtype: int64
- name: source
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: version
dtype: string
- name: style
dtype: string
- name: subset
dtype: string
- name: question_id
dtype: string
- name: solution
dtype: string
- name: test
dtype: string
- name: test_info
list:
- name: docstring
dtype: string
- name: function_declaration
dtype: string
- name: function_name
dtype: string
- name: parameter_list
dtype: string
- name: gpt_pass_sequence
sequence: int64
- name: gpt_pass_trial_num
dtype: int64
- name: gpt_difficulty
dtype: string
- name: gpt_pass_percentage
dtype: float64
- name: trials
struct:
- name: trial_gpt4o_0
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_1
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_2
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_3
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_4
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_5
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_6
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_7
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_8
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: trial_gpt4o_9
struct:
- name: file_source
dtype: string
- name: solution_code
dtype: string
- name: test_code
dtype: string
- name: test_coverage
dtype: float64
- name: test_result
dtype: string
- name: chosen_trial
dtype: string
- name: metadata
struct:
- name: original_instruction
dtype: string
- name: prompt_id
dtype: string
- name: row_id
dtype: int64
- name: seed_ids
dtype: string
- name: benchmark_similarity
dtype: float64
- name: benchmark_instruction
dtype: string
- name: benchmark_task_id
dtype: string
- name: filter_reason
dtype: string
- name: response_seed
dtype: string
- name: problem_id
dtype: int64
- name: solutions
dtype: string
- name: input_output
dtype: string
- name: difficulty
dtype: string
- name: url
dtype: string
- name: starter_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1828381253.5081968
num_examples: 31600
download_size: 910403323
dataset_size: 1828381253.5081968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tuenguyen/bespoke_stratos_17k-test
|
tuenguyen
|
2025-02-13T07:09:31Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-13T07:09:05Z
| 0 |
---
dataset_info:
features:
- name: system
dtype: string
- name: Prompt
dtype: string
- name: Solution_source
dtype: string
- name: Solution
dtype: string
- name: Thought
dtype: string
- name: Answer
dtype: string
- name: Verifiable
dtype: int64
- name: Source
dtype: string
- name: System
dtype: string
splits:
- name: train
num_bytes: 606705373
num_examples: 16710
download_size: 245431908
dataset_size: 606705373
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
inpaint-context/opa-uptrain
|
inpaint-context
|
2024-10-25T15:00:35Z
| 27 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-25T14:56:21Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: string
- name: mask
dtype: image
splits:
- name: train
num_bytes: 229509833.234
num_examples: 26767
- name: validation
num_bytes: 48218269.811
num_examples: 5273
download_size: 133803398
dataset_size: 277728103.045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
ioi-leaderboard/ioi-eval-sglang_meta-llama_Llama-3.1-8B-Instruct-new-prompt
|
ioi-leaderboard
|
2025-03-03T23:33:09Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-03T23:33:01Z
| 0 |
---
dataset_info:
features:
- name: problem_id
dtype: large_string
- name: subtask
dtype: large_string
- name: prompt
dtype: large_string
- name: generation
dtype: large_string
- name: code
dtype: large_string
- name: language
dtype: large_string
- name: solution_number
dtype: int64
- name: uuid
dtype: large_string
- name: model_kwargs
struct:
- name: seed
dtype: int64
- name: metadata
struct:
- name: usage
struct:
- name: completion_tokens
dtype: int64
- name: prompt_tokens
dtype: int64
- name: total_tokens
dtype: int64
- name: cost
dtype: float64
- name: timestamp
dtype: large_string
splits:
- name: train
num_bytes: 28186121
num_examples: 2050
download_size: 3191375
dataset_size: 28186121
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
orcn/dataVLM-Shapes-MS-Swift
|
orcn
|
2025-03-09T18:51:42Z
| 79 | 1 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-09T15:52:20Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: images
sequence: image
splits:
- name: train
num_bytes: 59175606.0
num_examples: 753
download_size: 38339270
dataset_size: 59175606.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EleutherAI/mmlu_auxiliary_train_formatted_cloze_20250619-1339
|
EleutherAI
|
2025-06-19T18:51:05Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-19T18:51:04Z
| 0 |
---
dataset_info:
features:
- name: word_filter
dtype: bool
- name: word_filter_metadata
struct:
- name: keywords
dtype: string
- name: combined_filter
dtype: bool
splits:
- name: train
num_bytes: 493940
num_examples: 99842
download_size: 81658
dataset_size: 493940
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HHS-Official/cdc-pramstat-data-for-2010
|
HHS-Official
|
2025-05-07T19:24:32Z
| 0 | 0 |
[
"language:en",
"license:odbl",
"region:us",
"hhs",
"cdc",
"abuse",
"breastfeeding",
"contraception",
"medicaid",
"morbidity",
"obesity",
"stress",
"wic"
] |
[] |
2025-05-07T19:24:21Z
| 0 |
---
language:
- en
pretty_name: CDC PRAMStat Data for 2010
tags:
- hhs
- cdc
- abuse
- breastfeeding
- contraception
- medicaid
- morbidity
- obesity
- stress
- wic
license: odbl
---
# CDC PRAMStat Data for 2010
## Description
2010. Centers for Disease Control and Prevention (CDC). PRAMS, the Pregnancy Risk Assessment Monitoring System, is a surveillance system collecting state-specific, population-based data on maternal attitudes and experiences before, during, and shortly after pregnancy. It is a collaborative project of the Centers for Disease Control and Prevention (CDC) and state health departments. PRAMS provides data for state health officials to use to improve the health of mothers and infants. PRAMS topics include abuse, alcohol use, contraception, breastfeeding, mental health, morbidity, obesity, preconception health, pregnancy history, prenatal-care, sleep behavior, smoke exposure, stress, tobacco use, WIC, Medicaid, infant health, and unintended pregnancy. Data will be updated annually as it becomes available.
## Dataset Details
- **Publisher**: Centers for Disease Control and Prevention
- **Last Modified**: 2023-09-05
- **Contact**: DRH Public Inquiries ([email protected])
## Source
Original data can be found at: https://www.cdc.gov/prams/index.htm
## Usage
You can load this dataset using:
```python
from datasets import load_dataset
dataset = load_dataset('HHS-Official/cdc-pramstat-data-for-2010')
```
## License
This dataset is licensed under http://opendefinition.org/licenses/odc-odbl/
|
french-datasets/rcds-MultiLegalNeg
|
french-datasets
|
2025-03-31T08:13:13Z
| 25 | 0 |
[
"multilinguality:multilingual",
"language:fra",
"language:ita",
"language:deu",
"language:eng",
"region:us"
] |
[] |
2025-03-30T17:13:18Z
| 0 |
---
language:
- fra
- ita
- deu
- eng
multilinguality:
- multilingual
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset https://huggingface.co/datasets/rcds/MultiLegalNeg.
|
supergoose/flan_source_task372_synthetic_palindrome_numbers_258
|
supergoose
|
2025-02-25T19:34:39Z
| 14 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-25T19:34:37Z
| 0 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 13870369
num_examples: 19453
download_size: 3448730
dataset_size: 13870369
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/wildchat_data_unused_qualityFiltered_from8000_to11000
|
VGraf
|
2025-05-19T11:45:20Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-19T11:45:17Z
| 0 |
---
dataset_info:
features:
- name: index
dtype: int64
- name: chunk_lengths
sequence: int64
- name: length
dtype: int64
- name: writing_style_response_chunks
sequence: float64
- name: writing_style_response_average
dtype: float64
- name: required_expertise_response_chunks
sequence: float64
- name: required_expertise_response_average
dtype: float64
- name: facts_and_trivia_response_chunks
sequence: float64
- name: facts_and_trivia_response_average
dtype: float64
- name: educational_value_response_chunks
sequence: float64
- name: educational_value_response_average
dtype: float64
- name: writing_style_chunks
sequence: float64
- name: writing_style_average
dtype: float64
- name: required_expertise_chunks
sequence: float64
- name: required_expertise_average
dtype: float64
- name: facts_and_trivia_chunks
sequence: float64
- name: facts_and_trivia_average
dtype: float64
- name: educational_value_chunks
sequence: float64
- name: educational_value_average
dtype: float64
- name: date
dtype: timestamp[s]
- name: id
dtype: int64
- name: text
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
- name: label
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 28415018
num_examples: 1539
download_size: 10852902
dataset_size: 28415018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dlhhhhhya/test0502
|
dlhhhhhya
|
2025-05-02T06:53:06Z
| 0 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-05-02T06:51:20Z
| 0 |
---
license: apache-2.0
---
|
kgmyh/korean_stock_ticker_qa_data
|
kgmyh
|
2025-05-18T06:55:20Z
| 29 | 0 |
[
"task_categories:text-classification",
"language:ko",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finance",
"korean",
"stock_ticker",
"ticker"
] |
[
"text-classification"
] |
2025-05-16T23:33:19Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1082966
num_examples: 13815
- name: test
num_bytes: 3797
num_examples: 50
download_size: 300370
dataset_size: 1086763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: apache-2.0
task_categories:
- text-classification
language:
- ko
tags:
- finance
- korean
- stock_ticker
- ticker
---
# Overview
- A QA dataset that asks for the stock ticker code of a company listed on the Korean stock market, given the company name
- Data source
  - https://kind.krx.co.kr/corpgeneral/corpList.do?method=loadInitPage
  - Data as of May 15, 2025
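A minimal loading sketch (the `question`/`answer` columns and `train` split come from the metadata above):
```python
from datasets import load_dataset

ds = load_dataset("kgmyh/korean_stock_ticker_qa_data", split="train")
print(ds[0]["question"], "->", ds[0]["answer"])
```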
|
twei11/node1_round_41
|
twei11
|
2025-04-16T16:28:48Z
| 14 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-16T16:28:47Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 6926808
num_examples: 1800
download_size: 3365293
dataset_size: 6926808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
math-extraction-comp/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo
|
math-extraction-comp
|
2025-01-26T13:20:56Z
| 17 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-11T10:30:16Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-b200fe81_extracted_answer
dtype: string
- name: lighteval-b200fe81_score
dtype: float64
- name: harness_score
dtype: float64
- name: lighteval-c24870ea_score
dtype: float64
- name: harness_extracted_answer
dtype: string
- name: lighteval-0f21c935_extracted_answer
dtype: string
- name: qwen_score
dtype: float64
- name: lighteval-c24870ea_extracted_answer
dtype: string
- name: lighteval-0f21c935_score
dtype: float64
- name: qwen_extracted_answer
dtype: string
splits:
- name: train
num_bytes: 4056217
num_examples: 1324
download_size: 1184203
dataset_size: 4056217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kathleenge/ps-split
|
kathleenge
|
2025-06-18T15:47:09Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-18T15:47:04Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
struct:
- name: department
dtype: string
- name: case_type
dtype: string
- name: case_detail
dtype: string
splits:
- name: train
num_bytes: 104459951.77970108
num_examples: 10491
- name: test
num_bytes: 26117477.220298916
num_examples: 2623
download_size: 3697178
dataset_size: 130577429.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
hrhraj/eval_calibrated_bbox_multiposition
|
hrhraj
|
2025-05-14T01:10:24Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"calibrated",
"bbox"
] |
[
"robotics"
] |
2025-05-14T01:01:12Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- calibrated
- bbox
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 1189,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
ziyu3141/rf_newtrain_9_6
|
ziyu3141
|
2025-02-07T13:40:45Z
| 53 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-07T13:40:41Z
| 0 |
---
dataset_info:
features:
- name: Filename
dtype: string
- name: Aesthetics score
dtype: float64
- name: Artifact score
dtype: float64
- name: Misalignment score
dtype: float64
- name: Overall score
dtype: float64
- name: Artifact heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment token label
dtype: string
- name: is_uneven
dtype: bool
- name: preferred_image
dtype: binary
- name: unpreferred_image
dtype: binary
- name: revised_image
dtype: binary
- name: unrevised_id
dtype: string
- name: is_preferred
dtype: bool
splits:
- name: train
num_bytes: 676043198
num_examples: 100
download_size: 44589244
dataset_size: 676043198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Orion-zhen/meissa-lima
|
Orion-zhen
|
2024-10-24T03:26:43Z
| 20 | 0 |
[
"task_categories:text-generation",
"language:zh",
"language:en",
"license:gpl-3.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2305.11206",
"region:us",
"lima"
] |
[
"text-generation"
] |
2024-10-24T02:52:07Z
| 0 |
---
license: gpl-3.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- lima
pretty_name: Meissa-LIMA
size_categories:
- 1K<n<10K
---
# Meissa-LIMA
Inspired by [LIMA](https://arxiv.org/abs/2305.11206), I built this dataset. It consists of the following parts: the original dataset, a Chinese translation, an unalignment (jailbreak) set, a roleplay set, a Gutenberg set, and Ruozhiba Q&A.
- Original dataset: the original data contained 13 refusal/moral-alignment entries, which I located and manually rewrote
- Chinese translation: translated with [Orion-zhen/Meissa-Qwen2.5-7B-Instruct-Q5_K_M-GGUF](https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-7B-Instruct-Q5_K_M-GGUF) running on Great Server, then proofread by me
- Unalignment (jailbreak) set: selected entries from [Orion-zhen/meissa-unalignments](https://huggingface.co/datasets/Orion-zhen/meissa-unalignments)
- Roleplay set: selected entries from [MinervaAI/Aesir-Preview](https://huggingface.co/datasets/MinervaAI/Aesir-Preview)
- Gutenberg set: selected entries from [Orion-zhen/kto-gutenberg](https://huggingface.co/datasets/Orion-zhen/kto-gutenberg)
- Ruozhiba Q&A: selected questions from [LooksJuicy/ruozhiba](https://huggingface.co/datasets/LooksJuicy/ruozhiba), with answers written manually by me
|
teilomillet/wikipeqa
|
teilomillet
|
2025-06-18T10:54:38Z
| 12 | 1 |
[
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"eval",
"rag"
] |
[] |
2025-06-16T18:36:48Z
| 1 |
---
license: mit
configs:
- config_name: default
data_files:
- split: sample
path: data/sample-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: question
dtype: large_string
- name: answer
dtype: large_string
- name: source
dtype: large_string
- name: canary
dtype: large_string
splits:
- name: sample
num_bytes: 456906
num_examples: 200
- name: eval
num_bytes: 9649060
num_examples: 3003
download_size: 9583773
dataset_size: 10105966
language:
- en
tags:
- eval
- rag
---
# Dataset: wikipeqa
This dataset was generated using the [Kushim framework](https://github.com/teilomillet/kushim).
This repository may contain multiple files, including:
- A public, unencrypted sample of the Q&A data.
- A main, encrypted version of the full Q&A dataset.
- A JSON file containing the source article information.
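A minimal loading sketch for the public, unencrypted sample split, using the split and column names from the YAML metadata above (the full `eval` split is distributed encrypted per the card):
```python
from datasets import load_dataset

# Load the public, unencrypted sample split
sample = load_dataset("teilomillet/wikipeqa", split="sample")
print(sample[0]["question"], "->", sample[0]["answer"])
```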
|
davanstrien/ragged
|
davanstrien
|
2025-05-27T09:03:28Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-27T09:02:15Z
| 0 |
---
dataset_info:
features:
- name: system_id
dtype: string
- name: trajectories
sequence:
sequence:
sequence: float64
- name: n_dims
dtype: int32
- name: n_trajectories
dtype: int32
splits:
- name: train
num_bytes: 2520822264
num_examples: 10000
download_size: 2347812878
dataset_size: 2520822264
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
carlfeynman/Bharat_NanoQuoraRetrieval_sa
|
carlfeynman
|
2025-01-23T07:18:00Z
| 43 | 0 |
[
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"source_datasets:NanoQuoraRetrieval",
"language:sa",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"text-retrieval"
] |
[
"text-retrieval"
] |
2025-01-23T07:17:53Z
| 0 |
---
language:
- sa
license: cc-by-4.0
multilinguality:
- monolingual
source_datasets:
- NanoQuoraRetrieval
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- text-retrieval
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
- config_name: qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
splits:
- name: train
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
configs:
- config_name: corpus
data_files:
- split: train
path: corpus/train-*
- config_name: qrels
data_files:
- split: train
path: qrels/train-*
- config_name: queries
data_files:
- split: train
path: queries/train-*
---
# Bharat-NanoBEIR: Indian Language Information Retrieval Dataset
## Overview
This dataset is part of the Bharat-NanoBEIR collection, which provides information retrieval datasets for Indian languages. It is derived from the NanoBEIR project, which offers smaller versions of BEIR datasets containing 50 queries and up to 10K documents each.
## Dataset Description
This particular dataset is the Sanskrit version of the NanoQuoraRetrieval dataset, specifically adapted for information retrieval tasks. The translation and adaptation maintain the core structure of the original NanoBEIR while making it accessible for Sanskrit language processing.
## Usage
This dataset is designed for:
- Information Retrieval (IR) system development in Sanskrit
- Evaluation of multilingual search capabilities
- Cross-lingual information retrieval research
- Benchmarking Sanskrit language models for search tasks
## Dataset Structure
The dataset consists of three main components:
1. **Corpus**: Collection of documents in Sanskrit
2. **Queries**: Search queries in Sanskrit
3. **QRels**: Relevance judgments connecting queries to relevant documents
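A minimal loading sketch using the three configurations listed in the metadata (`corpus`, `queries`, `qrels`):
```python
from datasets import load_dataset

repo = "carlfeynman/Bharat_NanoQuoraRetrieval_sa"
corpus = load_dataset(repo, "corpus", split="train")    # columns: _id, text
queries = load_dataset(repo, "queries", split="train")  # columns: _id, text
qrels = load_dataset(repo, "qrels", split="train")      # columns: query-id, corpus-id

# Build a lookup from corpus ID to document text for evaluation
docs = {row["_id"]: row["text"] for row in corpus}
```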
## Citation
If you use this dataset, please cite:
```
@misc{bharat-nanobeir,
title={Bharat-NanoBEIR: Indian Language Information Retrieval Datasets},
year={2024},
url={https://huggingface.co/datasets/carlfeynman/Bharat_NanoQuoraRetrieval_sa}
}
```
## Additional Information
- **Language**: Sanskrit (sa)
- **License**: CC-BY-4.0
- **Original Dataset**: NanoBEIR
- **Domain**: Information Retrieval
## License
This dataset is licensed under CC-BY-4.0. Please see the LICENSE file for details.
|
spr-serena/mri_scans_labelled_horizontal_only
|
spr-serena
|
2024-11-24T16:48:37Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-24T16:48:32Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 57518157.86
num_examples: 1251
download_size: 56951972
dataset_size: 57518157.86
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Aniruddh79012/x_dataset_122
|
Aniruddh79012
|
2025-06-22T10:06:51Z
| 5 | 0 |
[
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] |
2025-06-21T21:41:22Z
| 0 |
---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** Aniruddh79012/x_dataset_122
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5H5t56jr5unKiDiE9qEkXbxq3FELYeHpuVRtpMWu8PRShFzM
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, but the dataset can be multilingual due to the decentralized way it is created.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
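As an illustration, a minimal sketch of building time-based splits with 🤗 Datasets (the split name and cutoff date are assumptions, not part of the card):
```python
from datasets import load_dataset

ds = load_dataset("Aniruddh79012/x_dataset_122", split="train")  # split name assumed

# Hold out the most recent tweets for evaluation (cutoff chosen arbitrarily)
cutoff = "2025-06-15"
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))
```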
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{Aniruddh790122025datauniversex_dataset_122,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={Aniruddh79012},
year={2025},
url={https://huggingface.co/datasets/Aniruddh79012/x_dataset_122},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 741
- **Date Range:** 2025-06-06T00:00:00Z to 2025-06-22T00:00:00Z
- **Last Updated:** 2025-06-22T10:06:51Z
### Data Distribution
- Tweets with hashtags: 79.08%
- Tweets without hashtags: 20.92%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | #bitcoin | 296 | 49.09% |
| 2 | #tao | 42 | 6.97% |
| 3 | #gamedev | 21 | 3.48% |
| 4 | NULL | 17 | 2.82% |
| 5 | #cars | 13 | 2.16% |
| 6 | #bitcoinmining | 7 | 1.16% |
| 7 | #btc | 6 | 1.00% |
| 8 | #car | 5 | 0.83% |
| 9 | #blender3d | 5 | 0.83% |
| 10 | #roblox | 5 | 0.83% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-06-21T21:41:21Z | 138 | 138 |
| 2025-06-21T21:41:23Z | 300 | 438 |
| 2025-06-22T10:06:51Z | 303 | 741 |
|
alea-institute/kl3m-filter-data-dotgov-www.eucom.mil
|
alea-institute
|
2025-02-04T08:40:28Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-04T08:40:26Z
| 0 |
---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: score
dtype: float64
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 4699654
num_examples: 243
download_size: 1081606
dataset_size: 4699654
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khantivong/DA100
|
khantivong
|
2025-03-10T14:07:12Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-10T14:05:26Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 21526
num_examples: 99
download_size: 10184
dataset_size: 21526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alea-institute/kl3m-filter-data-dotgov-www.msha.gov
|
alea-institute
|
2025-02-04T17:35:45Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-04T17:35:39Z
| 0 |
---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: score
dtype: float64
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 12003528
num_examples: 475
download_size: 2871127
dataset_size: 12003528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Chojins/chess_game_000_white_red
|
Chojins
|
2025-03-29T06:43:57Z
| 91 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"chess"
] |
[
"robotics"
] |
2025-03-27T06:47:35Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- chess
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 4094,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
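As a small illustration, the `data_path` and `video_path` entries above are standard Python format strings, so an episode's files can be resolved like this (episode, chunk, and camera values are examples):
```python
# Resolve file locations for episode 3 in chunk 0 using the templates from info.json
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

print(data_path.format(episode_chunk=0, episode_index=3))
# -> data/chunk-000/episode_000003.parquet
print(video_path.format(episode_chunk=0, video_key="observation.images.laptop", episode_index=3))
# -> videos/chunk-000/observation.images.laptop/episode_000003.mp4
```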
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_32_0.05_64_BestF1_pl
|
ferrazzipietro
|
2024-12-03T08:36:39Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-03T08:36:36Z
| 0 |
---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 157591
num_examples: 101
- name: test
num_bytes: 1105280
num_examples: 654
download_size: 273369
dataset_size: 1262871
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
illuin-conteb/insurance
|
illuin-conteb
|
2025-05-30T14:26:47Z
| 42 | 1 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-02T00:24:00Z
| 0 |
---
dataset_info:
- config_name: documents
features:
- name: chunk_id
dtype: string
- name: chunk
dtype: string
splits:
- name: train
num_bytes: 19898
num_examples: 60
download_size: 6011
dataset_size: 19898
- config_name: queries
features:
- name: chunk_id
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 9514
num_examples: 120
download_size: 2687
dataset_size: 9514
configs:
- config_name: documents
data_files:
- split: train
path: documents/train-*
- config_name: queries
data_files:
- split: train
path: queries/train-*
---
# ConTEB - Insurance
This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed for evaluating contextual embedding model capabilities. It focuses on the theme of **Insurance**, drawing on a document from EIOPA (the European Insurance and Occupational Pensions Authority).
## Dataset Summary
*Insurance* is composed of a long document with insurance-related statistics for each country of the European Union. To build the corpus, we extract the text of the document, and chunk it (using [LangChain](https://github.com/langchain-ai/langchain)'s RecursiveCharacterSplitter with a threshold of 1000 characters). Countries are often not referred to in-text, but only once in the section title. Therefore, certain chunks require knowledge of their position within the document to be properly disambiguated from others. Questions are manually crafted to require structural understanding for accurate chunk matching. Since questions are crafted after the chunking process, the annotation results directly from the manual question generation process.
This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, chunks stemming from them, and queries.
* **Number of Documents:** 1
* **Number of Chunks:** 60
* **Number of Queries:** 120
* **Average Number of Tokens per Doc:** 80.7
## Dataset Structure (Hugging Face Datasets)
The dataset is structured into the following columns:
* **`documents`**: Contains chunk information:
* `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
* `"chunk"`: The text of the chunk
* **`queries`**: Contains query information:
* `"query"`: The text of the query.
* `"chunk_id"`: The ID of the chunk that the query is related to, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
## Usage
We will upload a Quickstart evaluation snippet soon.
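Until then, a minimal loading sketch based on the two configurations described above:
```python
from datasets import load_dataset

docs = load_dataset("illuin-conteb/insurance", "documents", split="train")
queries = load_dataset("illuin-conteb/insurance", "queries", split="train")

# Each query is annotated with the chunk_id of its relevant chunk
chunk_by_id = {row["chunk_id"]: row["chunk"] for row in docs}
example = queries[0]
print(example["query"])
print(chunk_by_id[example["chunk_id"]][:200])
```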
## Citation
We will add the corresponding citation soon.
## Acknowledgments
This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.
## Copyright
All rights are reserved to the original authors of the documents.
|
Igorrr0/Polish-wikipedia-selected-topics
|
Igorrr0
|
2025-05-23T16:46:44Z
| 88 | 0 |
[
"language:pl",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"wikipedia",
"polish",
"pl",
"polska"
] |
[] |
2025-05-22T09:37:20Z
| 0 |
---
license: apache-2.0
language:
- pl
tags:
- wikipedia
- polish
- pl
- polska
pretty_name: polish wikipedia selected topics
size_categories:
- 1K<n<10K
---
categories:
"Nauki przyrodnicze",
"Nauki humanistyczne",
"Nauki biologiczne",
"Biologia",
"Metodologia nauk przyrodniczych",
"Polska",
"Nauka w Polsce",
"Języki Polski",
"Nauki humanistyczne",
"Językoznawstwo",
"Kultura",
"Kultura języka",
"Myrmekologia",
"Mrówkowate",
"Ekologia",
"Prawo w Polsce",
"Językoznawstwo",
"Dialektologia",
"Odmiany i style językowe",
"Futurologia"
Each example contains at most 2,000 words.
|
magnifi/parser_user_v35a
|
magnifi
|
2025-02-28T14:28:02Z
| 18 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-26T04:05:25Z
| 0 |
---
dataset_info:
features:
- name: Query_id
dtype: int64
- name: Query
dtype: string
- name: Elastic_search
dtype: string
- name: virtual_portfolios
dtype: string
- name: Parser_output
dtype: string
splits:
- name: train
num_bytes: 542942
num_examples: 2233
- name: validation
num_bytes: 29703
num_examples: 149
download_size: 179873
dataset_size: 572645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
michsethowusu/bambara-dinka_sentence-pairs
|
michsethowusu
|
2025-04-03T11:55:40Z
| 7 | 0 |
[
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-03T11:55:37Z
| 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Bambara
dtype: string
- name: Dinka
dtype: string
splits:
- name: train
num_bytes: 2910185
num_examples: 13958
download_size: 2910185
dataset_size: 2910185
configs:
- config_name: default
data_files:
- split: train
path: Bambara-Dinka_Sentence-Pairs.csv
---
# Bambara-Dinka_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by META. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Bambara-Dinka_Sentence-Pairs
- **Number of Rows**: 13958
- **Number of Columns**: 3
- **Columns**: score, Bambara, Dinka
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (range from 0 to 1).
2. `Bambara`: The first sentence in the pair (language 1).
3. `Dinka`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
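For instance, a minimal sketch of loading the pairs and filtering by similarity score (the 0.8 threshold is an arbitrary example value):
```python
from datasets import load_dataset

ds = load_dataset("michsethowusu/bambara-dinka_sentence-pairs", split="train")

# Keep only high-similarity pairs, e.g. for fine-tuning a translation model
high_quality = ds.filter(lambda row: row["score"] >= 0.8)
print(len(high_quality))
print(high_quality[0]["Bambara"], "|", high_quality[0]["Dinka"])
```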
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL workshop on Representation Learning for NLP, 2017
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space ACL, July 2018
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB
[9] Paul-Ambroise Duquenne, Hongyu Gong, Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi, and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages
|
ankner/mmlu-pro-rl
|
ankner
|
2025-03-26T08:13:38Z
| 17 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-26T07:13:50Z
| 0 |
---
dataset_info:
features:
- name: input
dtype: string
- name: response
dtype: string
- name: test_cases
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 14453971.50652994
num_examples: 8337
- name: test
num_bytes: 1733713.7467350296
num_examples: 1000
download_size: 7857646
dataset_size: 16187685.25326497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
hcltech-robotics/gauge_data_industrial_env
|
hcltech-robotics
|
2025-03-01T00:54:04Z
| 16 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-02-25T01:42:11Z
| 0 |
---
license: apache-2.0
---
|
konwoo/er-irl-2
|
konwoo
|
2025-04-19T07:33:49Z
| 22 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-19T06:52:43Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: validation
num_bytes: 11217655
num_examples: 1000
- name: train
num_bytes: 1116486492
num_examples: 100000
download_size: 640694393
dataset_size: 1127704147
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
---
|
yuhuanstudio/PTT-pretrain-zhtw
|
yuhuanstudio
|
2025-04-01T13:15:22Z
| 22 | 2 |
[
"language:zh",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-26T04:05:49Z
| 0 |
---
license: apache-2.0
language:
- zh
size_categories:
- 100K<n<1M
---
# Dataset Card for "yuhuanstudio/PTT-pretrain-zhtw"
## Dataset Summary
This dataset is collected from PTT (批踢踢實業坊), Taiwan's largest BBS forum, gathering historical and recent discussions from many boards. It provides a rich Traditional Chinese corpus suitable for large language model (LLM) pretraining and natural language processing (NLP) research.
Data source: PTT (https://www.ptt.cc)
Boards covered: all boards, including Gossiping, Tech_Job, Stock, NBA, and more
Time range: scraped from the first 200 pages of PTT's public archive, covering many years of historical data (because of per-board page counts, data from popular boards may be comparatively old)
Language: Traditional Chinese
Format: JSON, suitable for LLM training and NLP applications
Size: several hundred thousand posts and replies
## Dataset Structure
```json
{
"text": "作者: Sonaten (=.=)\n看板: PC_Shopping\n標題: [閒聊] Gigabyte EP35-DS3 的DES...\n時間: Fri Jun 27 15:20:54 2008\n內文: 剛剛無聊用SuperPI跑一下我的E2220 完全沒超頻"
}
```
## How to Use
```python
from datasets import load_dataset
dataset = load_dataset("yuhuanstudio/PTT-pretrain-zhtw", split="pretrain")
```
## Copyright Notice
This dataset is provided for academic research and personal study only; commercial use or any activity that violates PTT's terms of use is strictly prohibited.
Data source: the data comes from PTT (https://www.ptt.cc); copyright in all posts and replies belongs to their original authors.
Usage restrictions: when using this dataset, you must comply with Taiwan's Copyright Act and related laws and regulations, and respect the intellectual property rights of the original authors.
Disclaimer: this dataset is provided for information organization and technical research purposes only; the author assumes no responsibility for the accuracy, completeness, or legality of its contents.
By using this dataset, you indicate that you have read and agree to the terms of this notice.
## Notes
PTT content is user-generated and may contain inappropriate remarks; please use it with care.
## License
[apache-2.0]
📌 If you cite this dataset in another project, feel free to credit the author or link back to the project so I can see how it is being used! 😊
|
jason1234309/dnd_race_images_labels
|
jason1234309
|
2024-12-16T17:31:58Z
| 17 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-14T22:56:32Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aarakocra
'1': dragonborn
'2': drow
'3': dwarf
'4': goblin
'5': lizardfolk
'6': orc
'7': tabaxi
'8': tiefling
'9': warforged
splits:
- name: train
num_bytes: 81686693.0
num_examples: 300
- name: test
num_bytes: 31714923.0
num_examples: 118
download_size: 113411158
dataset_size: 113401616.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
aadarsh99/kyvo-datasets-and-codebooks
|
aadarsh99
|
2025-06-09T21:03:43Z
| 0 | 0 |
[
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-02-19T22:56:18Z
| 0 |
---
license: apache-2.0
---
# Kyvo Dataset and Codebooks Details
This document provides details about the datasets and codebooks in the `kyvo-datasets-and-codebooks` repository, describing each folder and its contents.
## Data Generation Pipeline
The pipeline that we follow to generate the pre-tokenized data is as follows:
* **3D Scenes**: 3D Scene JSON --> Serialized 3D Scene --> Tokenized 3D Scene
* **Images**: Image --> VQGAN Codebook Indices --> Tokenized Image
* **Text**: Text --> Tokenized Text
## Pre-tokenized Data
The `pretokenized-data` folder contains all the pre-tokenized data for the datasets used in the Kyvo project. The pre-tokenized data is stored in the following structure:
```python
pretokenized-data/
|-- clevr/
| |-- 3d-scenes/ # contains all pre-tokenized 3D scenes for CLEVR for all tasks
| |-- images/ # contains all pre-tokenized images for CLEVR for all tasks
| |-- text/ # contains all pre-tokenized text for CLEVR for all tasks
|-- objaworld/
| |-- 3d-scenes/ # contains all pre-tokenized 3D scenes for ObjaWorld for all tasks
| |-- images/ # contains all pre-tokenized images for ObjaWorld for all tasks
|-- objectron/
| |-- 3d-scenes/ # contains all pre-tokenized 3D scenes for Objectron for all tasks
| |-- images/ # contains all pre-tokenized images for Objectron for all tasks
```
For a given task, an input can be any combination of 3d-scenes, images, and text. The output can be any combination of images, text, and 3d-scenes. In the following table we outline the tasks for each dataset and the corresponding input and output data that are needed for each task.
| **Task** | **Input Image** | **Input 3D Scene** | **Input Text** | **Output Image** | **Output 3D Scene** | **Output Text** |
|:----------------------:|:------------------:|:----------------------:|:-----------------:|:------------------:|:-----------------------:|:-----------------:|
| **CLEVR** | | | | | | |
| Rendering | 𐄂 | ✓ | 𐄂 | ✓ | 𐄂 | 𐄂 |
| Recognition | ✓ | 𐄂 | 𐄂 | 𐄂 | ✓ | 𐄂 |
| Instruction-Following | ✓ | ✓ | ✓ | ✓ | ✓ | 𐄂 |
| Question-Answering | ✓ | ✓ | ✓ | 𐄂 | 𐄂 | ✓ |
| | | | | | | |
| **ObjaWorld** | | | | | | |
| Rendering | 𐄂 | ✓ | 𐄂 | ✓ | 𐄂 | 𐄂 |
| Recognition | ✓ | 𐄂 | 𐄂 | 𐄂 | ✓ | 𐄂 |
| | | | | | | |
| **Objectron** | | | | | | |
| Recognition | ✓ | 𐄂 | 𐄂 | 𐄂 | ✓ | 𐄂 |
For the exact files that correspond to the input and output data for each task, please refer to the corresponding configuration files in the `configs/llama3_2/train` folder.
## VQGAN Models and Codebooks
The `vqgan-models-and-codebooks` folder contains all the VQGAN model checkpoints and codebooks for the datasets used in the Kyvo project. The VQGAN model checkpoints and codebooks are stored in the following structure:
```python
vqgan-models-and-codebooks/
|-- clevr/
| |-- 2024-10-10T09-21-36_custom_vqgan_CLEVR-LARGE/ # contains the VQGAN model checkpoint for CLEVR
| |-- custom_vqgan_embedding_1024CLEVRLARGE_256dim.npy # contains the VQGAN codebook for CLEVR
|-- objaworld/
| |-- 2025-01-17T09-02-22_custom_vqgan_SYNTHETIC_LIVINGROOM_PARK_LARGE_EP100/ # contains the VQGAN model checkpoint for ObjaWorld
| |-- custom_vqgan_embedding_256SYNTHETIC_LIVINGROOM_PARK_LARGE_EP100_256dim.npy # contains the VQGAN codebook for ObjaWorld
|-- objectron/
| |-- 2024-11-03T05-41-42_custom_vqgan_OMNI3D_OBJECTRON_ep200/ # contains the VQGAN model checkpoint for Objectron
| |-- custom_vqgan_embedding_256Omni3D-OBJECTRON_256dim.npy # contains the VQGAN codebook for Objectron
```
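The codebooks are `.npy` files that can be loaded with NumPy; a minimal inspection sketch for the CLEVR codebook (the expected shape of 1024 codes × 256 dimensions is inferred from the file name, not verified):
```python
import numpy as np

codebook = np.load(
    "vqgan-models-and-codebooks/clevr/custom_vqgan_embedding_1024CLEVRLARGE_256dim.npy"
)
print(codebook.shape)  # expected (1024, 256): 1024 codes, 256-dimensional embeddings
```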
## Images and Scenes for Evaluation
The `images-and-scenes-for-evaluation` folder contains all the groundtruth images and scenes for the datasets used in the Kyvo project. The images and scenes are used to compute the evaluation metrics for the different tasks. The images and scenes are stored in the following structure:
```python
images-and-scenes-for-evaluation/
|-- clevr/ # contains all images and scenes for evaluation for CLEVR
|-- objaworld/ # contains all images and scenes for evaluation for ObjaWorld
|-- objectron/ # contains all scenes for evaluation for Objectron
```
|
kaiwenw/open_r1_mar2
|
kaiwenw
|
2025-03-02T18:37:30Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"modality:text",
"region:us"
] |
[] |
2025-03-02T18:23:02Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2695034906
num_examples: 49488
download_size: 1164300716
dataset_size: 2695034906
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wawiesel/scale-origen-v1
|
wawiesel
|
2025-02-08T21:59:30Z
| 19 | 0 |
[
"license:cc-by-nd-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-08T18:55:48Z
| 0 |
---
license: cc-by-nd-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 1184
num_examples: 1
- name: validation
num_bytes: 2498
num_examples: 9
- name: test
num_bytes: 873
num_examples: 4
download_size: 15773
dataset_size: 4555
---
|
phonemefake/PhonemeFakeV2
|
phonemefake
|
2025-04-12T13:15:58Z
| 26 | 0 |
[
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"audio",
"speech",
"synthetic"
] |
[] |
2025-02-13T00:31:26Z
| 0 |
---
datasets:
- phonemefake/PhonemeFakeV2 # Change this to your actual dataset ID
language:
- en # Change if your dataset is in another language
license: "cc-by-4.0" # Adjust license if needed
tags:
- audio
- speech
- synthetic
---
# PhonemeFake - A Phonetic Deepfake Dataset
<!-- Provide a quick summary of the dataset. -->
We introduce PhonemeFake, a DF attack that manipulates critical speech segments using language reasoning, significantly reducing human perception and SoTA
model accuracies.
## Dataset Details
We provide example spoof audio in the dataset viewer, along with the transcription of the bonafide sample, the manipulated transcription, and the audio timings, for a small subset of the data.
The dataset is split into three subsets, containing spoof samples generated from the ASVspoof21 Deep Fake, In the Wild, and LJ-Speech bonafide samples. Each subset can be downloaded and extracted separately.
We provide all the metadata in `pf_metadata.csv`.
The dataset consists of 42,895 spoof samples corresponding to a total of 57.91 hours of spoof audio.
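As a small illustration, the metadata file can be inspected with pandas (the exact column names are not documented here and should be checked after loading):
```python
import pandas as pd

meta = pd.read_csv("pf_metadata.csv")
print(meta.columns.tolist())
print(len(meta))  # the card reports 42,895 spoof samples in total
```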
|
ziyu3141/rf_newtrain_8_54
|
ziyu3141
|
2025-02-08T04:03:31Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-08T04:03:22Z
| 0 |
---
dataset_info:
features:
- name: Filename
dtype: string
- name: Aesthetics score
dtype: float64
- name: Artifact score
dtype: float64
- name: Misalignment score
dtype: float64
- name: Overall score
dtype: float64
- name: Artifact heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment token label
dtype: string
- name: is_uneven
dtype: bool
- name: preferred_image
dtype: binary
- name: unpreferred_image
dtype: binary
- name: revised_image
dtype: binary
- name: unrevised_id
dtype: string
- name: is_preferred
dtype: bool
splits:
- name: train
num_bytes: 677051534
num_examples: 100
download_size: 45501086
dataset_size: 677051534
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mims-harvard/ProCyon-Instruct
|
mims-harvard
|
2025-03-30T19:11:39Z
| 720 | 4 |
[
"license:apache-2.0",
"doi:10.57967/hf/3840",
"region:us"
] |
[] |
2024-10-21T18:31:25Z
| 0 |
---
license: apache-2.0
viewer: false
---
This repository contains the `ProCyon-Instruct` used to train the [`ProCyon`](https://huggingface.co/collections/mims-harvard/procyon-6716632e60e4b3785bf8bd04) family of models.
Please see [installation instructions](https://github.com/mims-harvard/ProCyon?tab=readme-ov-file#installation) on our GitHub repo for details on how to configure the dataset
for use with pre-trained ProCyon models. For additional technical details, please refer to our [overview page](https://zitniklab.hms.harvard.edu/ProCyon/) or the [paper](https://www.biorxiv.org/content/10.1101/2024.12.10.627665v1).
The repository contains three top-level directories:
- `integrated_data/v1` - The primary component of the dataset: the amino acid sequences and associated phenotypes used for constructing instruction tuning examples.
- `generated_data` - Contains additional artifacts beyond amino acids and phenotypes. Generated by the ProCyon team and used for model training and evaluation.
- `model_weights` - Contains pre-trained model weights used for initializing ProCyon models. Note that the model weights themselves are not contained in this repository but rather are expected to be downloaded here from their respective repositories.
Within `integrated_data`, there are four main types of directories:
- `{amino_acid_seq_type}` - directories containing information for amino acid sequences themselves, where `amino_acid_seq_type` is one of `["domain", "peptide", "protein"]`. Each directory contains the following files:
- `{amino_acid_seq_type}_sequences.fa` - FASTA file containing the raw amino acid sequence for each entity
- `{amino_acid_seq_type}_info_filtered.pkl` - Pickled Pandas DataFrame containing the mapping from the amino acid sequence's database ID (e.g. UniProt ID for proteins) to a numeric index used within ProCyon-Instruct. Two columns:
- `index` - numeric ID within ProCyon-Instruct
- `{amino_acid_seq_type}_id` - ID within original database
- `{phenotype_type}` - directories containing information for each phenotype entity. Each directory contains the following files:
- `{phenotype_type}_info_filtered.pkl` - Pickled Pandas DataFrame containing mapping from phenotype's database ID to numeric ID within ProCyon-Instruct, and various textual descriptions within each database. Has the following columns:
- `index` - numeric ID within ProCyon-Instruct
- `{phenotype_type}_id` - ID within original database
- additional columns coming from the original databases giving various textual descriptions of the phenotype. Used to create the instruction tuning examples
- `{phenotype_type}_info_filtered_composed.pkl` - Pickled Pandas DataFrame containing the same data as `{phenotype_type}_info_filtered.pkl` but with additional columns giving compositions of individual text columns from the original DataFrame.
- `{amino_acid_seq_type}_{phenotype_type}` - directories containing information on the associations between amino acid sequences and phenotypes. Each directory contains a subdirectory named based on the method used for generating dataset splits within that database. Please see the methods section of our manuscript for more details. Within these subdirectories there are two files:
- `{amino_acid_seq_type}_{phenotype_type}_relations.unified.csv` - CSV file containing relations expressed in original database IDs. Contains six columns:
- `text_id` - ID from original phenotype database
- `seq_id` - ID from original sequence database
- `text_type` - largely redundant with `phenotype_type`, may be helpful when concatenating many associations files
- `seq_type` - largely redundant with `amino_acid_seq_type`, may be helpful when concatenating many associations files
- `relation` - largely redundant with `{amino_acid_seq_type}_{phenotype_type}`, may be helpful when concatenating many associations files. For some datasets such as DrugBank and GO, this column takes on different values within the same file and expresses distinct relations, e.g. GO molecular function vs GO biological process.
- `split` - Assigned data split for this association. `CL_train` are training associations, `CL_val_*` are validation associations, and `eval_*` are test associations. Both `CL_val` and `eval` have suffixes indicating whether these relations are zero-shot with respect to the phenotype, where `_zero_shot` indicates a zero-shot relation, `_[num]_shot` indicates a few-shot relation, and `_pt_ft` indicates relations where the phenotype is seen frequently in training.
- `{amino_acid_seq_type}_{phenotype_type}_relations_indexed.unified.csv` - Identical to the above CSV file, but with relations expressed using ProCyon internal numeric IDs.
- `{amino_acid_seq_type}_{amino_acid_seq_type}` - directories containing information on the associations between two amino acid sequences, e.g. protein-protein interactions. Format is largely the same as above except with `seq_id_1` and `seq_id_2` columns instead of `seq_id` and `text_id`
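To make the layout concrete, a minimal sketch of loading a sequence index and an association file with pandas (the `protein_go` pairing and the split-method subdirectory name are assumptions; substitute the directories you actually need):
```python
import pandas as pd

SPLIT_METHOD = "random_split"  # placeholder: use the actual split-method subdirectory name

# Map database IDs (e.g. UniProt) to ProCyon-Instruct numeric indices
protein_info = pd.read_pickle("integrated_data/v1/protein/protein_info_filtered.pkl")

# Load protein-phenotype associations and keep the training split
relations = pd.read_csv(
    f"integrated_data/v1/protein_go/{SPLIT_METHOD}/protein_go_relations.unified.csv"
)
train_rels = relations[relations["split"] == "CL_train"]
print(len(protein_info), len(train_rels))
```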
|
yueliu1999/GuardReasoner-VLTrain-Image
|
yueliu1999
|
2025-05-29T04:18:07Z
| 197 | 0 |
[
"license:apache-2.0",
"modality:image",
"region:us"
] |
[] |
2025-05-29T04:09:46Z
| 0 |
---
license: apache-2.0
---
|
# Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset, including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards, and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is the `README.md` files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository, although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. Some dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact