Dataset Viewer

| Column | Type | Range |
|---|---|---|
| datasetId | large_string | lengths 6 to 116 |
| author | large_string | lengths 2 to 42 |
| last_modified | large_string (date) | 2021-05-20 00:57:22 to 2025-06-03 10:14:14 |
| downloads | int64 | 0 to 3.97M |
| likes | int64 | 0 to 7.74k |
| tags | large list | lengths 1 to 2.03k |
| task_categories | large list | lengths 0 to 48 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 to 2025-06-03 10:13:51 |
| trending_score | float64 | 1 to 36 (⌀ = null values present) |
| card | large_string | lengths 31 to 1.01M |

Each record below lists these fields in order, separated by `|`, with the full dataset card inlined as the last field.
louisbrulenaudet/code-forestier-nouveau | louisbrulenaudet | 2025-06-03T05:05:36Z | 437 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code forestier (nouveau)"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T21:53:40Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code forestier (nouveau)
source_datasets:
- original
pretty_name: Code forestier (nouveau)
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code forestier (nouveau), non-instruct (2025-06-02)
The objective of this project is to provide researchers, professionals, and law students with simplified, up-to-date access to all French legal texts, enriched with extensive metadata to ease their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models built on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets  # provides concatenate_datasets
from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

# Load every requested dataset from the Hugging Face Hub
datasets_list = load_datasets(
    req=req,
    streaming=False
)

# Merge the loaded datasets into a single one
dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronological ID** - Chronological identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronological ID** - Chronological identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronological ID** - Chronological identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
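Below is a minimal sketch (not part of the official tooling) showing how a few of these fields can be inspected once the dataset is loaded with the 🤗 Datasets library; the `train` split name is an assumption.
```python
# Minimal sketch: inspect a few of the fields described above.
# Assumption: the default split is named "train".
from datasets import load_dataset

dataset = load_dataset("louisbrulenaudet/code-forestier-nouveau", split="train")

item = dataset[0]                   # one article record
print(item["ref"])                  # reference: code title + article number
print(item["dateDebut"])            # date the article came into effect
print(item["etat"])                 # current legal status
print((item["texte"] or "")[:200])  # first characters of the article text
```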
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
louisbrulenaudet/code-communes | louisbrulenaudet | 2025-06-03T05:05:26Z | 468 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code des communes"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T19:57:26Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code des communes
source_datasets:
- original
pretty_name: Code des communes
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code des communes, non-instruct (2025-06-02)
The objective of this project is to provide researchers, professionals, and law students with simplified, up-to-date access to all French legal texts, enriched with extensive metadata to ease their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models built on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets  # provides concatenate_datasets
from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

# Load every requested dataset from the Hugging Face Hub
datasets_list = load_datasets(
    req=req,
    streaming=False
)

# Merge the loaded datasets into a single one
dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronological ID** - Chronological identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronological ID** - Chronological identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronological ID** - Chronological identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
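As a complement, here is a minimal sketch of filtering on the `etat` field with the 🤗 Datasets library; the `train` split name and the `"VIGUEUR"` status value are assumptions, since the card only cites `"MODIFIE_MORT_NE"` as an example.
```python
# Minimal sketch: keep only articles whose legal status marks them as in force.
# Assumptions: the default split is named "train" and "VIGUEUR" is a valid
# value of the `etat` field.
from datasets import load_dataset

dataset = load_dataset("louisbrulenaudet/code-communes", split="train")
in_force = dataset.filter(lambda item: item["etat"] == "VIGUEUR")
print(len(in_force), "articles currently in force")
```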
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
zhifeishen/grasp_place_one | zhifeishen | 2025-06-03T03:43:59Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-03T03:23:20Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "aloha",
"total_episodes": 20,
"total_frames": 14306,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"action": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"observation.velocity": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"observation.effort": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"observation.images.cam_high": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_low": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_left_wrist": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_right_wrist": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
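For reference, a minimal sketch (with illustrative episode values) of how the `data_path` template above resolves to a concrete parquet file:
```python
# Minimal sketch: resolve the data_path template from meta/info.json.
# The episode values below are illustrative.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
print(data_path.format(episode_chunk=0, episode_index=7))
# -> data/chunk-000/episode_000007.parquet
```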
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
icedwind/x_dataset_34576 | icedwind | 2025-06-03T02:55:21Z | 1,188 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-29T06:54:29Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** icedwind/x_dataset_34576
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5CoHRJSrdnojNtZ5x9n7YHKb35ySPrSwk8oCrim3BYP6kern
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: the data is mostly English, but it can be multilingual due to the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
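For instance, a minimal sketch of a timestamp-based split with the 🤗 Datasets library; the `train` split name, the cutoff value, and the exact `datetime` string format are assumptions.
```python
# Minimal sketch: build custom train/test splits from the `datetime` field.
# Assumptions: the default split is "train" and `datetime` is an ISO-8601-style
# string, so lexicographic comparison matches chronological order.
from datasets import load_dataset

ds = load_dataset("icedwind/x_dataset_34576", split="train")
cutoff = "2025-02-01"
train = ds.filter(lambda row: (row["datetime"] or "") < cutoff)
test = ds.filter(lambda row: (row["datetime"] or "") >= cutoff)
print(len(train), len(test))
```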
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{icedwind2025datauniversex_dataset_34576,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={icedwind},
year={2025},
url={https://huggingface.co/datasets/icedwind/x_dataset_34576},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 44847129
- **Date Range:** 2025-01-23T00:00:00Z to 2025-02-12T00:00:00Z
- **Last Updated:** 2025-02-18T21:35:02Z
### Data Distribution
- Tweets with hashtags: 40.76%
- Tweets without hashtags: 59.24%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 26569050 | 59.24% |
| 2 | #riyadh | 304003 | 0.68% |
| 3 | #zelena | 244307 | 0.54% |
| 4 | #tiktok | 180248 | 0.40% |
| 5 | #jhope_at_galadespiècesjaunes | 127683 | 0.28% |
| 6 | #bbb25 | 110751 | 0.25% |
| 7 | #ad | 108206 | 0.24% |
| 8 | #royalrumble | 94571 | 0.21% |
| 9 | #bbmzansi | 61439 | 0.14% |
| 10 | #theheartkillersep10 | 59616 | 0.13% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-29T06:55:38Z | 3976866 | 3976866 |
| 2025-02-01T18:58:26Z | 8396141 | 12373007 |
| 2025-02-05T07:02:02Z | 11364902 | 23737909 |
| 2025-02-08T19:06:38Z | 9126902 | 32864811 |
| 2025-02-12T07:14:04Z | 10462808 | 43327619 |
| 2025-02-18T06:33:56Z | 829865 | 44157484 |
| 2025-02-18T21:35:02Z | 689645 | 44847129 |
|
mothnaZl/seq_dis_T0.4-Qwen2.5-7B-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions | mothnaZl | 2025-06-03T02:36:10Z | 10 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-01T23:30:00Z | null | ---
dataset_info:
config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
- name: pred_entropy
dtype: float64
splits:
- name: train
num_bytes: 928
num_examples: 8
download_size: 7131
dataset_size: 928
configs:
- config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals/train-*
---
|
smanni/train_so100_pick_place_blue_pencil | smanni | 2025-05-28T13:56:37Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-28T13:56:23Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 17924,
"total_tasks": 1,
"total_videos": 50,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.intel_realsense": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
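A minimal sketch (with an illustrative file name) of reading one episode's parquet file directly with pandas, following the layout described in `meta/info.json`:
```python
# Minimal sketch: read a single episode parquet file with pandas.
# Assumption: the file below exists locally after downloading the repository.
import pandas as pd

df = pd.read_parquet("data/chunk-000/episode_000000.parquet")
print(df[["timestamp", "frame_index", "episode_index", "task_index"]].head())
```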
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
prithivMLmods/Shoe-Net-10K | prithivMLmods | 2025-05-28T13:16:03Z | 0 | 0 | [
"task_categories:image-classification",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"image",
"shoe-type",
"classification",
"video",
"10k",
"rgb"
] | [
"image-classification"
] | 2025-05-28T12:55:56Z | null | ---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- image
- shoe-type
- classification
- video
- 10k
- rgb
size_categories:
- 1K<n<10K
--- |
elfray/multiplication_2x2 | elfray | 2025-05-28T11:29:13Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-28T11:12:44Z | null | ---
dataset_info:
features:
- name: task
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 174960
num_examples: 7290
- name: valid
num_bytes: 19440
num_examples: 810
download_size: 377879
dataset_size: 194400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
PushkarA07/2016-6-patches-May28 | PushkarA07 | 2025-05-28T10:10:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T10:10:44Z | null | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: image_name
dtype: string
splits:
- name: train
num_bytes: 1831857.0
num_examples: 32
download_size: 1833191
dataset_size: 1831857.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jaafer/EnglishRelatedConcepts2025_CUI1_CUI2_RELA_SAB_Clean | Jaafer | 2025-05-28T09:14:52Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T09:14:30Z | null | ---
dataset_info:
features:
- name: CUI1
dtype: string
- name: CUI2
dtype: string
- name: RELA
dtype: string
- name: SAB
dtype: string
splits:
- name: train
num_bytes: 1234618458
num_examples: 23555619
download_size: 164523109
dataset_size: 1234618458
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vidyc/m1_preference_data | vidyc | 2025-05-28T09:14:23Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T09:14:16Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 81029859
num_examples: 21816
- name: validation
num_bytes: 4482105
num_examples: 1212
- name: test
num_bytes: 4546794
num_examples: 1212
download_size: 43896617
dataset_size: 90058758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
large-traversaal/Agent-Benchmarks-Data | large-traversaal | 2025-05-28T08:07:41Z | 21 | 0 | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | [] | 2025-05-22T07:58:12Z | null | ---
license: cc-by-nc-nd-4.0
---
|
acarballocastro/MNLP_M2_quantized_dataset | acarballocastro | 2025-05-28T08:04:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T13:02:37Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 632204
num_examples: 512
download_size: 311955
dataset_size: 632204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Calibration dataset for the quantized model.
- Number of samples: 512 |
zwa73/SoulTide-AudioData-Dataset | zwa73 | 2025-05-28T07:09:22Z | 838 | 0 | [
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-04-14T10:01:27Z | null | ---
configs:
- config_name: Akaset
data_files:
- split: audio
path:
- "character/Akaset/resource/audio/*.flac"
- "character/Akaset/resource/metadata.csv"
- config_name: Alisa
data_files:
- split: audio
path:
- "character/Alisa/resource/audio/*.flac"
- "character/Alisa/resource/metadata.csv"
- config_name: AmaneInori
data_files:
- split: audio
path:
- "character/AmaneInori/resource/audio/*.flac"
- "character/AmaneInori/resource/metadata.csv"
- config_name: Andrea
data_files:
- split: audio
path:
- "character/Andrea/resource/audio/*.flac"
- "character/Andrea/resource/metadata.csv"
- config_name: Antonina
data_files:
- split: audio
path:
- "character/Antonina/resource/audio/*.flac"
- "character/Antonina/resource/metadata.csv"
- config_name: Aoling
data_files:
- split: audio
path:
- "character/Aoling/resource/audio/*.flac"
- "character/Aoling/resource/metadata.csv"
- config_name: Asuna
data_files:
- split: audio
path:
- "character/Asuna/resource/audio/*.flac"
- "character/Asuna/resource/metadata.csv"
- config_name: Aurora
data_files:
- split: audio
path:
- "character/Aurora/resource/audio/*.flac"
- "character/Aurora/resource/metadata.csv"
- config_name: Benten
data_files:
- split: audio
path:
- "character/Benten/resource/audio/*.flac"
- "character/Benten/resource/metadata.csv"
- config_name: Cecilia
data_files:
- split: audio
path:
- "character/Cecilia/resource/audio/*.flac"
- "character/Cecilia/resource/metadata.csv"
- config_name: Clarice
data_files:
- split: audio
path:
- "character/Clarice/resource/audio/*.flac"
- "character/Clarice/resource/metadata.csv"
- config_name: Clotho
data_files:
- split: audio
path:
- "character/Clotho/resource/audio/*.flac"
- "character/Clotho/resource/metadata.csv"
- config_name: Colcher
data_files:
- split: audio
path:
- "character/Colcher/resource/audio/*.flac"
- "character/Colcher/resource/metadata.csv"
- config_name: Dolores
data_files:
- split: audio
path:
- "character/Dolores/resource/audio/*.flac"
- "character/Dolores/resource/metadata.csv"
- config_name: Dora
data_files:
- split: audio
path:
- "character/Dora/resource/audio/*.flac"
- "character/Dora/resource/metadata.csv"
- config_name: Dreizehn
data_files:
- split: audio
path:
- "character/Dreizehn/resource/audio/*.flac"
- "character/Dreizehn/resource/metadata.csv"
- config_name: Ennis
data_files:
- split: audio
path:
- "character/Ennis/resource/audio/*.flac"
- "character/Ennis/resource/metadata.csv"
- config_name: Erinnern
data_files:
- split: audio
path:
- "character/Erinnern/resource/audio/*.flac"
- "character/Erinnern/resource/metadata.csv"
- config_name: EtsukazuMiko
data_files:
- split: audio
path:
- "character/EtsukazuMiko/resource/audio/*.flac"
- "character/EtsukazuMiko/resource/metadata.csv"
- config_name: Fanny
data_files:
- split: audio
path:
- "character/Fanny/resource/audio/*.flac"
- "character/Fanny/resource/metadata.csv"
- config_name: Freesia
data_files:
- split: audio
path:
- "character/Freesia/resource/audio/*.flac"
- "character/Freesia/resource/metadata.csv"
- config_name: Gawana
data_files:
- split: audio
path:
- "character/Gawana/resource/audio/*.flac"
- "character/Gawana/resource/metadata.csv"
- config_name: HagakureRuri
data_files:
- split: audio
path:
- "character/HagakureRuri/resource/audio/*.flac"
- "character/HagakureRuri/resource/metadata.csv"
- config_name: Haliva
data_files:
- split: audio
path:
- "character/Haliva/resource/audio/*.flac"
- "character/Haliva/resource/metadata.csv"
- config_name: HazukiYuki
data_files:
- split: audio
path:
- "character/HazukiYuki/resource/audio/*.flac"
- "character/HazukiYuki/resource/metadata.csv"
- config_name: HeLing
data_files:
- split: audio
path:
- "character/HeLing/resource/audio/*.flac"
- "character/HeLing/resource/metadata.csv"
- config_name: Ithil
data_files:
- split: audio
path:
- "character/Ithil/resource/audio/*.flac"
- "character/Ithil/resource/metadata.csv"
- config_name: JoanofArcLoire
data_files:
- split: audio
path:
- "character/JoanofArcLoire/resource/audio/*.flac"
- "character/JoanofArcLoire/resource/metadata.csv"
- config_name: Juewa
data_files:
- split: audio
path:
- "character/Juewa/resource/audio/*.flac"
- "character/Juewa/resource/metadata.csv"
- config_name: Kokkoro
data_files:
- split: audio
path:
- "character/Kokkoro/resource/audio/*.flac"
- "character/Kokkoro/resource/metadata.csv"
- config_name: Lavira
data_files:
- split: audio
path:
- "character/Lavira/resource/audio/*.flac"
- "character/Lavira/resource/metadata.csv"
- config_name: LightCloud
data_files:
- split: audio
path:
- "character/LightCloud/resource/audio/*.flac"
- "character/LightCloud/resource/metadata.csv"
- config_name: Lilyiro
data_files:
- split: audio
path:
- "character/Lilyiro/resource/audio/*.flac"
- "character/Lilyiro/resource/metadata.csv"
- config_name: Micha
data_files:
- split: audio
path:
- "character/Micha/resource/audio/*.flac"
- "character/Micha/resource/metadata.csv"
- config_name: Minerdwen
data_files:
- split: audio
path:
- "character/Minerdwen/resource/audio/*.flac"
- "character/Minerdwen/resource/metadata.csv"
- config_name: Mist
data_files:
- split: audio
path:
- "character/Mist/resource/audio/*.flac"
- "character/Mist/resource/metadata.csv"
- config_name: NankungLin
data_files:
- split: audio
path:
- "character/NankungLin/resource/audio/*.flac"
- "character/NankungLin/resource/metadata.csv"
- config_name: Netsuki
data_files:
- split: audio
path:
- "character/Netsuki/resource/audio/*.flac"
- "character/Netsuki/resource/metadata.csv"
- config_name: NicoletteLamel
data_files:
- split: audio
path:
- "character/NicoletteLamel/resource/audio/*.flac"
- "character/NicoletteLamel/resource/metadata.csv"
- config_name: Philodoxy
data_files:
- split: audio
path:
- "character/Philodoxy/resource/audio/*.flac"
- "character/Philodoxy/resource/metadata.csv"
- config_name: QingDai
data_files:
- split: audio
path:
- "character/QingDai/resource/audio/*.flac"
- "character/QingDai/resource/metadata.csv"
- config_name: QingHao
data_files:
- split: audio
path:
- "character/QingHao/resource/audio/*.flac"
- "character/QingHao/resource/metadata.csv"
- config_name: QuLing
data_files:
- split: audio
path:
- "character/QuLing/resource/audio/*.flac"
- "character/QuLing/resource/metadata.csv"
- config_name: RubyRose
data_files:
- split: audio
path:
- "character/RubyRose/resource/audio/*.flac"
- "character/RubyRose/resource/metadata.csv"
- config_name: SakuyaMako
data_files:
- split: audio
path:
- "character/SakuyaMako/resource/audio/*.flac"
- "character/SakuyaMako/resource/metadata.csv"
- config_name: Satya
data_files:
- split: audio
path:
- "character/Satya/resource/audio/*.flac"
- "character/Satya/resource/metadata.csv"
- config_name: Silenus
data_files:
- split: audio
path:
- "character/Silenus/resource/audio/*.flac"
- "character/Silenus/resource/metadata.csv"
- config_name: Truda
data_files:
- split: audio
path:
- "character/Truda/resource/audio/*.flac"
- "character/Truda/resource/metadata.csv"
- config_name: TsukinoMiyo
data_files:
- split: audio
path:
- "character/TsukinoMiyo/resource/audio/*.flac"
- "character/TsukinoMiyo/resource/metadata.csv"
- config_name: Virgina
data_files:
- split: audio
path:
- "character/Virgina/resource/audio/*.flac"
- "character/Virgina/resource/metadata.csv"
license: cc0-1.0
---
character
____[char]
________resource
____________audio - original audio
____________srt - original srt files
____________processed - resources processed from the originals with Process-Resource
________recognized - srt recognized by Whisper-LargeV2
________calibrated - manually calibrated srt
________tmp - temporary build files
Use this manager to generate the required training sets:
https://github.com/Sosarciel/SoulTide-AudioData-Manager
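A minimal sketch of loading a single character configuration with the 🤗 Datasets library; the config name `Akaset` and the split name `audio` come from the YAML configs above, and whether the audiofolder loads cleanly in your environment is an assumption.
```python
# Minimal sketch: load the audio and metadata for one character.
# The config name "Akaset" and split name "audio" come from the YAML above.
from datasets import load_dataset

akaset = load_dataset("zwa73/SoulTide-AudioData-Dataset", "Akaset", split="audio")
print(akaset[0])
```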
|
zerostratos/vi-cc100-parquet-dataset | zerostratos | 2025-05-28T06:39:12Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-28T05:38:47Z | null | ---
license: apache-2.0
---
|
JesusCrist/mt_bench_prompts | JesusCrist | 2025-05-28T06:38:29Z | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-28T05:54:42Z | null | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: turns
sequence: string
- name: reference
sequence: string
- name: gpt4_reference
sequence: string
splits:
- name: train
num_bytes: 91373
num_examples: 80
download_size: 53582
dataset_size: 91373
---
|
LadyMia/x_dataset_63648 | LadyMia | 2025-05-28T06:01:17Z | 1,169 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T01:53:26Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** LadyMia/x_dataset_63648
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5GxSoUZjTtZrPCjvjJb3pMZYhkKehpx8NE7ueruDzt1pcXVu
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: the data is mostly English, but it can be multilingual due to the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
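For example, a minimal sketch of filtering tweets that carry at least one hashtag, using the `tweet_hashtags` field described above; the `train` split name is an assumption.
```python
# Minimal sketch: keep only tweets that contain at least one hashtag.
# Assumption: the default split is named "train".
from datasets import load_dataset

ds = load_dataset("LadyMia/x_dataset_63648", split="train")
with_hashtags = ds.filter(lambda row: len(row["tweet_hashtags"] or []) > 0)
print(len(with_hashtags))
```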
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{LadyMia2025datauniversex_dataset_63648,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={LadyMia},
year={2025},
url={https://huggingface.co/datasets/LadyMia/x_dataset_63648},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 46948534
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-12T00:00:00Z
- **Last Updated:** 2025-02-18T19:01:32Z
### Data Distribution
- Tweets with hashtags: 42.40%
- Tweets without hashtags: 57.60%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 27040155 | 57.60% |
| 2 | #riyadh | 327476 | 0.70% |
| 3 | #zelena | 239003 | 0.51% |
| 4 | #tiktok | 196037 | 0.42% |
| 5 | #bbb25 | 131939 | 0.28% |
| 6 | #ad | 113489 | 0.24% |
| 7 | #royalrumble | 91252 | 0.19% |
| 8 | #jhope_at_galadespiècesjaunes | 67775 | 0.14% |
| 9 | #granhermano | 66116 | 0.14% |
| 10 | #bbmzansi | 60926 | 0.13% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T01:54:19Z | 3109806 | 3109806 |
| 2025-01-30T14:08:36Z | 9957939 | 13067745 |
| 2025-02-03T02:11:55Z | 8628746 | 21696491 |
| 2025-02-06T14:14:41Z | 7395527 | 29092018 |
| 2025-02-10T02:19:10Z | 7700406 | 36792424 |
| 2025-02-13T14:25:51Z | 8841353 | 45633777 |
| 2025-02-18T04:00:18Z | 696224 | 46330001 |
| 2025-02-18T19:01:32Z | 618533 | 46948534 |
|
chfeng/categories_50_samples_category5 | chfeng | 2025-05-28T05:55:35Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T05:55:32Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_path
dtype: string
- name: image_url
dtype: string
- name: category
dtype: string
- name: problem
dtype: string
- name: original_caption
dtype: string
- name: changed_caption
dtype: string
- name: fixed_caption
dtype: string
- name: solution_original
dtype: string
- name: solution_target
dtype: string
splits:
- name: train
num_bytes: 56102107.0
num_examples: 50
download_size: 56037575
dataset_size: 56102107.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chfeng/categories_50_samples_category3 | chfeng | 2025-05-28T05:55:01Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T05:54:58Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_path
dtype: string
- name: image_url
dtype: string
- name: category
dtype: string
- name: problem
dtype: string
- name: original_caption
dtype: string
- name: changed_caption
dtype: string
- name: fixed_caption
dtype: string
- name: solution_original
dtype: string
- name: solution_target
dtype: string
splits:
- name: train
num_bytes: 43403230.0
num_examples: 50
download_size: 43317426
dataset_size: 43403230.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aochongoliverli/Qwen2.5-Math-1.5B-deepmath-hard-1800-steps-4096 | aochongoliverli | 2025-05-28T05:24:41Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T05:24:40Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_responses
sequence: string
- name: attempts
dtype: int64
splits:
- name: train
num_bytes: 33115
num_examples: 6
download_size: 21195
dataset_size: 33115
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentranai07/TechniqueIndicator_Analyze | nguyentranai07 | 2025-05-28T05:01:05Z | 223 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-26T05:49:29Z | null | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 76206461
num_examples: 37700
download_size: 29244812
dataset_size: 76206461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chfeng/categories_20_samples_category7 | chfeng | 2025-05-28T04:12:11Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T04:02:00Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_path
dtype: string
- name: image_url
dtype: string
- name: category
dtype: string
- name: problem
dtype: string
- name: original_caption
dtype: string
- name: changed_caption
dtype: string
- name: fixed_caption
dtype: string
- name: solution_original
dtype: string
- name: solution_target
dtype: string
splits:
- name: train
num_bytes: 9008502.0
num_examples: 20
download_size: 8953660
dataset_size: 9008502.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chfeng/categories_20_samples_category4 | chfeng | 2025-05-28T04:11:54Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T04:01:54Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_path
dtype: string
- name: image_url
dtype: string
- name: category
dtype: string
- name: problem
dtype: string
- name: original_caption
dtype: string
- name: changed_caption
dtype: string
- name: fixed_caption
dtype: string
- name: solution_original
dtype: string
- name: solution_target
dtype: string
splits:
- name: train
num_bytes: 5509366.0
num_examples: 20
download_size: 5490004
dataset_size: 5509366.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Svngoku/PompiersDeParisDomainSpecificQA | Svngoku | 2025-05-28T04:10:44Z | 0 | 0 | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"urgence",
"pompiers",
"rex",
"rag",
"synthetic"
] | [
"text-generation"
] | 2025-05-27T22:16:46Z | null | ---
task_categories:
- text-generation
tags:
- urgence
- pompiers
- rex
- rag
- synthetic
size_categories:
- 10K<n<100K
---
# QA Dataset for 'Les Pompiers de Paris' for SFT
## Dataset Description
- **Repository**: [Link to repository, if applicable]
- **Language(s)**: French (fr)
- **Task(s)**: Text Generation, Question Answering
- **Size**: Between 10,000 and 100,000 entries
- **License**: [Specify license, e.g., CC-BY-4.0 or proprietary]
### Overview
The QA Dataset for 'Les Pompiers de Paris' is a specialized dataset designed for supervised fine-tuning (SFT) of language models. It contains question-answer pairs in French, focusing on procedures, definitions, and scenarios relevant to railway safety and emergency operations, particularly those involving the Paris Fire Brigade (Les Pompiers de Paris) and SNCF (French National Railway Company) protocols. The dataset is derived from procedural documents, such as `procedures_secours_ferroviaires.pdf`, and is structured to support training models for generating accurate, context-specific responses.
### Dataset Structure
The dataset consists of JSON objects, each containing a `messages` field with a user-assistant dialogue pair. Each entry follows this format:
```json
{
"messages": [
{
"content": "<question>",
"role": "user"
},
{
"content": "<answer>",
"role": "assistant"
}
]
} |
Haviet2003/finetomevn | Haviet2003 | 2025-05-28T03:59:54Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-28T03:59:09Z | null | ---
license: apache-2.0
---
|
shin1107/eval_koch_base_pi0_pretrained_80000 | shin1107 | 2025-05-28T03:03:56Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-28T03:03:46Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 8,
"total_frames": 4609,
"total_tasks": 1,
"total_videos": 16,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:8"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
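For reference, a minimal sketch (with illustrative values) of resolving the `video_path` template above into a concrete file path:
```python
# Minimal sketch: resolve the video_path template from meta/info.json.
# The episode values below are illustrative; the video key is one of the
# video features listed above.
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
print(video_path.format(
    episode_chunk=0,
    video_key="observation.images.top",
    episode_index=3,
))
# -> videos/chunk-000/observation.images.top/episode_000003.mp4
```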
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ricardomonti08/wikipedia-vi-1percent | ricardomonti08 | 2025-05-28T02:44:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-28T02:44:32Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16177297.936743025
num_examples: 12886
download_size: 8810387
dataset_size: 16177297.936743025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RenzKa/simlingo | RenzKa | 2025-05-28T01:54:46Z | 167 | 3 | [
"task_categories:visual-question-answering",
"task_categories:robotics",
"language:en",
"license:other",
"region:us",
"AutonomousDriving",
"VQA",
"Commentary",
"VLA"
] | [
"visual-question-answering",
"robotics"
] | 2025-05-23T11:43:53Z | 3 | ---
license: other
task_categories:
- visual-question-answering
- robotics
language:
- en
tags:
- AutonomousDriving
- VQA
- Commentary
- VLA
---
# SimLingo Dataset
## Overview
SimLingo-Data is a large-scale autonomous driving CARLA 2.0 dataset containing sensor data, action labels, a wide range of simulator state information, and language labels for VQA, commentary and instruction following. The driving data is collected with the privileged rule-based expert [PDM-Lite](https://github.com/OpenDriveLab/DriveLM/tree/DriveLM-CARLA/pdm_lite).
## Dataset Statistics
- **Large-scale dataset**: 3,308,315 total samples (note: these are not from unique routes as the provided CARLA route files are limited)
- **Diverse Scenarios:** Covers 38 complex scenarios, including urban traffic, participants violating traffic rules, and high-speed highway driving
- **Focused Evaluation:** Short routes with 1 scenario (62.1%) or 3 scenarios (37.9%) per route
- **Data Types**: RGB images (.jpg), LiDAR point clouds (.laz), Sensor measurements (.json.gz), Bounding boxes (.json.gz), Language annotations (.json.gz)
## Dataset Structure
The dataset is organized hierarchically with the following main components:
- `data/`: Raw sensor data (RGB, LiDAR, measurements, bounding boxes)
- `commentary/`: Natural language descriptions of driving decisions
- `dreamer/`: Instruction following data with multiple instruction/action pairs per sample
- `drivelm/`: VQA data, based on DriveLM
### Data Details
- **RGB Images**: 1024x512 front-view camera image
- **Augmented RGB Images**: 1024x512 front-view camera image with a random shift and orientation offset of the camera
- **LiDAR**: Point cloud data saved in LAZ format
- **Measurements**: Vehicle state, simulator state, and sensor readings in JSON format
- **Bounding Boxes**: Detailed information about each object in the scene.
- **Commentary, Dreamer, VQA**: Language annotations
## Usage
This dataset is chunked into groups of multiple routes for efficient download and processing.
### Download the whole dataset using git with Git LFS
```bash
# Clone the repository
git clone https://huggingface.co/datasets/RenzKa/simlingo
# Navigate to the directory
cd simlingo
# Pull the LFS files
git lfs pull
```
### Download a single file with wget
```bash
# Download individual files (replace with actual file URLs from Hugging Face)
wget https://huggingface.co/datasets/RenzKa/simlingo/resolve/main/[filename].tar.gz
```
### Extract to a single directory - please specify the location where you want to store the dataset
```bash
# Create output directory
mkdir -p database/simlingo
# Extract all archives to the same directory
for file in *.tar.gz; do
echo "Extracting $file to database/simlingo/..."
tar -xzf "$file" -C database/simlingo/
done
```
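### Inspect an extracted sample (optional)
Once extracted, individual samples can be inspected straight from disk. The snippet below is a minimal sketch: the glob pattern and file layout are assumptions, so adjust them to what you actually find under `database/simlingo/`.
```python
import gzip
import json
from pathlib import Path

root = Path("database/simlingo")

# Assumed layout: this matches any compressed JSON (measurements, boxes, language labels).
# Inspect the extracted folders and narrow the pattern as needed.
json_files = sorted(root.rglob("*.json.gz"))

with gzip.open(json_files[0], "rt") as f:
    sample = json.load(f)

print(json_files[0])
print(list(sample.keys()) if isinstance(sample, dict) else type(sample))
```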
## License
Please refer to the license file for usage terms and conditions.
## Citation
If you use this dataset in your research, please cite:
```bibtex
@inproceedings{renz2025simlingo,
title={SimLingo: Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment},
author={Renz, Katrin and Chen, Long and Arani, Elahe and Sinavski, Oleg},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025},
}
@inproceedings{sima2024drivelm,
title={DriveLM: Driving with Graph Visual Question Answering},
author={Chonghao Sima and Katrin Renz and Kashyap Chitta and Li Chen and Hanxue Zhang and Chengen Xie and Jens Beißwenger and Ping Luo and Andreas Geiger and Hongyang Li},
booktitle={European Conference on Computer Vision},
year={2024},
}
```
|
LucidityAI/Qwen2.5-math-code-200k | LucidityAI | 2025-05-28T01:08:51Z | 0 | 0 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"question-answering"
] | 2025-05-28T00:53:10Z | null | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---
# Magpie Coder+Math 200k Dataset
This dataset combines samples from two high-quality Magpie datasets:
- **Magpie-Qwen2.5-Coder-Pro-300K-v0.1**: Programming and coding instructions
- **Magpie-Qwen2.5-Math-Pro-300K-v0.1**: Mathematical problem-solving instructions
- **Total entries**: 200,000
- **Coder entries**: 100,000 (from Magpie-Qwen2.5-Coder-Pro-300K-v0.1)
- **Math entries**: 100,000 (from Magpie-Qwen2.5-Math-Pro-300K-v0.1)
## Original Sources
- [Magpie-Qwen2.5-Coder-Pro-300K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2.5-Coder-Pro-300K-v0.1)
- [Magpie-Qwen2.5-Math-Pro-300K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2.5-Math-Pro-300K-v0.1)
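For quick experimentation, the combined data can be loaded directly from the Hub. This is a minimal sketch; the `train` split name is an assumption, so check the repository's file listing if it differs.
```python
from datasets import load_dataset

# Assumed split name: "train".
magpie_mix = load_dataset("LucidityAI/Qwen2.5-math-code-200k", split="train")
print(magpie_mix[0])
```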
|
twigs/openmathinstruct2_chat_50k | twigs | 2025-05-28T00:59:57Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-28T00:59:53Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 71986931
num_examples: 50000
download_size: 30439791
dataset_size: 71986931
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
siyavash/so101_test | siyavash | 2025-05-28T00:55:48Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-28T00:55:33Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 50,
"total_frames": 22350,
"total_tasks": 1,
"total_videos": 100,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
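Because the episode data is published as parquet files (see the `data_files` pattern above), it can also be inspected without the LeRobot tooling. A minimal sketch, assuming the default `train` split exposed by the parquet config:
```python
from datasets import load_dataset

episodes = load_dataset("siyavash/so101_test", split="train")
print(episodes.features)
print(episodes[0]["action"])  # 6-dim action vector, per the schema above
```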
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
DrAliGomaa/test_no_ffmpeg_dontuse_worse_performance | DrAliGomaa | 2025-05-27T23:56:31Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T12:17:25Z | null | ---
dataset_info:
features:
- name: audio_path
dtype: string
- name: sentence
dtype: string
- name: audio
dtype: audio
splits:
- name: mgb2_validation
num_bytes: 39052557.0
num_examples: 494
- name: validation
num_bytes: 2764912640.52
num_examples: 7280
download_size: 2348397160
dataset_size: 2803965197.52
configs:
- config_name: default
data_files:
- split: mgb2_validation
path: data/mgb2_validation-*
- split: validation
path: data/validation-*
---
|
viveriveniversumvivusvici/bazi_comprehensive_dataset | viveriveniversumvivusvici | 2025-05-27T23:42:30Z | 11 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T00:26:45Z | null | ---
license: mit
---
# AstroAlchemy BaZi Dataset Documentation
## Overview
This documentation describes the comprehensive BaZi dataset created for the AstroAlchemy Web3 dApp project. The dataset is designed for fine-tuning a Mistral B instruct model to generate hyper-personalized, BaZi-powered "spiritual strategies" across multiple domains.
## Dataset Structure
The dataset is provided in JSONL (JSON Lines) format, with each line containing a complete JSON object with two fields:
1. **input**: A string containing the BaZi chart information
2. **output**: A string containing the comprehensive advice across multiple domains
### Input Format
Each input contains the following information:
```
Year: [Heavenly Stem][Earthly Branch]
Month: [Heavenly Stem][Earthly Branch]
Day: [Heavenly Stem][Earthly Branch]
Hour: [Heavenly Stem][Earthly Branch]
Element Balance: Wood:[count], Fire:[count], Earth:[count], Metal:[count], Water:[count]
Hidden Stems: [Stem1], [Stem2], ...
Current Year: [Heavenly Stem][Earthly Branch]
```
### Output Format
Each output contains detailed advice across five domains:
```
Advice:
Wealth & Investment:
- [Specific investment advice based on elements]
- [Asset allocation recommendations]
- [Risk management strategies]
Relationships & Compatibility:
- [Interpersonal dynamics guidance]
- [Compatibility insights]
- [Relationship timing recommendations]
Career & Professional Development:
- [Career path suggestions]
- [Professional growth strategies]
- [Leadership and collaboration advice]
Health & Wellness:
- [Element-based health recommendations]
- [Preventative measures]
- [Lifestyle suggestions]
Daily Activities & Practices:
- [Timing recommendations]
- [Element-balancing practices]
- [Decision-making guidance]
Lucky Directions: [Direction1], [Direction2], ...
Risk Warnings: [Warning1], [Warning2], ...
```
## Dataset Statistics
- **Total Samples**: 1,000
- **Element Distribution**: Balanced representation of all Five Elements (Wood, Fire, Earth, Metal, Water)
- **Advice Domains**: All samples include advice for all five domains (Wealth, Relationships, Career, Health, Daily Activities)
- **Format**: JSONL (JSON Lines)
## BaZi Components
The dataset incorporates all fundamental components of BaZi:
### Heavenly Stems (天干)
1. **Jia (甲)** - Yang Wood
2. **Yi (乙)** - Yin Wood
3. **Bing (丙)** - Yang Fire
4. **Ding (丁)** - Yin Fire
5. **Wu (戊)** - Yang Earth
6. **Ji (己)** - Yin Earth
7. **Geng (庚)** - Yang Metal
8. **Xin (辛)** - Yin Metal
9. **Ren (壬)** - Yang Water
10. **Gui (癸)** - Yin Water
### Earthly Branches (地支)
1. **Zi (子)** - Rat, Water
2. **Chou (丑)** - Ox, Earth
3. **Yin (寅)** - Tiger, Wood
4. **Mao (卯)** - Rabbit, Wood
5. **Chen (辰)** - Dragon, Earth
6. **Si (巳)** - Snake, Fire
7. **Wu (午)** - Horse, Fire
8. **Wei (未)** - Goat, Earth
9. **Shen (申)** - Monkey, Metal
10. **You (酉)** - Rooster, Metal
11. **Xu (戌)** - Dog, Earth
12. **Hai (亥)** - Pig, Water
### Five Elements (五行)
1. **Wood (木)** - Growth, expansion, creativity
2. **Fire (火)** - Transformation, passion, visibility
3. **Earth (土)** - Stability, nourishment, centeredness
4. **Metal (金)** - Structure, precision, boundaries
5. **Water (水)** - Communication, wisdom, flexibility
## Usage for Fine-Tuning
This dataset is specifically designed for fine-tuning the Mistral B instruct model on Hugging Face. The comprehensive coverage of BaZi components and advice domains ensures the model will be able to generate accurate, detailed, and personalized spiritual strategies for the AstroAlchemy Web3 dApp.
To use this dataset for fine-tuning:
1. Upload the JSONL file to your Hugging Face account
2. Configure the fine-tuning parameters for the Mistral B instruct model
3. Specify the input and output fields as described in this documentation
4. Start the fine-tuning process
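Steps 2–4 depend on the fine-tuning framework you use. As a framework-agnostic starting point, the sketch below loads the data from the Hub and maps each record into a single instruction-style training string; the `train` split name and the prompt template are assumptions.
```python
from datasets import load_dataset

bazi = load_dataset("viveriveniversumvivusvici/bazi_comprehensive_dataset", split="train")

def to_training_text(example):
    # "input" holds the BaZi chart, "output" holds the multi-domain advice.
    return {
        "text": f"### Instruction:\n{example['input']}\n\n### Response:\n{example['output']}"
    }

train_data = bazi.map(to_training_text)
print(train_data[0]["text"][:500])
```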
## Generation Methodology
The dataset was systematically generated to ensure:
1. Exhaustive coverage of all possible BaZi chart combinations
2. Balanced representation of all Five Elements
3. Comprehensive advice across all domains
4. Detailed, action-oriented recommendations
5. Culturally universal interpretations
Each entry was created using a custom algorithm that ensures diversity while maintaining BaZi principles and relationships between elements.
dataset = load_dataset("viveriveniversumvivusvici/bazi_comprehensive_dataset")
Citation If you use the bazi_comprehensive_dataset in your research, please cite:
Edit @dataset{viveriveniversumvivusvici/bazi_comprehensive_dataset, author = {BENIDO}, title = {bazi_comprehensive_dataset}, year = {2025}, publisher = {Hugging Face}, url = {https://huggingface.co/datasets/viveriveniversumvivusvici/bazi_comprehensive_dataset} }
Contact For questions or feedback.
|
dranreb1660/medimaven-qa-data | dranreb1660 | 2025-05-27T23:32:01Z | 0 | 0 | [
"annotations_creators:machine-generated",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us",
"medical",
"rag",
"synthetic-qa",
"lay-symptom"
] | [] | 2025-05-27T17:12:25Z | null | ---
annotations_creators:
- machine-generated
language:
- en
license: cc-by-4.0
tags:
- medical
- rag
- synthetic-qa
- lay-symptom
pretty_name: MediMaven-QA v1.0
size_categories:
- 100K<n<1M
dataset_info:
- config_name: kb_chunks
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: section
dtype: string
- name: source
dtype: string
- name: text
dtype: string
- name: retrieved_date
dtype: string
- name: n_tokens
dtype: int64
splits:
- name: train
num_bytes: 133140842
num_examples: 70743
download_size: 51361461
dataset_size: 133140842
- config_name: qa_long
features:
- name: chunk_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 52621793
num_examples: 143280
download_size: 26138154
dataset_size: 52621793
- config_name: qa_wide
features:
- name: chunk_id
dtype: string
- name: qa
list:
- name: answer
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 49971385
num_examples: 70018
download_size: 27339393
dataset_size: 49971385
configs:
- config_name: kb_chunks
data_files:
- split: train
path: kb_chunks/train-*
- config_name: qa_long
data_files:
- split: train
path: qa_long/train-*
- config_name: qa_wide
data_files:
- split: train
path: qa_wide/train-*
---
<!-- badges: start -->



<!-- badges: end -->
# 🩺 MediMaven-QA v1.0
**MediMaven-QA** is a *chunk-level, citation-preserving* medical question-answer corpus purpose-built for **Retrieval-Augmented Generation (RAG)**.
It bridges everyday **lay-symptom narratives** with trustworthy **clinical content** from curated web sources.
## 📦 Dataset Contents
| Config (`name`) | Rows | What it holds | Typical use-case |
|----------------------|------:|---------------|------------------|
| `kb_chunks` | 70 248 | 200-token, sentence-aware context windows with rich metadata (`id`, `url`, `title`, `section`, `source`, `n_tokens`, `text`) | RAG context store / retriever training |
| `qa_wide` | 70 018 | *List-of-dict* QA per `chunk_id` <br>→ single row may have ≥1 QA pair | Fast retrieval + generation, keeps chunk linkage |
| `qa_long` | 143 221 | Fully exploded (`chunk_id`, `question`, `answer`) | Classic supervised QA fine-tuning or eval |
> ⚠️ **Disclaimer** — This corpus is for *research & benchmarking only*.
> It is **not** a diagnostic tool and should not be used in clinical workflows.
## 🚀 Quick Load
```python
from datasets import load_dataset
# pick one of these configs
qa_long = load_dataset("bernard-kyei/medimaven-qa-data", "qa_long", split="train")
qa_long = load_dataset("bernard-kyei/medimaven-qa-data", "qa_long", split="train")
# accompany with chunks to get contexts
chunks = load_dataset("bernard-kyei/medimaven-qa-data", "kb_chunks", split="train")
print(qa_long[0]["question"])
print(qa_long[0]["answer"])
```
# 🛠️ Generation Pipeline
| Stage | Tooling | Notes |
|---------------------|---------------------------------------------|-------------------------------------|
| 1️⃣ **Crawl** | Scrapy + Splash | Mayo Clinic, NHS.uk, WebMD, Cleveland Clinic (public-domain / permissive T\&Cs) |
| 2️⃣ **Chunk** | spaCy sentenciser | ≈200 tokens / chunk; keeps heading context |
| 3️⃣ **Synthetic QA** | GPT-4o-mini (`gpt-4o-mini-2024-05-preview`) | • 1 concise lay Q <br>• 1 symptom-narrative Q <br>→ cost **\$40** for 143 k pairs |
| 4️⃣ **Versioning** | Weights & Biases Artifacts | `kb_chunks`, `qa_wide` `qa_long` |
# 📊 Key Stats
| Metric | Value |
| ----------------------- | ---------: |
| Total context tokens | **27.4 M** |
| Avg. tokens / chunk | 390 |
| Unique host domains | 4 |
| QA pairs / chunk (mean) | 2.0 |
| % symptom-narrative Qs | 51 % |
# 🧩 Dataset Structure (Arrow schema)
<details><summary>click to expand</summary>

```
┌─────────────┬──────────────────────┐
│ kb_chunks   │ qa_wide / qa_long    │
├─────────────┼──────────────────────┤
│ id: string  │ chunk_id: string     │
│ url: string │ question: string     │
│ title: str  │ answer: string       │
│ section:str │ -- qa_wide only --   │
│ source:str  │ qa: list<question…>  │
│ text: str   │                      │
│ n_token:int │                      │
└─────────────┴──────────────────────┘
```
</details>
# 📜 Citation
```bibtex
@misc{KyeiMensah2025MediMavenQA,
author = {Kyei-Mensah, Bernard},
title = {MediMaven-QA: A Citation-Preserving Medical Q\&A Dataset with Symptom Narratives},
year = {2025},
url = {https://huggingface.co/datasets/dranreb1660/medimaven-qa-data},
note = {Version 1.0}
}
```
# 🗒️ Changelog
| Date (UTC) | Version | Highlights |
| -------------- | ------- | ---------------------------------------------------------------------------------------- |
| **2025-05-27** | `v1.0` | • Sentence-aware chunking <br>• 143 k synthetic QA pairs <br>• Cost optimisation to \$25 |
|
syvai/dk-voice-pro | syvai | 2025-05-27T22:37:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-27T22:36:22Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: audio
dtype: audio
- name: spoken_text
dtype: string
- name: style
dtype: string
- name: style_id
dtype: string
- name: instructions
dtype: string
- name: voice
dtype: string
splits:
- name: train
num_bytes: 413845673.618
num_examples: 2397
download_size: 402628331
dataset_size: 413845673.618
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
littleGuagua/x_dataset_24747 | littleGuagua | 2025-05-27T22:32:09Z | 1,247 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T08:49:30Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_24747
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5EM4mwdfwdBzEbEqJ9KsFnj2sKpAjywcb5Ddz3CEoKV2ksj1
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though entries can be multilingual because the data is collected in a decentralized way.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
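Given the dataset's size, streaming is usually the most practical way to sample it. The sketch below is a starting point only: the `train` split name is an assumption and may differ depending on how the miner publishes the parquet files.
```python
from datasets import load_dataset

# Streaming avoids downloading the full corpus; the split name is an assumption.
tweets = load_dataset("littleGuagua/x_dataset_24747", split="train", streaming=True)

for tweet in tweets.take(5):
    print(tweet["datetime"], tweet["label"], tweet["text"][:80])
```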
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_24747,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_24747},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 157467919
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T16:32:12Z
### Data Distribution
- Tweets with hashtags: 42.71%
- Tweets without hashtags: 57.29%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 90209693 | 57.29% |
| 2 | #riyadh | 1088786 | 0.69% |
| 3 | #zelena | 820088 | 0.52% |
| 4 | #tiktok | 653763 | 0.42% |
| 5 | #bbb25 | 394331 | 0.25% |
| 6 | #ad | 378659 | 0.24% |
| 7 | #jhope_at_galadespiècesjaunes | 234371 | 0.15% |
| 8 | #bbmzansi | 213586 | 0.14% |
| 9 | #pr | 203109 | 0.13% |
| 10 | #yahooニュース | 190885 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T08:50:16Z | 2482006 | 2482006 |
| 2025-01-29T21:00:47Z | 29908448 | 32390454 |
| 2025-02-02T09:11:30Z | 28938392 | 61328846 |
| 2025-02-05T21:23:51Z | 29767835 | 91096681 |
| 2025-02-09T09:36:47Z | 29027751 | 120124432 |
| 2025-02-12T21:54:03Z | 28620241 | 148744673 |
| 2025-02-16T09:45:11Z | 7404661 | 156149334 |
| 2025-02-18T00:09:45Z | 696224 | 156845558 |
| 2025-02-18T16:32:12Z | 622361 | 157467919 |
|
alucchi/Qwen3-1.7B_n1000_e2_oadam0.0001_b44_1_a10_1825_train | alucchi | 2025-05-27T22:18:50Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-27T22:18:39Z | null | ---
dataset_info:
- config_name: default
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
sequence:
sequence: int64
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: int64
- name: score
dtype: float64
splits:
- name: train
num_bytes: 4448507
num_examples: 931
download_size: 553312
dataset_size: 4448507
- config_name: main
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
sequence:
sequence: int64
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: int64
- name: score
dtype: float64
splits:
- name: train
num_bytes: 4448507
num_examples: 931
download_size: 553312
dataset_size: 4448507
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: main
data_files:
- split: train
path: main/train-*
---
|
masoudc/countdown-tinyzero-20250527_215029 | masoudc | 2025-05-27T21:50:31Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-27T21:50:30Z | null | ---
dataset_info:
description: |
Countdown task dataset gen from tinyzero: given a target number and N numbers, generate equations to reach the target.
license: 'mit'
homepage: 'https://huggingface.co/qweft'
citation: 'https://github.com/Jiayi-Pan/TinyZero'
---
# Countdown Dataset
Countdown task dataset generated from TinyZero: given a target number and N numbers, generate equations to reach the target.
- License: mit
- Homepage: https://huggingface.co/qweft
- Citation: https://github.com/Jiayi-Pan/TinyZero
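A minimal loading sketch; the `train` split name is an assumption, and it presumes the data files are stored in a format that 🤗 Datasets can auto-detect.
```python
from datasets import load_dataset

countdown = load_dataset("masoudc/countdown-tinyzero-20250527_215029", split="train")
print(countdown[0])
```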
|
maksimko123/deepcad_test_mesh | maksimko123 | 2025-05-27T21:44:19Z | 0 | 0 | [
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2025-05-27T21:41:46Z | null | ---
license: cc-by-nc-4.0
---
|
jmarangola/iai_blocks_2 | jmarangola | 2025-05-27T21:40:42Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-27T21:40:40Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 2,
"total_frames": 863,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.image.global_0": {
"dtype": "video",
"names": [
"channels",
"height",
"width"
],
"shape": [
3,
240,
320
],
"info": {
"video.fps": 20.0,
"video.height": 240,
"video.width": 320,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"names": null,
"shape": [
10
]
},
"action": {
"dtype": "float32",
"shape": [
10
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
James096/reddit_dataset_69 | James096 | 2025-05-27T21:26:01Z | 61 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-05-26T09:26:27Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** James096/reddit_dataset_69
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5CUamzGz3SJWxQQghHSuucgkprsAG4k9qSpPvsuwrXF4HibU
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: mostly English, though entries can be multilingual because the data is collected in a decentralized way.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
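As with other Subnet 13 datasets, streaming makes it easy to carve out a working subset on the fly. The sketch below is illustrative only: the `train` split name and the exact string values of `dataType` and `communityName` are assumptions to verify against the data.
```python
from datasets import load_dataset

reddit = load_dataset("James096/reddit_dataset_69", split="train", streaming=True)

# Keep only posts from one subreddit; the field values used here are assumptions.
investing_posts = (
    row for row in reddit
    if row["dataType"] == "post" and row["communityName"] == "r/investing"
)
print(next(investing_posts)["text"][:200])
```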
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{James0962025datauniversereddit_dataset_69,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={James096},
year={2025},
url={https://huggingface.co/datasets/James096/reddit_dataset_69},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 31859260
- **Date Range:** 2007-06-05T00:00:00Z to 2025-05-27T00:00:00Z
- **Last Updated:** 2025-05-27T05:58:31Z
### Data Distribution
- Posts: 7.61%
- Comments: 92.39%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/indonesia | 93353 | 0.29% |
| 2 | r/namenerds | 89673 | 0.28% |
| 3 | r/masterduel | 84700 | 0.27% |
| 4 | r/GamingLeaksAndRumours | 83566 | 0.26% |
| 5 | r/AITAH | 83539 | 0.26% |
| 6 | r/Grimdank | 81153 | 0.25% |
| 7 | r/reddevils | 81131 | 0.25% |
| 8 | r/Ratschlag | 80329 | 0.25% |
| 9 | r/investing | 79774 | 0.25% |
| 10 | r/masseffect | 75478 | 0.24% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-05-26T11:29:09Z | 31493351 | 31493351 |
| 2025-05-27T05:58:31Z | 365909 | 31859260 |
|
AlirezaAbdollahpoor/MNLP_M2_quantized_dataset | AlirezaAbdollahpoor | 2025-05-27T21:17:44Z | 0 | 0 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2309.12284",
"arxiv:1705.04146",
"region:us",
"mcqa",
"math",
"algebra",
"evaluation",
"quantization",
"benchmarking"
] | [
"question-answering",
"multiple-choice"
] | 2025-05-27T21:17:40Z | null | ---
license: mit
task_categories:
- question-answering
- multiple-choice
language:
- en
tags:
- mcqa
- math
- algebra
- evaluation
- quantization
- benchmarking
size_categories:
- 1K<n<10K
---
# MCQA Test Dataset for Model Evaluation
This dataset contains 3254 carefully selected test samples from MetaMathQA and AQuA-RAT datasets, designed for MCQA (Multiple Choice Question Answering) model evaluation and quantization testing.
## Dataset Overview
- **Total Samples**: 3254
- **MetaMathQA Samples**: 3000 (mathematical problems)
- **AQuA-RAT Samples**: 254 (algebraic word problems)
- **Question Types**: Math, Algebra
- **Intended Use**: Model evaluation, quantization benchmarking
## Source Datasets
This dataset is derived from:
- [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) - Mathematical reasoning problems
- [AQuA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat) - Algebraic reasoning problems
## Sampling Methodology
Random sampling from test portions to avoid training contamination
- **Random Seed**: 42 (for reproducibility)
- **MetaMathQA**: Sampled from the last portion of training split to avoid contamination
- **AQuA-RAT**: Randomly sampled from the official test split
## Dataset Schema
| Field | Type | Description |
|-------|------|-------------|
| `question_body` | string | Raw question text |
| `formatted_question` | string | Alpaca-style formatted question for inference |
| `correct_answer` | string | Ground truth answer |
| `question_id` | string | Unique identifier (metamath_X or aqua_X) |
| `source` | string | Dataset source (metamath or aqua_rat) |
| `question_type` | string | Type of question (math or algebra) |
| `dataset_index` | int | Original index in source dataset |
| `dataset_source` | string | URL of original dataset |
| `global_id` | int | Global index in combined dataset |
| `split` | string | Always "test" |
## Usage Examples
### Basic Loading
```python
from datasets import load_dataset
# Load the entire dataset
dataset = load_dataset("AlirezaAbdollahpoor/MNLP_M2_quantized_dataset")
# Access the data
test_data = dataset['train'] # Note: stored as 'train' split in HF
print(f"Total samples: {len(test_data)}")
```
### Filter by Question Type
```python
# Get only math questions
math_questions = test_data.filter(lambda x: x['question_type'] == 'math')
print(f"Math questions: {len(math_questions)}")
# Get only algebra questions
algebra_questions = test_data.filter(lambda x: x['question_type'] == 'algebra')
print(f"Algebra questions: {len(algebra_questions)}")
```
### Model Evaluation Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load your model
model = AutoModelForCausalLM.from_pretrained("your-model")
tokenizer = AutoTokenizer.from_pretrained("your-model")
# Evaluate on the dataset
correct = 0
total = len(test_data)
for sample in test_data:
prompt = sample['formatted_question']
# Generate response
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract and compare answer
predicted_answer = extract_answer(response)
if predicted_answer == sample['correct_answer']:
correct += 1
accuracy = correct / total
print(f"Accuracy: {accuracy:.3f}")
```
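The loop above assumes an `extract_answer` helper for pulling the final choice out of the generated text. A minimal regex-based sketch is shown below; the expected answer format (a trailing option letter or number) is an assumption, so adapt it to your prompting style.
```python
import re

def extract_answer(response: str) -> str:
    """Return the last option letter (A-E) or number found in the response, else ''."""
    matches = re.findall(r"\b([A-E]|\d+(?:\.\d+)?)\b", response)
    return matches[-1] if matches else ""
```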
## Evaluation Metrics
This dataset is designed for:
- **Accuracy**: Percentage of correctly answered questions
- **Per-type Performance**: Separate metrics for math vs algebra questions
- **Quantization Impact**: Comparing performance across different quantization methods
- **Speed Benchmarking**: Measuring inference throughput
## Related Work
This dataset was created as part of an MCQA model fine-tuning and quantization study. It provides a standardized evaluation set for:
- Comparing baseline vs fine-tuned model performance
- Testing various quantization methods (4-bit, 8-bit, GGML, etc.)
- Benchmarking inference speed and memory usage
## Citation
If you use this dataset, please cite the original source datasets:
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
@misc{ling2017program,
title={Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems},
author={Wang Ling and Dani Yogatama and Chris Dyer and Phil Blunsom},
year={2017},
eprint={1705.04146},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
This dataset is released under the MIT License, following the licensing of the source datasets.
|
Xiaofeng77/reil_sokoban_preference | Xiaofeng77 | 2025-05-27T21:02:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T21:02:04Z | null | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: prompt
dtype: string
- name: response
dtype: 'null'
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
struct:
- name: numbers
sequence: int64
- name: target
dtype: int64
- name: style
dtype: string
- name: extra_info
struct:
- name: index
dtype: int64
- name: split
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 3932272
num_examples: 3982
download_size: 282570
dataset_size: 3932272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jlbaker361/ssl-art_coco_captioned | jlbaker361 | 2025-05-27T20:41:51Z | 88 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T15:09:38Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: embedding
sequence:
sequence:
sequence: float32
- name: text
sequence:
sequence:
sequence: float32
- name: prompt
dtype: string
- name: posterior
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 103719683.0
num_examples: 20
download_size: 104739116
dataset_size: 103719683.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UCSC-VLAA/MedReason | UCSC-VLAA | 2025-05-27T20:39:33Z | 2,058 | 62 | [
"task_categories:question-answering",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.00993",
"region:us",
"reasoning-datasets-competition",
"reasoning-LLMs"
] | [
"question-answering"
] | 2025-03-21T19:34:11Z | null | ---
license: apache-2.0
tags:
- reasoning-datasets-competition
- reasoning-LLMs
task_categories:
- question-answering
---
# MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via Knowledge Graphs
<p align="center">
📃 <a href="https://huggingface.co/papers/2504.00993" target="_blank">Paper</a> |🤗 <a href="https://huggingface.co/UCSC-VLAA/MedReason-8B" target="_blank">MedReason-8B</a> | 📚 <a href="https://huggingface.co/datasets/UCSC-VLAA/MedReason" target="_blank">MedReason Data</a>
</p>
## ✨ Latest News
- [05/27/2025] 🎉 MedReason wins 3rd prize🏆 in the [Huggingface Reasoning Datasets Competition](https://x.com/bespokelabsai/status/1910068013661118874)!
## ⚡Introduction
**MedReason** is a large-scale high-quality medical reasoning dataset designed to enable faithful and explainable medical problem-solving in large language models (LLMs).
- We utilize a structured medical knowledge graph (KG) to convert clinical QA pairs into logical chains of reasoning, or “thinking paths”.
- Our pipeline generates detailed reasoning for various medical questions from 7 medical datasets, resulting in a dataset of **32,682** question-answer pairs, each with detailed, step-by-step explanations.
- By fine-tuning with the proposed [MedReason dataset](https://huggingface.co/datasets/UCSC-VLAA/MedReason), our best model, [MedReason-8B](https://huggingface.co/UCSC-VLAA/MedReason-8B), achieves *state-of-the-art* performance.
We open-sourced our CoT dataset here.
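The reasoning chains can be pulled straight from the Hub; the snippet below is a minimal sketch, with the `train` split name being an assumption.
```python
from datasets import load_dataset

medreason = load_dataset("UCSC-VLAA/MedReason", split="train")
print(medreason[0])
```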
## 🙏🏼 Acknowledgement
We gratefully acknowledge the inspiring work of [HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1), which laid important groundwork for this research. We also thank the developers of the excellent tools [curator](https://github.com/bespokelabsai/curator/), [trl](https://github.com/huggingface/trl), and [sglang](https://github.com/sgl-project/sglang) for making this work possible.
## 📖 Citation
```
@misc{wu2025medreasonelicitingfactualmedical,
title={MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via Knowledge Graphs},
author={Juncheng Wu and Wenlong Deng and Xingxuan Li and Sheng Liu and Taomian Mi and Yifan Peng and Ziyang Xu and Yi Liu and Hyunjin Cho and Chang-In Choi and Yihan Cao and Hui Ren and Xiang Li and Xiaoxiao Li and Yuyin Zhou},
year={2025},
eprint={2504.00993},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.00993},
}
``` |
gptilt/lol-ultimate-snapshot-challenger-15min | gptilt | 2025-05-27T19:55:40Z | 127 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-25T15:10:22Z | null | ---
configs:
- config_name: snapshot
data_files:
- split: train_region_americas
path: snapshot/train_region_americas*.parquet
- split: test_region_americas
path: snapshot/test_region_americas*.parquet
- split: train_region_asia
path: snapshot/train_region_asia*.parquet
- split: test_region_asia
path: snapshot/test_region_asia*.parquet
- split: train_region_europe
path: snapshot/train_region_europe*.parquet
- split: test_region_europe
path: snapshot/test_region_europe*.parquet
---
# GPTilt: League of Legends Challenger Matches' Snapshots At 15 Minutes
This dataset is part of the [GPTilt](https://github.com/gptilt) open-source initiative, aimed at democratizing access to high-quality LoL data for research and analysis, fostering public exploration, and advancing the community's understanding of League of Legends through data science and AI. It provides detailed data from high-elo matches.
*By using this dataset, users accept full responsibility for any consequences arising from its use. GPTilt assumes no liability for any damages that may result. Users are strongly encouraged to review the ["Uses"](#uses) section—particularly the ["Out-of-Scope Use"](#out-of-scope-use) subsection—for guidance.*
## Getting Started
First, install Hugging Face's [datasets](https://pypi.org/project/datasets/) package:
```bash
pip install datasets
```
Now, you can load the dataset!
```py
from datasets import load_dataset
# Specify just the config_name / table
dataset = load_dataset("gptilt/lol-ultimate-snapshot-challenger-15min", name="snapshot")
# Or include the split!
dataset = load_dataset("gptilt/lol-ultimate-snapshot-challenger-15min", name="snapshot", split="train_region_americas")
```
## Dataset Summary
This dataset contains **League of Legends Challenger Matches' Snapshots At 15 Minutes**, providing a complete snapshot of each game at the 15-minute mark. Data was originally collected and processed via the official Riot Games API; the primary language is English.
## Dataset Structure
The data is structured into tables:
- **snapshot**: Contains a snapshot of the match at a given time, with contextual information such as kills/assists, as well as pregame state (champions, runes, etc).
```json
{
"matchId": "LA2_1495348800",
# Player information
"kills_0": 6,
"deaths_0": 2,
"assists_0": 3,
"inventory_0": [1421, 3500], # Item IDs
"level_0": 12, # Level at time of event
(...)
"kills_1": 0,
"deaths_1": 1,
}
```
All snapshots have a `matchId` column, making it compatible with all [`basic` tier `matches` tables](https://huggingface.co/datasets/gptilt/lol-basic-matches-challenger-10k) and [the `ultimate` tier `events` dataset](https://huggingface.co/datasets/gptilt/lol-ultimate-events-challenger-10m).
Additionally, data is segmented into 6 splits: ['train_region_americas', 'test_region_americas', 'train_region_asia', 'test_region_asia', 'train_region_europe', 'test_region_europe'].
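For example, a snapshot table can be joined against a matches table on `matchId`. The sketch below uses pandas; the config and split names for the matches dataset are assumptions to verify against its card.
```python
from datasets import load_dataset

snapshots = load_dataset(
    "gptilt/lol-ultimate-snapshot-challenger-15min",
    name="snapshot",
    split="train_region_americas",
).to_pandas()

# Config/split names below are assumptions; check the matches dataset card.
matches = load_dataset(
    "gptilt/lol-basic-matches-challenger-10k",
    name="matches",
    split="train_region_americas",
).to_pandas()

joined = snapshots.merge(matches, on="matchId", how="inner")
print(joined.shape)
```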
## Dataset Creation
### Curation Rationale
This dataset was created to address the lack of large-scale, publicly available, and analysis-ready datasets for League of Legends research. The GPTilt project aims to provide resources for the community to apply data science and AI techniques to better understand the intricate dynamics of the game, moving beyond simple win prediction towards interpreting strategic patterns and complex interactions. This specific dataset focuses on high-elo (Challenger) players to capture refined strategic execution.
### Source Data
#### Data Collection and Processing
The source data originates exclusively from the [**Riot Games API**](https://developer.riotgames.com/apis) and [**CDragon**](https://communitydragon.org/).
1. **Seeding:** High-elo player PUUIDs were initially identified using the `league-v4` endpoint for the Challenger tier across multiple regions.
2. **Match History:** The `match-v5` endpoint was used to retrieve recent match IDs for these players.
3. **Match & Timeline Fetching:** The `match-v5` (match details) and `match-v5` (match timeline) endpoints were used to download the full data for each unique match ID identified.
4. **Raw Storage:** Raw API responses (JSON format) were saved.
5. **Staging & Transformation:** Raw data was parsed, and transformed into the basic-tier dataset 'League of Legends Challenger Matches'. The matches dataset was then used to build the enriched events dataset, which served as the source for the ultimate-tier dataset 'League of Legends Challenger Matches Snapshot'.
6. **Output:** Data was written to Parquet files, partitioned by `region`.
#### Who are the source data producers?
The underlying gameplay data is generated by **League of Legends players** participating in high-elo ranked matches. The **Riot Games API** serves as the source interface providing access to this gameplay data. The dataset curators are the contributors to the GPTilt project who performed the collection and processing steps. No demographic information about the players is collected, besides the region.
#### Personal and Sensitive Information
The dataset contains **PUUIDs** and **Participant IDs**, which are pseudonymous identifiers linked to League of Legends accounts. No other Personally Identifiable Information (PII) like real names, emails, or addresses is included. Use of these identifiers is subject to Riot Games' policies. Users should exercise caution and adhere to these policies, avoiding attempts to [deanonymize players who cannot reasonably be identified from visible information](https://developer.riotgames.com/policies/general#_developer-safety).
### Bias, Risks, and Limitations
- **Skill Tier Bias:** This dataset focuses *exclusively* on the Challenger tier. Findings may not generalize to other skill levels (Bronze, Silver, Gold, Platinum, Diamond, Master, Grandmaster), where metas, champion picks, and strategic execution differ significantly. Because match data is selected by searching for Challenger players, multi-tier games may (and are expected to) be present in the dataset.
- **Regional Bias:** While collected from multiple regions, the distribution might not be perfectly balanced, potentially reflecting the metas dominant in the included regions during the collection period.
- **Patch Bias:** The data reflects gameplay on specific game versions (see `matches` table `gameVersion` field). Major patches can significantly alter champion balance, items, and objectives, potentially making findings less relevant to different patches.
- **Missing Context:** The data captures *recorded* events and states but lacks external context like player communication (voice/text chat), player fatigue/tilt, real-time strategic intent, or external distractions.
- **API Limitations:** Data is subject to the accuracy and granularity provided by the Riot Games API. Some nuanced actions or states might not be perfectly captured. Rate limits inherent to the API restrict the size and frequency of potential dataset updates.
#### Recommendations
- Users should explicitly acknowledge the **high-elo (Challenger) bias** when reporting results and be cautious about generalizing findings to other player segments.
- Always consider the **game version (`gameVersion`)** when analyzing the data, as metas and balance change significantly between patches.
- Users **must** adhere to the **Riot Games API Terms of Service and Developer Policies** in all uses of this data.
## Uses
### Disclaimer
*This dataset utilizes data from the Riot Games API. Its use is subject to the Riot Games API Terms of Service and relevant developer policies. GPTilt is not endorsed by Riot Games and does not reflect the views or opinions of Riot Games or anyone officially involved in producing or managing League of Legends. League of Legends and Riot Games are trademarks or registered trademarks of Riot Games, Inc. League of Legends © Riot Games, Inc.*
### License
This dataset and all associated code is licensed under the [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/legalcode.en) license.
### Direct Use
This dataset is intended for **non-commercial research, data analysis, and exploration** aimed at understanding League of Legends gameplay dynamics, strategic patterns, champion interactions, and game flow. Suitable uses include:
- **Statistical analysis** of high-elo match characteristics.
- **Exploratory data analysis** to uncover **trends** and correlations.
- Training **machine learning models** (including Transformer-based architectures like LLoLMs) for tasks related to **game state representation**, event sequence modeling, pattern recognition for game understanding, etc.
- **Feature engineering** for derived metrics.
- **Educational purposes** related to data science and game analytics.
**Users must ensure their use case complies with the Riot Games API [Terms of Service](https://developer.riotgames.com/terms) and [Developer Policies](https://developer.riotgames.com/policies/general). Consult these policies before using the data.**
### Out-of-Scope Use
This dataset **must not** be used for purposes that violate the Riot Games API [Terms of Service](https://developer.riotgames.com/terms) or [Developer Policies](https://developer.riotgames.com/policies/general).
This dataset is derived from high-elo games and may not accurately represent gameplay patterns at lower skill levels. **Consult the Riot Games API [Terms of Service](https://developer.riotgames.com/terms) and [Developer Policies](https://developer.riotgames.com/policies/general) for comprehensive usage restrictions.**
## Changelist
### May 27, 2025
- Divided splits into `train` and `test`.
## Citation
**If you wish to use this dataset in your work, we kindly ask that you cite it.**
For most informal work, a simple mention of the GPTilt project and the League of Legends Challenger Matches' Snapshots At 15 Minutes dataset will suffice.
**BibTeX:**
```bibtex
@misc{gptilt_league_of_legends_challenger_matches_snapshots_at_15_minutes,
author = { GPTilt Contributors },
title = { League of Legends Challenger Matches' Snapshots At 15 Minutes },
year = { 2025 },
publisher = { Hugging Face },
journal = { Hugging Face Hub },
url = { https://huggingface.co/datasets/gptilt/lol-ultimate-snapshot-challenger-15min }
}
``` |
CompassioninMachineLearning/may27_pretraining_research_documents | CompassioninMachineLearning | 2025-05-27T19:54:49Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T19:54:42Z | null | ---
dataset_info:
features:
- name: instruction
dtype: 'null'
- name: output
struct:
- name: instruction
dtype: 'null'
- name: origin
dtype: string
- name: output
dtype: string
- name: origin
dtype: string
splits:
- name: train
num_bytes: 64749918.6
num_examples: 10764
- name: test
num_bytes: 7194435.4
num_examples: 1196
download_size: 37502393
dataset_size: 71944354.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sophivideo/wATCH-Sophie-Rain-Sophie-Rain-Videoss | sophivideo | 2025-05-27T19:51:17Z | 0 | 0 | [
"license:artistic-2.0",
"region:us"
] | [] | 2025-05-27T19:51:17Z | null | ---
license: artistic-2.0
---
|
HAissa/MNLP_M2_mcqa_dataset | HAissa | 2025-05-27T19:36:06Z | 326 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-19T22:20:21Z | null | ---
license: apache-2.0
dataset_info:
- config_name: default
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 1510172124.0
num_examples: 300660
- name: validation
num_bytes: 376612569.0
num_examples: 75165
download_size: 875467005
dataset_size: 1886784693.0
- config_name: no_thinking
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 129546698
num_examples: 185180
- name: validation
num_bytes: 29349748
num_examples: 46295
download_size: 77798657
dataset_size: 158896446
- config_name: thinking
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 1380625426
num_examples: 115480
- name: validation
num_bytes: 347262821
num_examples: 28870
download_size: 787707673
dataset_size: 1727888247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: no_thinking
data_files:
- split: train
path: no_thinking/train-*
- split: validation
path: no_thinking/validation-*
- config_name: thinking
data_files:
- split: train
path: thinking/train-*
- split: validation
path: thinking/validation-*
---
|
jieyuz2/m | jieyuz2 | 2025-05-27T19:15:59Z | 206 | 0 | [
"arxiv:1910.09700",
"region:us"
] | [] | 2024-09-01T21:27:53Z | null | ---
base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
rlhn/rlhn-400K | rlhn | 2025-05-27T19:08:44Z | 29 | 1 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.16967",
"region:us"
] | [
"question-answering"
] | 2025-04-07T23:43:07Z | null | ---
dataset_info:
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: subset
dtype: string
splits:
- name: train
num_bytes: 8135550141
num_examples: 390175
download_size: 4782876145
dataset_size: 8135550141
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
pretty_name: RLHN-400K
size_categories:
- 100K<n<1M
---
# Dataset Card for RLHN-400K
## Dataset Description
[Repository](https://github.com/castorini/rlhn) |
[Paper](https://huggingface.co/papers/2505.16967) |
[ArXiv](https://arxiv.org/abs/2505.16967)
RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.
This Tevatron dataset (roughly 400K training pairs) contains the queries, positives + relabeled hard negatives, and remaining hard negatives for 7 datasets in the BGE training collection.
This repository contains the training pairs that can be used to fine-tune embedding, ColBERT or multi-vector, and reranker models.
The original dataset (bad quality; containing false negatives) can be found at [rlhn/default-400K](https://huggingface.co/datasets/rlhn/default-400K/).
> Note: RLHN datasets are not **new** training datasets, but rather existing BGE collection training datasets with hard negatives cleaned!
## Dataset Structure
To access the data using HuggingFace `datasets`:
```python
import datasets

rlhn = datasets.load_dataset('rlhn/rlhn-400K')

# training set:
for data in rlhn['train']:
    query_id = data["query_id"] # md5 hash of the query
query = data["query"] # query text
subset = data["subset"] # training dataset, e.g., fiqa or msmarco_passage
# positive passages
for positive_passage in data["positive_passages"]:
doc_id = positive_passage["docid"]
title = positive_passage["title"] # title is usually empty, added in text
text = positive_passage["text"] # contains both the title & text
# hard negative passages
for negative_passage in data["negative_passages"]:
doc_id = negative_passage["docid"]
title = negative_passage["title"] # title is usually empty, added in text
text = negative_passage["text"] # contains both the title & text
```
## Original Dataset Statistics
The following table contains the number of training pairs for each training dataset included in RLHN. These numbers are for the default setting.
| Dataset | 100K splits | 250K splits | 400K splits | 680K splits |
|-------------------|-------------|-------------|-------------|------------- |
| arguana | 4,065 | 4,065 | 4,065 | 4,065 |
| fever | 28,755 | 28,755 | 28,755 | 28,755 |
| fiqa | 5,500 | 5,500 | 5,500 | 5,500 |
| hotpotqa | 10,250 | 30,000 | 84,516 | 84,516 |
| msmarco_passage | 49,571 | 145,000 | 210,000 | 485,823 |
| nq | 6,110 | 30,000 | 58,568 | 58,568 |
| scidocsrr | 12,654 | 12,654 | 12,654 | 12,654 |
| **total** | **96,167** | **255,974** | **404,058** | **679,881** |
## License
The RLHN dataset is made available with the CC-BY-SA 4.0 license.
## Hashing & IDs
We generate the md5 hash as the unique identifier (ID) for both the query \& documents, using the code below:
```python
import hashlib
def get_md5_hash(text):
"""Calculates the MD5 hash of a given string.
Args:
text: The string to hash.
Returns:
The MD5 hash of the string as a hexadecimal string.
"""
text_bytes = text.encode('utf-8') # Encode the string to bytes
md5_hash = hashlib.md5(text_bytes).hexdigest()
return md5_hash
```
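As a quick sanity check of this scheme, reusing the loading snippet above (this presumes, as an assumption on our part, that `query_id` is the MD5 of the raw query text):

```python
example = rlhn["train"][0]

# Assumed relationship: query_id should equal the MD5 hex digest of the query text.
assert get_md5_hash(example["query"]) == example["query_id"]
```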
## Citation
```
@misc{thakur2025relabel,
title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
year={2025},
eprint={2505.16967},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2505.16967},
}
``` |
endre01/MNLP_M2_rag_documents | endre01 | 2025-05-27T18:21:20Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T18:21:14Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 89809866
num_examples: 133856
download_size: 49350248
dataset_size: 89809866
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
joshcd/MNLP_M2_rag_dataset | joshcd | 2025-05-27T18:15:39Z | 29 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-26T17:21:09Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4652637
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
muqtasid87/finegrained_vehicle_labels | muqtasid87 | 2025-05-27T17:23:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T17:22:44Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 60970760.575
num_examples: 1075
download_size: 50269295
dataset_size: 60970760.575
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yehor/ual-topics | Yehor | 2025-05-27T17:10:24Z | 29 | 2 | [
"task_categories:text-classification",
"task_ids:topic-classification",
"source_datasets:original",
"language:uk",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/4563",
"region:us"
] | [
"text-classification"
] | 2024-08-15T17:34:12Z | null | ---
language:
- uk
license:
- cc-by-nc-sa-4.0
size_categories:
- 1K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: UA-L Topics Corpus
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': inshe
'1': ekologiya
'2': ziemielnie_pravo
'3': reklama
'4': bankivska_diialnist
'5': prava_spozhivachiv
'6': medicina
'7': spadkove_pravo
'8': immighratsiia_iemighratsiia
'9': intieliektualna_vlasnist
'10': gospodarskie_pravo
'11': pidpriemnicka_dialnist
'12': opodatkuvannia
'13': piensiiata_sotsialni_viplati
'14': viiskovie_pravo
'15': sudova_praktika
'16': kriminalnie_pravo
'17': gromadianski_pravovidnosini
'18': strakhuvannya
'19': pratsevlashtuvvannya
'20': sotsialnyj_zakhist
'21': vighotovliennia_produktsiyi_ta_nadannia_poslugh
'22': litsienzuvannia
'23': reyestraciya_likvidaciya_bankrutstvo
'24': doghovirni_vidnosini
'25': administrativnie_pravo
'26': nierukhomist
'27': prava_vnutrishno_pieriemishchienikh_osib
'28': investitsii
'29': notarialni_pytanniia
'30': avtovlasnykam
'31': zhitlovi_pravovidnosini
'32': dovircha_vlastnist
'33': dierzhavni_zakupivli
'34': simejne_pravo
'35': mytne_pravo
'36': mizhnarodni_pravovidnosini
'37': korporativnie_pravo
'38': tsivilne_pravo
configs:
- config_name: default
data_files:
- split: train
path: data/train.jsonl
- split: test
path: data/test.jsonl
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# `ual-topics`
This dataset contains texts from the https://ua-lawyer.com project. The texts contain questions and their labels (a category of law) in Ukrainian.
🚨🚨🚨 ATTENTION! 🚨🚨🚨
Look at **a better version** (balanced over labels) of this dataset: https://huggingface.co/datasets/ua-l/topics-train-test
## Community
- **Discord**: https://bit.ly/discord-uds
- Natural Language Processing: https://t.me/nlp_uk
## Install
```text
uv venv --python 3.12
source .venv/bin/activate
uv pip install -r requirements.txt
uv pip install -r requirements-dev.txt
```
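With the environment ready, the dataset itself can be loaded straight from the Hub (a minimal sketch; the repository id matches the citation below):

```python
from datasets import load_dataset

ds = load_dataset("Yehor/ual-topics")

print(ds["train"][0])                           # {'text': ..., 'label': ...}
print(ds["train"].features["label"].names[:5])  # first few topic names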
## Cite this work
```
@misc {smoliakov_2025,
author = { {Smoliakov} },
title = { ual-topics (Revision 064f6e5) },
year = 2025,
url = { https://huggingface.co/datasets/Yehor/ual-topics },
doi = { 10.57967/hf/4563 },
publisher = { Hugging Face }
}
```
|
bouchonnn/MNLP_M2_dpo_dataset | bouchonnn | 2025-05-27T16:58:47Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-16T14:52:51Z | null | ---
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: id
dtype: string
- name: dataset
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 35585102.0
num_examples: 12354
download_size: 21301608
dataset_size: 35585102.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deanngkl/affectnet_no_contempt | deanngkl | 2025-05-27T16:46:30Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T16:32:24Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happiness
'4': neutral
'5': sadness
'6': surprise
splits:
- name: train
num_bytes: 7939507155.0
num_examples: 27823
download_size: 7939114328
dataset_size: 7939507155.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tele-AI-MAIL/WebUIBench | Tele-AI-MAIL | 2025-05-27T16:37:06Z | 76 | 0 | [
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2404.05955",
"region:us"
] | [] | 2025-05-23T02:05:20Z | null | ---
license: cc-by-4.0
configs:
- config_name: Element_Classification
data_files:
- split: test
path: Element_Classification/test-*
- config_name: Attribute_Regconition
data_files:
- split: test
path: Attribute_Regconition/test-*
- config_name: Visual_Grounding
data_files:
- split: test
path: Visual_Grounding/test-*
- config_name: OCR
data_files:
- split: test
path: OCR/test-*
- config_name: Code_Error_Correction
data_files:
- split: test
path: Code_Error_Correction/test-*
- config_name: Code_Function_Editing
data_files:
- split: test
path: Code_Function_Editing/test-*
- config_name: Webpage_HTML_Matching
data_files:
- split: test
path: Webpage_HTML_Matching/test-*
- config_name: Webpage_HTMl_Retrieval
data_files:
- split: test
path: Webpage_HTML_Retrieval/test-*
dataset_info:
- config_name: Element_Classification
features:
- name: id
dtype: string
- name: question
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 442962174
num_examples: 950
download_size: 442962174
dataset_size: 442962174
- config_name: Attribute_Regconition
features:
- name: id
dtype: string
- name: question
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 1679258113
num_examples: 3718
download_size: 1679258113
dataset_size: 1679258113
- config_name: Visual_Grounding
features:
- name: id
dtype: string
- name: question
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 1897962456
num_examples: 3934
download_size: 1897962456
dataset_size: 1897962456
- config_name: OCR
features:
- name: id
dtype: string
- name: question
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: target_[x1,y1,x2,y2]
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 1147237990
num_examples: 2460
download_size: 1147237990
dataset_size: 1147237990
- config_name: Code_Error_Correction
features:
- name: id
dtype: string
- name: question
dtype: string
- name: code_with_error
dtype: string
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 2885440
num_examples: 2635
download_size: 2885440
dataset_size: 2885440
- config_name: Code_Function_Editing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: function_description
dtype: string
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 2712168
num_examples: 2290
download_size: 2712168
dataset_size: 2712168
- config_name: Webpage_HTML_Matching
features:
- name: id
dtype: string
- name: question
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 1003289265
num_examples: 2143
download_size: 1003289265
dataset_size: 1003289265
- config_name: Webpage_HTML_Retrieval
features:
- name: id
dtype: string
- name: question
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: subtask
dtype: string
splits:
- name: test
num_bytes: 1109887493
num_examples: 2345
download_size: 1109887493
dataset_size: 1109887493
---
# WebUIBench
Dataset for the paper: [WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code](https://arxiv.org/abs/2404.05955)
🏠 [Homepage](https://github.com/MAIL-Tele-AI/WebUIBench) | [**📖 arXiv**](https://arxiv.org/abs/2404.05955)
## Introduction
We introduce WebUIBench, a large-scale and comprehensive benchmark designed to evaluate the WebUI-to-Code capabilities of Multimodal Large Language Models (MLLMs). WebUIBench comprises over **21K question-answer pairs** derived from more than **0.7K real-world websites**, encompassing **9 distinct subtasks**. We conducted extensive experiments on 7 state-of-the-art closed-source and 22 prominent open-source MLLMs. Our key findings highlight the models' deficiencies in webpage generation tasks across various dimensions, including cross-modality reasoning, element localization, and webpage layout generation.
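As a minimal sketch, an individual subtask can be loaded by its config name (the repository and config names are taken from the metadata above):

```python
from datasets import load_dataset

# Each of the 9 subtasks is a separate config; only a "test" split is provided.
element_cls = load_dataset("Tele-AI-MAIL/WebUIBench", "Element_Classification", split="test")

sample = element_cls[0]
print(sample["question"])
print(sample["answer"])
```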
## Contact
- Zhiyu Lin: [[email protected]]([email protected])
- Zhengda Zhou: [[email protected]]([email protected])
- Zhiyuan Zhao: [[email protected]]([email protected])
# 🚩Citation
If you find this work is helpful, please kindly cite as follows. Thanks !
```bibtex
@article{xx,
title={WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code},
author={xx},
journal={arXiv preprint arXiv:xx},
year={2025}
}
```
|
Taylor658/synthetic-fine-arts | Taylor658 | 2025-05-27T16:32:24Z | 22 | 1 | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:other",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"fine-arts",
"dataset",
"synthetic",
"multi-domain",
"art"
] | [
"text-generation",
"question-answering",
"summarization",
"other"
] | 2025-01-28T23:32:44Z | null | ---
language:
- en
size_categories:
- 100K<n<500K
license: mit
task_categories:
- text-generation
- question-answering
- summarization
- other
tags:
- fine-arts
- dataset
- synthetic
- multi-domain
- art
dataset_info:
features:
- name: ID
dtype: string
- name: AreaOfFocus
dtype: string
- name: ArtisticChallenge
dtype: string
- name: ProposedSolution
dtype: string
- name: VerificationMethod
dtype: string
- name: ReferenceMaterial
dtype: string
- name: EthicalConsiderations
dtype: string
dataset_size: 225000
dataset_version: "1.0.0"
---
# Synthetic Fine Arts (Challenge, Solution) Dataset
> **Description**
> **Synthetic Fine Arts** is a **225,000-row** dataset of *(artistic challenge, proposed solution)* pairs spanning multiple areas within **Visual Arts, Performing Arts, Musical Arts, Literary Arts, Digital Arts, Art History, and Art Theory**.
>
> Each entry provides a high-level **ArtisticChallenge**, accompanied by a **ProposedSolution** referencing established or pseudo-random *creative techniques, theoretical principles, and historical precedents*. **VerificationMethod** and other metadata fields are included to *mimic* real curation processes.
>
> **Disclaimer**: *All* text is **synthetically generated** and **should not be construed as real** on artistic, historical, or technical matters.
---
## Key Highlights ✨
1. **Multi-Domain Coverage**
\- Encompasses *Visual Arts: Painting, Performing Arts: Theater/Dance, Musical Arts: Composition, Literary Arts: Poetry, Digital Arts: Generative Art, Art History: Movement Analysis, Art Theory: Philosophical Approach*, etc.
2. **Large Scale**
\- **225,000** synthetic challenge-solution pairs, suitable for training, fine-tuning, or experimentation in r1 focusing on *artistic creativity*.
3. **Detailed Columns**
\- Each row has:
1. **`ID`** – A zero-padded identifier like `AID000001`.
2. **`AreaOfFocus`** – E.g., “Visual Arts: Painting.”
3. **`ArtisticChallenge`** – A short textual challenge (e.g., merging classic and contemporary styles).
4. **`ProposedSolution`** – Potential method/technique to address the challenge, referencing color theory, composition rules, or historical methods.
5. **`VerificationMethod`** – Approach used to ensure correctness (e.g., “Technical validation (color theory),” “Historical grounding,” etc.).
6. **`ReferenceMaterial`** – Placeholder references to museum APIs, open-access artwork, scholarly texts.
7. **`EthicalConsiderations`** – Synthetic flags like “Cultural sensitivity review passed,” “Copyright cleared,” etc.
## Dataset Structure 🏗️
**Example Columns**:
- **`ID`**: string identifier with zero-padding (e.g., `AID000123`).
- **`AreaOfFocus`**: text describing the primary art domain or sub-domain.
- **`ArtisticChallenge`**: a concise statement of a creative or technical challenge.
- **`ProposedSolution`**: a method or technique referencing real-world or hypothetical best practices.
- **`VerificationMethod`**: how the solution was (synthetically) validated (e.g., “Peer-reviewed research cross-check”).
- **`ReferenceMaterial`**: placeholders such as “MET Open Access paintings dataset.”
- **`EthicalConsiderations`**: notes on copyright, cultural sensitivity, or related checks.
### Example Entry
```json
{
"ID": "AID000001",
"AreaOfFocus": "Visual Arts: Painting",
"ArtisticChallenge": "Achieving realistic lighting in portrait painting",
"ProposedSolution": "Adopt advanced underpainting methods for depth and color harmony, referencing late Renaissance techniques.",
"VerificationMethod": "Technical validation (color theory)",
"ReferenceMaterial": "MET Open Access paintings dataset",
"EthicalConsiderations": "Age-appropriate content"
}
```
> **Note**: All text is **synthetic** and references are placeholders. Real world usage would replace these with accurate citations or data from museum APIs, peer-reviewed journals, historical archives, etc.
## Usage & Examples 💡
Load with the **Hugging Face** `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("your-username/synthetic_fine_arts", split="train")
print(dataset[0])
```
### Potential Applications
1. **Text Generation & Fine-Tuning**
   - Use "ArtisticChallenge" as a prompt and "ProposedSolution" as the target, training models to offer creative solutions or suggestions in arts-related tasks (see the sketch after this list).
2. **Style Transfer or Aesthetic Judgment**
- Explore classification tasks around “VerificationMethod,” “EthicalConsiderations,” or the type of “AreaOfFocus” to build automated aesthetic or ethical checks.
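Building on the first application, a minimal sketch of turning rows into (prompt, target) pairs could look as follows (the repository id is the placeholder used above, column names come from the schema, and the exact prompt format is illustrative):

```python
from datasets import load_dataset

ds = load_dataset("your-username/synthetic_fine_arts", split="train")  # placeholder repo id

def to_pair(example):
    # Prompt/target formatting is illustrative; adapt it to your fine-tuning framework.
    return {
        "prompt": f"Area: {example['AreaOfFocus']}\nChallenge: {example['ArtisticChallenge']}",
        "target": example["ProposedSolution"],
    }

pairs = ds.map(to_pair, remove_columns=ds.column_names)
print(pairs[0])
```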
## Caveats & Limitations ⚠️
1. **Synthetic Content**
- All entries are generated with template-based or random processes and *Do Not* reflect historically accurate references or proven artistic methods.
2. **Cultural & Ethical Sensitivity**
- Fields like “Cultural sensitivity review passed” are hypothetical. Real curation for culturally sensitive or traditional arts requires human expertise.
3. **No Actual Artistic Authority**
- This dataset does **not** substitute expert knowledge from professionals in fine arts, art history, or museum curation.
## Citation & Acknowledgments 🙌
```bibtex
@misc{synthetic_fine_arts_2025,
title = {Synthetic Fine Arts (Challenge, Solution) Dataset},
author = {https://huggingface.co/Taylor658},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/taylor658/synthetic_fine_arts}}
}
```
## Contributing 🧑💻
Feel free to open issues or pull requests if you wish to:
- Add more fine-grained sub-domains (e.g., sculpture, orchestral composition, dance notation systems)
- Integrate real open-access references to museum collections, historical journals, or scholarly works
- Expand or refine the *VerificationMethod* to incorporate advanced analytics or peer-reviewed confirmation
---
> **Disclaimer**: **All content is synthetic** and intended for *research and experimentation* only.
|
somerandomguyontheweb/en_be_mt_datasets_evaluation | somerandomguyontheweb | 2025-05-27T16:30:40Z | 0 | 0 | [
"task_categories:translation",
"language:be",
"language:en",
"license:pddl",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2025-05-27T15:11:52Z | null | ---
license: pddl
task_categories:
- translation
language:
- be
- en
size_categories:
- n<1K
---
## Overview
This is a small dataset of English-Belarusian sentence pairs sampled from the largest parallel corpora in [OPUS](https://opus.nlpl.eu/results/en&be/corpus-result-table) (100 random instances from each of the following: NLLB, HPLT, CCMatrix, CCAligned) and manually labeled for correctness by a speaker of Belarusian. The taxonomy of labels follows [Kreutzer et al. 2022](https://doi.org/10.1162/tacl_a_00447):
- CC: correct translation, natural sentence
- CB: correct translation, boilerplate or low quality
- CS: correct translation, short
- X: incorrect translation
- WL: wrong language
- NL: not a language
Where appropriate, the labels are accompanied by free-form comments.
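The labeled pairs can also be inspected directly from the Hub (a sketch; it assumes the default CSV config with a single `train` split):

```python
from datasets import load_dataset

ds = load_dataset("somerandomguyontheweb/en_be_mt_datasets_evaluation", split="train")
print(ds[0])  # one labeled English-Belarusian sentence pair
```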
## Data sampling
In Unix shell, execute:
```bash
sample_sentence_pairs () {
mkdir -p $1
cd $1
wget https://object.pouta.csc.fi/OPUS-$1/$2/moses/be-en.txt.zip
unzip be-en.txt.zip
paste $1.be-en.en $1.be-en.be | shuf -n 100 > $1.be-en.sample100.txt
ls | grep -v sample100 | xargs rm
cd ..
}
sample_sentence_pairs NLLB v1
sample_sentence_pairs HPLT v2
sample_sentence_pairs CCMatrix v1
sample_sentence_pairs CCAligned v1
mv */*.txt .
rm -r NLLB HPLT CCMatrix CCAligned
```
Then in Python:
```python3
import csv
def to_csv(filename):
with open(filename) as f:
data = [line.strip().split("\t") for line in f]
assert all(len(x) == 2 for x in data)
with open("processed_%s.csv" % filename, "w") as f:
csv_writer = csv.writer(f)
csv_writer.writerow(["en", "be"])
csv_writer.writerows(data)
to_csv("NLLB.be-en.sample100.txt")
to_csv("HPLT.be-en.sample100.txt")
to_csv("CCMatrix.be-en.sample100.txt")
to_csv("CCAligned.be-en.sample100.txt")
```
## Labeling results
| Dataset | CC | CB | CS | X | WL | NL |
|-----------|----|----|----|----|----|----|
| NLLB | 17 | | | 73 | 10 | |
| HPLT | 41 | 35 | 6 | 17 | 1 | |
| CCMatrix | 7 | 1 | | 92 | | |
| CCAligned | 31 | 38 | 8 | 22 | 1 | | |
tcapelle/boostrap_triton | tcapelle | 2025-05-27T16:29:55Z | 149 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T14:21:36Z | null | ---
dataset_info:
features:
- name: pt_code
dtype: string
- name: triton_code
dtype: string
- name: pt_entrypoint
dtype: string
- name: triton_entrypoint
dtype: string
- name: reasoning
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: tests_code
dtype: string
- name: pt_code_runs
dtype: bool
- name: stdout
dtype: string
- name: stderr
dtype: string
- name: stop_reason
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: entrypoint
dtype: string
- name: tests
dtype: string
- name: conversion_reasoning
dtype: string
splits:
- name: train
num_bytes: 5838439
num_examples: 378
download_size: 1447320
dataset_size: 5838439
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
relaxedandcalm/screw3 | relaxedandcalm | 2025-05-27T16:09:53Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-27T16:08:34Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "mcx",
"total_episodes": 10,
"total_frames": 4679,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": "main"
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": "main"
},
"observation.images.first_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.second_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HaniAI/AI4LI-DATA-GRPO_vietnamese | HaniAI | 2025-05-27T15:42:52Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-27T15:42:50Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 745386.9779735683
num_examples: 1620
download_size: 469498
dataset_size: 745386.9779735683
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "AI4LI-DATA-GRPO_vietnamese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fvdfs41/home | fvdfs41 | 2025-05-27T15:37:55Z | 1,658 | 0 | [
"language:en",
"region:us",
"playstationhome",
"pshome",
"preservation",
"revival",
"archive",
"cache",
"non-profit",
"homelab"
] | [] | 2024-10-31T23:03:24Z | null | ---
language:
- en
tags:
- playstationhome
- pshome
- preservation
- revival
- archive
- cache
- non-profit
- homelab
pretty_name: Playstation®Home Cache Depot
---
# ✧ Playstation®Home Cache Depot ✧
This repository is an archive of assets pertaining to **Playstation®Home**.
Playstation®Home was an online social world video game on the PS3. It was closed down by its creator ( Sony Computer Entertainment ) on April 1st 2015. The Playstation®Home community strongly feels that Playstation®Home is an abandoned game and considers its assets to be lost media.
All assets archived here are deemed to be owned by Sony Computer Entertainment and their third party associates.
These assets are sourced from ...
- The JohnDrinkWater Playstation®Home Archive ( [johndrinkwater github repo](https://github.com/johndrinkwater/ps-home-archive) )
- Donations made by past Playstation®Home users that voluntarily retrieved the data off their own PS3s.
## ✧ Projects Involved ✧
This repository is associated with the preservation projects listed below, which are open-sourced, non-profit initiatives operating under the legal framework established for emulation and preservation. The main goal is to preserve and restore Playstation®Home's content.
### ✧ Home Laboratory ✧
[Discord Server](https://discord.gg/NAUttdtPS5)
This project provides :
- a more developer-oriented environment that includes, but is not limited to
 - open source software for a Playstation®Home online server, either local and/or public. ( [MultiServer3 Github Repo](https://github.com/GitHubProUser67/MultiServer3) )
- open source tools for handling Playstation®Home assets; either PC tools and/or Web tools.
<br><br>Compiled: [Nautilus](https://github.com/DeViL303/MultiServer3-NuatilusFork/releases) /
Source: [Nautilus](https://github.com/GitHubProUser67/NautilusXP2024)
- support for getting everything setup and running as well as guidance into how Playstation®Home works.
- the assets needed to create a Content Delivery Network ( CDN ) in some form or other.
- transparent, in-depth progress updates on its restoration efforts.
- a Playstation®Home scene database ( [google sheets](https://docs.google.com/spreadsheets/d/1acznLvA2k4I7yl56i3pCmAhzxG4pPcrx/edit?usp=sharing&ouid=113258013303427394442&rtpof=true&sd=true) )
- its own Playstation®Home public server which supports both QA ( Developer ) and Retail ( Consumer ) builds for version 1.86. It is playable on both a Jailbroken PS3 and the RPCS3 emulator. ( [HL Website](https://pshomeologylab.net/) )
- a Playstation®Home item ( object ) catalogue database and inventory management system for the PS®Homeology Lab online server, along with an external command module for the QA ( Developer ) build. ( [psho](http://psho.me/) )
### ✧ Home Headquarters ✧
[Discord Server](https://discord.com/invite/87W5qaMtgB)
This project provides :
- a Playstation®Home public server that is running off of Home Laboratory's software. It supports only the Retail ( Consumer ) build for version 1.86. It is playable on both a Jailbroken PS3 and the RPCS3 emulator. ( [HHQ Website](https://homeheadquarters.online/) )
- a more community-oriented environment with weekly in-game get-togethers ( events ).
- a larger player base that is primarily made up of past Playstation®Home users.
- a laughable staff hierarchy alongside moderation that's a bit too self-serious on both its Discord and its Playstation®Home online server.
## ✧ Playstation®Home Cache Information ✧
### ✧ Overview ✧
Playstation®Home had a lot of in-game content with a very unique loading system. When a player logged into Playstation®Home, the game reserved a limited amount of space on the PS3's internal HDD for assets to be downloaded from Sony's server. Whenever a player interacted with an asset ( spaces ( scenes ), items/minigames ( objects ), posters, videos, etc ) in-game, it would download and store the assets temporarily until the reserved space was full. **These are referred to as "caches" and are only obtainable by gaining access to one's internal PS3 HDD via a jailbreak**.
Caches are needed to restore Playstation®Home to its fullest. When new content is found, it can be added to the online public servers and thus be restored. A game can't function without its assets. Playstation®Home was separated into four regions and each region had its own unique content and limited-time events. A large percentage of the content is still missing, most notably that from the Japanese region. This is why it is strongly encouraged for everyone to dust off their PS3 and **check for the Playstation®Home icon**. It is located under the **Playstation Network tab and resembles that of a house**.
If you happen to spot the Playstation®Home icon on your PS3, press the **Triangle button** on the icon to view its information. You should see an **install date ( between 2008 and 2015 ) and a size ( from 3GB to 12GB )**. If the icon meets these criteria, please consider donating the data to one of the projects mentioned above by following the cache extraction guide below. If you cannot press Triangle on the icon, there is no data behind it. Similarly, if the install date is later than April 1st 2015, or the size is under 100MB, it indicates that Playstation®Home was either installed after its shutdown or was never logged into.
To reiterate, in order to extract the Playstation®Home cache, it is **required to jailbreak your PS3** to gain access to its internal HDD. You will also **need a USB Stick** that's formatted to the **FAT32** format. Most USB Sticks are FAT32 nowadays but if for some reason it's not, you will need to reformat it using a PC program called Rufus. If you have no USB Stick, do an internet search for "USB Stick 16GB FAT32" then order it.
For newcomers, the PS3 jailbreak community **recommends updating your PS3 to the Hybrid Firmware ( HFW ) then installing the HEN software**. It is a Semi-untethered Jailbreak where the user has to enable HEN to go into a jailbroken state. When rebooting the PS3, it returns to a non-jailbroken state until the user enables HEN again. Because of this, it is considered to be **very safe**.
Once jailbroken, a **Homebrew application called multiMAN ( mmCM )** can be used to **browse the PS3 directories** via its own File Manager / mmOS. Playstation®Home's cache folders will be **in the dev_hdd0/game/ directory** and can be **identified by one of the below folder pairs**. **The objective is to copy the two folders from the PS3 to the FAT32 USB Stick.**
NPIA00005 & NPIA00005DATA ( Retail )
NPIA00010 & NPIA00010DATA ( Developer )
NPEA00013 & NPEA00013DATA ( Developer / Closed Beta )
The jailbreak should take 10 minutes tops and the data extraction should take 30 minutes to 90 minutes tops depending on the number of files.
After the PS3 has extracted the data onto your USB stick, insert it into your computer, transfer the data, then **zip the two folders and upload the resulting file to a cloud service** of your choice (e.g., Google Drive, Mega, etc.). Then, **join one of the Discord servers** linked above and post the link in the appropriate channel.
Upon request, a comprehensive analysis of the cache—detailing its contents and any new files discovered—is available.
### ✧ Extraction Guides ✧
- ( [Guide #1](https://pshomeologylab.net/Cache) )
- ( [Guide #2](https://homeheadquarters.online/Cache) )
### ✧ Public Archive ✧
A vast majority of Playstation®Home raw caches donated by it's former players are archived publicly in this google drive with logs included. ( [Google Drive](https://drive.google.com/drive/u/1/folders/1Wuk2GNsXOZ_qLJFqtg0gExRpZqxL3sec) )
You can find individual download links here. ( [Google Sheets](https://docs.google.com/spreadsheets/d/1uR7IRGjkl_n5CMBua6zIQV5EKXdSk8_D-sTDoJGMe7c/edit?usp=sharing) )
## ✧ Notable Mentions ✧
The following individuals are key figures spearheading the revolution of Playstation®Home Online as a fully open-source environment :
- **AgentDark447** ( [github](https://github.com/GitHubProUser67) )
- **Jumpsuit** ( [github](https://github.com/Jump-Suit) )
- **Devil303** ( [psx-place](https://www.psx-place.com/members/devil303.22544/) )
- **Rew** ( [twitter](https://x.com/pebxcvi) )
- **Splicewave** ( [youtube](https://www.youtube.com/channel/UC63x8NBm5NkoKMrTl4zrbIA ) )
- **Kami 2.0**
- **Pongo** ( [twitter](https://x.com/Pongo86_) )
- **Spookysniper**
- **Cade** |
ariflaksito/exarank1 | ariflaksito | 2025-05-27T15:36:13Z | 0 | 0 | [
"license:gpl-2.0",
"region:us"
] | [] | 2025-05-27T14:26:27Z | null | ---
license: gpl-2.0
dataset_info:
features:
- name: label
dtype: int64
- name: query
dtype: string
- name: doc
dtype: string
- name: explanation
dtype: string
splits:
- name: train
num_bytes: 11262781
num_examples: 21600
- name: validation
num_bytes: 1240270
num_examples: 2400
- name: test
num_bytes: 3111499
num_examples: 6000
download_size: 9061207
dataset_size: 15614550
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Kamyar-zeinalipour/farsi_dialogue_sentiment | Kamyar-zeinalipour | 2025-05-27T15:25:29Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-27T15:25:24Z | null | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Reference
dtype: string
- name: Characters
dtype: string
- name: Dialogue_Type
dtype: string
- name: Speakers_Sentiments
dtype: string
- name: dialogue
dtype: string
- name: Overall_Sentiment_Reviewed
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 3674732
num_examples: 1867
- name: val
num_bytes: 192711
num_examples: 99
- name: test
num_bytes: 201877
num_examples: 104
download_size: 1692428
dataset_size: 4069320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
GingerBled/RAG_corpus_docs_xtra_small | GingerBled | 2025-05-27T15:03:14Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T13:54:11Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 26871708.829308826
num_examples: 50000
download_size: 16846448
dataset_size: 26871708.829308826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Lithium73fr/TEST6 | Lithium73fr | 2025-05-27T14:53:04Z | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-27T14:53:01Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# TEST6
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
Bluestrike/ai-chatbot | Bluestrike | 2025-05-27T14:38:21Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-27T14:37:56Z | null | ---
license: apache-2.0
---
|