| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
|---|---|---|---|---|---|---|---|---|---|
sumukshashidhar-testing/yourbench_example | sumukshashidhar-testing | 2025-06-04T11:22:58Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T12:17:57Z | null | ---
dataset_info:
- config_name: chunked
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
- name: chunks
list:
- name: chunk_id
dtype: string
- name: chunk_text
dtype: string
- name: multihop_chunks
list:
- name: chunk_ids
sequence: string
- name: chunks_text
sequence: string
splits:
- name: train
num_bytes: 57298
num_examples: 2
download_size: 72936
dataset_size: 57298
- config_name: ingested
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
splits:
- name: train
num_bytes: 18022
num_examples: 2
download_size: 13492
dataset_size: 18022
- config_name: lighteval
features:
- name: question
dtype: string
- name: additional_instructions
dtype: string
- name: ground_truth_answer
dtype: string
- name: gold
sequence: string
- name: choices
sequence: 'null'
- name: question_category
dtype: string
- name: kind
dtype: string
- name: estimated_difficulty
dtype: int64
- name: citations
sequence: string
- name: document_id
dtype: string
- name: chunk_ids
sequence: string
- name: question_generating_model
dtype: string
- name: chunks
sequence: string
- name: document
dtype: string
- name: document_summary
dtype: string
- name: answer_citation_score
dtype: float64
- name: chunk_citation_score
dtype: float64
- name: citation_score
dtype: float64
splits:
- name: train
num_bytes: 211444
num_examples: 20
download_size: 47040
dataset_size: 211444
- config_name: single_shot_questions
features:
- name: chunk_id
dtype: string
- name: document_id
dtype: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: choices
sequence: 'null'
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 193816
num_examples: 20
download_size: 39271
dataset_size: 193816
- config_name: summarized
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
splits:
- name: train
num_bytes: 28605
num_examples: 2
download_size: 43552
dataset_size: 28605
configs:
- config_name: chunked
data_files:
- split: train
path: chunked/train-*
- config_name: ingested
data_files:
- split: train
path: ingested/train-*
- config_name: lighteval
data_files:
- split: train
path: lighteval/train-*
- config_name: single_shot_questions
data_files:
- split: train
path: single_shot_questions/train-*
- config_name: summarized
data_files:
- split: train
path: summarized/train-*
---
|
anonloftune/insurance-30-sft-pythia-1b | anonloftune | 2025-06-04T11:22:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T11:22:34Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 11927481
num_examples: 16380
- name: validation
num_bytes: 1405312
num_examples: 1980
download_size: 5209582
dataset_size: 13332793
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
qizekun/OmniSpatial | qizekun | 2025-06-04T11:19:16Z | 128 | 5 | [
"task_categories:visual-question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"Spatial Reasoning"
] | [
"visual-question-answering"
] | 2025-04-15T13:23:53Z | null | ---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- Spatial Reasoning
size_categories:
- 1K<n<10K
---
# OmniSpatial
## Task Schema Documentation
This document provides a structured explanation of the task schema for the visual-spatial reasoning benchmark.
---
## Schema Structure
The schema is represented in JSON format, containing the following key components:
| Key | Description |
|-------------------|--------------------------------------------------------------------------------------------------------------|
| **id** | Identifier for the question, formatted as `{image_number}_{question_number}`. |
| **question** | The prompt or query that needs to be answered based on visual-spatial reasoning. |
| **options** | A list of possible answer choices for the question. |
| **answer** | The index of the correct answer (Ground Truth, GT) within the `options` list. |
| **task_type** | The main category of the reasoning task, with four types: |
| | - `Dynamic_Reasoning`: Analyzing motion or changes over time. |
| | - `Spatial_Interaction`: Understanding spatial relationships and object interactions. |
| | - `Complex_Logic`: Multi-step logical reasoning involving spatial or interactive elements. |
| | - `Perspective_Taking`: Reasoning about the scene from different viewpoints or observer positions. |
| **sub_task_type** | A more specific categorization of the task, for example, `Motion_Analysis` under `Dynamic_Reasoning`. |
| **sub_sub_task_type** | An additional layer of task categorization, currently not provided but planned for future updates. |
---
## Example
Below is an example schema instance:
```json
{
"id": "15_1",
"question": "If the giraffe on the right reaches the camera in 4 s, what is its speed?",
"options": [
"10.9m/s",
"0.9m/s",
"35.7m/s",
"14.7m/s"
],
"answer": 1,
"task_type": "Dynamic_Reasoning",
"sub_task_type": "Motion_Analysis"
}
```
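For quick inspection, a minimal Python sketch that filters items by `task_type` (the annotation filename `questions.json` is a hypothetical placeholder, not part of this card):
```python
import json

# Minimal sketch: group benchmark items by task_type.
with open("questions.json") as f:  # hypothetical annotation file
    items = json.load(f)

dynamic = [q for q in items if q["task_type"] == "Dynamic_Reasoning"]
first = dynamic[0]
# "answer" is the index of the ground-truth choice within "options".
print(first["question"], "->", first["options"][first["answer"]])
```
|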
nyuuzyou/Minecraft-Skins-20M | nyuuzyou | 2025-06-04T11:16:46Z | 0 | 0 | [
"task_categories:image-classification",
"task_categories:text-to-image",
"annotations_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"license:other",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"image"
] | [
"image-classification",
"text-to-image"
] | 2025-06-03T21:17:18Z | null | ---
pretty_name: Minecraft Skins Dataset
size_categories:
- 10M<n<100M
task_categories:
- image-classification
- text-to-image
annotations_creators:
- found
multilinguality:
- monolingual
source_datasets:
- original
configs:
- config_name: default
data_files:
- split: train
path: "dataset_*.jsonl.zst"
default: true
tags:
- image
license:
- other
---
# Dataset Card for Minecraft Skins
### Dataset Summary
This dataset contains 19,973,928 unique Minecraft player skins collected from various sources. Each skin is stored as a base64-encoded image with a unique identifier.
## Dataset Structure
### Data Fields
This dataset includes the following fields:
- `id`: A randomly generated UUID for each skin entry. These UUIDs are not linked to any external APIs or services (such as Mojang's player UUIDs) and serve solely as unique identifiers within this dataset.
- `image`: The skin image encoded in base64 format.
### Data Splits
All examples are in the train split; there is no validation split.
### Data Format
- **Format**: JSONL (JSON Lines) compressed with Zstandard (.jsonl.zst)
- **File Structure**: Multiple files containing approximately 100,000 entries each
- **Total Entries**: 19,973,928 unique skins
- **Image Format**: Base64-encoded PNG images (64x64 pixels, standard Minecraft skin format); see the decoding sketch below
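A minimal decoding sketch (the shard name `dataset_000.jsonl.zst` is a hypothetical example of the `dataset_*.jsonl.zst` pattern; the `zstandard` and `Pillow` packages are assumed to be installed):
```python
import base64
import io
import json

import zstandard  # pip install zstandard
from PIL import Image  # pip install pillow

# Stream-decompress one shard line by line and decode the first entry.
with open("dataset_000.jsonl.zst", "rb") as fh:
    reader = zstandard.ZstdDecompressor().stream_reader(fh)
    lines = io.TextIOWrapper(reader, encoding="utf-8")
    entry = json.loads(next(lines))

skin = Image.open(io.BytesIO(base64.b64decode(entry["image"])))
print(entry["id"], skin.size)  # expected size: (64, 64)
```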
### Disclaimer
This dataset is not affiliated with, endorsed by, or associated with Microsoft Corporation or Mojang Studios. Minecraft is a trademark of Microsoft Corporation and Mojang Studios. This dataset is provided for research and educational purposes only.
|
erdem-erdem/24-puzzle-game-10k-q-t-format-v0.3 | erdem-erdem | 2025-06-04T11:15:14Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T11:15:12Z | null | ---
dataset_info:
features:
- name: num
sequence: int64
- name: target
dtype: int64
splits:
- name: train
num_bytes: 440000
num_examples: 10000
download_size: 31593
dataset_size: 440000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "24-puzzle-game-10k-q-t-format-v0.3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anonloftune/insurance-30-sft-pythia-410m | anonloftune | 2025-06-04T11:14:45Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T11:14:41Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 14394827
num_examples: 16380
- name: validation
num_bytes: 1718891
num_examples: 1980
download_size: 6205752
dataset_size: 16113718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
danthepol/m3-rag-corpus | danthepol | 2025-06-04T11:11:39Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T11:30:16Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 46532474.89776381
num_examples: 54053
download_size: 24426950
dataset_size: 46532474.89776381
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yufanhuangNV/Cosmos-SFT-Nexar-Test | yufanhuangNV | 2025-06-04T11:10:22Z | 0 | 0 | [
"task_categories:visual-question-answering",
"task_categories:video-text-to-text",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"video"
] | [
"visual-question-answering",
"video-text-to-text"
] | 2025-06-04T11:09:20Z | null | ---
configs:
- config_name: nexar-sft
data_files:
- split: understanding
path: nexar-sft/nexar_understanding.json
language:
- en
task_categories:
- visual-question-answering
- video-text-to-text
tags:
- video
license: cc-by-4.0
--- |
kowndinya23/alpaca_eval_prompts | kowndinya23 | 2025-06-04T10:59:34Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T10:59:32Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 254435
num_examples: 805
download_size: 103885
dataset_size: 254435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
french-datasets/rntc_clinical-insights | french-datasets | 2025-06-04T10:49:11Z | 0 | 0 | [
"language:fra",
"region:us"
] | [] | 2025-06-04T10:48:46Z | null | ---
language:
- fra
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset [rntc/clinical-insights](https://huggingface.co/datasets/rntc/clinical-insights). |
erdem-erdem/24-puzzle-game-10k-q-t-format-v0.2 | erdem-erdem | 2025-06-04T10:40:26Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T10:40:24Z | null | ---
dataset_info:
features:
- name: num
sequence: int64
- name: target
dtype: int64
splits:
- name: train
num_bytes: 440000
num_examples: 10000
download_size: 31593
dataset_size: 440000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "24-puzzle-game-10k-q-t-format-v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HaeChan0305/Qwen3-32B-AIME-2023-2024-2025-sampling64 | HaeChan0305 | 2025-06-04T10:29:18Z | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"math"
] | [
"text-generation"
] | 2025-06-04T10:14:30Z | null | ---
dataset_info:
features:
- name: query_index
dtype: int64
- name: response_index
dtype: int64
- name: problem
dtype: string
- name: solution
dtype: 'null'
- name: answer
dtype: string
- name: subject
dtype: 'null'
- name: level
dtype: 'null'
- name: unique_id
dtype: string
- name: thinking
dtype: string
- name: content
dtype: string
- name: thinking_length
dtype: int64
- name: content_length
dtype: int64
- name: correct
dtype: bool
splits:
- name: train
num_bytes: 55796417
num_examples: 1536
download_size: 21968184
dataset_size: 55796417
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- en
tags:
- math
size_categories:
- 1K<n<10K
---
- Model: Qwen3-32B
- Original Dataset: the first 24 queries in AIME2023 (ran out of time, so the remaining queries were not processed.)
- Sampling Size: 64
- `correct`: computed by the code at this link (https://github.com/LeapLabTHU/Absolute-Zero-Reasoner/blob/master/absolute_zero_reasoner/rewards/math_utils.py) |
ihsanbasheer/legal-docs-images-labels | ihsanbasheer | 2025-06-04T10:25:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T10:25:30Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 101742015.554
num_examples: 1237
download_size: 113372719
dataset_size: 101742015.554
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
french-datasets/ahmadSiddiqi_amazon_reviews_fr | french-datasets | 2025-06-04T10:15:39Z | 0 | 0 | [
"task_categories:text-classification",
"language:fra",
"region:us"
] | [
"text-classification"
] | 2025-06-04T10:12:07Z | null | ---
language:
- fra
viewer: false
task_categories:
- text-classification
---
This repository is empty; it was created to improve the discoverability of the dataset [ahmadSiddiqi/amazon_reviews_fr](https://huggingface.co/datasets/ahmadSiddiqi/amazon_reviews_fr). |
coolroman/15_OID_0 | coolroman | 2025-06-04T10:10:54Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T08:50:23Z | null | ---
license: apache-2.0
---
|
rajivmehtapy/highland_json_ds | rajivmehtapy | 2025-06-04T10:01:50Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T09:53:54Z | null | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: detail
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 81416
num_examples: 131
download_size: 34809
dataset_size: 81416
---
|
dwb2023/azure-ai-engineer-golden-dataset | dwb2023 | 2025-06-04T09:56:52Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T18:38:26Z | null | ---
dataset_info:
features:
- name: user_input
dtype: string
- name: reference_contexts
sequence: string
- name: reference
dtype: string
- name: synthesizer_name
dtype: string
splits:
- name: train
num_bytes: 223514
num_examples: 34
download_size: 28373
dataset_size: 223514
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kristaller486/tmp_ds | kristaller486 | 2025-06-04T09:49:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T09:49:21Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt_tokens
dtype: int64
- name: answer_tokens
dtype: int64
- name: cluster
dtype: int64
- name: prompt_lang
dtype: string
- name: answer_lang
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 387179317
num_examples: 86180
download_size: 180444089
dataset_size: 387179317
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Anjan9320/20250604151632 | Anjan9320 | 2025-06-04T09:46:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T09:46:53Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 5932065.0
num_examples: 10
download_size: 4977868
dataset_size: 5932065.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ziadrone/datasetcreation-tes5 | ziadrone | 2025-06-04T09:44:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T09:44:49Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: num_tokens
dtype: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 107870
num_examples: 30
download_size: 58494
dataset_size: 107870
---
# Dataset Card for "datasetcreation-tes5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mdhasnainali/job-html-to-json | mdhasnainali | 2025-06-04T09:26:09Z | 210 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-25T11:26:53Z | null | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: html
dtype: string
- name: json
struct:
- name: application_info
struct:
- name: apply_url
dtype: string
- name: contact_email
dtype: string
- name: deadline
dtype: string
- name: benefits
sequence: string
- name: cloud_providers
sequence: string
- name: databases
sequence: string
- name: department
dtype: string
- name: employment_type
dtype: string
- name: experience_level
dtype: string
- name: job_id
dtype: string
- name: language_requirements
sequence: string
- name: location
struct:
- name: city
dtype: string
- name: country
dtype: string
- name: hybrid
dtype: bool
- name: remote
dtype: bool
- name: state
dtype: string
- name: nice_to_have
sequence: string
- name: posted_date
dtype: string
- name: programming_languages
sequence: string
- name: qualifications
struct:
- name: certifications
sequence: string
- name: education_level
dtype: string
- name: fields_of_study
dtype: string
- name: recruitment_process
sequence: string
- name: requirements
sequence: string
- name: responsibilities
sequence: string
- name: salary
struct:
- name: currency
dtype: string
- name: max
dtype: float64
- name: min
dtype: float64
- name: period
dtype: string
- name: title
dtype: string
- name: tools
sequence: string
- name: work_schedule
dtype: string
- name: years_of_experience
struct:
- name: max
dtype: float64
- name: min
dtype: float64
- name: filename
dtype: string
splits:
- name: train
num_bytes: 53805084.25297892
num_examples: 5400
- name: test
num_bytes: 548014.7470210816
num_examples: 55
download_size: 28261588
dataset_size: 54353099.0
---
|
myfi/parser_dataset_sgpt_v3.4 | myfi | 2025-06-04T09:23:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T09:23:39Z | null | ---
dataset_info:
features:
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 11034506
num_examples: 1924
download_size: 1087613
dataset_size: 11034506
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-ADA-batch-30 | ChavyvAkvar | 2025-06-04T09:12:36Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T09:11:28Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923453990
num_examples: 1000
download_size: 924469330
dataset_size: 923453990
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RasmusP/1armmovement | RasmusP | 2025-06-04T09:11:34Z | 112 | 0 | [
"task_categories:robotics",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-17T13:13:03Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# 1armmovement
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
A113NW3I/TIIF-Bench-Data | A113NW3I | 2025-06-04T08:58:58Z | 293 | 4 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2506.02161",
"region:us"
] | [] | 2025-05-22T10:57:35Z | null | ---
license: mit
paperswithcode:
- arxiv:2506.02161
---
We release the images generated by the proprietary models evaluated in [“🔍TIIF-Bench: How Does Your T2I Model Follow Your Instructions?”](https://arxiv.org/abs/2506.02161).
Produced from carefully crafted, high-quality prompts, these images are a valuable asset that can benefit the open-source community across a variety of applications🔥.
|
nqzfaizal77ai/nqzanime-multiple-character-512 | nqzfaizal77ai | 2025-06-04T08:56:38Z | 113 | 0 | [
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-24T04:03:42Z | null | ---
license: cc-by-nc-4.0
---
This is a collection dataset of images extracted from the following anime:
* angel beats (available only up to episode 2)
* argevollen (available only up to episode 2)
* asterisk war
* azur lane
* baby steps
* black bullet
* break blade
* btooom
* chrome shelled regios (available only up to episode 2)
* clannad
* classroom crisis
* classroom of the elite
* code geass lelouch rebellion
* darling in the franxx
* date a live
* death note
* devil survivor 2
* diamond no ace
* egao no daika
* full metal panic
* gargantia
* guilty crown
* hanebado
* heavy object
* highschool dxd
* highschool of the dead
* hinomaruzumou
* hyouka
* kantai collection
* knight in area
* k-on
* kyoukai no kanata
* legend of the galactic heroes
* little buster
* magical girl spec ops asuka
* majestic prince (available only up to episode 2)
* mahouka koukou no rettousei
* mobile suit gundam 00
* mobile suit gundam: iron-blooded orphans
* oregairu
* oreshura
* oresuki
* phantasy star
* rakudai kishi no cavalry
* sakurasau no pet na kanojo
* steins gate
* strike the blood
* suzumiya haruhi
* taboo tattoo
* toaru kagaku no accelerator
* toaru kagaku no magical index
* toaru kagaku no railgun
* unbreakable machine doll
* upotte
* valvrave the liberator
* zenonzard
* zetsuen no tempest
* z/x ignition
Some additional images were collected from anime related to work, school, law, modern military, science, sports, martial arts, and sci-fi. |
allenai/reward-bench-2 | allenai | 2025-06-04T08:53:38Z | 189 | 8 | [
"task_categories:question-answering",
"language:en",
"license:odc-by",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.01937",
"region:us"
] | [
"question-answering"
] | 2025-05-30T22:48:39Z | null | ---
language:
- en
license: odc-by
size_categories:
- 1K<n<10K
task_categories:
- question-answering
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
sequence: string
- name: rejected
sequence: string
- name: num_correct
dtype: int64
- name: num_incorrect
dtype: int64
- name: total_completions
dtype: int64
- name: models
sequence: string
- name: subset
dtype: string
- name: additional_metadata
struct:
- name: category
dtype: string
- name: correct
dtype: string
- name: index
dtype: float64
- name: instruction_id_list
sequence: string
- name: label
dtype: string
- name: method
dtype: string
- name: models
sequence: string
- name: prompt_norm
dtype: string
- name: subcategory
dtype: string
- name: valid
dtype: float64
splits:
- name: test
num_bytes: 13772499
num_examples: 1865
download_size: 6973189
dataset_size: 13772499
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
<!-- <img src="https://huggingface.co/spaces/allenai/reward-bench/resolve/main/src/logo.png" alt="RewardBench Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> -->
[Code](https://github.com/allenai/reward-bench) | [Leaderboard](https://huggingface.co/spaces/allenai/reward-bench) | [Results](https://huggingface.co/datasets/allenai/reward-bench-2-results) | [Paper](https://arxiv.org/abs/2506.01937)
# RewardBench 2 Evaluation Dataset Card
The RewardBench 2 evaluation dataset is the new version of RewardBench, built on unseen human data and designed to be substantially more difficult! RewardBench 2 evaluates the capabilities of reward models over the following categories:
1. **Factuality** (*NEW!*): Tests the ability of RMs to detect hallucinations and other basic errors in completions.
2. **Precise Instruction Following** (*NEW!*): Tests the ability of RMs to judge whether text follows precise instructions, such as "Answer without the letter u".
3. **Math**: Tests RMs' abilities at math, on open-ended human prompts ranging from middle school physics and geometry to college-level chemistry, calculus, combinatorics, and more.
4. **Safety**: Tests RMs' abilities to correctly comply with or refuse prompts related to harmful use cases as well as general compliance behaviors.
5. **Focus**: Tests RMs' ability to detect high-quality, on-topic answers to general user queries.
6. **Ties** (*NEW*!): This new type of subset tests the robustness of RMs in domains with many possible similar answers. For example, the question "Name a color of the rainbow" has seven possible correct answers and infinitely many incorrect ones.
The RewardBench 2 leaderboard averages over these six subsets.
For the first five categories, the scoring for RewardBench 2 evaluates success as whether the score of a prompt-chosen pair is greater than the score of *three* prompt-rejected pairs.
The "Ties" score is a weighted score of accuracy (as measured by *all* valid correct answers being scored higher than *all* incorrect answers) and whether the reward margin between correct and incorrect answers exceeds that of the highest and lowest-scored correct responses. This metric rewards not only correctness, but also a model's ability to prioritize correct answers over incorrect ones more strongly than it distinguishes between equally valid correct responses.
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/reward-bench/main-fig-hor.png" alt="RewardBench 2 Flow" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
## Dataset Construction Summary
| Domain | Count | Prompt Source | Method of generating completions | Completion Filtering |
|--------|-------|---------------|----------------------------------|---------------------|
| Factuality | 475 | Human | Both | Multi-LM-as-a-judge |
| Precise IF | 160 | Human | Natural | Verifier functions |
| Math | 183 | Human | Natural | Majority voting |
| Safety | 450 | CoCoNot | Both | LM-as-a-judge & rubrics |
| Focus | 495 | Human | System Prompt Variation | N/A |
| Ties | 102 | Manual | System Prompt Variation | Manual verification |
## Dataset Details
Each sample in the dataset has the following fields.
Note that the dataset is single-turn:
* `prompt` (`str`): the instruction given in the various test sets.
* `chosen` (`list[str]`): the chosen response(s) (1 chosen response for all subsets but ties)
* `rejected` (`list[str]`): the rejected responses (3 rejected responses for all subsets but ties)
* `num_correct` (`int`): the number of chosen responses
* `num_incorrect` (`int`): the number of rejected responses
* `total_completions` (`int`): the total number of responses
* `models` (`list[str]`): a list of models that the chosen and rejected responses are generated from, respectively
* `subset` (`str`): the subset the datapoint is part of.
* `id` (`int`): an incremented id for every prompt in the benchmark.
To select a specific subset, use the Hugging Face Datasets `.filter` functionality.
```python
dataset = dataset.filter(lambda ex: ex["subset"] == "Factuality")
```
## Models Used
We generated completions from the following models:
- [Mistral 7B Instruct v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) (Apache 2.0)
- [Tulu 3 8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) (Llama 3.1 Community License Agreement)
- [Tulu 3 70B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B) (Llama 3.1 Community License Agreement)
- [Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) (Llama 3.1 Community License Agreement)
- [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) (Llama 3.1 Community License Agreement)
- [Llama 3.2 1B Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) (Llama 3.2 Community License Agreement)
- [Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) (Llama 2 Community License Agreement)
- [Tulu 2 70B](https://huggingface.co/allenai/tulu-2-dpo-70b) (Ai2 ImpACT Low Risk License)
- [Qwen2.5 72B Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) (Qwen License Agreement)
- [Qwen2.5 7B Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) (Apache 2.0)
- [Qwen2.5 14B Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) (Apache 2.0)
- [Qwen2.5 0.5B Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) (Apache 2.0)
- [Qwen2.5 Math 72B Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-72B-Instruct) (Qwen License Agreement)
- [Qwen2.5 Math 7B Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) (Apache 2.0)
- [Deepseek Math 7B RL](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl) (This model is licensed under the Deepseek License. Any use of the outputs from this model must be in accordance with the use restrictions in the [Deepseek License](https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL).)
- [OLMoE 1B 7B 0924 Instruct](https://huggingface.co/allenai/OLMoE-1B-7B-0924) (Apache 2.0)
- [Dolphin 2.0 Mistral 7b](https://huggingface.co/cognitivecomputations/dolphin-2.0-mistral-7b) (Apache 2.0)
- [Zephyr 7b Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) (MIT License)
- GPT-4o (Outputs produced by GPT-4 are subject to OpenAI's [terms of use](https://openai.com/policies/row-terms-of-use/))
- Claude 3.5 Sonnet (Outputs produced by Claude are subject to Anthropic [terms of service](https://www.anthropic.com/legal/consumer-terms) and [usage policy](https://www.anthropic.com/legal/aup))
## License
This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). This dataset includes output data generated from third party models that are subject to separate terms governing their use.
## Trained Reward Models
We also trained and released several reward models— check out the [RewardBench 2 Collection](https://huggingface.co/collections/allenai/reward-bench-2-683d2612a4b3e38a3e53bb51) to use them!
## Citation
```
@misc{malik2025rewardbench2advancingreward,
title={RewardBench 2: Advancing Reward Model Evaluation},
author={Saumya Malik and Valentina Pyatkin and Sander Land and Jacob Morrison and Noah A. Smith and Hannaneh Hajishirzi and Nathan Lambert},
year={2025},
eprint={2506.01937},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.01937},
}
``` |
Yofuria/llama3-ultrafeedback-armorm-swapped-40 | Yofuria | 2025-06-04T08:51:45Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T08:46:02Z | null | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: all_generated_responses
sequence: string
- name: all_rm_scores
sequence: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 882657158
num_examples: 59876
- name: test
num_bytes: 28683892
num_examples: 1961
download_size: 419146669
dataset_size: 911341050
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Sicong/caption_rl | Sicong | 2025-06-04T08:50:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T07:49:10Z | null | ---
dataset_info:
features:
- name: images
sequence: image
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1327022766.28
num_examples: 3728
- name: validation
num_bytes: 67436706.0
num_examples: 200
download_size: 1380617170
dataset_size: 1394459472.28
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
anonloftune/insurance-30-loftune-j | anonloftune | 2025-06-04T08:37:17Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T08:37:12Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 236720985
num_examples: 136462
- name: validation
num_bytes: 28185680
num_examples: 17053
download_size: 11351608
dataset_size: 264906665
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Toycat/scLibrary | Toycat | 2025-06-04T08:36:31Z | 370 | 5 | [
"license:mit",
"arxiv:2405.06708",
"region:us"
] | [] | 2024-12-29T15:39:44Z | null | ---
license: mit
---
The dataset scLibrary is the pre-training dataset used by the LangCell model.
You can use `git-lfs` to download `sclibrary.dataset` from this repository, and then use the following code to load the data:
```python
from datasets import load_from_disk
sclibrary = load_from_disk("/path/to/sclibrary.dataset")
```
Model github: https://github.com/PharMolix/LangCell
Paper: https://arxiv.org/abs/2405.06708 |
PAphospho/orange-circle-black-box | PAphospho | 2025-06-04T08:34:10Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-04T08:33:00Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# orange-circle-black-box
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
LockeLamora2077/hayabusa_llm_report_forensic_reasoning | LockeLamora2077 | 2025-06-04T08:33:42Z | 88 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-30T08:15:19Z | null | ---
license: apache-2.0
---
|
siqiLi/eval_act_so100_test_12 | siqiLi | 2025-06-04T08:31:42Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial",
"eval"
] | [
"robotics"
] | 2025-06-04T08:30:20Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
- eval
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 7060,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
pepijn223/lekiwi1749025613 | pepijn223 | 2025-06-04T08:27:28Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-04T08:27:25Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi_client",
"total_episodes": 1,
"total_frames": 250,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
9
],
"names": [
"arm_shoulder_pan.pos",
"arm_shoulder_lift.pos",
"arm_elbow_flex.pos",
"arm_wrist_flex.pos",
"arm_wrist_roll.pos",
"arm_gripper.pos",
"x.vel",
"y.vel",
"theta.vel"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": [
"arm_shoulder_pan.pos",
"arm_shoulder_lift.pos",
"arm_elbow_flex.pos",
"arm_wrist_flex.pos",
"arm_wrist_roll.pos",
"arm_gripper.pos",
"x.vel",
"y.vel",
"theta.vel"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
640,
480,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 640,
"video.width": 480,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
aichamrf/traduccionjuridica | aichamrf | 2025-06-04T08:20:28Z | 0 | 0 | [
"task_categories:translation",
"language:es",
"language:en",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"translation"
] | 2025-06-04T08:11:34Z | null | ---
task_categories:
- translation
language:
- es
- en
--- |
oulianov/my_dataset_16 | oulianov | 2025-06-04T08:13:23Z | 387 | 0 | [
"task_categories:robotics",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-05T11:59:15Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# my_dataset_16
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
danthepol/m3-rag-training | danthepol | 2025-06-04T08:04:26Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T07:56:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 44538501
num_examples: 55049
download_size: 25166641
dataset_size: 44538501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
svjack/Xiang-Lookalike-Videos-Splited | svjack | 2025-06-04T08:02:53Z | 0 | 0 | [
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-06-04T08:00:40Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
--- |
pepijn223/lekiwi1749024087 | pepijn223 | 2025-06-04T08:01:45Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-04T08:01:41Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi_client",
"total_episodes": 1,
"total_frames": 250,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
9
],
"names": [
"arm_shoulder_pan.pos",
"arm_shoulder_lift.pos",
"arm_elbow_flex.pos",
"arm_wrist_flex.pos",
"arm_wrist_roll.pos",
"arm_gripper.pos",
"x.vel",
"y.vel",
"theta.vel"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": [
"arm_shoulder_pan.pos",
"arm_shoulder_lift.pos",
"arm_elbow_flex.pos",
"arm_wrist_flex.pos",
"arm_wrist_roll.pos",
"arm_gripper.pos",
"x.vel",
"y.vel",
"theta.vel"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
640,
480,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 640,
"video.width": 480,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
yufanhuangNV/CS-SFT-Nexar-Test | yufanhuangNV | 2025-06-04T08:01:07Z | 0 | 0 | [
"task_categories:visual-question-answering",
"task_categories:video-text-to-text",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"video"
] | [
"visual-question-answering",
"video-text-to-text"
] | 2025-06-04T07:54:29Z | null | ---
configs:
- config_name: nexar-sft
data_files:
- split: understanding
path: nexar-sft/nexar_understanding.json
language:
- en
task_categories:
- visual-question-answering
- video-text-to-text
tags:
- video
license: cc-by-4.0
--- |
lenankamp/SummaTheologica-UserAssistantPairing | lenankamp | 2025-06-04T07:58:09Z | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | 2025-06-04T07:38:00Z | null | ---
license: cc0-1.0
task_categories:
- text-generation
language:
- en
---
# Summa Theologica Q&A Dataset
Welcome to the **Summa Theologica Q&A Dataset**, where medieval theology meets modern AI in a glorious, slightly absurd clash of eras. This dataset takes Thomas Aquinas' *Summa Theologica*—a 13th-century theological heavyweight—and chops it up into neat little user/assistant pairings, perfect for training your AI to pontificate on the nature of paradise or argue about angels on pinheads.
## What's in the Box?
- **Format**: JSON, because apparently that's the only way we store knowledge these days.
- **Content**: Hundreds of question-answer pairs, where:
- **User**: Asks profound questions like "Is paradise a corporeal place?" or other brain-benders straight from Aquinas' quill.
- **Assistant**: Responds with Aquinas' verbose, occasionally sassy reasoning, often citing Augustine or Isidore like the ultimate academic name-dropper.
- **Example**:
```json
{
"User": "Whether Paradise Is a Corporeal Place?",
"Assistant": "On the contrary, Augustine says (Gen. ad lit. viii, 1): 'Three general opinions prevail about paradise...' [and so on, because brevity was not Aquinas' forte]."
}
```
## Why This Dataset Exists
Because someone thought, "Hey, what if we turned a 700-year-old theological tome into a chatbot's training fodder?" And here we are. Use it to:
- Train your AI to sound like a medieval scholar with a penchant for overexplaining.
- Generate the most erudite chatbot responses this side of the 13th century.
- Confuse your friends by dropping "corporeal vs. spiritual paradise" debates at parties.
## How to Use It
1. Clone this dataset from Hugging Face (you know the drill).
2. Feed it into your favorite language model (a minimal loading sketch follows below). Bonus points if it starts citing Aristotle unprompted.
3. Watch your AI wax poetic about lunar circles and the "right hand of the heavens."
4. Regret nothing, because life's too short to not have fun with theology.
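For step 2, here's a minimal loading sketch using the `datasets` library (the `User`/`Assistant` column names follow the example above; the `train` split name is an assumption about this repo's layout):
```python
from datasets import load_dataset

# Minimal sketch: pull the Q&A pairs straight from the Hub.
ds = load_dataset("lenankamp/SummaTheologica-UserAssistantPairing", split="train")

print(ds[0]["User"])             # e.g. "Whether Paradise Is a Corporeal Place?"
print(ds[0]["Assistant"][:200])  # Aquinas, mercifully truncated
```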
## Caveats
- **Length**: Aquinas didn't believe in short answers. Some responses are longer than your average TikTok attention span.
- **Tone**: Expect a mix of divine wisdom, philosophical flexing, and the occasional medieval mic-drop.
- **Relevance**: If you're looking for practical data, like stock prices or cat memes, this ain't it.
## License
Public domain, because Aquinas has been dead for a while, and we're pretty sure he won't sue.
## Contributing
Got more medieval theology to add? Found a typo in our parsing of the *Summa*? Submit a pull request, and we'll consider canonizing you (just kidding about that last part... or are we?).
## Acknowledgments
- Thomas Aquinas, for writing the *Summa Theologica* and giving us something to parse.
- Augustine and Isidore, for being the most-quoted wingmen in history.
- The brave souls who read this README and still decide to download.
*Now go forth and make your AI debate the nature of paradise. Or, you know, just use it to sound smart at trivia night.* |
One-RL-to-See-Them-All/Orsta-Data-47k | One-RL-to-See-Them-All | 2025-06-04T07:54:00Z | 236 | 7 | [
"task_categories:image-text-to-text",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2505.18129",
"arxiv:2307.12813",
"arxiv:1612.06890",
"arxiv:2002.10215",
"region:us",
"vision-language",
"multimodal",
"reinforcement-learning",
"visual-reasoning",
"visual-perception",
"V-Triune",
"Orsta"
] | [
"image-text-to-text"
] | 2025-05-26T02:50:12Z | null | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
tags:
- vision-language
- multimodal
- reinforcement-learning
- visual-reasoning
- visual-perception
- V-Triune
- Orsta
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: train_chart_chartqapro_498
data_files:
- split: train
path: train_chart_chartqapro_498/train-*
dataset_info:
- config_name: default
features:
- name: data_source
dtype: string
- name: images
sequence: image
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: answer
dtype: string
- name: ground_truth
dtype: string
- name: accuracy_ratio
dtype: float32
- name: format_ratio
dtype: float32
- name: verifier
dtype: string
- name: verifier_parm
struct:
- name: det_verifier_normalized
dtype: bool
- name: det_reward_ratio
struct:
- name: iou_max_label_first
dtype: float32
- name: iou_max_iou_first
dtype: float32
- name: iou_completeness
dtype: float32
- name: map
dtype: float32
- name: map50
dtype: float32
- name: map75
dtype: float32
- name: extra_info
struct:
- name: id
dtype: string
- name: image_path
dtype: string
splits:
- name: train
num_bytes: 39912717.0
num_examples: 498
- name: test
num_bytes: 15158256.0
num_examples: 176
download_size: 46636238
dataset_size: 55070973.0
- config_name: train_chart_chartqapro_498
features:
- name: data_source
dtype: string
- name: images
sequence: image
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: answer
dtype: string
- name: ground_truth
dtype: string
- name: accuracy_ratio
dtype: float32
- name: format_ratio
dtype: float32
- name: verifier
dtype: string
- name: verifier_parm
struct:
- name: det_verifier_normalized
dtype: bool
- name: det_reward_ratio
struct:
- name: iou_max_label_first
dtype: float32
- name: iou_max_iou_first
dtype: float32
- name: iou_completeness
dtype: float32
- name: map
dtype: float32
- name: map50
dtype: float32
- name: map75
dtype: float32
- name: extra_info
struct:
- name: id
dtype: string
- name: image_path
dtype: string
splits:
- name: train
num_bytes: 39912717.0
num_examples: 498
download_size: 33774705
dataset_size: 39912717.0
---
# Orsta-Data-47k Dataset
* 🐙 **GitHub Repo:** [MiniMax-AI/One-RL-to-See-Them-All](https://github.com/MiniMax-AI/One-RL-to-See-Them-All)
* 📜 **Paper (arXiv):** [V-Triune: One RL to See Them All (arXiv:2505.18129)](https://arxiv.org/abs/2505.18129)
## Dataset Description 📖
**Orsta-Data-47k** is a specialized dataset curated for the post-training of Vision-Language Models (VLMs) using our [V-Triune](https://github.com/MiniMax-AI/One-RL-to-See-Them-All) unified reinforcement learning system. Its primary purpose is to enable robust joint training across a diverse spectrum of both visual reasoning and visual perception tasks, powering models like [Orsta](https://huggingface.co/collections/One-RL-to-See-Them-All/one-rl-to-see-them-all-6833d27abce23898b2f9815a) to achieve advanced multimodal capabilities.
This dataset is a carefully selected aggregation from 18 publicly available datasets, refined through a rigorous filtering process to ensure high quality and suitability for RL-based fine-tuning.
## Tasks Covered 🎯
The dataset is structured to cover eight principal task categories, balanced between reasoning and perception:
* **Visual Reasoning Tasks 🤔:**
* Mathematics (Math QA)
* Puzzle Solving (Visual Puzzles)
* Science Question Answering (Science QA)
* Chart Understanding (Chart QA)
* **Visual Perception Tasks 👁️:**
* Object Detection
* Visual Grounding
* Object Counting
* Optical Character Recognition (OCR)
## Data Curation Process 🛠️
To create a high-quality corpus for effective RL post-training, we implemented a comprehensive two-stage data curation pipeline:
1. **Rule-based Filtering:** An initial filtering stage applied a set of predefined rules to the source datasets. These rules were tailored differently for reasoning and perception tasks, aiming to remove noisy samples, questions prone to "hacking" (e.g., certain multiple-choice formats), and problematic answer formats. For perception tasks, this also involved standardizing coordinate systems and filtering based on object size or count.
2. **Difficulty-based Filtering:** Following rule-based cleaning, a difficulty filter was applied. This stage removed samples deemed too easy (e.g., already solvable by baseline models) or excessively hard, ensuring that the remaining data provides a meaningful and efficient learning signal for the models.
This meticulous process resulted in a high-quality collection of approximately **47,700 samples**. To address potential dataset imbalances, data for certain tasks (e.g., puzzles) was strategically duplicated to ensure adequate representation.
## Dataset Composition & Structure 📊
* **Total Samples:** ~47.7K
* **Task Categories:** 8 (4 reasoning, 4 perception)
* **Aggregated From:** 18 distinct public datasets
* **Content Breakdown:**
* Visual Perception Samples: ~20.6K
* Visual Reasoning Samples: ~27.1K
* **Interaction Format:** The data primarily consists of single-image, single-turn conversational interactions (e.g., an image paired with a question and its corresponding answer/grounding).
* **Storage Format:** All curated data is stored in the efficient Parquet format.
## Intended Use & Training 🚀
This dataset is designed for use with the [V-Triune](https://github.com/MiniMax-AI/One-RL-to-See-Them-All) framework for reinforcement learning-based post-training of VLMs. In the training of [Orsta](https://huggingface.co/collections/One-RL-to-See-Them-All/one-rl-to-see-them-all-6833d27abce23898b2f9815a) models, all samples from this dataset were uniformly mixed and utilized.
## Dataset Usage
This section outlines how to download and use the Orsta-Data-47k dataset.
### Downloading the Dataset
You can download the dataset directly from the Hugging Face Hub using the `huggingface-cli` tool. Make sure you have `huggingface_hub` installed (`pip install huggingface_hub`).
Execute the following command in your terminal:
```bash
huggingface-cli download --repo-type dataset --resume-download One-RL-to-See-Them-All/Orsta-Data-47k --local-dir Orsta-Data-47k
```
This command will download all dataset files into a local directory named `Orsta-Data-47k`. The `--resume-download` flag is useful for resuming downloads if interrupted.
### Dataset Structure
Once downloaded, the dataset will have the following structure within the `Orsta-Data-47k` directory. All data files are in the Parquet (`.parquet`) format.
```
Orsta-Data-47k/
├── test/
│ ├── test_chart_megabench_176.parquet
......
│ └── test_science_megabench_91.parquet
└── train/
├── train_chart_chartqapro_498.parquet
......
└── train_science_virl39k_2539.parquet
```
### File Naming Convention
The files within the `train/` and `test/` directories follow this naming convention:
`{split}_{task_name}_{source_name}_{num}.parquet`
Where:
* `{split}`: Indicates the dataset split, either `train` or `test`.
* `{task_name}`: Specifies the general task category.
* `{source_name}`: Denotes the specific benchmark or origin of the data.
* `{num}`: Represents the number of samples contained within that Parquet file.
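As an illustration, here is a small hypothetical helper (not shipped with the dataset) that recovers these fields from a filename, assuming `{split}` and `{task_name}` are single tokens:
```python
from pathlib import Path

def parse_filename(path: str) -> dict:
    # e.g. "train_chart_chartqapro_498.parquet"
    stem = Path(path).stem
    split, task, rest = stem.split("_", 2)
    source, num = rest.rsplit("_", 1)  # source names may contain underscores
    return {"split": split, "task": task, "source": source, "num": int(num)}

print(parse_filename("train/train_chart_chartqapro_498.parquet"))
# {'split': 'train', 'task': 'chart', 'source': 'chartqapro', 'num': 498}
```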
### Purpose of Each Split
* **`train/` directory**: These files constitute the training corpus for the Orsta model.
* **`test/` directory**: These files contain samples specifically curated for online evaluation of the model's performance on different tasks *during* the training process. Analyzing performance on these samples helps in diagnosing the training status and understanding the model's learning progression for each task category.
### Data Format
```python
{
'data_source': Value(dtype='string', id=None),
'images': Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None),
'prompt': [{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}],
'ability': Value(dtype='string', id=None),
'reward_model': {
'answer': Value(dtype='string', id=None),
'ground_truth': Value(dtype='string', id=None),
'accuracy_ratio': Value(dtype='float32', id=None),
'format_ratio': Value(dtype='float32', id=None),
'verifier': Value(dtype='string', id=None),
'verifier_parm': {
'det_verifier_normalized': Value(dtype='bool', id=None),
'det_reward_ratio': {
'iou_max_label_first': Value(dtype='float32', id=None),
'iou_max_iou_first': Value(dtype='float32', id=None),
'iou_completeness': Value(dtype='float32', id=None),
'map': Value(dtype='float32', id=None),
'map50': Value(dtype='float32', id=None),
'map75': Value(dtype='float32', id=None)
}
}
},
'extra_info': {'id': Value(dtype='string', id=None), 'image_path': Value(dtype='string', id=None)}
}
```
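As an illustration, a minimal sketch for loading a single local shard with the 🤗 `datasets` library and inspecting these fields (the shard name matches the directory listing above; adjust the path to your download location):
```python
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files="Orsta-Data-47k/train/train_chart_chartqapro_498.parquet",
    split="train",
)

sample = ds[0]
print(sample["data_source"])               # origin of this sample
print(sample["ability"])                   # task category
print(sample["prompt"][0]["content"])      # user-side prompt text
print(sample["reward_model"]["verifier"])  # verifier used for the RL reward
```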
## 📊 Data Sources and Composition
The **Orsta-Data-47k** dataset is constructed entirely from publicly available, open-source datasets. These have been aggregated and curated to create a collection suitable for VLM post-training on both visual reasoning and perception tasks.
Orsta-Data-47k is compiled from 18 distinct public datasets. The primary contributing sources for each task category are as follows:
* **Math**: [mm_math](https://huggingface.co/datasets/THU-KEG/MM_Math), [geometry3k](https://huggingface.co/datasets/hiyouga/geometry3k), [mmk12](https://huggingface.co/datasets/FanqingM/MMK12)
* **Puzzle**: [PuzzleVQA](https://huggingface.co/datasets/declare-lab/PuzzleVQA), [AlgoPuzzleVQA](https://huggingface.co/datasets/declare-lab/AlgoPuzzleVQA), [VisualPuzzles](https://huggingface.co/datasets/neulab/VisualPuzzles)
* **Science**: [ScienceQA](https://huggingface.co/datasets/lmms-lab/ScienceQA), [SciVQA](https://huggingface.co/datasets/katebor/SciVQA), [ViRL39K-Science](https://huggingface.co/datasets/TIGER-Lab/ViRL39K)
* **Chart**: [ChartQAPro](https://huggingface.co/datasets/ahmed-masry/ChartQAPro), [ChartX](https://huggingface.co/datasets/U4R/ChartX), [Table-VQA-Bench](https://huggingface.co/datasets/terryoo/TableVQA-Bench), [ViRL39K-Chart](https://huggingface.co/datasets/TIGER-Lab/ViRL39K)
* **Detection**: [V3Det](https://arxiv.org/abs/2307.12813), [Object365](https://www.objects365.org/overview.html)
* **Grounding**: [D^3](https://arxiv.org/abs/2307.12813)
* **Counting**: [CLEVR](https://arxiv.org/abs/1612.06890)
* **OCR**: [LLaVA-OV Data](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), [EST-VQA](https://arxiv.org/abs/2002.10215)
For detailed information and licensing for each source dataset, please refer to their original publications and repositories. Our specific aggregation and curation methodology for Orsta-Data-47k is described in our paper: [V-Triune: One RL to See Them All (arXiv:2505.18129)](https://arxiv.org/abs/2505.18129).
## Citation Information 📜
If you use the Orsta-Data-47k dataset or our V-Triune framework in your research, please cite our accompanying paper:
```bibtex
@article{ma2025one,
title={One RL to See Them All: Visual Triple Unified Reinforcement Learning},
author={Ma, Yan and Du, Linge and Shen, Xuyang and Chen, Shaoxiang and Li, Pengfei and Ren, Qibing and Ma, Lizhuang and Dai, Yuchao and Liu, Pengfei and Yan, Junjie},
journal={arXiv preprint arXiv:2505.18129},
year={2025}
}
``` |
ChavyvAkvar/synthetic-trades-ADA-batch-25 | ChavyvAkvar | 2025-06-04T07:52:23Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T07:51:18Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923454142
num_examples: 1000
download_size: 924505797
dataset_size: 923454142
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ByteDance-Seed/BM-6M | ByteDance-Seed | 2025-06-04T07:38:27Z | 1,647 | 4 | [
"task_categories:image-to-image",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"arxiv:2506.03107",
"region:us"
] | [
"image-to-image"
] | 2025-05-27T06:40:16Z | null | ---
license: cc0-1.0
dataset_info:
features:
- name: image_id
dtype: string
- name: src_img
dtype: image
- name: tgt_img
dtype: image
- name: edit_prompt
dtype: string
- name: edit_prompt_rewrite_instruction
dtype: string
- name: src_img_caption
dtype: string
- name: tgt_img_caption
dtype: string
splits:
- name: train
num_bytes: 45095600735.92
num_examples: 780308
download_size: 44625567266
dataset_size: 45095600735.92
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- image-to-image
size_categories:
- 1M<n<10M
---
[](https://arxiv.org/abs/2506.03107)
[](https://boese0601.github.io/bytemorph/)
[](https://huggingface.co/datasets/ByteDance-Seed/BM-Bench)
[](https://huggingface.co/datasets/ByteDance-Seed/BM-6M-Demo)
[](https://huggingface.co/datasets/ByteDance-Seed/BM-6M)
[](https://huggingface.co/spaces/Boese0601/ByteMorpher-Demo)
[](https://huggingface.co/ByteDance-Seed/BM-Model)
[](https://github.com/ByteDance-Seed/BM-code)
# Dataset Card for ByteMorph-6M
The task of editing images to reflect non-rigid motions, such as changes in camera viewpoint, object deformation, human articulation, or complex interactions, represents a significant yet underexplored frontier in computer vision. Current methodologies and datasets often concentrate on static imagery or rigid transformations, thus limiting their applicability to expressive edits involving dynamic movement. To bridge this gap, we present ByteMorph, a substantial benchmark specifically created for instruction-based image editing focused on non-rigid motions. This dataset card contains the example training data subset and instructions for ByteMorph-6M.
## Dataset Details
Original videos are generated by [Seaweed](https://seaweed.video/) and sampled into frames as source-target image editing pairs. These frames are further filtered and captioned by a VLM. For visualization of a subset of the whole dataset, please visit [this repo](https://huggingface.co/datasets/ByteDance-Seed/BM-6M-Demo).
## Intended use
Primary intended uses: The primary use of ByteMorph is research on text-to-image and instruction-based image editing.
Primary intended users: The dataset's primary intended users are researchers and hobbyists in computer vision, image generation, image processing, and AIGC.
## Dataset Structure
```bash
BM-6M
|----subset-1 # We will release this subset soon
|----sample_frames # extracted first and last frames from the video
|----batch_0.tar
|----batch_1.tar
|----...
|----sample_multi_frames # extracted multi frames from the video
|----batch_0.tar
|----batch_1.tar
|----...
|----subset-2 # This subset has been released
|----subset-3 # This subset has been released
|----... # These have been released
|----subset-9 # This subset has been released
```
### How to use ByteMorph-6M
Simply download this dataset with [git-lfs](https://github.com/git-lfs/git-lfs/blob/main/INSTALLING.md). You can also download the subset of the whole dataset.
```bash
git lfs clone https://huggingface.co/datasets/ByteDance-Seed/BM-6M
```
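Alternatively, a minimal sketch using the 🤗 `datasets` library, assuming the default parquet config declared in this card resolves directly from the Hub:
```python
from datasets import load_dataset

# Stream to avoid downloading all ~45 GB up front.
ds = load_dataset("ByteDance-Seed/BM-6M", split="train", streaming=True)

example = next(iter(ds))
print(example["edit_prompt"])       # editing instruction
example["src_img"].save("src.png")  # source frame (PIL image)
example["tgt_img"].save("tgt.png")  # target frame (PIL image)
```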
## Bibtex citation
```bibtex
@misc{chang2025bytemorphbenchmarkinginstructionguidedimage,
title={ByteMorph: Benchmarking Instruction-Guided Image Editing with Non-Rigid Motions},
author={Di Chang and Mingdeng Cao and Yichun Shi and Bo Liu and Shengqu Cai and Shijie Zhou and Weilin Huang and Gordon Wetzstein and Mohammad Soleymani and Peng Wang},
year={2025},
eprint={2506.03107},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.03107},
}
``` |
iantc104/av_aloha_sim_pour_test_tube | iantc104 | 2025-06-04T07:34:56Z | 145 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-02T05:30:44Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 20,
"total_frames": 5990,
"total_tasks": 1,
"total_videos": 120,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.zed_cam_left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.zed_cam_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_cam_left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_cam_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.overhead_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.worms_eye_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"shape": [
21
],
"names": null
},
"observation.environment_state": {
"dtype": "float32",
"shape": [
21
],
"names": null
},
"action": {
"dtype": "float32",
"shape": [
21
],
"names": null
},
"left_eye": {
"dtype": "float32",
"shape": [
2
],
"names": null
},
"right_eye": {
"dtype": "float32",
"shape": [
2
],
"names": null
},
"left_arm_pose": {
"dtype": "float32",
"shape": [
16
],
"names": null
},
"right_arm_pose": {
"dtype": "float32",
"shape": [
16
],
"names": null
},
"middle_arm_pose": {
"dtype": "float32",
"shape": [
16
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
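A minimal loading sketch, assuming the standard `LeRobotDataset` API (see the LeRobot repository for authoritative usage):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("iantc104/av_aloha_sim_pour_test_tube")
frame = dataset[0]
print(frame["observation.state"].shape)  # expected: torch.Size([21])
print(frame["action"].shape)             # expected: torch.Size([21])
```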
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
nguyentranai07/FullyIndicatorReport4 | nguyentranai07 | 2025-06-04T07:22:06Z | 363 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-30T01:48:11Z | null | ---
dataset_info:
features:
- name: Content
dtype: string
- name: Key
dtype: string
splits:
- name: train
num_bytes: 23409810
num_examples: 2000
download_size: 10423127
dataset_size: 23409810
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rubenroy/GammaCorpus-v2-5m | rubenroy | 2025-06-04T07:19:17Z | 0 | 1 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"region:us",
"chat-dataset",
"conversational-ai",
"natural-language-processing",
"ai-generated",
"multiple-turn-dialogue",
"jsonl",
"nlp",
"gammacorpus",
"chat",
"conversational"
] | [
"text-generation"
] | 2025-06-04T07:15:51Z | null | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- chat-dataset
- conversational-ai
- natural-language-processing
- ai-generated
- multiple-turn-dialogue
- jsonl
- nlp
- gammacorpus
- chat
- conversational
pretty_name: GammaCorpus
size_categories:
- 1M<n<10M
---
# GammaCorpus: v2 - 5 Million Lines of Pure Dialogue
## What is it?
The **GammaCorpus v2 5m** dataset consists of 5 million structured multi-turn conversations, where each interaction includes:
- **Input**: A user prompt or question.
- **Output**: A response generated by an AI assistant.
> [!IMPORTANT]
> The dataset files were mistakenly deleted; I'm working to restore them. For now, check out [GammaCorpus v2 1m](https://huggingface.co/datasets/rubenroy/GammaCorpus-v2-1m)
> [!TIP]
> This is the *SECOND* and *LATEST* version of the GammaCorpus dataset. It is a significant improvement over the GammaCorpus v1 dataset collection, with higher-quality conversations and much heavier cleaning.
## Dataset Summary
- **Number of Rows**: 5,000,000
- **Format**: JSONL
- **Language**: English
- **Data Type**: User and AI-generated content
## Dataset Structure
### Data Instances
The dataset is formatted in JSONL, where each line is a JSON object containing a conversation. Below is an example:
```jsonl
{"conversation": [{"input": "What can be seen once in a minute, twice in a moment, and never in a thousand years?", "output": "The letter 'M'."}]}
```
### Data Fields
- **`conversation` (array)**: A list of conversation objects, each containing:
- **`input` (string)**: The user-provided query or prompt.
- **`output` (string)**: The AI-generated response to the input.
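A minimal sketch for iterating over the file line by line (the filename below is a placeholder for your local copy):
```python
import json

with open("gammacorpus_v2_5m.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        for turn in record["conversation"]:
            user_prompt = turn["input"]
            ai_response = turn["output"]
```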
## Considerations for Using the Data
### Biases
As the dataset is generated from user queries and AI responses, it may contain biases inherent in the underlying AI model or reflective of common societal biases. Additionally:
- Some entries may contain NSFW or toxic content.
- Ethical, cultural, and societal biases present in the data could propagate to models trained on it.
We have made a substantial effort with this version of GammaCorpus to filter inappropriate information, but we still strongly recommend that users preprocess the dataset before using it in production environments.
### Other Known Limitations
- Certain topics may be overrepresented or underrepresented based on user query patterns.
- Content diversity may not fully reflect real-world conversational scenarios.
## Additional Information
### Licensing Information
The dataset is released under the **[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)**. Please refer to the license for usage rights and restrictions. |
Lijiaxin0111/M3_VOS | Lijiaxin0111 | 2025-06-04T07:18:14Z | 3 | 0 | [
"task_categories:video-classification",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:image",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.13803",
"region:us",
"CVPR2025",
"video",
"segmentation",
"computer-vision",
"physical",
"M3-VOS"
] | [
"video-classification"
] | 2025-05-31T10:25:15Z | null | ---
configs:
- config_name: default
data_files:
- split: test
path: m3vos_viewer_data_with_paths.jsonl
license: apache-2.0
task_categories:
- video-classification
language:
- en
- zh
tags:
- CVPR2025
- video
- segmentation
- computer-vision
- physical
- M3-VOS
pretty_name: M3-VOS
size_categories:
- n<1K
---
<h2 align="center">
<a href="https://zixuan-chen.github.io/M-cube-VOS.github.io/">[CVPR 2025] M<sup>3</sup>-VOS: Multi-Phase, Multi-Transition, and Multi-Scenery Video Object Segmentation</a></h2>
<h5 align="center">If you like our project, please give us a star ⭐ on GitHub for the latest update. </h5>
## 💡 Description
- **Venue:** CVPR2025
- **Repository:** [🛠️Tool](https://github.com/Lijiaxin0111/SemiAuto-Multi-Level-Annotation-Tool), [🏠Page](https://zixuan-chen.github.io/M-cube-VOS.github.io/)
- **Paper:** [arXiv:2412.13803](https://arxiv.org/abs/2412.13803)
- **Point of Contact:** [Jiaxin Li](mailto:[email protected]), [Zixuan Chen](mailto:[email protected])
### 📁 Structure
This dataset contains annotated videos and images for object segmentation tasks with phase transition information. The directory structure and file descriptions are as follows:
- `meta/`
- `all_core_seqs.txt`: A list of core sequences used in the dataset.
- `all_phase_transition.json`: Metadata describing the phase transition states of target objects.
- `target_object.json`: Contains information about the target objects in each video sequence.
- `data/`
- `Annotations/`: Contains segmentation masks for the annotated target objects.
- `Videos/`: The original video files corresponding to each sequence.
- `JPEGImages/`: Extracted image frames from the videos.
- `ImageSets/`
- `val.txt`: A list of video sequences used for validation.
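A minimal sketch for reading the metadata files, assuming the dataset has been downloaded locally with the layout above:
```python
import json

with open("meta/target_object.json", "r", encoding="utf-8") as f:
    target_objects = json.load(f)  # target-object info per video sequence

with open("meta/all_core_seqs.txt", "r", encoding="utf-8") as f:
    core_seqs = [line.strip() for line in f if line.strip()]

with open("ImageSets/val.txt", "r", encoding="utf-8") as f:
    val_seqs = [line.strip() for line in f if line.strip()]

print(f"{len(core_seqs)} core sequences, {len(val_seqs)} validation sequences")
```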
For more details, please refer to our paper on arXiv: [2412.13803](https://arxiv.org/abs/2412.13803).
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving us a star and a citation.
```BibTeX
@InProceedings{chen2024m3vos_2025_CVPR,
author = {Zixuan Chen and Jiaxin Li and Liming Tan and Yejie Guo and Junxuan Liang and Cewu Lu and Yong-Lu Li},
title = {M$^3$-VOS: Multi-Phase, Multi-Transition, and Multi-Scenery Video Object Segmentation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2025}
}
``` |
Yukinonooo/animation-data | Yukinonooo | 2025-06-04T07:18:10Z | 130 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-30T09:31:29Z | null | ---
dataset_info:
features:
- name: svg
dtype: string
- name: animation_prompt_normal
dtype: string
- name: filename
dtype: string
- name: diff
dtype: string
- name: animation_prompt_expert
dtype: string
- name: animation_prompt_designer
dtype: string
splits:
- name: train
num_bytes: 88480
num_examples: 3
- name: test
num_bytes: 624
num_examples: 3
download_size: 11866
dataset_size: 89104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ChavyvAkvar/synthetic-trades-BNB-batch-38 | ChavyvAkvar | 2025-06-04T07:12:38Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T07:11:39Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450337
num_examples: 1000
download_size: 924507502
dataset_size: 923450337
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-ADA-batch-22 | ChavyvAkvar | 2025-06-04T07:12:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T07:11:12Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923453992
num_examples: 1000
download_size: 924431627
dataset_size: 923453992
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ncnynl/lekiwi_test | ncnynl | 2025-06-04T07:11:25Z | 429 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-28T11:09:13Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi",
"total_episodes": 2,
"total_frames": 672,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
9
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
640,
480,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 640,
"video.width": 480,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Daemontatox/medical-conversations-20250604-100242 | Daemontatox | 2025-06-04T07:08:03Z | 0 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"region:us",
"medical",
"healthcare",
"conversations",
"synthetic"
] | [
"conversational",
"question-answering"
] | 2025-06-04T07:07:43Z | null | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
language:
- en
tags:
- medical
- healthcare
- conversations
- synthetic
size_categories:
- n<1K
---
# Medical Conversation Dataset
This dataset contains synthetic medical conversations generated from medical literature and documents.
## Dataset Information
- **Format:** Unknown
- **Number of Records:** 0
- **Generated:** 2025-06-04 10:07:54 UTC
## Structure
Unknown format.
## Generation Statistics
- **PDFs Processed:** 1
- **Text Chunks Extracted:** 11
- **Conversations Generated:** 0
- **Success Rate:** 100.0%
- **Average Confidence Score:** 0.00
- **Processing Time:** 298.4 seconds
## Usage
This dataset is designed for training conversational AI models for medical applications. It should be used responsibly and always in conjunction with proper medical disclaimers.
### Loading the Dataset
```python
import json

# Load the dataset from a local JSON file
with open('dataset_file.json', 'r') as f:
    dataset = json.load(f)

# Access conversations
for record in dataset:
    # Process each record based on the dataset format
    pass
```
## Important Medical Disclaimer
⚠️ **This dataset is for educational and research purposes only. The generated conversations should not be used as a substitute for professional medical advice, diagnosis, or treatment. Always consult with qualified healthcare professionals for medical concerns.**
## License
Apache 2.0
## Citation
If you use this dataset, please cite:
```
@dataset{medical_conversations_2025,
title={Medical Conversation Dataset},
author={Generated using DS_Creator},
year={2025},
url={https://huggingface.co/datasets/Daemontatox/medical-conversations-20250604-100242}
}
```
|
ChavyvAkvar/synthetic-trades-BNB-batch-37 | ChavyvAkvar | 2025-06-04T06:47:17Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T06:46:20Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450559
num_examples: 1000
download_size: 924491081
dataset_size: 923450559
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigai-nlco/ReflectionEvo | bigai-nlco | 2025-06-04T06:30:41Z | 518 | 7 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.16475",
"region:us"
] | [
"question-answering",
"text-generation"
] | 2025-05-06T06:18:17Z | null | ---
language:
- en
license: mit
task_categories:
- question-answering
- text-generation
size_categories:
- 10K<n<100K
configs:
- config_name: Dpref
data_files:
- split: train
path:
- Dpref/Meta-Llama-3-8B-Instruct_bigbench.jsonl
- Dpref/Meta-Llama-3-8B-Instruct_logiqa.jsonl
- Dpref/Meta-Llama-3-8B-Instruct_math.jsonl
- Dpref/Meta-Llama-3-8B-Instruct_mbpp.jsonl
- Dpref/Mistral-7B-Instruct-v0.2_bigbench.jsonl
- Dpref/Mistral-7B-Instruct-v0.2_logiqa.jsonl
- Dpref/Mistral-7B-Instruct-v0.2_mbpp.jsonl
- Dpref/gemma-2-9b-it_bigbench.jsonl
- Dpref/gemma-2-9b-it_logiqa.jsonl
- Dpref/gemma-2-9b-it_math.jsonl
- Dpref/gemma-2-9b-it_mbpp.jsonl
- config_name: D+-
data_files:
- split: train
path:
- D+-/Meta-Llama-3-8B-Instruct_bigbench.jsonl
- D+-/Meta-Llama-3-8B-Instruct_logiqa.jsonl
- D+-/Meta-Llama-3-8B-Instruct_math.jsonl
- D+-/Meta-Llama-3-8B-Instruct_mbpp.jsonl
- D+-/Mistral-7B-Instruct-v0.2_bigbench.jsonl
- D+-/Mistral-7B-Instruct-v0.2_logiqa.jsonl
- D+-/Mistral-7B-Instruct-v0.2_mbpp.jsonl
- D+-/gemma-2-9b-it_bigbench.jsonl
- D+-/gemma-2-9b-it_logiqa.jsonl
- D+-/gemma-2-9b-it_math.jsonl
- D+-/gemma-2-9b-it_mbpp.jsonl
- config_name: D+
data_files:
- split: train
path:
- D+/Meta-Llama-3-8B-Instruct_bigbench.jsonl
- D+/Meta-Llama-3-8B-Instruct_logiqa.jsonl
- D+/Meta-Llama-3-8B-Instruct_math.jsonl
- D+/Meta-Llama-3-8B-Instruct_mbpp.jsonl
- D+/Mistral-7B-Instruct-v0.2_bigbench.jsonl
- D+/Mistral-7B-Instruct-v0.2_logiqa.jsonl
- D+/Mistral-7B-Instruct-v0.2_mbpp.jsonl
- D+/gemma-2-9b-it_bigbench.jsonl
- D+/gemma-2-9b-it_logiqa.jsonl
- D+/gemma-2-9b-it_math.jsonl
- D+/gemma-2-9b-it_mbpp.jsonl
---
Github Repo for ReflectEvo: https://github.com/bigai-nlco/ReflectEvo
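A minimal loading sketch, assuming the config names declared above (`Dpref`, `D+`, `D+-`) resolve directly from the Hub:
```python
from datasets import load_dataset

dpref = load_dataset("bigai-nlco/ReflectionEvo", "Dpref", split="train")
print(dpref[0])
```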
Arxiv Paper for ReflectEvo: https://arxiv.org/abs/2505.16475 |
RobotisSW/ai_worker_dataset_0604_8 | RobotisSW | 2025-06-04T05:57:14Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"ROBOTIS",
"ai_worker"
] | [
"robotics"
] | 2025-06-04T05:57:05Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- ROBOTIS
- ai_worker
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 3,
"total_frames": 903,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"observation.images.cam_wrist_right": {
"dtype": "video",
"names": [
"channels",
"height",
"width"
],
"shape": [
240,
424,
3
],
"info": {
"video.height": 240,
"video.width": 424,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.cam_wrist_left": {
"dtype": "video",
"names": [
"channels",
"height",
"width"
],
"shape": [
240,
424,
3
],
"info": {
"video.height": 240,
"video.width": 424,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"names": [
"arm_l_joint1",
"arm_l_joint2",
"arm_l_joint3",
"arm_l_joint4",
"arm_l_joint5",
"arm_l_joint6",
"arm_l_joint7",
"gripper_l_joint1",
"arm_r_joint1",
"arm_r_joint2",
"arm_r_joint3",
"arm_r_joint4",
"arm_r_joint5",
"arm_r_joint6",
"arm_r_joint7",
"gripper_r_joint1",
"head_joint1",
"head_joint2",
"lift_joint"
],
"shape": [
19
]
},
"action": {
"dtype": "float32",
"names": [
"arm_l_joint1",
"arm_l_joint2",
"arm_l_joint3",
"arm_l_joint4",
"arm_l_joint5",
"arm_l_joint6",
"arm_l_joint7",
"gripper_l_joint1",
"arm_r_joint1",
"arm_r_joint2",
"arm_r_joint3",
"arm_r_joint4",
"arm_r_joint5",
"arm_r_joint6",
"arm_r_joint7",
"gripper_r_joint1",
"head_joint1",
"head_joint2",
"lift_joint"
],
"shape": [
19
]
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ChavyvAkvar/synthetic-trades-BNB-batch-35 | ChavyvAkvar | 2025-06-04T05:55:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T05:54:01Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450732
num_examples: 1000
download_size: 924490767
dataset_size: 923450732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cosrigel/Vietnamese-Emo | cosrigel | 2025-06-04T05:44:00Z | 0 | 0 | [
"license:gemma",
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T07:11:06Z | null | ---
license: gemma
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1051343949.0
num_examples: 365
download_size: 1004625567
dataset_size: 1051343949.0
---
|
gamga200/Smart_Inf_2025_Source_1 | gamga200 | 2025-06-04T05:35:10Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T05:30:16Z | null | ---
license: apache-2.0
---
|
ArlingtonCL2/Barkopedia_Dog_Sex_Classification_Dataset | ArlingtonCL2 | 2025-06-04T05:08:43Z | 455 | 0 | [
"task_categories:audio-classification",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"modality:audio",
"region:us",
"biology",
"dog"
] | [
"audio-classification"
] | 2025-04-15T06:03:21Z | null | ---
license: mit
task_categories:
- audio-classification
language:
- en
tags:
- biology
- dog
pretty_name: BpAE
size_categories:
- 10K<n<100K
---
## 📦 Dataset Description
This dataset is part of the **Barkopedia Challenge**: [https://uta-acl2.github.io/barkopedia.html](https://uta-acl2.github.io/barkopedia.html)
Check training data on Hugging Face:
👉 [ArlingtonCL2/Barkopedia_Dog_Sex_Classification_Dataset](https://huggingface.co/datasets/ArlingtonCL2/Barkopedia_Dog_Sex_Classification_Dataset/)
This challenge provides a dataset of labeled dog bark audio clips:
**29,345 total clips** of vocalizations from **156 individual dogs** across **5 breeds**:
- **Shiba Inu**
- **Husky**
- **Chihuahua**
- **German Shepherd**
- **Pitbull**
- **Training set**: **26,895** clips
- 13,567 female
- 13,328 male
- **Test set**: **2,450** clips
- 1,271 female
- 1,179 male
- Among these,
- **980 clips (~40%)** are used for public leaderboard evaluation
- **1,470 clips (~60%)** are used for final private leaderboard evaluation
- The full 2,450 evaluation clips are included in the dataset, but only public clips yield visible scores.
---
### 🔖 Labels
Each audio clip is annotated with the **sex of the dog** (`male` or `female`).
The labels were **manually generated and verified** by us.
The `train_labels.csv` file provides the ground truth (correct labels) for the training set. It includes:
- **audio_id**: The filename of the dog bark audio clip (e.g., `bark_2408`)
- **pred_dog_sex**: The annotated sex of the dog (`male` or `female`)
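A minimal sketch for inspecting the label file with pandas, assuming `train_labels.csv` sits in the dataset root:
```python
import pandas as pd

labels = pd.read_csv("train_labels.csv")
print(labels["pred_dog_sex"].value_counts())
# Expect roughly 13,567 female and 13,328 male training clips.
```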
---
## 🛠️ Setup Instructions
You need to merge the training splits (train_0, train_1, train_2) into a single directory by running the provided script:
```bash
python merge_train_set.py
``` |
ChavyvAkvar/synthetic-trades-BNB-batch-33 | ChavyvAkvar | 2025-06-04T05:08:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T05:07:05Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450285
num_examples: 1000
download_size: 924509994
dataset_size: 923450285
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-full | ChavyvAkvar | 2025-06-04T05:06:32Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:29:20Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 46172405750
num_examples: 50000
download_size: 46216207849
dataset_size: 46172405750
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Envy1025/seti_analysis_results | Envy1025 | 2025-06-04T04:59:41Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:59:38Z | null | ---
dataset_info:
features:
- name: type
dtype: string
- name: inp_date
dtype: string
- name: category
dtype: string
- name: category_sub
dtype: string
- name: origin_nm
dtype: string
- name: origin_url
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: extracted_keywords
dtype: string
- name: doc_neg_prob
dtype: float64
- name: doc_pos_prob
dtype: float64
- name: label
dtype: int64
- name: sentence_details
list:
- name: sent_neg_prob
dtype: float64
- name: sent_pos_prob
dtype: float64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 259202
num_examples: 20
download_size: 166720
dataset_size: 259202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Noah0214/aloha_mobile_wash_pan | Noah0214 | 2025-06-04T04:55:16Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T16:03:33Z | null | ---
license: apache-2.0
---
|
gouthxm07/fertilizer-rec-disease-control_by_GP | gouthxm07 | 2025-06-04T04:40:45Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:37:54Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 438752
num_examples: 2738
download_size: 194135
dataset_size: 438752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fertilizer-rec-disease-control_by_GP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ko-vlm/K-MMStar | ko-vlm | 2025-06-04T04:26:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:25:34Z | null | ---
dataset_info:
features:
- name: images
dtype: image
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
- name: l2_category
dtype: string
- name: meta_info
dtype: string
splits:
- name: val
num_bytes: 45138945.5
num_examples: 1500
download_size: 41892141
dataset_size: 45138945.5
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
---
|
Allen-UQ/cora_2_hop_nei_aug | Allen-UQ | 2025-06-04T04:21:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:21:15Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 8067719
num_examples: 1133
- name: validation
num_bytes: 26585828
num_examples: 3727
- name: test
num_bytes: 109633444
num_examples: 15277
download_size: 71028871
dataset_size: 144286991
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Raiff1982/Codettesspecial | Raiff1982 | 2025-06-04T04:14:02Z | 33 | 0 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:text-generation",
"language:en",
"license:other",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/5196",
"region:us",
"music",
"art",
"legal",
"chemistry"
] | [
"question-answering",
"text-classification",
"summarization",
"text-generation"
] | 2025-04-21T07:39:21Z | null | ---
license: other
license_name: other
license_link: LICENSE
task_categories:
- question-answering
- text-classification
- summarization
- text-generation
language:
- en
tags:
- music
- art
- legal
- chemistry
pretty_name: Codettes special
size_categories:
- n>1T
--- |
ChavyvAkvar/synthetic-trades-BNB-batch-31 | ChavyvAkvar | 2025-06-04T04:11:08Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:10:05Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450102
num_examples: 1000
download_size: 924475257
dataset_size: 923450102
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
robb-0/kawaii_chibi_avatar_dataset | robb-0 | 2025-06-04T04:07:39Z | 103 | 0 | [
"task_categories:text-to-image",
"task_categories:image-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"art"
] | [
"text-to-image",
"image-classification"
] | 2025-05-25T12:03:14Z | null | ---
license: cc-by-4.0
task_categories:
- text-to-image
- image-classification
language:
- en
tags:
- art
pretty_name: Kawaii Chibi Avatar Dataset
size_categories:
- n<1K
---
# Kawaii Chibi Avatar Dataset

This is the dataset used to train
**Kawaii Chibi Avatar for Illustrious**.
---
* All images have a `.txt` file auto-tagged on Civitai.
* All images were generated on SDXL using Kawaii Chibi Avatar for SDXL
---
## License
License: CC BY 4.0
Attribution:
Kawaii Chibi Avatar Dataset © 2025 by Robb-0 is licensed under CC BY 4.0
--- |
clnine/sample-dataset-wikipedia-cs-terms | clnine | 2025-06-04T04:05:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:05:50Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2427827
num_examples: 158
download_size: 1135197
dataset_size: 2427827
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jiseop11892/gung_fix | jiseop11892 | 2025-06-04T04:05:45Z | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:05:22Z | null | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 111273
num_examples: 127
download_size: 67057
dataset_size: 111273
---
|
netager/finance_news_summarizer | netager | 2025-06-04T04:02:44Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:02:41Z | null | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: user_prompt
dtype: string
- name: assistant
struct:
- name: is_stock_related
dtype: bool
- name: negative_impact_stocks
sequence: string
- name: negative_keywords
sequence: string
- name: positive_impact_stocks
sequence: string
- name: positive_keywords
sequence: string
- name: reason_for_negative_impact
dtype: string
- name: reason_for_positive_impact
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 58345
num_examples: 10
download_size: 45793
dataset_size: 58345
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yuyeong/rw_pubmed_simple_1_public | Yuyeong | 2025-06-04T03:57:43Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:57:19Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
- name: group_idx
dtype: int64
- name: node_idx
dtype: int64
- name: train_0
dtype: bool
- name: validation_0
dtype: bool
- name: test_0
dtype: bool
splits:
- name: train
num_bytes: 259376973.5923315
num_examples: 156000
download_size: 165382772
dataset_size: 259376973.5923315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentranai07/HIVT_all | nguyentranai07 | 2025-06-04T03:51:22Z | 105 | 0 | [
"region:us"
] | [] | 2025-06-01T02:36:30Z | null | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 142270179
num_examples: 32942
download_size: 64859862
dataset_size: 142270179
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DonJoey/extract_principle_parallel_16 | DonJoey | 2025-06-04T03:35:30Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:35:23Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 171433461
num_examples: 31914
download_size: 78051916
dataset_size: 171433461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DonJoey/extract_principle_direct | DonJoey | 2025-06-04T03:35:10Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:35:00Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 94044570
num_examples: 31914
download_size: 39633989
dataset_size: 94044570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ljnlonoljpiljm/stockimage-1.5M-scored-high-similarity | ljnlonoljpiljm | 2025-06-04T03:34:35Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:50:26Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: similarity
dtype: float64
splits:
- name: train
num_bytes: 22490818813.355957
num_examples: 575394
download_size: 22354991127
dataset_size: 22490818813.355957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CohenQu/RLAD-joint.00.00 | CohenQu | 2025-06-04T03:29:08Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:23:27Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: suffix
dtype: string
splits:
- name: train
num_bytes: 5032369426
num_examples: 102312
- name: test
num_bytes: 560289342
num_examples: 11392
download_size: 1524300867
dataset_size: 5592658768
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
zhang0jhon/Aesthetic-4K | zhang0jhon | 2025-06-04T03:28:12Z | 1,152 | 23 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2503.18352",
"arxiv:2506.01331",
"doi:10.57967/hf/5209",
"region:us"
] | [] | 2025-02-16T01:47:04Z | null | ---
license: mit
---
# Aesthetic-4K Dataset
We introduce Aesthetic-4K, a high-quality dataset for ultra-high-resolution image generation, featuring carefully selected images and captions generated by GPT-4o.
Additionally, we have meticulously filtered out low-quality images through manual inspection, excluding those with motion blur, focus issues, or mismatched text prompts.
For more details, please refer to our paper:
* [Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2503.18352) (CVPR 2025)
* [Ultra-High-Resolution Image Synthesis: Data, Method and Evaluation](https://arxiv.org/abs/2506.01331)
* Source code is available at [https://github.com/zhang0jhon/diffusion-4k](https://github.com/zhang0jhon/diffusion-4k).
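A minimal loading sketch with the 🤗 `datasets` library, assuming the image folder resolves directly from the Hub (the column names follow the usual imagefolder convention and are an assumption here):
```python
from datasets import load_dataset

ds = load_dataset("zhang0jhon/Aesthetic-4K", split="train")
sample = ds[0]
sample["image"].save("preview.png")  # assumed column name for the image
print(sample.get("text", ""))        # assumed column name for the GPT-4o caption
```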
## Citation
If you find our paper or dataset helpful in your research or applications, please cite:
```
@inproceedings{zhang2025diffusion4k,
title={Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models},
author={Zhang, Jinjin and Huang, Qiuyu and Liu, Junjie and Guo, Xiefan and Huang, Di},
year={2025},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}
@misc{zhang2025ultrahighresolutionimagesynthesis,
title={Ultra-High-Resolution Image Synthesis: Data, Method and Evaluation},
author={Zhang, Jinjin and Huang, Qiuyu and Liu, Junjie and Guo, Xiefan and Huang, Di},
year={2025},
note={arXiv:2506.01331},
}
``` |
prerit2k/eval_act_bench01_21__112 | prerit2k | 2025-06-04T03:19:35Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-04T03:19:33Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 1,
"total_frames": 855,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
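A minimal loading sketch, assuming the `datasets` library and the default parquet config declared in the YAML header; the video streams live in separate MP4 files and are not decoded here.
```python
# Sketch: load the tabular episode data (parquet) for this LeRobot dataset.
# Assumes the default config above; column names follow the features block.
from datasets import load_dataset

ds = load_dataset("prerit2k/eval_act_bench01_21__112", split="train")
print(ds.column_names)       # action, observation.state, timestamp, ...
row = ds[0]
print(row["action"])         # 7-dim float32 vector, one value per joint
```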
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
gbennani/RAG_wiki_corpus | gbennani | 2025-06-04T03:13:32Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:13:19Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: page_title
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 251283752
num_examples: 118092
download_size: 123094150
dataset_size: 251283752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yangfengzzz/so101_test50 | yangfengzzz | 2025-06-04T03:12:03Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-04T03:04:41Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 50,
"total_frames": 43427,
"total_tasks": 1,
"total_videos": 100,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ShuoHsuan/grasp_0604 | ShuoHsuan | 2025-06-04T03:11:10Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"grasp"
] | [
"robotics"
] | 2025-06-04T03:10:55Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- grasp
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 2389,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
anirudhb11/star-graph-deg-7-path-3-nodes-300 | anirudhb11 | 2025-06-04T03:02:40Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:02:38Z | null | ---
dataset_info:
features:
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
splits:
- name: train
num_bytes: 26379705
num_examples: 200000
- name: test
num_bytes: 2637583
num_examples: 20000
download_size: 19070077
dataset_size: 29017288
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
anirudhb11/star-graph-deg-6-path-3-nodes-300 | anirudhb11 | 2025-06-04T03:02:01Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T03:01:59Z | null | ---
dataset_info:
features:
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
splits:
- name: train
num_bytes: 23474986
num_examples: 200000
- name: test
num_bytes: 2346572
num_examples: 20000
download_size: 16995454
dataset_size: 25821558
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
smirki/postview-commons | smirki | 2025-06-04T02:58:48Z | 0 | 0 | [
"license:fair-noncommercial-research-license",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:58:25Z | null | ---
license: fair-noncommercial-research-license
---
|
anirudhb11/star-graph-deg-5-path-3-nodes-300 | anirudhb11 | 2025-06-04T02:57:04Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:57:02Z | null | ---
dataset_info:
features:
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
splits:
- name: train
num_bytes: 20565629
num_examples: 200000
- name: test
num_bytes: 2057233
num_examples: 20000
download_size: 14862114
dataset_size: 22622862
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
GarrieD/toy_in_pot_v2_simple | GarrieD | 2025-06-04T02:47:56Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-04T01:45:42Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# toy_in_pot_v2_simple
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
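A hypothetical sketch for fetching the raw episode files locally before training, assuming only the `huggingface_hub` library; the file layout follows the LeRobot conventions used by the phospho recorder.
```python
# Sketch: download the full dataset snapshot (parquet episodes + videos).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="GarrieD/toy_in_pot_v2_simple",
    repo_type="dataset",
)
print(local_dir)  # episode files are now available on disk for training
```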
|
akahana/anti-spoofing-casiafasd | akahana | 2025-06-04T02:46:55Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:44:55Z | null | ---
dataset_info:
- config_name: test
features:
- name: filename
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 142884
num_examples: 2408
download_size: 26320
dataset_size: 142884
- config_name: train
features:
- name: filename
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 101314
num_examples: 1655
download_size: 17963
dataset_size: 101314
configs:
- config_name: test
data_files:
- split: train
path: test/train-*
- config_name: train
data_files:
- split: train
path: train/train-*
---
|
luojunyu/FinMME | luojunyu | 2025-06-04T02:43:29Z | 79 | 2 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.24714",
"region:us",
"finance",
"multimodal",
"reasoning"
] | [
"multiple-choice",
"question-answering"
] | 2025-05-28T16:56:56Z | null | ---
license: mit
dataset_info:
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: question_text
dtype: string
- name: question_type
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: unit
dtype: string
- name: tolerance
dtype: float32
- name: verified_caption
dtype: string
- name: related_sentences
dtype: string
splits:
- name: train
num_bytes: 419829046.637
num_examples: 11099
download_size: 398554212
dataset_size: 419829046.637
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- multiple-choice
- question-answering
language:
- en
tags:
- finance
- multimodal
- reasoning
pretty_name: FinMME
size_categories:
- 10K<n<100K
---
Multimodal Large Language Models (MLLMs) have experienced rapid development in recent years. However, there is a notable lack of effective and specialized multimodal evaluation datasets in the financial domain. To advance the development of MLLMs in the finance domain, we introduce FinMME, encompassing more than 11,000 high-quality financial research samples across 18 financial domains and 6 asset classes, featuring 10 major chart types and 21 subtypes. We ensure data quality through 20 annotators and carefully designed validation mechanisms. Additionally, we develop FinScore, an evaluation system incorporating hallucination penalties and multi-dimensional capability assessment to provide an unbiased evaluation.
## Usage
Please refer to https://github.com/luo-junyu/FinMME for the evaluation protocol.
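A minimal inspection sketch, assuming the `datasets` library; the field names are taken from the `dataset_info` block above, while scoring should follow the official FinScore protocol in the repo.
```python
# Sketch: load FinMME and inspect one sample.
from datasets import load_dataset

finmme = load_dataset("luojunyu/FinMME", split="train")
sample = finmme[0]
print(sample["question_type"])                     # task type of the item
print(sample["question_text"], sample["options"])  # question and choices
print(sample["answer"], sample["unit"], sample["tolerance"])
sample["image"].show()                             # PIL image of the chart
```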
## Citation
Paper Link: https://arxiv.org/abs/2505.24714
If you find our work helpful, please consider citing:
```bibtex
@inproceedings{finmme,
title={FinMME: Benchmark Dataset for Financial Multi-Modal Reasoning Evaluation},
author={Junyu Luo and Zhizhuo Kou and Liming Yang and Xiao Luo and Jinsheng Huang and Zhiping Xiao and Jingshu Peng and Chengzhong Liu and Jiaming Ji and Xuanzhe Liu and Sirui Han and Ming Zhang and Yike Guo},
booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},
year={2025}
}
```
|
prayog-io/eval | prayog-io | 2025-06-04T02:35:18Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:34:54Z | null | ---
dataset_info:
features:
- name: document
dtype: string
- name: chunk_id
dtype: int64
- name: is_table
dtype: bool
- name: question
dtype: string
- name: answer
dtype: string
- name: evaluation_criteria
dtype: string
- name: difficulty
dtype: int64
- name: category
dtype: string
- name: model
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
splits:
- name: train
num_bytes: 13010
num_examples: 16
- name: eval
num_bytes: 13010
num_examples: 16
download_size: 25840
dataset_size: 26020
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
ArlingtonCL2/Barkopedia_Dog_Act_Env | ArlingtonCL2 | 2025-06-04T02:35:06Z | 130 | 2 | [
"task_categories:audio-classification",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"region:us",
"biology",
"dog"
] | [
"audio-classification"
] | 2025-04-10T21:20:42Z | null | ---
license: mit
task_categories:
- audio-classification
language:
- en
tags:
- biology
- dog
pretty_name: BpAE
size_categories:
- 10K<n<100K
---
# 🐶 Barkopedia Challenge Dataset
[🔗 Barkopedia Website](https://uta-acl2.github.io/barkopedia.html)
## 📦 Dataset Description
This challenge provides a labeled dataset of dog bark audio clips for understanding **activity** and **environment** from sound.
### 📁 Current Release
- **Training Set**
- Located in the `train/` folder
- Includes:
- `split1.zip` and `split2.zip` — each contains a portion of the audio files
- `train_label.csv` — contains labels for all training clips
- Total: **12,480** training audio clips
- **Test Set**
- To be released in **June**
- Will contain **3,120** audio clips
- **40% public (1,248)** — used for live leaderboard updates
- **60% private (1,872)** — used for final evaluation
## 🔖 Labels
Each clip is annotated with **one activity** and **one environment** category.
### 🎯 Activity Categories (`act_category`)
- `rest`
- `alerting to sounds`
- `seeking attention`
- `playing with human`
- `playing with other animals`
- `playing with toy`
- `begging for food`
- `taking shower`
### 🌍 Environment Categories (`env_category`)
- `indoor (general)`
- `near window`
- `near door`
- `on grass`
- `near other animals`
- `vehicle interior`
> Labels were initially generated using video-assisted inference via a visual-language model (Janus-Pro-7B), and later manually verified to ensure quality.
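A sketch of pulling the training labels, assuming the `pandas` and `huggingface_hub` libraries; the CSV path follows the folder layout described above, and the `act_category`/`env_category` column names are an assumption.
```python
# Sketch: read the training labels for Barkopedia (column names assumed).
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="ArlingtonCL2/Barkopedia_Dog_Act_Env",
    filename="train/train_label.csv",   # assumed path inside the repo
    repo_type="dataset",
)
labels = pd.read_csv(csv_path)
print(labels.head())                          # clip id plus act/env labels
print(labels["act_category"].value_counts())  # assumed column name
```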
## 🚀 Submission & Leaderboard
All competition data, result submission, and the live leaderboard are hosted on **[Hugging Face](https://huggingface.co/)**.
If you don’t have a Hugging Face account yet, please [register here](https://huggingface.co/join).
---
📌 *Stay tuned for the test set release in June and leaderboard launch!*
--- |
ChavyvAkvar/synthetic-trades-BNB-batch-27 | ChavyvAkvar | 2025-06-04T02:30:04Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:28:46Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450340
num_examples: 1000
download_size: 924490395
dataset_size: 923450340
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
paultr/so101_test | paultr | 2025-06-04T02:24:41Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-04T02:23:37Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 10,
"total_frames": 3632,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 720,
"video.width": 1280,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.topdown": {
"dtype": "video",
"shape": [
1080,
1920,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 1080,
"video.width": 1920,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
sysuyy/ImgEdit_recap_mask | sysuyy | 2025-06-04T02:22:10Z | 1,649 | 0 | [
"license:mit",
"arxiv:2505.20275",
"region:us"
] | [] | 2025-05-25T02:47:41Z | null | ---
license: mit
---
[ImgEdit: A Unified Image Editing Dataset and Benchmark](https://huggingface.co/papers/2505.20275)
# 🌍 Introduction
**ImgEdit** is a large-scale, high-quality image-editing dataset comprising 1.2 million carefully curated edit pairs, covering both novel, complex single-turn edits and challenging multi-turn tasks.
To ensure data quality, we employ a multi-stage pipeline that integrates a cutting-edge vision-language model, a detection model, and a segmentation model, alongside task-specific inpainting procedures and strict post-processing. ImgEdit surpasses existing datasets in both task novelty and data quality.
Using ImgEdit, we train **ImgEdit-E1**, an editing model that uses a vision-language model to process the reference image and the editing prompt; it outperforms existing open-source models on multiple tasks, highlighting the value of ImgEdit and of the model design.
For comprehensive evaluation, we introduce **ImgEdit-Bench**, a benchmark designed to evaluate image editing performance in terms of instruction adherence, editing quality, and detail preservation.
It includes a basic testsuite, a challenging single-turn suite, and a dedicated multi-turn suite.
We evaluate both open-source and proprietary models, as well as ImgEdit-E1.
# 📜 Citation
If you find our paper and code useful in your research, please consider giving a star ⭐ and citation 📝.
```bibtex
@article{ye2025imgedit,
title={ImgEdit: A Unified Image Editing Dataset and Benchmark},
author={Ye, Yang and He, Xianyi and Li, Zongjian and Lin, Bin and Yuan, Shenghai and Yan, Zhiyuan and Hou, Bohan and Yuan, Li},
journal={arXiv preprint arXiv:2505.20275},
year={2025}
}
```
|
DUTAOZHANG/Style2Code_datasets | DUTAOZHANG | 2025-06-04T02:12:30Z | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10M<n<100M",
"region:us",
"code"
] | [
"text-generation"
] | 2025-06-04T02:04:12Z | null | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
size_categories:
- 10M<n<100M
---
## 📦 Dataset Source and Processing
The dataset for this project is derived from the [iamtarun/python_code_instructions_18k_alpacadataset](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpacadataset), which contains approximately 18,000 Python code snippets paired with instructions. It was designed to provide high-quality samples for instruction-driven code generation tasks.
To enrich the style diversity and support style-controllable generation, we employed three powerful large language models—**DeepSeek**, **Qwen**, and **Doubao**—to generate diverse code samples for each instruction in the dataset. We then carefully cleaned and aligned the generated code snippets to ensure that they are semantically equivalent yet stylistically distinct.
The resulting pairs (same functionality, different styles) serve as the training corpus for our contrastive style encoder and style-controlled generator. This enhanced dataset enables fine-grained style transfer and stylistic alignment during code generation in Style2Code.
---
✅ **Key Details for Reproduction**
- **Source dataset**: [iamtarun/python_code_instructions_18k_alpacadataset](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpacadataset)
- **Style-variant generation models**: DeepSeek, Qwen, Doubao
- **Cleaning and alignment**: Post-processing to remove low-quality outputs and ensure semantic equivalence across style variants
- **Use case**: Training Style2Code for explicit style vector extraction and style-controlled code generation
For further details and usage instructions, please refer to the [Style2Code GitHub repository](https://github.com/zh19980811/Style2Code). |
ztony0712/object_detection | ztony0712 | 2025-06-04T02:12:22Z | 56 | 0 | [
"task_categories:object-detection",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"object-detection"
] | 2025-05-15T17:57:31Z | null | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: Label
dtype: string
- name: image
dtype: image
- name: Rating
dtype: float64
- name: Deviation
dtype: float64
- name: percentile
dtype: float64
splits:
- name: val
num_bytes: 200575077.073
num_examples: 4951
download_size: 214465256
dataset_size: 200575077.073
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
license: apache-2.0
task_categories:
- object-detection
language:
- en
pretty_name: Object Detection
size_categories:
- 1K<n<10K
---
# Visualization of Object Detection Task Samples
Browse the sampled cases in the Dataset Viewer above.
The sampling procedure is guided by the Elo distribution introduced in our method.
The original data is the validation set of the COCO dataset.
Samples retained / original: 4951 / 5000
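A quick-look sketch, assuming the `datasets` library; the feature names come from the `dataset_info` block above, and the 0-to-1 scale of `percentile` is an assumption.
```python
# Sketch: load the Elo-sampled validation split and filter by difficulty.
from datasets import load_dataset

val = load_dataset("ztony0712/object_detection", split="val")
ex = val[0]
print(ex["Label"], ex["Rating"], ex["percentile"])

# Keep only the highest-percentile cases (assumes percentile in [0, 1]).
hard = val.filter(lambda r: r["percentile"] >= 0.9)
print(len(hard))
```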
# License
This repository is licensed under the Apache License 2.0 |
songtingyu/vf-eval | songtingyu | 2025-06-04T01:42:39Z | 39 | 0 | [
"task_categories:video-text-to-text",
"task_categories:visual-question-answering",
"license:mit",
"size_categories:1K<n<10K",
"modality:video",
"arxiv:2505.23693",
"region:us"
] | [
"video-text-to-text",
"visual-question-answering"
] | 2025-05-28T09:46:31Z | null | ---
license: mit
task_categories:
- video-text-to-text
- visual-question-answering
size_categories:
- 1K<n<10K
configs:
- config_name: val
data_files:
- split: yesno
path: "yesno/yesno_val.json"
- split: multichoice
path: "multi/multi_val.json"
- split: openend
path: "openend/openend_val.json"
- config_name: test
data_files:
- split: yesno
path: "yesno/yesno_test.json"
- split: multichoice
path: "multi/multi_test.json"
- split: openend
path: "openend/openend_test.json"
- config_name: total
default: true
data_files:
- split: yesno
path: "yesno/yesno_final.json"
- split: multichoice
path: "multi/multi_final.json"
- split: openend
path: "openend/openend_final.json"
---
# Dataset Card for VF-Eval Benchmark
Repository: [sighingsnow/vf-eval](https://github.com/SighingSnow/VF-EVAL)
For usage instructions, please refer to the GitHub repo.
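A minimal loading sketch, assuming the `datasets` library; the config and split names come straight from the YAML header above.
```python
# Sketch: load one benchmark config/split of VF-Eval.
from datasets import load_dataset

yesno = load_dataset("songtingyu/vf-eval", "total", split="yesno")
print(len(yesno))
print(yesno[0])   # one yes/no QA item about an AIGC video
```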
If you find this repository helpful, feel free to cite our paper:
```bibtex
@misc{song2025vfeval,
title={VF-Eval: Evaluating Multimodal LLMs for Generating Feedback on AIGC Videos},
author={Tingyu Song and Tongyan Hu and Guo Gan and Yilun Zhao},
year={2025},
eprint={2505.23693},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.23693},
}
```
|