RealSource World
RealSource World is a large-scale real-world robotic manipulation dataset collected with the RS-02 dual-arm humanoid robot. It contains diverse long-horizon manipulation tasks performed in real-world environments, with detailed atomic-skill annotations and quality assessments for each episode.
Key Features
- 14+ million frames of real-world dual-arm manipulation demonstrations.
- 11,428+ episodes across 35 distinct manipulation tasks.
- 71-dimensional proprioceptive state space including joint positions, velocities, accelerations, forces, torques, and end-effector poses.
- Multi-camera visual observations (head camera, left hand camera, right hand camera) at 720×1280 resolution, 30 FPS.
- Fine-grained annotations with atomic skill segmentation and quality assessments for each episode.
- Diverse scenes including kitchen, conference room, convenience store, and household environments.
- Dual-arm coordination tasks demonstrating complex bimanual manipulation skills.
News
- [2025/12] RealSource World dataset fully uploaded with 35 tasks. Download Link
- [2025/11] RealSource World released on Hugging Face. Download Link
Changelog
Version 1.2 (December 2025)
- Data Format Upgrade & Fixes
- Upgraded codebase_version to v2.1
- Extended proprioceptive state space from 57 to 71 dimensions
- Added joint acceleration data (LeftJoint_Acc/RightJoint_Acc): 14 dimensions (7 × 2)
- Fixed timing synchronization issues in some data
- Optimized data structure for improved completeness and consistency
- All datasets reprocessed for high-quality output
Version 1.1 (December 2025)
- Full Dataset Upload
- Complete release of all dataset files on Hugging Face
- 35 manipulation tasks with 11,428 episodes
- Total size: ~525 GB
- Total files: ~104,000
Version 1.0 (November 2025)
- Initial Release
- Released RealSource World dataset on Hugging Face
- 29 manipulation tasks with 11,428 episodes
- 14+ million frames of real-world dual-arm manipulation demonstrations
- 57-dimensional proprioceptive state space
- Multi-camera visual observations (head, left hand, right hand)
- Fine-grained annotations with atomic skill segmentation
- Complete camera parameters (intrinsics and extrinsics) for all episodes
- Quality assessment for each episode
Quick Start
Download Dataset
To download the complete dataset, you can use the following commands. If you encounter any issues, please refer to the Hugging Face official documentation.
# Ensure git-lfs is installed (https://git-lfs.com)
git lfs install
# When prompted for password, use an access token with read permissions.
# Generate from your settings page: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/RealSourceData/RealSource-World
# If you want to download only file pointers without large files
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/RealSourceData/RealSource-World
If you only want to download a specific task from the RealSource World dataset, for example Arrange_the_cups, follow these steps:
# Ensure Git LFS is installed (https://git-lfs.com)
git lfs install
# Initialize an empty Git repository
git init RealSource-World
cd RealSource-World
# Set remote repository
git remote add origin https://huggingface.co/datasets/RealSourceData/RealSource-World
# Enable sparse checkout
git sparse-checkout init
# Specify folders and files to download
git sparse-checkout set Arrange_the_cups scripts
# Pull data from main branch
git pull origin main
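If you prefer Python over git, the same downloads can be done with the huggingface_hub client. A minimal sketch, assuming huggingface_hub is installed and you are authenticated (e.g. via huggingface-cli login):

from huggingface_hub import snapshot_download

# Download the full dataset (~525 GB)
snapshot_download(
    repo_id="RealSourceData/RealSource-World",
    repo_type="dataset",
    local_dir="RealSource-World",
)

# Or restrict the download to a single task plus the shared scripts folder
snapshot_download(
    repo_id="RealSourceData/RealSource-World",
    repo_type="dataset",
    local_dir="RealSource-World",
    allow_patterns=["Arrange_the_cups/*", "scripts/*"],
)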
Dataset Structure
Folder Hierarchy
RealSource-World/
├── Arrange_the_cups/
│   ├── data/
│   │   └── chunk-000/
│   │       ├── episode_000000.parquet
│   │       ├── episode_000001.parquet
│   │       └── ...
│   ├── meta/
│   │   ├── info.json             # Dataset metadata and feature definitions
│   │   ├── episodes.jsonl        # Episode-level metadata
│   │   ├── episodes_stats.jsonl  # Episode statistics
│   │   ├── tasks.jsonl           # Task descriptions
│   │   ├── sub_tasks.jsonl       # Fine-grained sub-task annotations
│   │   └── camera.json           # Camera parameters for all episodes
│   └── videos/
│       └── chunk-000/
│           ├── observation.images.head_camera/
│           │   ├── episode_000000.mp4
│           │   └── ...
│           ├── observation.images.left_hand_camera/
│           │   ├── episode_000000.mp4
│           │   └── ...
│           └── observation.images.right_hand_camera/
│               ├── episode_000000.mp4
│               └── ...
├── Arrange_the_items_on_the_conference_table/
│   └── ...
├── Clean_the_convenience_store/
│   └── ...
└── ...
Each dataset folder contains:
- data/ - Parquet files for observations and actions
- meta/ - JSONL files for episodes, tasks, and annotations
- videos/ - MP4 videos from multiple camera angles
Tasks
The following 35 tasks are included in this dataset:
| Category | Task List |
|---|---|
| Kitchen Tasks | Arrange the cups, Stack the cups, Clean the kitchen counter, Tidy up the cooking counter, Cook rice using an electric rice cooker, Steam rice in a rice cooker, Steam potatoes, Steam buns, Make toast, Prepare the bread |
| Organization Tasks | Organize the TV cabinet, Organize the magazines, Organize the pen holder, Organize the repair tools, Organize the toys, Organize the glass tube on the rack, Replace the tissues and arrange them, Tidy up the children's room, Tidy up the conference room table, Place the books, Place the hairdryer, Place the slippers, Take down the book, Hang out the clothes to dry |
| Convenience Store Tasks | Clean the convenience store, Replenish tea bags |
| Industrial Tasks | Cable plugging, Move industrial parts to different plastic boxes, Pack the badminton shuttlecock |
| Daily Life Tasks | Collect the mail, Put the milk in the refrigerator, Refill the laundry detergent, Prepare the birthday cake, Take out the trash |
Understanding the Dataset Format
This dataset follows the LeRobot v2.1 format. Each task directory contains:
- data/: Parquet files storing time-series data (proprioceptive state, actions, timestamps)
- meta/: JSON/JSONL files containing metadata, episode information, and annotations
- videos/: MP4 video files from three camera perspectives
Key Metadata Files
meta/info.json: Contains dataset-level metadata, including:
- Total number of episodes, frames, and videos
- Feature definitions (shapes and names of actions and observations)
- Video specifications (resolution, codec, frame rate)
- Robot type and codebase version

meta/episodes.jsonl: Each line is a JSON object representing one episode, containing:
- episode_index: Episode identifier
- length: Number of frames in the episode
- tasks: List of task descriptions
- videos: Video file paths for each camera

meta/sub_tasks.jsonl: Fine-grained annotations for each episode, including:
- task_steps: List of atomic skill segments with start/end frames
- success_rating: Overall task success rating (1-5 points)
- quality_assessments: Detailed quality metrics (pass/fail/valid)
- notes: Annotation metadata

meta/camera.json: Camera intrinsics and extrinsics for each episode
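All of these files are plain JSON or JSON Lines, so they can be inspected without special tooling. A minimal sketch (paths assume a local clone as in Quick Start; key names follow the LeRobot v2.1 info.json layout):

import json

task_dir = "RealSource-World/Arrange_the_cups"

# Dataset-level metadata
with open(f"{task_dir}/meta/info.json") as f:
    info = json.load(f)
print(info["codebase_version"], info["total_episodes"], info["total_frames"])

# JSONL files hold one JSON object per line
with open(f"{task_dir}/meta/episodes.jsonl") as f:
    episodes = [json.loads(line) for line in f]
print(episodes[0]["episode_index"], episodes[0]["length"])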
Loading and Using the Dataset
This dataset is compatible with the LeRobot library. Here's how to load and use it:
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load a specific task from a local clone (see "Download Dataset" above).
# Note: argument and attribute names below follow the LeRobot v2.1 API and
# may differ slightly across library versions.
repo_id = "RealSourceData/RealSource-World"
dataset = LeRobotDataset(repo_id, root="RealSource-World/Arrange_the_cups")

# Access frame data: indexing returns a dict for a single frame
frame_0 = dataset[0]  # first frame of the first episode

# Dataset-level counts
print(f"{dataset.num_episodes} episodes, {dataset.num_frames} frames")

# Iterate through episodes via the frame index boundaries
for episode_idx in range(dataset.num_episodes):
    start = dataset.episode_data_index["from"][episode_idx].item()
    end = dataset.episode_data_index["to"][episode_idx].item()
    print(f"Episode {episode_idx} has {end - start} frames")

# To visualize an episode, use LeRobot's visualization script, e.g.:
#   python lerobot/scripts/visualize_dataset.py --repo-id <id> --episode-index 0
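Because LeRobotDataset behaves as a standard PyTorch dataset, it also plugs directly into a DataLoader for training. A minimal sketch, reusing `dataset` from above and assuming the proprioceptive key is observation.state as described under Data Format Details:

import torch

# Batches collate the per-frame dicts shown above into stacked tensors
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)
batch = next(iter(loader))
print(batch["observation.state"].shape)  # expected: torch.Size([32, 71])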
Data Format Details
Data Fields
Each parquet file in data/ contains the following columns:
| Column | Description |
|---|---|
| observation | Multi-modal observations (images, state) |
| action | Robot control commands |
| reward | Reward signals (optional) |
| terminated | Episode termination flag |
| truncated | Truncation flag |
| language_instruction | Natural language task description |
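For quick inspection outside LeRobot, each episode's parquet file can be opened directly. A sketch assuming pandas with a parquet engine (e.g. pyarrow) is installed:

import pandas as pd

# One row per frame; column names match what is stored on disk
df = pd.read_parquet(
    "RealSource-World/Arrange_the_cups/data/chunk-000/episode_000000.parquet"
)
print(len(df), "frames")
print(df.columns.tolist())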
Proprioceptive State (71 Dimensions)
The observation.state field contains comprehensive proprioceptive information:
| Index Range | Component | Description |
|---|---|---|
| 0-15 | Joint Positions | 7 joints × 2 arms + 2 grippers = 16 DOF |
| 16 | Lift Position | Mobile base lift height |
| 17-22 | Left Arm Force/Torque | 6D force (fx, fy, fz, mx, my, mz) |
| 23-28 | Right Arm Force/Torque | 6D force (fx, fy, fz, mx, my, mz) |
| 29-35 | Left Arm Joint Velocities | 7 joints = 7 DOF |
| 36-42 | Right Arm Joint Velocities | 7 joints = 7 DOF |
| 43-49 | Left End-Effector Pose | Position (x, y, z) + Quaternion (qw, qx, qy, qz) |
| 50-56 | Right End-Effector Pose | Position (x, y, z) + Quaternion (qw, qx, qy, qz) |
| 57-63 | Left Arm Joint Accelerations | 7 joints = 7 DOF |
| 64-70 | Right Arm Joint Accelerations | 7 joints = 7 DOF |
State Field Names
[
"LeftFollowerArm_Joint1.pos", ..., "LeftFollowerArm_Joint7.pos",
"LeftGripper.pos",
"RightFollowerArm_Joint1.pos", ..., "RightFollowerArm_Joint7.pos",
"RightGripper.pos",
"Lift.position",
"LeftForce.fx", "LeftForce.fy", "LeftForce.fz",
"LeftForce.mx", "LeftForce.my", "LeftForce.mz",
"RightForce.fx", "RightForce.fy", "RightForce.fz",
"RightForce.mx", "RightForce.my", "RightForce.mz",
"LeftJoint_Vel1", ..., "LeftJoint_Vel7",
"RightJoint_Vel1", ..., "RightJoint_Vel7",
"LeftEnd_X", "LeftEnd_Y", "LeftEnd_Z",
"LeftEnd_Qw", "LeftEnd_Qx", "LeftEnd_Qy", "LeftEnd_Qz",
"RightEnd_X", "RightEnd_Y", "RightEnd_Z",
"RightEnd_Qw", "RightEnd_Qx", "RightEnd_Qy", "RightEnd_Qz",
"LeftJoint_Acc1", ..., "LeftJoint_Acc7",
"RightJoint_Acc1", ..., "RightJoint_Acc7"
]
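A small helper for unpacking a 71-D state vector into named components, derived directly from the index table above (a sketch; the zero vector stands in for a real observation.state sample):

import numpy as np

# Slices follow the "Proprioceptive State (71 Dimensions)" table
STATE_SLICES = {
    "joint_pos":        slice(0, 16),   # 7 joints × 2 arms + 2 grippers
    "lift_pos":         slice(16, 17),
    "left_ft":          slice(17, 23),  # fx, fy, fz, mx, my, mz
    "right_ft":         slice(23, 29),
    "left_joint_vel":   slice(29, 36),
    "right_joint_vel":  slice(36, 43),
    "left_ee_pose":     slice(43, 50),  # x, y, z, qw, qx, qy, qz
    "right_ee_pose":    slice(50, 57),
    "left_joint_acc":   slice(57, 64),
    "right_joint_acc":  slice(64, 71),
}

state = np.zeros(71)  # stand-in for one observation.state vector
components = {name: state[s] for name, s in STATE_SLICES.items()}
print(components["left_ee_pose"])  # position + quaternion of the left end-effector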
Action Space (17 Dimensions)
The action field contains commands sent to the robot:
| Index Range | Description |
|---|---|
| 0-6 | Left arm joint positions (7 DOF) |
| 7 | Left gripper position |
| 8-14 | Right arm joint positions (7 DOF) |
| 15 | Right gripper position |
| 16 | Lift platform command |
Action Field Names
[
"LeftLeaderArm_Joint1.pos", ..., "LeftLeaderArm_Joint7.pos",
"LeftGripper.pos",
"RightLeaderArm_Joint1.pos", ..., "RightLeaderArm_Joint7.pos",
"RightGripper.pos",
"Lift.command"
]
Visual Observations
Each episode contains synchronized video from three camera perspectives:
- observation.images.head_camera: Head/overhead perspective
- observation.images.left_hand_camera: Camera mounted on the left end-effector
- observation.images.right_hand_camera: Camera mounted on the right end-effector
Video Specifications:
- Resolution: 720 × 1280 pixels
- Frame Rate: 30 FPS
- Codec: H.264
- Format: MP4
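Any standard video reader handles these files. A minimal sketch with OpenCV (an assumption; imageio or PyAV work equally well):

import cv2

# Decode frames from one camera stream
cap = cv2.VideoCapture(
    "RealSource-World/Arrange_the_cups/videos/chunk-000/"
    "observation.images.head_camera/episode_000000.mp4"
)
ok, frame = cap.read()           # BGR image, shape (720, 1280, 3)
fps = cap.get(cv2.CAP_PROP_FPS)  # should report ~30
cap.release()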
Camera Parameters
Camera parameters for each episode are stored in the meta/camera.json file, keyed by episode_XXXXXX. Camera parameters include intrinsics (camera matrix and distortion coefficients) and extrinsics (hand-eye calibration parameters).
File Structure
The camera.json file contains camera parameters for all episodes:
{
"episode_000000": {
"camera_ids": {
"head": "245022300889",
"left_arm": "245022301980",
"right_arm": "245022300408",
"foot": ""
},
"camera_parameters": {
"head": {
"720P": {
"MTX": [[648.57, 0, 645.54], [0, 647.80, 375.38], [0, 0, 1]],
"DIST": [-0.0513, 0.0587, -0.0006, 0.00096, -0.0186]
},
"480P": { ... }
},
"left_arm": { ... },
"right_arm": { ... }
},
"hand_eye": {
"left_arm_in_eye": {
"R": [[...], [...], [...]],
"T": [x, y, z]
},
"right_arm_in_eye": { ... },
"left_arm_to_eye": { ... },
"right_arm_to_eye": { ... }
}
},
"episode_000001": { ... }
}
Camera Intrinsics
Each camera (head, left_arm, right_arm) contains intrinsics for two resolutions:
MTX: 3×3 camera intrinsic matrix
[fx 0 cx]
[0 fy cy]
[0 0 1]
- fx, fy: Focal lengths (in pixels)
- cx, cy: Principal point (optical center) coordinates (in pixels)

DIST: 5-element distortion coefficients (k1, k2, p1, p2, k3), used for correcting radial and tangential distortion
Available Resolutions:
- 720P: 720p video parameters (720 × 1280)
- 480P: 480p video parameters (480 × 640)
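As an example of applying these parameters, a sketch that undistorts a head-camera frame with OpenCV using one episode's MTX and DIST (the zero image stands in for a real decoded 720P frame):

import json

import cv2
import numpy as np

with open("RealSource-World/Arrange_the_cups/meta/camera.json") as f:
    cams = json.load(f)

params = cams["episode_000000"]["camera_parameters"]["head"]["720P"]
mtx = np.array(params["MTX"])    # 3x3 intrinsic matrix
dist = np.array(params["DIST"])  # (k1, k2, p1, p2, k3)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a real frame
undistorted = cv2.undistort(frame, mtx, dist)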
Hand-Eye Calibration (Extrinsics)
The hand_eye section contains transformation relationships between robot end-effectors and cameras:
- left_arm_in_eye: Transformation from the left wrist camera (left end-effector camera) to the left manipulator end center
  - R: 3×3 rotation matrix
  - T: 3×1 translation vector [x, y, z] (in meters)
  - Represents the position and orientation of the left wrist-mounted camera relative to the left manipulator end center
- right_arm_in_eye: Transformation from the right wrist camera (right end-effector camera) to the right manipulator end center
  - Represents the position and orientation of the right wrist-mounted camera relative to the right manipulator end center
- left_arm_to_eye: Transformation from the head camera to the left manipulator base coordinate system
  - R: 3×3 rotation matrix
  - T: 3×1 translation vector [x, y, z] (in meters)
  - Represents the position and orientation of the head camera relative to the left manipulator base coordinate system
- right_arm_to_eye: Transformation from the head camera to the right manipulator base coordinate system
  - Represents the position and orientation of the head camera relative to the right manipulator base coordinate system
These parameters support the following coordinate transformations:
- Conversion between robot end-effector poses and camera image coordinates
- Conversion between 3D positions in robot space and image pixel coordinates
- Multi-view geometry operations and calibration
- Conversion between wrist camera coordinate system and end center
- Conversion between head camera coordinate system and manipulator base coordinate system
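For illustration, a minimal sketch of the point transform underlying all of these conversions, using stand-in values for R and T (the direction of each transform should be verified against the field descriptions above):

import numpy as np

# Map a point expressed in the wrist-camera frame into the end-effector
# frame as p_ee = R @ p_cam + T
R = np.eye(3)                      # stand-in for a 3x3 rotation matrix
T = np.array([0.0, 0.0, 0.05])     # stand-in for a translation vector (meters)

p_cam = np.array([0.1, 0.0, 0.3])  # a point seen by the left wrist camera
p_ee = R @ p_cam + T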
Camera IDs
Each camera has a unique identifier:
- head: Head camera ID
- left_arm: Left end-effector camera ID
- right_arm: Right end-effector camera ID
- foot: Foot camera ID (if available)
Sub-task Annotations
Each episode in meta/sub_tasks.jsonl contains detailed annotations:
{
"task": "Separate the two stacked cups in the dish and place them on the two sides of the dish.",
"language": "en",
"task_index": 0,
"episode_index": 0,
"task_steps": [
{
"step_name": "Left arm picks up the stack of cups from the center of the plate",
"start_frame": 100,
"end_frame": 180,
"description": "Left arm picks up the stack of cups from the center of the plate",
"duration_frames": 80
},
...
],
"success_rating": 5,
"notes": "annotation_date: 2025/11/13",
"quality_assessments": {
"overall_valid": "VALID",
"movement_fluency": "PASS",
"grasp_success": "PASS",
"placement_quality": "PASS",
...
},
"total_frames": 946
}
Quality Assessment Metrics
- overall_valid: Overall episode validity (valid/invalid)
- movement_fluency: Robot movement fluency (pass/fail)
- grasp_success: Grasp action success (pass/fail)
- placement_quality: Object placement quality (pass/fail)
- no_drop: No objects dropped during the task (pass/fail)
- grasp_collisions: No collisions during grasping (pass/fail)
- arm_collisions: No arm collisions (pass/fail)
- operation_completeness: Task completion status (pass/fail)
- And other metrics
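These annotations make it straightforward to filter training data. A minimal sketch that keeps only fully valid, top-rated episodes (field names as in the sample above):

import json

with open("RealSource-World/Arrange_the_cups/meta/sub_tasks.jsonl") as f:
    annotations = [json.loads(line) for line in f]

good = [
    a["episode_index"]
    for a in annotations
    if a["success_rating"] == 5
    and a["quality_assessments"]["overall_valid"] == "VALID"
]
print(f"{len(good)} / {len(annotations)} episodes pass the filter")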
Dataset Statistics
Overall Statistics
- Total Tasks: 35
- Total Episodes: 11,428
- Total Frames: 14,085,107
- Total Videos: 34,284 (3 cameras × 11,428 episodes)
- Robot Type: RS-02 (dual-arm humanoid robot)
- Dataset Format: LeRobot v2.1
- Video Resolution: 720 × 1280
- Frame Rate: 30 FPS
Task Distribution
The dataset contains diverse manipulation tasks across multiple domains; the full per-category task list is given in the Tasks section above.
Robot URDF Model
The RealSource World dataset is collected using the RS-02 dual-arm humanoid robot. For simulation, visualization, and research purposes, we provide the URDF (Unified Robot Description Format) model of the RS-02 robot.
RS-02 Robot Specifications
- Robot Type: Dual-arm humanoid robot
- Total Links: 46 links
- Total Joints: 45 joints
- Manipulator Arms: 2 × 7-DOF arms (left and right)
- End Effectors: Dual grippers, each with 8 DOF
- Base: Mobile base with wheels and lift platform
- Sensors: Head camera, left hand camera, right hand camera
URDF Package Structure
The RS-02 URDF package contains:
RS-02/
├── urdf/
│   ├── RS-02.urdf               # Main URDF file (59KB)
│   └── RS-02.csv                # Joint configuration data
├── meshes/                      # 3D mesh models (46 STL files)
│   ├── base_link.STL
│   ├── L_Link_1-7.STL           # Left arm links
│   ├── R_Link_1-7.STL           # Right arm links
│   ├── ltool_*.STL              # Left gripper components
│   ├── rtool_*.STL              # Right gripper components
│   ├── head_*.STL               # Head components
│   └── camera_*.STL             # Camera mounts
├── config/
│   └── joint_names_RS-02.yaml   # Joint name configuration
├── launch/
│   ├── display.launch           # RViz visualization
│   └── gazebo.launch            # Gazebo simulation
└── package.xml                  # ROS package metadata
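Since URDF is plain XML, the model can be sanity-checked with the Python standard library alone. A sketch (the path assumes the package layout above; expected counts are taken from the specifications):

import xml.etree.ElementTree as ET

tree = ET.parse("RS-02/urdf/RS-02.urdf")
robot = tree.getroot()
print(len(robot.findall("link")), "links")    # expected: 46
print(len(robot.findall("joint")), "joints")  # expected: 45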
Using the URDF Model
ROS/ROS2 Users
The URDF model can be used with ROS tools:
Visualize in RViz:
roslaunch RS-02 display.launch
Simulate in Gazebo:
roslaunch RS-02 gazebo.launch
License and Citation
This dataset is released under the CC BY-NC-SA 4.0 license (Attribution-NonCommercial-ShareAlike).
License
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Citation
If you use this dataset in your research, please cite:
@misc{realsourceworld,
title={RealSource World: A Large-Scale Real-World Dual-Arm Manipulation Dataset},
author={RealSource},
howpublished={\url{https://huggingface.co/datasets/RealSourceData/RealSource-World}},
year={2025}
}