MPSC-RR Dataset
Respiratory Waveform Reconstruction using Persistent Independent Particles Tracking from Video
Dataset Description
The MPSC-RR dataset is a comprehensive collection of RGB videos designed for contactless respiratory rate estimation and respiratory waveform reconstruction research. The dataset captures respiratory-induced movements across multiple body positions and camera angles, providing diverse scenarios for developing robust respiratory monitoring algorithms.
Key Features
- Multi-Position Coverage: Front, back, side, and lying positions
- Natural Breathing Patterns: Regular, slow, and fast breathing captured
- Diverse Demographics: Adult volunteers across different age groups and backgrounds
- Ground Truth Annotations: Vernier Go Direct Respiration Belt measurements
- High-Quality Videos: RGB videos with clear respiratory motion visibility
- Real-World Conditions: Varied lighting, clothing, and environmental settings
Dataset Statistics
- Total Videos: 31 video sessions
- Subjects: 24 adult volunteers (Subject IDs: 1-24)
- Duration: 2.5-5 minutes per video (Average: ~4 minutes)
- Resolution: Varied resolutions from 304×304 to 1602×1080
- Frame Rate: Primarily 30 FPS (with some at 30.1, 29, and 24 FPS)
- Total Duration: ~2.1 hours of respiratory monitoring data
- Total Size: 5.18 GB
Data Structure
Video Files
s{subject_id}_{view_position}.{ext}

Naming Convention:
- s1_front.mov: Subject 1, front view
- s20_side.mp4: Subject 20, side view (Note: one video in MP4 format)
- s24_lying.mov: Subject 24, lying position

View Positions:
- front: Frontal chest/abdomen view (6 videos)
- back: Back/shoulder movement view (4 videos)
- side: Side profile view (16 videos)
- lying: Lying down position (5 videos)
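The naming convention above can be parsed programmatically. A minimal sketch using a regular expression; the pattern and helper name are illustrative, not part of the dataset tooling:

```python
import re

# Matches the s{subject_id}_{view_position}.{ext} convention described above.
FILENAME_PATTERN = re.compile(r"^s(\d+)_(front|back|side|lying)\.(mov|mp4)$")

def parse_video_filename(filename):
    """Split a video filename into (subject_id, view_position), or None if malformed."""
    match = FILENAME_PATTERN.match(filename)
    if match is None:
        return None
    return int(match.group(1)), match.group(2)

print(parse_video_filename("s1_front.mov"))  # (1, 'front')
print(parse_video_filename("s20_side.mp4"))  # (20, 'side')
```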
Metadata Structure
```python
{
  "video_id": str,            # e.g., "s1_front"
  "subject_id": int,          # Subject identifier (1-24)
  "view_position": str,       # front, back, side, lying
  "duration_seconds": float,  # Video duration in seconds
  "resolution": str,          # e.g., "1920x1080"
  "frame_rate": float,        # 30.0, 30.1, 29.0, or 24.0 FPS
  "file_size_mb": float,      # File size in megabytes
  "filename": str             # Actual filename with extension
}
```
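Records can be checked against this schema before use. A lightweight sketch, assuming each record is a plain Python dict; the function name and error messages are illustrative:

```python
# Expected types for each metadata field, per the schema above.
METADATA_SCHEMA = {
    "video_id": str,
    "subject_id": int,
    "view_position": str,
    "duration_seconds": float,
    "resolution": str,
    "frame_rate": float,
    "file_size_mb": float,
    "filename": str,
}

VALID_POSITIONS = {"front", "back", "side", "lying"}

def validate_metadata(record):
    """Return a list of schema violations (an empty list means the record is valid)."""
    errors = []
    for field, expected_type in METADATA_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    if record.get("view_position") not in VALID_POSITIONS:
        errors.append("view_position must be one of front/back/side/lying")
    return errors
```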
Usage
Loading the Dataset
```python
from datasets import load_dataset

# Load all videos (assuming the dataset exposes a single "train" split)
dataset = load_dataset("justchugh/MPSC-RR", split="train")

# Access video information
sample = dataset[0]
video_id = sample["video_id"]
subject_id = sample["subject_id"]
view_position = sample["view_position"]
```
Filtering Videos
```python
# Get front view videos
front_videos = dataset.filter(lambda x: x["view_position"] == "front")

# Get a specific subject's videos
subject_1 = dataset.filter(lambda x: x["subject_id"] == 1)

# Get lying position videos
lying_videos = dataset.filter(lambda x: x["view_position"] == "lying")
```
Basic Analysis
```python
# Count videos per position
positions = [sample["view_position"] for sample in dataset]
position_counts = {pos: positions.count(pos) for pos in set(positions)}
print("Videos per position:", position_counts)

# List all subjects
subjects = set(sample["subject_id"] for sample in dataset)
print(f"Subjects: {sorted(subjects)}")
```
Data Collection
Equipment
- Camera: Standard RGB cameras (mobile phones, mounted cameras)
- Ground Truth: Vernier Go Direct Respiration Belt for pressure measurements
- Distance: 1-1.5 meters from subjects
- Environment: Clinical laboratory setting with controlled conditions
Protocol
- Subject Preparation: Comfortable positioning with clear view of respiratory regions
- Baseline Recording: 30-second calibration period
- Data Collection: 3-5 minutes of natural breathing
- Ground Truth Sync: Synchronized pressure belt data collection
- Quality Check: Manual verification of respiratory cycles
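Step 4 above (syncing belt data with video) typically means resampling the pressure signal onto the video's frame timestamps. A minimal sketch using linear interpolation; the sample rates and signal are illustrative, not taken from the dataset:

```python
import numpy as np

# Illustrative rates: a belt logging at 10 Hz, video at 30 FPS, 4-minute recording.
belt_hz, video_fps = 10.0, 30.0
duration_s = 240.0

belt_t = np.arange(0, duration_s, 1 / belt_hz)       # belt timestamps (s)
belt_pressure = np.sin(2 * np.pi * 0.25 * belt_t)    # synthetic ~15 bpm waveform
frame_t = np.arange(0, duration_s, 1 / video_fps)    # video frame timestamps (s)

# Linearly interpolate the belt signal at each video frame time.
pressure_per_frame = np.interp(frame_t, belt_t, belt_pressure)
print(pressure_per_frame.shape)  # one pressure value per video frame
```

In practice the two streams also need a shared start time; the dataset's stated synchronization offset is under 50 ms, i.e. within about two frames at 30 FPS.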
View Positions Explained
| Position | Description | Captured Regions | Use Case | Count |
|---|---|---|---|---|
| front | Frontal view | Chest, abdomen expansion | Standard respiratory monitoring | 6 |
| back | Back view | Shoulder, back movement | Posterior respiratory motion | 4 |
| side | Side profile | Chest wall movement | Lateral respiratory dynamics | 16 |
| lying | Supine position | Abdomen, chest (lying) | Sleep/rest respiratory patterns | 5 |
Dataset Distribution
| Subject Range | Video Count | Positions Available |
|---|---|---|
| 1-2 | 6 videos | front, back, lying |
| 3 | 2 videos | back, lying |
| 4 | 1 video | front |
| 5 | 2 videos | front, lying |
| 6 | 1 video | front |
| 7-22 | 16 videos | side (primarily) |
| 23 | 1 video | front |
| 24 | 2 videos | back, lying |
Benchmark Results
Performance of state-of-the-art methods on MPSC-RR:
| Method | MAE (bpm) | RMSE (bpm) | Modality |
|---|---|---|---|
| Intensity-based | 3.25 | 5.12 | RGB |
| Optical Flow | 2.42 | 3.89 | RGB |
| PIPs++ | 1.62 | 2.92 | RGB |
| RRPIPS | 1.01 | 1.80 | RGB |
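The MAE and RMSE figures above compare estimated respiratory rates against belt-derived ground truth. A sketch of how these metrics are computed; the example values are made up:

```python
import math

def mae(pred, true):
    """Mean absolute error in breaths per minute."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def rmse(pred, true):
    """Root mean squared error in breaths per minute."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

predicted_bpm = [14.5, 16.0, 12.8, 15.2]    # hypothetical per-window estimates
ground_truth_bpm = [15.0, 15.5, 13.0, 15.0]
print(round(mae(predicted_bpm, ground_truth_bpm), 3))   # 0.35
print(round(rmse(predicted_bpm, ground_truth_bpm), 3))  # 0.381
```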
Applications
This dataset supports research in:
- Contactless Vital Sign Monitoring
- Multi-Position Respiratory Analysis
- Computer Vision for Healthcare
- Sleep and Rest Monitoring
- Wearable-Free Health Tracking
- Clinical Decision Support Systems
Data Quality
Inclusion Criteria
- Clear visibility of respiratory-induced motion
- Stable video recording (minimal camera movement)
- Synchronized ground truth data available
- Adequate lighting conditions
Quality Metrics
- Motion Clarity: All videos show visible respiratory movement
- Synchronization: <50ms offset between video and pressure data
- Duration: Minimum 148 seconds per recording
- Resolution: Varied resolutions optimized for respiratory motion capture
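These thresholds can be applied as a simple filter over the metadata. A sketch, assuming samples follow the metadata schema above; the 148-second floor and 24 FPS minimum come from the stated quality metrics, and the function name is illustrative:

```python
def meets_quality_floor(sample):
    """Keep recordings that satisfy the stated minimum duration and frame rate."""
    return sample["duration_seconds"] >= 148.0 and sample["frame_rate"] >= 24.0

samples = [
    {"video_id": "s1_front", "duration_seconds": 240.0, "frame_rate": 30.0},
    {"video_id": "too_short", "duration_seconds": 90.0, "frame_rate": 30.0},
]
kept = [s["video_id"] for s in samples if meets_quality_floor(s)]
print(kept)  # ['s1_front']
```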
Related Datasets
- AIR-125: Infant respiratory monitoring (125 videos, infants)
- BIDMC: PPG and respiration signals (53 recordings, clinical)
- Sleep Database: NIR/IR respiratory data (28 videos, adults)
Ethical Considerations
- IRB Approval: All data collection approved by institutional review board
- Informed Consent: Written consent obtained from all participants
- Privacy Protection: Faces blurred or cropped when necessary
- Data Anonymization: No personally identifiable information included
- Voluntary Participation: Participants could withdraw at any time
Citation
If you use this dataset in your research, please cite:
```bibtex
@inproceedings{hasan2025rrpips,
  title={RRPIPS: Respiratory Waveform Reconstruction using Persistent Independent Particles Tracking from Video},
  author={Hasan, Zahid and Ahmed, Masud and Sakib, Shadman and Chugh, Snehalraj and Khan, Md Azim and Faridee, Abu Zaher MD and Roy, Nirmalya},
  booktitle={ACM/IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE)},
  year={2025},
  pages={1--12},
  doi={10.1145/3721201.3721366}
}
```
Quick Start
```shell
# Install required packages
pip install datasets
```

```python
# Load and explore the dataset (assuming a single "train" split)
from datasets import load_dataset

dataset = load_dataset("justchugh/MPSC-RR", split="train")
print(f"Total videos: {len(dataset)}")
print(f"First video: {dataset[0]['video_id']}")
print(f"Positions available: {set(s['view_position'] for s in dataset)}")

# Example: get all side view videos
side_videos = dataset.filter(lambda x: x["view_position"] == "side")
print(f"Side view videos: {len(side_videos)}")
```
License
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
Links
- Code Repository: https://github.com/mxahan/RRPIPS
- Website Repository: https://github.com/justchugh/RRPIPs.github.io
Contact
- Technical Issues: [email protected]
- Dataset Questions: [email protected]
Team
- Zahid Hasan
- Masud Ahmed
- Shadman Sakib
- Snehalraj Chugh
- Md Azim Khan
- Abu Zaher MD Faridee
- Nirmalya Roy
Dataset Version: 1.0