1. Dataset Summary
The "CyberData Saita" in-cabin user-behavior dataset is a high-quality, programmatically generated image dataset designed to accelerate the development of smart-cockpit perception algorithms. As global automotive safety regulations such as C-NCAP and EU GSR impose stricter requirements on Driver Monitoring Systems (DMS) and Occupant Monitoring Systems (OMS), safe, compliant, and diverse training data has become critical. Through synthetic generation, this dataset addresses the core challenges of real-world data collection: privacy risk, high cost, and poor coverage of long-tail scenarios.

The dataset contains 5,000 high-fidelity in-cabin user-behavior images synthesized by XAI Lab's proprietary dataset-generation engine, each accompanied by rich, 100% accurate annotations.

Key features:
- Rich scene diversity: virtual human models spanning different ages, genders, ethnicities, and clothing styles, combined with a wide range of driving and riding behaviors (e.g., phone use, drinking, fatigue, gestures) and facial expressions.
- Optimized for in-cabin perception: directly usable for training, fine-tuning, and validating on-device vision models for smart cockpits, especially DMS/OMS algorithms that must precisely interpret complex in-cabin interactions and occupant states.
- Safe, compliant data source: all images are synthetic, which eliminates the privacy and portrait-rights issues inherent in collecting real user data and provides a secure, controllable, and scalable data foundation for algorithm development.
- Accurate programmatic annotation: all labels (bounding boxes, behavior classes, user attributes) are generated together with the images, guaranteeing zero annotation errors and full consistency while eliminating the subjectivity and variability of manual labeling.
2. Supported Tasks and Leaderboards
This dataset aims to advance in-cabin computer vision, particularly for active safety and intelligent interaction involving drivers and passengers.
Main supported tasks
- Driver/Passenger Behavior Recognition: classify key behaviors of cabin occupants. This task is essential for detecting distracted driving (e.g., phone use, smoking) and other dangerous behaviors; see the sketch after this list.
  - Task example: image-level multi-label classification over behaviors such as using_phone, smoking, drinking, and yawning.
- Driver State Monitoring: assess the driver's physiological and mental state; this is the core technology for preventing fatigue-related accidents.
  - Task example: classify whether the driver is drowsy, distracted, or attentive.
- Object Detection: localize key objects in the image as the basis for downstream behavior analysis and interaction.
  - Task example: detect bounding boxes for objects such as face, hand, and phone.
- Attribute Recognition: identify basic demographic attributes, useful for personalized cabin settings.
  - Task example: classify the user's age_group and gender.
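As a concrete illustration of the multi-label behavior task, here is a minimal sketch that builds a 0/1 target vector from one annotation file. The field names mirror the annotation example in Section 3, and the four-class list is an illustrative assumption, not the dataset's full label set:

```python
# Minimal sketch: build a multi-label behavior target from one annotation file.
# BEHAVIOR_CLASSES is an assumed, illustrative label set; the field names
# ("annotations", "label", "class") follow the example in Section 3.
import json

BEHAVIOR_CLASSES = ["using_phone", "smoking", "drinking", "yawning"]

def behavior_targets(annotation_path: str) -> list[int]:
    """Return a 0/1 vector over BEHAVIOR_CLASSES for one image."""
    with open(annotation_path, encoding="utf-8") as f:
        record = json.load(f)
    present = {
        ann["class"]
        for ann in record.get("annotations", [])
        if ann.get("label") == "behavior"
    }
    return [1 if cls in present else 0 for cls in BEHAVIOR_CLASSES]

# Usage (path taken from the example annotation below):
print(behavior_targets("annotations/0001.json"))  # e.g. [1, 0, 0, 0]
```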
Leaderboards (planned)
We plan to host challenges in the future and publish leaderboards here, ranking the best-performing models on a standard test set. Evaluation metrics will likely include:
- Behavior recognition: mAP (mean Average Precision)
- Object detection: [email protected] (see the IoU sketch after this list)
- State monitoring: F1-score, Accuracy
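For reference, a minimal sketch of the IoU test that underlies [email protected], assuming the [x_min, y_min, x_max, y_max] box convention used in the annotation example in Section 3:

```python
# Minimal sketch of the Intersection-over-Union (IoU) computation behind
# [email protected]. Boxes are assumed to be [x_min, y_min, x_max, y_max], matching
# the bbox fields in the example annotation below.

def iou(box_a: list[float], box_b: list[float]) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under [email protected], a detection matches a ground-truth box when IoU >= 0.5.
print(iou([250, 150, 450, 350], [300, 200, 500, 400]))  # ~0.39 -> no match
```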
3. Dataset Structure
The dataset follows a clear, intuitive directory layout for easy access and parsing. All images are stored under images/, and the corresponding JSON annotation files under annotations/.
Data Instances
Each data point consists of one image and its metadata. Images are stored as .jpg files and metadata as .json files, with matching filenames.
Below is an example annotation file (.json):
```json
{
  "image_path": "images/0001.jpg",
  "image_id": "0001",
  "attributes": {
    "age_group": "25-35",
    "gender": "male",
    "race": "asian",
    "clothing": "t-shirt"
  },
  "annotations": [
    { "label": "face", "bbox": [250, 150, 450, 350], "expression": "neutral" },
    { "label": "behavior", "class": "using_phone", "bbox": [300, 400, 500, 600] },
    { "label": "drowsiness", "class": "none", "confidence": 0.98 }
  ]
}
```
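A minimal loading sketch, assuming the images/ and annotations/ layout described above and the field names from this example (both are inferred from this card, not a published API):

```python
# Minimal sketch: iterate image-annotation pairs and read a few fields.
# DATASET_ROOT and the directory/field names are assumptions based on the
# layout and example annotation described in this card.
import json
from pathlib import Path

DATASET_ROOT = Path(".")  # assumed: run from the dataset root

for image_path in sorted((DATASET_ROOT / "images").glob("*.jpg")):
    annotation_path = DATASET_ROOT / "annotations" / f"{image_path.stem}.json"
    with open(annotation_path, encoding="utf-8") as f:
        record = json.load(f)
    # Collect face boxes and one attribute as an example of downstream use.
    faces = [a["bbox"] for a in record["annotations"] if a["label"] == "face"]
    print(image_path.name, record["attributes"]["age_group"], faces)
```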
---
license: apache-2.0
size_categories:
- 1K<n<10K
---