
🧡 3DFDReal: 3D Fashion Data from the Real World

ETRI Media Intellectualization Research Section, ETRI

Teaser

3DFDReal is a real-world fashion dataset tailored for 3D vision tasks such as segmentation, reconstruction, rigging, and deployment in metaverse platforms. Captured from high-resolution multi-view 2D videos (4K @ 60fps), the dataset includes both individual fashion items and combined outfits worn by mannequins.


πŸ” Overview

3DFDReal bridges the gap between high-quality 3D fashion modeling and practical deployment in virtual environments, such as ZEPETO. It features over 1,000 3D point clouds, each enriched with detailed metadata including:

  • Class labels
  • Gender and pose type
  • Texture and semantic attributes
  • Structured segmentations

This dataset provides a foundation for advancing research in pose-aware 3D understanding, avatar modeling, and digital twin applications.


🎥 Data Collection Pipeline

The dataset is built through a structured four-stage pipeline:

  1. Asset Selection: Fashion items (e.g., shoes, tops, accessories) are selected and tagged individually or in sets.
  2. Recording Setup: Items or mannequins are filmed using an iPhone 13 Pro from multi-view angles for 3D reconstruction.
  3. 3D Ground Truth Generation: Videos are converted into colored point clouds and manually segmented using professional 3D labeling tools.
  4. Application & Validation: Assets are rigged and tested in avatar environments like ZEPETO for deployment readiness.

📊 Dataset Statistics

📈 Class Distribution

Used Fashion Item Count for the 3D Dataset. Pants and sweatshirts are used more than other fashion items.

![Fashion Item Count in Mannequin-wear Combinations](figures/Count appears in Combination.png) Sneakers and Pants are the most frequent fashion items in Mannequin-wear combinations.

👚 Combination Metadata

Combination Overview

Key observations:

  • Most mannequin outfits contain four distinct fashion items.
  • Gender distribution is balanced across combinations.
  • T-poses are selectively used for rigging, while upright poses dominate standard recordings.

πŸ“ Dataset Structure

```
dataset/
├── PointCloud_Asset/
├── Video_Asset/
├── Label_Asset/
├── PointCloud_Combine/
├── Video_Combine/
├── Label_Combine/
└── meta/
    ├── asset_meta.json
    ├── combination_meta.json
    ├── train_combination_meta.json
    ├── val_combination_meta.json
    ├── test_combination_meta.json
    └── label_map.csv
```

📦 Data Description

🔹 Individual Asset Files

  • PointCloud_Asset/
    Contains raw point clouds of individual clothing or body parts in .ply format.

  • Video_Asset/
    Rendered 3D videos of individual assets showing different rotations or views.

  • Label_Asset/
    Label information (e.g., category, class ID) for each individual asset.


🔹 Combined Assets (Mannequin Representations)

  • PointCloud_Combine/
    Combined point clouds representing mannequins wearing multiple assets. Split into train, val, and test sets.

  • Video_Combine/
    Rendered 3D videos of mannequins with asset combinations. Also split into train, val, and test.

  • Label_Combine/
    Label files corresponding to the combined point clouds and videos.


πŸ—‚οΈ Metadata Files (meta/)

Each metadata file contains the following fields:

  • label_str: class name

  • gender, pose, type

  • wnlemmas: fine-grained semantic tags

  • asset_meta.json:
    Metadata for individual assets

  • combination_meta.json:
    Metadata for all combinations

  • train_combination_meta.json, val_combination_meta.json, test_combination_meta.json:
    Define which combinations belong to each data split.

  • label_map.csv:
    Maps each label fullid from the first data acquisition to the corresponding label fullid from the second acquisition.
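
As an example of working with these files, here is a sketch that filters combination metadata by the fields listed above (`gender`, `pose`). The exact JSON layout and field values are assumptions; adjust the keys to match the actual files.

```python
import json

def filter_combinations(meta, gender=None, pose=None):
    """Return the combination entries matching the given gender/pose filters."""
    out = []
    for entry in meta:
        if gender is not None and entry.get("gender") != gender:
            continue
        if pose is not None and entry.get("pose") != pose:
            continue
        out.append(entry)
    return out

# In practice: meta = json.load(open("meta/combination_meta.json"))
# Hypothetical entries for illustration:
meta = [
    {"label_str": "pants", "gender": "female", "pose": "T-pose"},
    {"label_str": "sneakers", "gender": "male", "pose": "upright"},
]

tposed = filter_combinations(meta, pose="T-pose")
print([e["label_str"] for e in tposed])  # ['pants']
```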


🧪 Benchmarks

3D object segmentation

The baseline model, built on SAMPart3D, demonstrates high segmentation quality (mIoU: 0.9930) but shows varying average precision (AP) across classes.

3D object segmentation with SAMPart3D
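
For reference, the mean IoU reported above can be computed over per-point class labels as follows. This is the generic definition of the metric, not SAMPart3D's exact evaluation code.

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union > 0:  # skip classes absent from both prediction and GT
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy per-point labels (hypothetical, 3 classes):
pred = [0, 0, 1, 1, 2]
gt   = [0, 0, 1, 2, 2]
print(mean_iou(pred, gt, 3))  # 0.666... (IoUs: 1.0, 0.5, 0.5)
```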

3D data reconstruction

The baseline models are DDPM, a diffusion-based probabilistic model, for the generation task, and SVDFormer for the completion task. Performance is measured using Chamfer Distance (CD), Density-aware Chamfer Distance (DCD), and F1-Score (F1). For DDPM, sampled point clouds are shuffled without considering the sampling ratio 𝑛, and DDPM performance is measured with CD. DDPM achieves an average CD of 0.628 ± 0.887.
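
The Chamfer Distance used in the reconstruction benchmark can be sketched as the average squared nearest-neighbor distance in both directions between two point sets. This is the generic symmetric definition; the benchmark's exact normalization may differ.

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two lists of 3D points."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_way(src, dst):
        # average squared distance from each src point to its nearest dst point
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_way(a, b) + one_way(b, a)

# Toy point sets (hypothetical):
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(chamfer_distance(a, b))  # 1.0
```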

3D data reconstruction example


💻 Use Cases

  • Virtual try-on
  • Metaverse asset creation
  • Pose-aware segmentation
  • Avatar rigging & deformation simulation

📃 License

CC-BY 4.0


📚 Citation

@misc{3DFDReal,
  title={3DFDReal: 3D Fashion Data from the Real World},
  author={Jiyoun Lim and Jungwoo Son and Alex Lee and Sun-Joong Kim and Nam Kyung Lee and Won-Joo Park},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/kusses/3DFDReal}},
}

💬 Contact

For questions, please reach out via [email protected] or use the Discussions tab on Hugging Face.
