SynHairMan: Synthetic Video Matting Dataset
This repository contains the SynHairMan dataset, which was introduced in the paper Generative Video Matting.
Project Page: https://yongtaoge.github.io/project/gvm
GitHub Repository: https://github.com/aim-uofa/GVM
Dataset Description
The SynHairMan dataset addresses the challenge of limited high-quality ground-truth data in video matting. It is a large-scale synthetic and pseudo-labeled segmentation dataset built with a scalable data generation pipeline that renders diverse human bodies and fine-grained hair, yielding approximately 200 video clips, each 3 seconds long.
The dataset is designed for pre-training and fine-tuning video matting models: by bridging the domain gap between synthetic and real-world scenes, it aims to improve generalization to real-world footage while preserving strong temporal consistency.
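To get started, the minimal sketch below downloads the dataset from the Hugging Face Hub with huggingface_hub and lists the video files. The repo_id and the *.mp4 file layout are assumptions; substitute the actual repository id and adjust the glob pattern to match the real directory structure.

```python
# Minimal download sketch (assumptions: repo id and file layout are placeholders).
from pathlib import Path

from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository.
local_dir = snapshot_download(
    repo_id="aim-uofa/SynHairMan",  # placeholder repo id -- replace with the actual one
    repo_type="dataset",
)

# List the downloaded video clips; adjust the pattern if the clips are stored
# in a different format or folder structure.
clips = sorted(Path(local_dir).rglob("*.mp4"))
print(f"Found {len(clips)} video clips under {local_dir}")
```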
License
For academic usage, this project is licensed under the 2-clause BSD License. For commercial inquiries, please contact Chunhua Shen ([email protected]).
Citation
If you find this dataset helpful for your research, please cite the original paper:
@inproceedings{ge2025gvm,
  author    = {Ge, Yongtao and Xie, Kangyang and Xu, Guangkai and Ke, Li and Liu, Mingyu and Huang, Longtao and Xue, Hui and Chen, Hao and Shen, Chunhua},
  title     = {Generative Video Matting},
  booktitle = {Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
  series    = {SIGGRAPH Conference Papers '25},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  url       = {https://doi.org/10.1145/3721238.3730642},
  doi       = {10.1145/3721238.3730642}
}