jianhongbai committed · verified
Commit 0c465b0 · 1 Parent(s): e4211b1

Update README.md

Files changed (1)
1. README.md +83 -37
README.md CHANGED
@@ -3,63 +3,109 @@ language:
  - en
  license: "apache-2.0"
  ---

- # Acknowledgments
- We thank Jinwen Cao, Yisong Guo, Haowen Ji, Jichao Wang, and Yi Wang from Kuaishou Technology for their invaluable help in constructing the SynCamVideo-Dataset.

  # Dataset Card
- ## 📷 SynCamVideo Dataset
  ### 1. Dataset Introduction
- The SynCamVideo Dataset is a multi-camera synchronized video dataset rendered using the Unreal Engine 5. It consists of 1,000 different scenes, each captured by 36 cameras, resulting in a total of 36,000 videos. SynCamVideo features 50 different animals as the "main subject" and utilizes 20 different locations from [Poly Haven](https://polyhaven.com/hdris) as backgrounds. In each scene, 1-2 subjects are selected from the 50 animals and move along a predefined trajectory, the background is randomly chosen from the 20 locations, and the 36 cameras simultaneously record the subjects' movements.
- The cameras in each scene are placed on a hemispherical surface at a distance to the scene center of 3.5m - 9m. To ensure the rendered videos have minimal domain shift with real-world videos, we constraint the elevation of each camera between 0° - 45°, and the azimuth between 0° - 360°. Each camera is randomly sampled within the constraints described above, rather than using the same set of camera positions across scenes. The figure below shows an example, where the red star indicates the center point of the scene (slightly above the ground), and the videos are rendered from the synchronized cameras to capture the movements of the main subjects (a goat and a bear in the case).
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/3WEiTpKH9yDjOUn4wonZb.png)

- The SynCamVideo Dataset can be used to train multi-camera synchronized video generation models, inspiring applications in areas such as filmmaking and multi-view data generation for downstream tasks.

- ### 2. File Structure
  ```
- SynCamVideo
  ├── train
- │   ├── videos                      # training videos
- │   │   ├── scene1                  # one scene
- │   │   │   ├── xxx.mp4             # synchronized 100-frame videos at 480x720 resolution
- │   │   │   └── ...
- │   │   ├── ...
- │   │   └── scene1000
- │   │       ├── xxx.mp4
- │   │       └── ...
- │   ├── cameras                     # training cameras
- │   │   ├── scene1                  # one scene
- │   │   │   └── xxx.json            # extrinsic parameters corresponding to the videos
- │   │   ├── ...
- │   │   └── scene1000
- │   │       └── xxx.json
- │   └── caption
- │       └── cogvideox_caption.csv   # caption generated with "THUDM/cogvlm2-llama3-caption"
- └── val
-     └── cameras                     # validation cameras
-         ├── Hemi36_4m_0             # distance=4m, elevation=0°
-         │   └── Hemi36_4m_0.json    # 36 cameras: distance=4m, elevation=0°, azimuth=i * 10°
-         ├── ...
-         └── Hemi36_7m_45
-             └── Hemi36_7m_45.json
  ```

  ### 3. Useful scripts
  - Camera Visualization
  ```python
- python vis_cam.py --pose_file_path ./SynCamVideo-Dataset/val/cameras/Hemi36_4m_0/Hemi36_4m_0.json --num_cameras 36
  ```

- The visualization script is modified from [CameraCtrl](https://github.com/hehao13/CameraCtrl/blob/main/tools/visualize_trajectory.py), thanks for their inspiring work.

- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/_IHYjTaCt1pUusa1qjQcX.jpeg)

- ## Citation

- ```bibtex
  @misc{bai2024syncammaster,
  title={SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints},
  author={Jianhong Bai and Menghan Xia and Xintao Wang and Ziyang Yuan and Xiao Fu and Zuozhu Liu and Haoji Hu and Pengfei Wan and Di Zhang},
  - en
  license: "apache-2.0"
  ---
+ [Github](https://github.com/KwaiVGI/SynCamMaster)
+
+ [Project Page](https://jianhongbai.github.io/SynCamMaster/)
+
+ [Paper](https://arxiv.org/abs/2412.07760)

  # Dataset Card
+ ## 📷 Dataset: SynCamVideo Dataset
  ### 1. Dataset Introduction
+ **TL;DR:** The SynCamVideo Dataset is a multi-camera synchronized video dataset rendered using Unreal Engine 5. It includes synchronized multi-camera videos and their corresponding camera poses. The SynCamVideo Dataset can be valuable in fields such as camera-controlled video generation, synchronized video production, and 3D/4D reconstruction. All cameras in the SynCamVideo Dataset are stationary. If you require footage with moving cameras rather than stationary ones, please explore our [MultiCamVideo](https://huggingface.co/datasets/KwaiVGI/MultiCamVideo-Dataset) Dataset.
+
+ <div align="center">
+ <video controls autoplay style="width: 70%;" src="https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/qEUQstpMa3-6UjbG_0ytq.mp4"></video>
+ </div>
+
+ The dataset consists of 3.4K different dynamic scenes, each captured by 10 cameras, resulting in a total of 34K videos. Each dynamic scene is composed of four elements: {3D environment, character, animation, camera}. Specifically, we use an animation to drive a character, position the animated character within a 3D environment, and then set up time-synchronized cameras to render the multi-camera video data.
+ <p align="center">
+ <img src="https://github.com/user-attachments/assets/107c9607-e99b-4493-b715-3e194fcb3933" alt="Example Image" width="70%">
+ </p>
+
+ **3D Environment:** We collect 37 high-quality 3D environment assets from [Fab](https://www.fab.com). To minimize the domain gap between rendered data and real-world videos, we primarily select visually realistic 3D scenes, with a few stylized or surreal 3D scenes chosen as a supplement. To ensure data diversity, the selected scenes cover a variety of indoor and outdoor settings, such as city streets, shopping malls, cafes, office rooms, and the countryside.
+
+ **Character:** We collect 66 different human 3D models as characters from [Fab](https://www.fab.com) and [Mixamo](https://www.mixamo.com).
+
+ **Animation:** We collect 93 different animations from [Fab](https://www.fab.com) and [Mixamo](https://www.mixamo.com), including common actions such as waving, dancing, and cheering. We use these animations to drive the collected characters and create diverse data through various combinations.
+
+ **Camera:** To enhance the diversity of the dataset, each camera is randomly sampled on a hemispherical surface centered around the character.
+
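As an illustrative sketch of the hemispherical camera sampling described above (the exact radius and elevation bounds for this version of the dataset are not published, so the ranges below are assumptions, not dataset parameters):

```python
import numpy as np

def sample_camera_position(rng, radius_range=(3.5, 9.0), elev_range_deg=(0.0, 45.0)):
    """Sample one camera position on a hemisphere around the character.

    The radius/elevation bounds are illustrative assumptions, not the
    dataset's published sampling parameters.
    """
    r = rng.uniform(*radius_range)                   # distance to the character
    elev = np.deg2rad(rng.uniform(*elev_range_deg))  # elevation above the ground plane
    azim = np.deg2rad(rng.uniform(0.0, 360.0))       # azimuth around the character
    return np.array([
        r * np.cos(elev) * np.cos(azim),
        r * np.cos(elev) * np.sin(azim),
        r * np.sin(elev),                            # z >= 0: upper hemisphere only
    ])

rng = np.random.default_rng(0)
cameras = np.stack([sample_camera_position(rng) for _ in range(10)])  # 10 cameras per scene
```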
+ ### 2. Statistics and Configurations
+
+ Dataset Statistics:
+
+ | Number of Dynamic Scenes | Cameras per Scene | Total Videos |
+ |:------------------------:|:-----------------:|:------------:|
+ | 3,400                    | 10                | 34,000       |
+
+ Video Configurations:
+
+ | Resolution | Frame Number | FPS |
+ |:----------:|:------------:|:---:|
+ | 1280x1280  | 81           | 15  |
+
+ Note: You can use a center crop to adjust the videos' aspect ratio (e.g. to 16:9, 9:16, 4:3, or 3:4) to fit your video generation model.
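A minimal sketch of the center-crop suggestion above, operating on a decoded frame as a NumPy array (video decoding itself is out of scope here):

```python
import numpy as np

def center_crop(frame, aspect_w, aspect_h):
    """Center-crop an (H, W, C) frame to the given aspect ratio."""
    h, w = frame.shape[:2]
    target_ratio = aspect_w / aspect_h
    if w / h > target_ratio:                  # frame too wide: crop width
        new_w = int(round(h * target_ratio))
        x0 = (w - new_w) // 2
        return frame[:, x0:x0 + new_w]
    new_h = int(round(w / target_ratio))      # frame too tall: crop height
    y0 = (h - new_h) // 2
    return frame[y0:y0 + new_h]

frame = np.zeros((1280, 1280, 3), dtype=np.uint8)   # one 1280x1280 dataset frame
print(center_crop(frame, 16, 9).shape)              # (720, 1280, 3)
```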
+
+ Camera Configurations:
+
+ | Focal Length | Aperture | Sensor Height | Sensor Width |
+ |:------------:|:--------:|:-------------:|:------------:|
+ | 24mm         | 5.0      | 23.76mm       | 23.76mm      |
+
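The camera configuration above determines the pinhole intrinsics. A sketch of deriving them, assuming square pixels and a principal point at the image center (the dataset ships extrinsics only, so this matrix is a reconstruction from the table, not an official file):

```python
def intrinsics_from_config(focal_mm=24.0, sensor_mm=23.76, width=1280, height=1280):
    """Build a pinhole intrinsic matrix K from the camera configuration table.

    Assumes square pixels and a centered principal point; a convenience
    derivation, not a file shipped with the dataset.
    """
    fx = focal_mm / sensor_mm * width    # focal length in pixels, x
    fy = focal_mm / sensor_mm * height   # focal length in pixels, y
    return [[fx, 0.0, width / 2.0],
            [0.0, fy, height / 2.0],
            [0.0, 0.0, 1.0]]

K = intrinsics_from_config()  # fx = fy ≈ 1292.9 px for the 24mm lens on a 23.76mm sensor
```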
+ ### 3. File Structure
  ```
+ SynCamVideo-Dataset
  ├── train
+ │   └── f24_aperture5
+ │       ├── scene1                          # one dynamic scene
+ │       │   ├── videos
+ │       │   │   ├── cam01.mp4               # synchronized 81-frame videos at 1280x1280 resolution
+ │       │   │   ├── cam02.mp4
+ │       │   │   ├── ...
+ │       │   │   └── cam10.mp4
+ │       │   └── cameras
+ │       │       └── camera_extrinsics.json  # 81-frame camera extrinsics of the 10 cameras
+ │       ├── ...
+ │       └── scene3400
+ └── val
+     └── basic
+         ├── videos
+         │   ├── cam01.mp4                   # example videos corresponding to the validation cameras
+         │   ├── cam02.mp4
+         │   ├── ...
+         │   └── cam10.mp4
+         └── cameras
+             └── camera_extrinsics.json      # 10 cameras for validation
  ```
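A small helper following the directory layout above might look like this. `load_scene` is a hypothetical name, not a dataset script, and the schema inside `camera_extrinsics.json` is not specified here, so it is returned as raw parsed JSON:

```python
import json
from pathlib import Path

def load_scene(scene_dir):
    """Collect one scene's synchronized videos and its extrinsics file.

    Illustrative helper following the layout above; the extrinsics JSON
    is returned as-is, without assuming its internal schema.
    """
    scene_dir = Path(scene_dir)
    videos = sorted((scene_dir / "videos").glob("cam*.mp4"))  # cam01.mp4 ... cam10.mp4
    with open(scene_dir / "cameras" / "camera_extrinsics.json") as f:
        extrinsics = json.load(f)
    return videos, extrinsics

# e.g.: videos, extrinsics = load_scene("SynCamVideo-Dataset/train/f24_aperture5/scene1")
```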

  ### 3. Useful scripts
+ - Data Extraction
+ ```bash
+ tar -xzvf SynCamVideo-Dataset.tar.gz
+ ```
+ - Camera Visualization
  ```python
+ python vis_cam.py
  ```

+ <p align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/3WCWS0Axlnu5MyOBqMoVC.png" alt="Example Image" width="40%">
+ </p>

+ ## Acknowledgments
+ We thank Jinwen Cao, Yisong Guo, Haowen Ji, Jichao Wang, and Yi Wang from Kuaishou Technology for their invaluable help in constructing the SynCamVideo-Dataset.

+ ## 🌟 Citation

+ Please cite our paper if you find our dataset helpful.
+ ```
  @misc{bai2024syncammaster,
  title={SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints},
  author={Jianhong Bai and Menghan Xia and Xintao Wang and Ziyang Yuan and Xiao Fu and Zuozhu Liu and Haoji Hu and Pengfei Wan and Di Zhang},