# Allo-AVA: A Large-Scale Multimodal Dataset for Allocentric Avatar Animation

## Overview

Allo-AVA (Allocentric Audio-Visual Avatar) is a large-scale multimodal dataset for research and development in avatar animation, focused on generating natural, contextually appropriate gestures from text and audio inputs in an allocentric (third-person) perspective. It addresses the scarcity of high-quality multimodal data capturing the intricate synchronization between speech, facial expressions, and body movements, which is essential for creating lifelike avatar animations in virtual environments.

---

## Dataset Statistics

- **Total Videos:** 7,500
- **Total Duration:** 1,250 hours
- **Average Video Length:** 10 minutes
- **Unique Speakers:** ~3,500
- **Total Word Count:** 15 million
- **Average Words per Minute:** 208
- **Total Keypoints:** ~135 billion
- **Dataset Size:** 2.46 TB

---

## Content Distribution

- **TED Talks:** 40%
- **Interviews:** 30%
- **Panel Discussions:** 20%
- **Formal Presentations:** 10%

---

## Directory Structure

```
Allo-AVA/
├── video/
├── audio/
├── transcript/
├── keypoints/
└── keypoints_video/
```

- **`video/`**: Original MP4 video files.
- **`audio/`**: Extracted WAV audio files.
- **`transcript/`**: JSON files with word-level transcriptions and timestamps.
- **`keypoints/`**: JSON files with frame-level keypoint data.
- **`keypoints_video/`**: MP4 files visualizing the extracted keypoints overlaid on the original video.

---

## File Formats

- **Video:** MP4 (1080p, 30 fps)
- **Audio:** WAV (16-bit PCM, 48 kHz)
- **Transcripts:** JSON with word-level timestamps (see the illustrative example after this list).
- **Keypoints:** JSON containing normalized keypoint coordinates.
- **Keypoints Video:** MP4 with keypoints overlaid on the original video frames.
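
The exact transcript schema is not reproduced in this README. Purely as an illustration (the field names here are assumptions, not the dataset's actual keys), a word-level transcript with timestamps generally looks like:

```json
{
  "words": [
    { "word": "Today", "start": 0.12, "end": 0.45 },
    { "word": "you're", "start": 0.45, "end": 0.61 }
  ]
}
```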

---

## Keypoint Data

The dataset includes detailed keypoint information extracted using a fusion of **OpenPose** and **MediaPipe** models, capturing comprehensive body pose and movement data.

### Keypoint Extraction Models

- **OpenPose**:
  - Extracts 18 keypoints corresponding to major body joints.
  - Robust for full-body pose estimation.
- **MediaPipe**:
  - Provides 32 additional keypoints with enhanced detail on hands and facial landmarks.
  - Precisely captures subtle gestures and expressions.
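
The fusion pipeline itself is not published in this README. As a minimal sketch of how landmarks with the same `x`/`y`/`z`/`visibility` structure can be produced, here is stock MediaPipe Pose applied to one video; this is illustrative only, not the dataset's actual extraction code, and the input path is a hypothetical example:

```python
import cv2
import mediapipe as mp

# Hypothetical input path following the dataset's directory layout.
video_path = "Allo-AVA/video/example_video_id.mp4"

cap = cv2.VideoCapture(video_path)
with mp.solutions.pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            for lm in results.pose_landmarks.landmark:
                # Each landmark exposes normalized x, y, z and a visibility
                # score, mirroring the keypoint fields described below.
                print(lm.x, lm.y, lm.z, lm.visibility)
cap.release()
```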

### Keypoint Structure

Each keypoint is represented by:

- **`x`**: Horizontal position, normalized to [0, 1] from left to right of the frame.
- **`y`**: Vertical position, normalized to [0, 1] from top to bottom of the frame.
- **`z`**: Depth, normalized to [-1, 1], with 0 at the camera plane.
- **`visibility`**: Confidence score in [0.0, 1.0], indicating the keypoint's presence and accuracy.

**Example Keypoint Entry:**

```json
{
  "timestamp": 0.167,
  "keypoints": [
    {
      "x": 0.32285,
      "y": 0.25760,
      "z": -0.27907,
      "visibility": 0.99733
    },
    ...
  ],
  "transcript": "Today you're going to..."
}
```
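
Because `x` and `y` are normalized, a common first step is mapping them back to pixel coordinates. A minimal sketch, assuming the 1080p frames noted in the file-format spec (`frame_w`/`frame_h` are illustrative names, not dataset fields):

```python
# Convert a normalized keypoint to pixel coordinates for a given frame size.
def to_pixels(kp, frame_w=1920, frame_h=1080):
    """kp is one entry from the "keypoints" list: {"x", "y", "z", "visibility"}."""
    return int(kp["x"] * frame_w), int(kp["y"] * frame_h)

kp = {"x": 0.32285, "y": 0.25760, "z": -0.27907, "visibility": 0.99733}
print(to_pixels(kp))  # (619, 278) on a 1920x1080 frame
```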

---

## Usage

### Downloading the Dataset

To obtain access to the Allo-AVA dataset, please [contact us](#contact) for download instructions.

### Extracting the Dataset

Once downloaded, extract the dataset to your desired directory:

```bash
unzip allo-ava-dataset.zip -d /path/to/destination
```

### Accessing the Data

You can use various programming languages or tools to process the dataset. Below is an example using Python.

#### Example Usage in Python

```python
import json

import cv2
import librosa

# Paths to one sample's files; all modalities share the same video ID
video_id = "example_video_id"
video_path = f"Allo-AVA/video/{video_id}.mp4"
audio_path = f"Allo-AVA/audio/{video_id}.wav"
transcript_path = f"Allo-AVA/transcript/{video_id}.json"
keypoints_path = f"Allo-AVA/keypoints/{video_id}.json"

# Load video
cap = cv2.VideoCapture(video_path)

# Load audio at its native 48 kHz sample rate
audio, sr = librosa.load(audio_path, sr=48000)

# Load transcript
with open(transcript_path, 'r') as f:
    transcript = json.load(f)

# Load keypoints
with open(keypoints_path, 'r') as f:
    keypoints = json.load(f)

# Your processing code here
# For example, iterate over keypoints and synchronize with video frames
# (see the sketch below)
```
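
Following the comment above, keypoint entries can be paired with video frames by converting each timestamp to a frame index. This continues from the variables loaded in the previous block and is a minimal sketch under one assumption: that the keypoints file is a list of entries shaped like the example shown earlier, each with a `"timestamp"` field in seconds.

```python
# Pair each keypoint entry with the video frame nearest its timestamp.
fps = cap.get(cv2.CAP_PROP_FPS)  # videos are 30 fps per the file-format spec

for entry in keypoints:
    frame_idx = round(entry["timestamp"] * fps)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
    ok, frame = cap.read()
    if not ok:
        continue
    # `frame` and `entry["keypoints"]` are now time-aligned;
    # process them together here.

cap.release()
```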

---

## Ethical Considerations

- **Data Source**: All videos were collected from publicly available sources such as YouTube, adhering to their terms of service.
- **Privacy**:
  - **Face Blurring**: Faces in keypoint visualization videos have been blurred to protect individual identities.
  - **Voice Anonymization**: Voice pitch modification has been applied to audio files to anonymize speakers.
  - **Transcript Sanitization**: Personal identifiers (e.g., names, locations) in transcripts have been replaced with placeholders.
- **Usage Guidelines**:
  - The dataset is intended for **research and educational purposes** only.
  - Users must comply with all applicable laws and regulations regarding data privacy and intellectual property.
  - Any use of the dataset must respect the rights and privacy of individuals represented in the data.
166 |
+
|
167 |
+
---
|
168 |
+
|
169 |
+
## License
|
170 |
+
|
171 |
+
The Allo-AVA dataset is released under the **Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)**.
|
172 |
+
|
173 |
+
Please refer to the [LICENSE](LICENSE) file for more details.

---

## Future Work

Planned enhancements for the Allo-AVA dataset include:

- **Expanding Linguistic and Cultural Diversity**: Incorporating more languages and cultural contexts to enable cross-cultural studies.
- **Enhanced Annotations**: Adding fine-grained labels for gestures, emotions, and semantic meanings.
- **Multiview Recordings**: Including multiview videos to support 3D reconstruction and the study of interactive behaviors.
- **Improved Synchronization**: Refining multimodal synchronization to capture subtle expressions and micro-movements.
- **Domain-Specific Subsets**: Creating subsets tailored to specific research domains or applications.

---

## Citing Allo-AVA

If you use the Allo-AVA dataset in your research, please cite our paper:

```bibtex
@inproceedings{punjwani2024alloava,
  title={Allo-AVA: A Large-Scale Multimodal Dataset for Allocentric Avatar Animation},
  author={Punjwani, Saif and Heck, Larry},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
  year={2024}
}
```

---

## Contact

For any questions or issues regarding the Allo-AVA dataset, please contact:

- **Saif Punjwani**
  - Email: [[email protected]](mailto:[email protected])
- **Larry Heck**
  - Email: [[email protected]](mailto:[email protected])

---

## Acknowledgments

We thank all the content creators whose public videos contributed to this dataset. This work was supported by [list any funding sources or supporting organizations].

---

## Disclaimer

The authors are not responsible for any misuse of the dataset. Users are expected to comply with all relevant ethical guidelines and legal regulations when using the dataset.

---

Thank you for your interest in the Allo-AVA dataset! We hope it serves as a valuable resource for advancing research in avatar animation and human-computer interaction.