Add pipeline tag and library name to metadata
#1 by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,3 +1,253 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+pipeline_tag: image-to-video
+library_name: pytorch
+---

<h1 align="center">KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution</h1>

<div align="center">
  <a href="https://scholar.google.com/citations?user=LuIdiV8AAAAJ" target="_blank">Antoni Bigata</a><sup>1</sup>
  <a href="https://scholar.google.com/citations?user=08YfKjcAAAAJ" target="_blank">Rodrigo Mira</a><sup>1</sup>
  <a href="https://scholar.google.com/citations?user=zdg4dj0AAAAJ" target="_blank">Stella Bounareli</a><sup>1</sup>
  <a href="https://scholar.google.com/citations?user=ty2OYvcAAAAJ" target="_blank">Michał Stypułkowski</a><sup>2</sup>
  <a href="https://scholar.google.com/citations?user=WwLpK44AAAAJ" target="_blank">Konstantinos Vougioukas</a><sup>1</sup>
  <a href="https://scholar.google.com/citations?user=6v-UKEMAAAAJ" target="_blank">Stavros Petridis</a><sup>1</sup>
  <a href="https://scholar.google.com/citations?user=ygpxbK8AAAAJ" target="_blank">Maja Pantic</a><sup>1</sup>
</div>

<br>

<div align="center">
  <div class="is-size-5 publication-authors" style="margin-top: 1rem;">
    <span class="author-block"><sup>1</sup>Imperial College London,</span>
    <span class="author-block"><sup>2</sup>University of Wrocław</span>
  </div>
</div>

<br>

<div align="center">
  <a href="https://antonibigata.github.io/KeySync/"><img src="https://img.shields.io/badge/Project-Page-blue"></a>
  <a href="https://huggingface.co/toninio19/keysync"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow"></a>
  <a href="https://huggingface.co/spaces/toninio19/keysync-demo"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Demo-yellow"></a>
  <a href="https://arxiv.org/abs/2505.00497"><img src="https://img.shields.io/badge/Paper-Arxiv-red"></a>
</div>

## 📋 Table of Contents
- [Abstract](#abstract)
- [Demo Examples](#demo-examples)
- [Architecture](#architecture)
- [Installation](#installation)
- [Quick Start Guide](#quick-start-guide)
- [Advanced Usage](#advanced-usage)
- [LipScore Evaluation](#lipscore-evaluation)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)

## Abstract

Lip synchronization, the task of aligning lip movements in an existing video with new input audio, is typically framed as a simpler variant of audio-driven facial animation. However, as well as suffering from the usual issues in talking-head generation (e.g., temporal consistency), lip synchronization presents significant new challenges, such as expression leakage from the input video and facial occlusions, which can severely impact real-world applications like automated dubbing but are often neglected in existing works. To address these shortcomings, we present KeySync, a two-stage framework that solves the issue of temporal consistency while also incorporating solutions for leakage and occlusions using a carefully designed masking strategy. We show that KeySync achieves state-of-the-art results in lip reconstruction and cross-synchronization, improving visual quality and reducing expression leakage according to LipLeak, our novel leakage metric. Furthermore, we demonstrate the effectiveness of our new masking approach in handling occlusions and validate our architectural choices through several ablation studies.

### Media

<table>
  <tr>
    <td><img src="assets/media/vid_dub_1.gif" alt="Video 1"/></td>
    <td><img src="assets/media/vid_dub_2.gif" alt="Video 2"/></td>
    <td><img src="assets/media/vid_dub_3.gif" alt="Video 3"/></td>
    <td><img src="assets/media/vid_dub_4.gif" alt="Video 4"/></td>
  </tr>
</table>

For more visualizations, please visit [https://antonibigata.github.io/KeySync/](https://antonibigata.github.io/KeySync/).

### Online Demo

We provide an interactive demo of KeySync at [https://huggingface.co/spaces/toninio19/keysync-demo](https://huggingface.co/spaces/toninio19/keysync-demo), where you can upload your own video and audio files to create synchronized videos. Due to GPU restrictions on Hugging Face Spaces, the demo is limited to videos of at most 6 seconds. For longer videos or better performance, we recommend using the inference scripts provided in this repository to run KeySync locally on your own hardware.

## Architecture

<div align="center">
  <img src="assets/media/drawing-1.png" width="100%">
</div>

## Installation

### Prerequisites
- CUDA-compatible GPU
- Python 3.11
- Conda package manager

### Setup Environment

```bash
# Create a conda environment with the necessary dependencies
conda create -n KeySync python=3.11 nvidia::cuda-nvcc conda-forge::ffmpeg -y
conda activate KeySync

# Install the Python requirements
python -m pip install -r requirements.txt --no-deps

# Install PyTorch with CUDA support
python -m pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121

# OPTIONAL: install SAM 2 from source
git clone https://github.com/facebookresearch/sam2.git && cd sam2
pip install -e . --no-deps
```

### Known Issues

If you run into version conflicts between omegaconf and antlr4, you can fix them by running:

```bash
python -m pip uninstall omegaconf antlr4-python3-runtime -y
python -m pip install "omegaconf==2.3.0" "antlr4-python3-runtime==4.9.3"
```

### Download Pretrained Models

```bash
git lfs install
git clone https://huggingface.co/toninio19/keysync pretrained_models
```

## Quick Start Guide

### 1. Data Preparation

To use KeySync with your own data, the simplest approach is to organize your files as follows:
- Place video files (`.mp4`) in the `data/videos/` directory
- Place audio files (`.wav`) in the `data/audios/` directory

Otherwise, you need to specify a different `video_dir` and `audio_dir`.

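As a minimal sketch of that default layout (the `~/my_clips` source path below is only a placeholder for wherever your own media lives):

```bash
# Create the default input directories expected by the scripts
mkdir -p data/videos data/audios

# Copy your own media into them (the source path is a placeholder)
cp ~/my_clips/*.mp4 data/videos/
cp ~/my_clips/*.wav data/audios/
```
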
### 2. Running Inference

Inference requires precomputed audio and video embeddings. The simplest way to run inference on your own data is the `infer_raw_data.sh` script, which computes those embeddings for you:

```bash
bash scripts/infer_raw_data.sh \
    --filelist "data/videos" \
    --file_list_audio "data/audios" \
    --output_folder "my_animations" \
    --keyframes_ckpt "path/to/keyframes_model.ckpt" \
    --interpolation_ckpt "path/to/interpolation_model.ckpt" \
    --compute_until 45
```

This script handles the entire pipeline:
1. Extracts video embeddings
2. Computes landmarks
3. Computes audio embeddings (using WavLM and HuBERT)
4. Creates a filelist for inference
5. Runs the full animation pipeline

For more control over the inference process, you can use the `inference.sh` script directly:

```bash
bash scripts/inference.sh \
    --output_folder "output_folder_name" \
    --file_list "path/to/filelist.txt" \
    --keyframes_ckpt "path/to/keyframes_model.ckpt" \
    --interpolation_ckpt "path/to/interpolation_model.ckpt" \
    --compute_until "compute_until"
```

Or, if you also need to save the intermediate embeddings so they can be reused for faster recomputation:

```bash
bash scripts/infer_and_compute_emb.sh \
    --filelist "data/videos" \
    --file_list_audio "data/audios" \
    --output_folder "my_animations" \
    --keyframes_ckpt "path/to/keyframes_model.ckpt" \
    --interpolation_ckpt "path/to/interpolation_model.ckpt" \
    --compute_until 45
```

### 3. Training Your Own Models

The dataloader needs the paths to all the videos you want to train on, with the audio and video data separated as follows:
- root_folder:
  - videos: raw videos
  - videos_emb: precomputed embeddings for the videos
  - audios: raw audios
  - audios_emb: precomputed embeddings for the audios
  - landmarks_folder: landmarks computed from the raw videos

You can use different folders, but make sure to update them in the training scripts.

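As a sketch of that training layout (folder names taken from the list above; `root_folder` itself is whatever path you point the dataloader at):

```bash
# Create the expected training layout under your chosen root folder
mkdir -p root_folder/{videos,videos_emb,audios,audios_emb,landmarks_folder}
```
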
KeySync uses a two-stage model approach. You can train each component separately:

#### Keyframe Model Training

```bash
bash train_keyframe.sh path/to/filelist.txt [num_workers] [batch_size] [num_devices]
```

#### Interpolation Model Training

```bash
bash train_interpolation.sh path/to/filelist.txt [num_workers] [batch_size] [num_devices]
```

## Advanced Usage

### Command Line Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `video_dir` | Directory with input videos | `data/videos` |
| `audio_dir` | Directory with input audio files | `data/audios` |
| `output_folder` | Where to save generated animations | - |
| `keyframes_ckpt` | Keyframe model checkpoint path | - |
| `interpolation_ckpt` | Interpolation model checkpoint path | - |
| `compute_until` | Animation length in seconds | 45 |
| `fix_occlusion` | Enable occlusion handling to mask objects that block the face | False |
| `position` | Coordinates of the object to mask in the occlusion pipeline (format: x,y, e.g., "450,450") | None |
| `start_frame` | Frame number where the specified position coordinates apply (using the first frame typically works best) | 0 |

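As an illustrative sketch of an occlusion-aware run, assuming these parameters are passed to the inference scripts in the same `--flag value` style as the options shown earlier (the exact flag spelling and boolean syntax may differ):

```bash
# Hypothetical example: mask an object located around pixel (450, 450) in frame 0
bash scripts/infer_raw_data.sh \
    --filelist "data/videos" \
    --file_list_audio "data/audios" \
    --output_folder "my_animations_occlusion" \
    --keyframes_ckpt "path/to/keyframes_model.ckpt" \
    --interpolation_ckpt "path/to/interpolation_model.ckpt" \
    --compute_until 45 \
    --fix_occlusion True \
    --position "450,450" \
    --start_frame 0
```
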
### Advanced Configuration

For more fine-grained control, you can edit the configuration files in the `configs/` directory.

## LipScore Evaluation

KeySync can be evaluated using the LipScore metric available in the `evaluation/` folder. This metric measures the lip synchronization quality between generated and ground truth videos.

To use the LipScore evaluation, you'll need to install the following dependencies:

1. Face detection library: [https://github.com/hhj1897/face_detection](https://github.com/hhj1897/face_detection)
2. Face alignment library: [https://github.com/ibug-group/face_alignment](https://github.com/ibug-group/face_alignment)

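A minimal install sketch for these two dependencies, assuming both repositories install from source with `pip install -e .` (check each repository's README for the exact instructions and any model-weight downloads):

```bash
# Face detection (ibug)
git clone https://github.com/hhj1897/face_detection.git
pip install -e ./face_detection

# Face alignment (ibug)
git clone https://github.com/ibug-group/face_alignment.git
pip install -e ./face_alignment
```
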
Once installed, you can use the `LipScore` class in `evaluation/lipscore.py` to evaluate your generated animations.

## Citation

If you use KeySync in your research, please cite our paper:

```bibtex
@misc{bigata2025keysyncrobustapproachleakagefree,
  title={KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution},
  author={Antoni Bigata and Rodrigo Mira and Stella Bounareli and Michał Stypułkowski and Konstantinos Vougioukas and Stavros Petridis and Maja Pantic},
  year={2025},
  eprint={2505.00497},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.00497},
}
```

## Acknowledgements

This project builds on [Stability AI's Generative Models](https://github.com/Stability-AI/generative-models). We thank the Stability AI team for their excellent work and for making their code publicly available.