
MuseTalk

MuseTalk: Real-Time High-Fidelity Video Dubbing via Spatio-Temporal Sampling

Yue Zhang*, Zhizhou Zhong*, Minhao Liu*, Zhaokang Chen, Bin Wu†, Yubin Zeng, Chao Zhan, Junxin Huang, Yingjie He, Wenjiang Zhou (*Equal Contribution, †Corresponding Author)

Lyra Lab, Tencent Music Entertainment

GitHub | Hugging Face Space | Technical report

We introduce MuseTalk, a real-time, high-quality lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied to input videos, e.g., those generated by MuseV, as part of a complete virtual human solution.

🔥 Updates

We are excited to unveil MuseTalk 1.5. This version (1) incorporates perceptual loss, GAN loss, and sync loss into training, significantly boosting overall performance, and (2) adopts a two-stage training strategy together with a spatio-temporal data sampling approach to strike a balance between visual quality and lip-sync accuracy. Learn more details here.
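To make the loss combination concrete, below is a minimal sketch of how such a composite generator objective can be assembled. The loss weights and the perceptual_net / discriminator / sync_net modules are placeholders for illustration only, not MuseTalk's actual training configuration.

import torch.nn.functional as F

def generator_loss(pred_img, target_img, audio_feat,
                   perceptual_net, discriminator, sync_net,
                   w_perc=0.01, w_gan=0.01, w_sync=0.03):
    # Pixel reconstruction term.
    l1 = F.l1_loss(pred_img, target_img)
    # Perceptual term: distance in the feature space of a frozen network.
    perc = F.l1_loss(perceptual_net(pred_img), perceptual_net(target_img))
    # Adversarial term: encourage the discriminator to score fakes as real.
    gan = -discriminator(pred_img).mean()
    # Sync term: penalize audio-visual mismatch (lower score = better sync here).
    sync = sync_net(pred_img, audio_feat).mean()
    # Weights above are illustrative placeholders only.
    return l1 + w_perc * perc + w_gan * gan + w_sync * sync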

Overview

MuseTalk is a real-time high quality audio-driven lip-syncing model trained in the latent space of ft-mse-vae, which

  1. modifies an unseen face according to the input audio, with a face region size of 256 x 256.
  2. supports audio in various languages, such as Chinese, English, and Japanese.
  3. supports real-time inference at 30fps+ on an NVIDIA Tesla V100.
  4. supports modification of the center point of the proposed face region, which SIGNIFICANTLY affects the generation results.
  5. provides a checkpoint trained on the HDTF dataset and a private dataset.

News

  • [03/28/2025] :mega: We are thrilled to announce the release of version 1.5. It is a significant improvement over version 1.0, with enhanced clarity, identity consistency, and precise lip-speech synchronization. We have updated the technical report with more details.
  • [10/18/2024] We released the technical report. It details a model superior to the open-source L1-loss version, adding GAN and perceptual losses for improved clarity and a sync loss for enhanced lip-sync performance.
  • [04/17/2024] We released a pipeline that utilizes MuseTalk for real-time inference.
  • [04/16/2024] Released a Gradio demo on Hugging Face Spaces (thanks to the HF team for their community grant).
  • [04/02/2024] Released the MuseTalk project and pretrained models.

Model

Model Structure

MuseTalk is trained in the latent space, where images are encoded by a frozen VAE and audio is encoded by a frozen whisper-tiny model. The architecture of the generation network is borrowed from the UNet of stable-diffusion-v1-4, with the audio embeddings fused with the image embeddings via cross-attention.

Note that although we use an architecture very similar to Stable Diffusion, MuseTalk is distinct in that it is NOT a diffusion model. Instead, MuseTalk inpaints the latent space in a single step, as sketched below.
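The following is a minimal, purely illustrative sketch of that single-step idea: a masked face latent and a reference latent are combined with pooled audio features in one forward pass, with no denoising loop. All class names, tensor shapes, and the toy fusion layer are assumptions for illustration and do not correspond to the actual MuseTalk implementation.

import torch

# Illustrative shapes only: a 4-channel 32x32 latent for a 256x256 face crop,
# and whisper-tiny-style audio features of dimension 384 (assumed).
B = 1
masked_latent = torch.randn(B, 4, 32, 32)      # lower half of the face masked out
reference_latent = torch.randn(B, 4, 32, 32)   # reference frame for identity/pose
audio_feat = torch.randn(B, 50, 384)

class ToyInpainter(torch.nn.Module):
    # Stand-in for the UNet; the real model fuses audio into image features
    # via cross-attention inside a stable-diffusion-style UNet.
    def __init__(self):
        super().__init__()
        self.img = torch.nn.Conv2d(8, 4, kernel_size=3, padding=1)
        self.aud = torch.nn.Linear(384, 4)

    def forward(self, latents, audio):
        a = self.aud(audio.mean(dim=1))[:, :, None, None]  # crude audio conditioning
        return self.img(latents) + a

unet = ToyInpainter()
with torch.no_grad():
    # Single forward pass: no iterative diffusion sampling.
    pred_latent = unet(torch.cat([masked_latent, reference_latent], dim=1), audio_feat)
print(pred_latent.shape)  # torch.Size([1, 4, 32, 32]); decoded back to pixels by the VAE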

Cases

Input Video


https://github.com/TMElyralab/MuseTalk/assets/163980830/37a3a666-7b90-4244-8d3a-058cb0e44107


https://github.com/user-attachments/assets/1ce3e850-90ac-4a31-a45f-8dfa4f2960ac


https://github.com/user-attachments/assets/fa3b13a1-ae26-4d1d-899e-87435f8d22b3


https://github.com/user-attachments/assets/15800692-39d1-4f4c-99f2-aef044dc3251


https://github.com/user-attachments/assets/a843f9c9-136d-4ed4-9303-4a7269787a60


https://github.com/user-attachments/assets/6eb4e70e-9e19-48e9-85a9-bbfa589c5fcb

MuseTalk 1.0


https://github.com/user-attachments/assets/c04f3cd5-9f77-40e9-aafd-61978380d0ef


https://github.com/user-attachments/assets/2051a388-1cef-4c1d-b2a2-3c1ceee5dc99


https://github.com/user-attachments/assets/b5f56f71-5cdc-4e2e-a519-454242000d32


https://github.com/user-attachments/assets/a5843835-04ab-4c31-989f-0995cfc22f34


https://github.com/user-attachments/assets/3dc7f1d7-8747-4733-bbdd-97874af0c028


https://github.com/user-attachments/assets/3c78064e-faad-4637-83ae-28452a22b09a

MuseTalk 1.5


https://github.com/user-attachments/assets/999a6f5b-61dd-48e1-b902-bb3f9cbc7247


https://github.com/user-attachments/assets/d26a5c9a-003c-489d-a043-c9a331456e75


https://github.com/user-attachments/assets/471290d7-b157-4cf6-8a6d-7e899afa302c


https://github.com/user-attachments/assets/1ee77c4c-8c70-4add-b6db-583a12faa7dc


https://github.com/user-attachments/assets/370510ea-624c-43b7-bbb0-ab5333e0fcc4


https://github.com/user-attachments/assets/b011ece9-a332-4bc1-b8b7-ef6e383d7bde

TODO:

  • Trained models and inference code.
  • Hugging Face Gradio demo.
  • Code for real-time inference.
  • Technical report.
  • A better model with an updated technical report.
  • Training and dataloader code (expected completion on 04/04/2025).
  • Real-time inference code for the 1.5 version (note: MuseTalk 1.5 has the same computation time as 1.0 and supports real-time inference; the code implementation will be released soon).

Getting Started

We provide a detailed tutorial below covering installation and the basic usage of MuseTalk for new users:

Third party integration

We thank the community for the third-party integrations, which make installation and use more convenient for everyone. Please note that we have not verified, maintained, or updated these third-party integrations; refer to this project for specific results.

ComfyUI

Installation

To prepare the Python environment and install additional packages such as opencv, diffusers, mmcv, etc., please follow the steps below:

Build environment

We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:

pip install -r requirements.txt

mmlab packages

pip install --no-cache-dir -U openmim 
mim install mmengine 
mim install "mmcv>=2.0.1" 
mim install "mmdet>=3.1.0" 
mim install "mmpose>=1.1.0" 

Download ffmpeg-static

Download ffmpeg-static and set:

export FFMPEG_PATH=/path/to/ffmpeg

for example:

export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
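A quick way to verify that the variable points at a usable ffmpeg build (this assumes FFMPEG_PATH is the directory of the extracted static build, as in the example above):

import os
import subprocess

# Fails loudly if FFMPEG_PATH is unset or does not contain an ffmpeg binary.
ffmpeg_bin = os.path.join(os.environ["FFMPEG_PATH"], "ffmpeg")
subprocess.run([ffmpeg_bin, "-version"], check=True)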

Download weights

You can download weights manually as follows:

  1. Download our trained weights.
# pip install -U "huggingface_hub[cli]"
export HF_ENDPOINT=https://hf-mirror.com 
huggingface-cli download TMElyralab/MuseTalk --local-dir models/
  2. Download the weights of other components (a scripted download sketch is shown below):
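If you prefer to script the downloads, here is a sketch using huggingface_hub. TMElyralab/MuseTalk is the repo used by the CLI command above; the repo ids for the VAE and whisper components are assumptions and should be verified, and the dwpose and face-parse-bisent weights come from their upstream projects and are not covered here.

from huggingface_hub import snapshot_download

# MuseTalk weights (same repo as the huggingface-cli command above).
snapshot_download("TMElyralab/MuseTalk", local_dir="models/")

# Other components -- repo ids below are assumptions; verify before use.
snapshot_download("stabilityai/sd-vae-ft-mse", local_dir="models/sd-vae-ft-mse")
snapshot_download("openai/whisper-tiny", local_dir="models/whisper")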

Finally, these weights should be organized in models as follows:

./models/
├── musetalk
│   ├── musetalk.json
│   └── pytorch_model.bin
├── musetalkV15
│   ├── musetalk.json
│   └── unet.pth
├── dwpose
│   └── dw-ll_ucoco_384.pth
├── face-parse-bisent
│   ├── 79999_iter.pth
│   └── resnet18-5c106cde.pth
├── sd-vae-ft-mse
│   ├── config.json
│   └── diffusion_pytorch_model.bin
└── whisper
    ├── config.json
    ├── pytorch_model.bin
    └── preprocessor_config.json
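Before running inference, a quick sanity check that every expected file is in place (paths taken directly from the tree above):

from pathlib import Path

EXPECTED = [
    "musetalk/musetalk.json", "musetalk/pytorch_model.bin",
    "musetalkV15/musetalk.json", "musetalkV15/unet.pth",
    "dwpose/dw-ll_ucoco_384.pth",
    "face-parse-bisent/79999_iter.pth", "face-parse-bisent/resnet18-5c106cde.pth",
    "sd-vae-ft-mse/config.json", "sd-vae-ft-mse/diffusion_pytorch_model.bin",
    "whisper/config.json", "whisper/pytorch_model.bin", "whisper/preprocessor_config.json",
]

missing = [p for p in EXPECTED if not (Path("models") / p).exists()]
print("All weights in place." if not missing else f"Missing files: {missing}")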
    

Quickstart

Inference

We provide inference scripts for both versions of MuseTalk:

MuseTalk 1.5 (Recommended)

sh inference.sh v1.5

This inference script supports both MuseTalk 1.5 and 1.0 models:

  • For MuseTalk 1.5: Use the command above with the V1.5 model path
  • For MuseTalk 1.0: Use the same script but point to the V1.0 model path

configs/inference/test.yaml is the path to the inference configuration file, which specifies video_path and audio_path. video_path can be a video file, an image file, or a directory of images.
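For reference, such a configuration can also be generated programmatically. The task_0 grouping and the example video path below are assumptions about the file layout; only the video_path and audio_path keys come from the description above, so check the shipped configs/inference/test.yaml for the exact structure.

import yaml  # pip install pyyaml

config = {
    "task_0": {
        "video_path": "data/video/sample.mp4",  # video file, image file, or image directory
        "audio_path": "data/audio/yongen.wav",  # driving audio
    }
}

with open("configs/inference/test.yaml", "w") as f:
    yaml.safe_dump(config, f)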

MuseTalk 1.0

sh inference.sh v1.0

We recommend using input video at 25fps, the same frame rate used when training the model. If your video's frame rate is far below 25fps, we recommend applying frame interpolation or directly converting the video to 25fps with ffmpeg, for example as sketched below.
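A minimal way to script the 25fps conversion (paths are placeholders; adjust codec and quality options to your needs):

import subprocess

# Re-encode an arbitrary-fps video to the 25fps used during training.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-r", "25", "output_25fps.mp4"],
    check=True,
)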

Test cases for 1.0

(Results table omitted here: columns are Image, MuseV, and MuseV + MuseTalk; see the GitHub repository for the demo videos.)

Using bbox_shift to obtain adjustable results (for 1.0)

:mag_right: We have found that the upper bound of the mask has an important impact on mouth openness. Thus, to control the mask region, we suggest using the bbox_shift parameter: positive values (moving towards the lower half) increase mouth openness, while negative values (moving towards the upper half) decrease it.

You can start by running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.

For example, in the case of Xinying Sun, running the default configuration shows that the adjustable value range is [-9, 9]. Then, to decrease the mouth openness, we set the value to -7.

python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -7 

:pushpin: More technical details can be found in bbox_shift.

Combining MuseV and MuseTalk

As a complete virtual human generation solution, we suggest first applying MuseV to generate a video (text-to-video, image-to-video, or pose-to-video) by referring to this. Frame interpolation is suggested to increase the frame rate. Then, you can use MuseTalk to generate a lip-synced video by referring to this.

Real-time inference

Here, we provide a real-time inference script. It first applies the necessary pre-processing, such as face detection, face parsing, and VAE encoding, in advance. During inference, only the UNet and the VAE decoder are involved, which makes MuseTalk real-time.
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --batch_size 4

configs/inference/realtime.yaml is the path to the real-time inference configuration file, including preparation, video_path, bbox_shift, and audio_clips.

  1. Set preparation to True in realtime.yaml to prepare the materials for a new avatar. (If the bbox_shift has changed, you also need to re-prepare the materials.)
  2. After that, the avatar will use an audio clip selected from audio_clips to generate video.
    Inferring using: data/audio/yongen.wav
    
  3. While MuseTalk is inferring, sub-threads can simultaneously stream the results to the users. The generation process can achieve 30fps+ on an NVIDIA Tesla V100.
  4. Set preparation to False and run this script if you want to generate more videos using the same avatar.
Note for Real-time inference
  1. If you want to generate multiple videos using the same avatar/video, you can also use this script to SIGNIFICANTLY expedite the generation process (see the sketch after these notes).
  2. In the previous script, the generation time is also limited by I/O (e.g. saving images). If you just want to test the generation speed without saving the images, you can run
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --skip_save_images
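To illustrate notes 1 and 2, the sketch below runs the preparation pass once and then reuses the cached avatar for subsequent generations, optionally skipping image saving to benchmark pure generation speed. It assumes every top-level entry in realtime.yaml is an avatar with a preparation key, as described above; adapt it to the actual config layout.

import subprocess
import yaml

CFG = "configs/inference/realtime.yaml"

def set_preparation(flag: bool) -> None:
    # Assumption: each top-level entry is an avatar dict with a `preparation` key.
    with open(CFG) as f:
        cfg = yaml.safe_load(f)
    for avatar in cfg.values():
        avatar["preparation"] = flag
    with open(CFG, "w") as f:
        yaml.safe_dump(cfg, f)

def run_realtime(extra_args=()) -> None:
    subprocess.run(
        ["python", "-m", "scripts.realtime_inference",
         "--inference_config", CFG, "--batch_size", "4", *extra_args],
        check=True,
    )

# First run: build the avatar materials (face detection, parsing, VAE encoding).
set_preparation(True)
run_realtime()

# Later runs reuse the cached avatar; pass --skip_save_images to measure
# generation speed without the image-saving I/O.
set_preparation(False)
run_realtime()
run_realtime(["--skip_save_images"])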

Acknowledgement

  1. We thank the authors of open-source components such as whisper, dwpose, face-alignment, face-parsing, and S3FD.
  2. MuseTalk draws heavily on diffusers and isaacOnline/whisper.
  3. MuseTalk is built on the HDTF dataset.

Thanks for open-sourcing!

Limitations

  • Resolution: Though MuseTalk uses a face region size of 256 x 256, which makes it better than other open-source methods, it has not yet reached the theoretical resolution bound. We will continue to work on this problem.
    If you need higher resolution, you can apply super-resolution models such as GFPGAN in combination with MuseTalk.

  • Identity preservation: Some details of the original face are not well preserved, such as mustache, lip shape and color.

  • Jitter: There is some jitter because the current pipeline adopts single-frame generation.

Citation

@article{musetalk,
  title={MuseTalk: Real-Time High-Fidelity Video Dubbing via Spatio-Temporal Sampling},
  author={Zhang, Yue and Zhong, Zhizhou and Liu, Minhao and Chen, Zhaokang and Wu, Bin and Zeng, Yubin and Zhan, Chao and He, Yingjie and Huang, Junxin and Zhou, Wenjiang},
  journal={arxiv},
  year={2025}
}

Disclaimer/License

  1. code: The code of MuseTalk is released under the MIT License. There is no limitation on either academic or commercial usage.
  2. model: The trained models are available for any purpose, including commercial use.
  3. other open-source models: Other open-source models used must comply with their respective licenses, such as whisper, ft-mse-vae, dwpose, S3FD, etc.
  4. The test data are collected from the internet and are available for non-commercial research purposes only.
  5. AIGC: This project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.