
Add pipeline tag, improve model card

#38
by nielsr (HF Staff) · opened
Files changed (1): README.md (+20 -5)
README.md CHANGED
@@ -1,7 +1,12 @@
---
+ library_name: pytorch
license: mit
- library_name: liveportrait
pipeline_tag: image-to-video
+ tags:
+ - portrait-animation
+ - video-generation
+ - keypoint-based
+ - efficient
---

<h1 align="center">LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control</h1>

@@ -44,7 +49,13 @@ pipeline_tag: image-to-video


## 🔥 Updates
- - **`2024/08/02`**: 😸 We released a version of the **Animals model**, along with several other updates and improvements. Check out the details [**here**](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-08-02.md)!
+ - **`2025/01/01`**: 🐶 We updated a new version of the Animals model with more data, see [**here**](./assets/docs/changelog/2025-01-01.md).
+ - **`2024/10/18`**: ❗ We have updated the versions of the `transformers` and `gradio` libraries to avoid security vulnerabilities. Details [here](https://github.com/KwaiVGI/LivePortrait/pull/421/files).
+ - **`2024/08/29`**: 📦 We update the Windows [one-click installer](https://huggingface.co/cleardusk/LivePortrait-Windows/blob/main/LivePortrait-Windows-v20240829.zip) and support auto-updates, see [changelog](https://huggingface.co/cleardusk/LivePortrait-Windows#20240829).
+ - **`2024/08/19`**: 🖼️ We support **image driven mode** and **regional control**. For details, see [**here**](./assets/docs/changelog/2024-08-19.md).
+ - **`2024/08/06`**: 🎨 We support **precise portrait editing** in the Gradio interface, inspired by [ComfyUI-AdvancedLivePortrait](https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait). See [**here**](./assets/docs/changelog/2024-08-06.md).
+ - **`2024/08/05`**: 📦 Windows users can now download the [one-click installer](https://huggingface.co/cleardusk/LivePortrait-Windows/blob/main/LivePortrait-Windows-v20240806.zip) for Humans mode and **Animals mode** now! For details, see [**here**](./assets/docs/changelog/2024-08-05.md).
+ - **`2024/08/02`**: 😸 We released a version of the **Animals model**, along with several other updates and improvements. Check out the details [**here**](./assets/docs/changelog/2024-08-02.md)!
- **`2024/07/25`**: 📦 Windows users can now download the package from [HuggingFace](https://huggingface.co/cleardusk/LivePortrait-Windows/tree/main) or [BaiduYun](https://pan.baidu.com/s/1FWsWqKe0eNfXrwjEhhCqlw?pwd=86q2). Simply unzip and double-click `run_windows.bat` to enjoy!
- **`2024/07/24`**: 🎨 We support pose editing for source portraits in the Gradio interface. We’ve also lowered the default detection threshold to increase recall. [Have fun](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-07-24.md)!
- **`2024/07/19`**: ✨ We support 🎞️ portrait video editing (aka v2v)! More to see [here](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-07-19.md).

@@ -183,7 +194,7 @@ python app.py --flag_do_torch_compile
```
**Note**: This method is not supported on Windows and macOS.

- **Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/LivePortrait) 🤗**
+ **Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/liveportrait) 🤗**

### 5. Inference speed evaluation 🚀🚀🚀
We have also provided a script to evaluate the inference speed of each module:

@@ -219,14 +230,17 @@ Discover the invaluable resources contributed by our community to enhance your L
And many more amazing contributions from our community!

## Acknowledgements 💐
- We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories, for their open research and contributions.
+ We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) and [X-Pose](https://github.com/IDEA-Research/X-Pose) repositories, for their open research and contributions.
+
+ ## Ethics Considerations 🛡️
+ Portrait animation technologies come with social risks, particularly the potential for misuse in creating deepfakes. To mitigate these risks, it’s crucial to follow ethical guidelines and adopt responsible usage practices. At present, the synthesized results contain visual artifacts that may help in detecting deepfakes. Please note that we do not assume any legal responsibility for the use of the results generated by this project.

## Citation 💖
If you find LivePortrait useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
```bibtex
@article{guo2024liveportrait,
  title = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
- author = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
+ author = {Guo, Jianzhu and Zhang, Dingyun and Liu,Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year = {2024}
}

@@ -236,3 +250,4 @@ If you find LivePortrait useful for your research, welcome to 🌟 this repo and

## Contact 📧
[**Jianzhu Guo (郭建珠)**](https://guojianzhu.com); **[email protected]**
+ ```
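The front-matter change in the first hunk can be sanity-checked before merging. A minimal sketch, using a deliberately simplified stand-in parser (a real check would use a YAML library such as PyYAML; this toy version handles only the flat `key: value` pairs and one-level `- item` lists that appear in this model card):

```python
# Sanity-check the model card metadata added in this PR.
# NOTE: toy parser for illustration only -- not how the Hub parses metadata.

FRONT_MATTER = """\
---
library_name: pytorch
license: mit
pipeline_tag: image-to-video
tags:
- portrait-animation
- video-generation
- keypoint-based
- efficient
---
"""

def parse_front_matter(text):
    """Parse the block between the two `---` fences into a dict."""
    lines = text.strip().splitlines()
    assert lines[0] == "---" and lines[-1] == "---", "missing front-matter fences"
    meta, current_key = {}, None
    for line in lines[1:-1]:
        if line.startswith("- ") and current_key:
            # list item belonging to the most recent `key:` line
            meta[current_key].append(line[2:].strip())
        else:
            # flat `key: value` pair; a bare `key:` starts an empty list
            key, _, value = line.partition(":")
            current_key = key.strip()
            meta[current_key] = value.strip() or []
    return meta

meta = parse_front_matter(FRONT_MATTER)
print(meta["pipeline_tag"])  # image-to-video
print(meta["tags"])
```

A check like this catches the kind of regression the PR fixes (a missing or misplaced `pipeline_tag`), since a malformed block fails the fence assertion or yields an unexpected dict.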
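The "Inference speed evaluation" section referenced in the diff times each module of the pipeline separately. A generic sketch of that pattern, where the module names and dummy workloads are hypothetical placeholders (the real script runs a forward pass of each network, which is not reproduced here):

```python
import time

def benchmark(name, fn, warmup=3, runs=20):
    """Time a callable: discard warmup runs, report mean latency in ms."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed_ms = (time.perf_counter() - start) / runs * 1000
    print(f"{name:>24s}: {elapsed_ms:8.3f} ms / run")
    return elapsed_ms

# Hypothetical stand-ins for per-module forward passes.
modules = {
    "appearance_extractor": lambda: sum(i * i for i in range(10_000)),
    "motion_extractor":     lambda: sum(i * i for i in range(5_000)),
    "warping_module":       lambda: sum(i * i for i in range(20_000)),
}

for name, fn in modules.items():
    benchmark(name, fn)
```

Warmup iterations matter for the real use case: the first few runs of a compiled or GPU-resident model include one-off costs (kernel compilation, cache warmup) that would otherwise skew the mean.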