Modalities: Image, Video · Size: < 1K · arXiv: 2503.07598 · Libraries: Datasets · License: apache-2.0
Commit 81acb60 (verified) by hanzhn · 1 parent: 13a222a

Update README.md

Files changed (1): README.md (+9 −9)

README.md CHANGED
@@ -21,10 +21,10 @@ license: apache-2.0
  <b>Tongyi Lab - <a href="https://github.com/Wan-Video/Wan2.1"><img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 20px;'></a> </b>
  <br>
  <br>
- <a href="https://arxiv.org/abs/2503.07598"><img src='https://img.shields.io/badge/arXiv-VACE-red' alt='Paper PDF'></a>
- <a href="https://ali-vilab.github.io/VACE-Page/"><img src='https://img.shields.io/badge/Project_Page-VACE-green' alt='Project Page'></a>
- <a href="https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview"><img src='https://img.shields.io/badge/Model-VACE-yellow'></a>
- <a href="https://modelscope.cn/collections/VACE-8fa5fcfd386e43"><img src='https://img.shields.io/badge/VACE-ModelScope-purple'></a>
+ <a href="https://arxiv.org/abs/2503.07598"><img src='https://img.shields.io/badge/VACE-arXiv-red' alt='Paper PDF'></a>
+ <a href="https://ali-vilab.github.io/VACE-Page/"><img src='https://img.shields.io/badge/VACE-Project_Page-green' alt='Project Page'></a>
+ <a href="https://huggingface.co/collections/ali-vilab/vace-67eca186ff3e3564726aff38"><img src='https://img.shields.io/badge/VACE-HuggingFace_Model-yellow'></a>
+ <a href="https://modelscope.cn/collections/VACE-8fa5fcfd386e43"><img src='https://img.shields.io/badge/VACE-ModelScope_Model-purple'></a>
  <br>
  </p>
 
@@ -36,7 +36,7 @@ license: apache-2.0
 
 
  ## 🎉 News
- - [x] Mar 31, 2025: 🔥[VACE-Wan2.1-1.3B-Preview](https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview) and [VACE-LTX-Video-0.9](https://huggingface.co/ali-vilab/VACE-LTX-Video-0.9) models are now available at HuggingFace and [ModelScope](https://modelscope.cn/collections/VACE-8fa5fcfd386e43)!
+ - [x] Mar 31, 2025: 🔥VACE-Wan2.1-1.3B-Preview and VACE-LTX-Video-0.9 models are now available at [HuggingFace](https://huggingface.co/collections/ali-vilab/vace-67eca186ff3e3564726aff38) and [ModelScope](https://modelscope.cn/collections/VACE-8fa5fcfd386e43)!
  - [x] Mar 31, 2025: 🔥Release code of model inference, preprocessing, and gradio demos.
  - [x] Mar 11, 2025: We propose [VACE](https://ali-vilab.github.io/VACE-Page/), an all-in-one model for video creation and editing.
 
@@ -75,7 +75,7 @@ pip install -r requirements/annotator.txt
  Please download [VACE-Annotators](https://huggingface.co/ali-vilab/VACE-Annotators) to `<repo-root>/models/`.
 
  ### Local Directories Setup
- It is recommended to download [VACE-Benchmark](https://huggingface.co/ali-vilab) to `<repo-root>/benchmarks/` as examples in `run_vace_xxx.sh`.
+ It is recommended to download [VACE-Benchmark](https://huggingface.co/datasets/ali-vilab/VACE-Benchmark) to `<repo-root>/benchmarks/` as examples in `run_vace_xxx.sh`.
 
  We recommend to organize local directories as:
  ```angular2html
@@ -122,7 +122,7 @@ The output video together with intermediate video, mask and images will be saved
 
  #### 2) Preprocessing
  To have more flexible control over the input, before VACE model inference, user inputs need to be preprocessed into `src_video`, `src_mask`, and `src_ref_images` first.
- We assign each [preprocessor](https://github.com/ali-vilab/VACE/blob/main/vace/configs/__init__.py) a task name, so simply call [`vace_preprocess.py`](https://github.com/ali-vilab/VACE/blob/main/vace/vace_preproccess.py) and specify the task name and task params. For example:
+ We assign each [preprocessor](https://raw.githubusercontent.com/ali-vilab/VACE/refs/heads/main/vace/configs/__init__.py) a task name, so simply call [`vace_preprocess.py`](https://raw.githubusercontent.com/ali-vilab/VACE/refs/heads/main/vace/vace_preproccess.py) and specify the task name and task params. For example:
  ```angular2html
  # process video depth
  python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4
@@ -133,7 +133,7 @@ python vace/vace_preproccess.py --task inpainting --mode bbox --bbox 50,50,550,7
  The outputs will be saved to `./proccessed/` by default.
 
  > 💡**Note**:
- > Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main//run_vace_pipeline.sh) preprocessing methods for different tasks.
+ > Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main/run_vace_pipeline.sh) preprocessing methods for different tasks.
  Moreover, refer to [vace/configs/](https://github.com/ali-vilab/VACE/blob/main/vace/configs/) for all the pre-defined tasks and required params.
  You can also customize preprocessors by implementing at [`annotators`](https://github.com/ali-vilab/VACE/blob/main/vace/annotators/__init__.py) and register them at [`configs`](https://github.com/ali-vilab/VACE/blob/main/vace/configs).
 
@@ -154,7 +154,7 @@ python vace/vace_ltx_inference.py --ckpt_path <path-to-model> --text_encoder_pat
  The output video together with intermediate video, mask and images will be saved into `./results/` by default.
 
  > 💡**Note**:
- > (1) Please refer to [vace/vace_wan_inference.pyhttps://github.com/ali-vilab/VACE/blob/main/vace/vace_wan_inference.py) and [vace/vace_ltx_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_ltx_inference.py) for the inference args.
+ > (1) Please refer to [vace/vace_wan_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_wan_inference.py) and [vace/vace_ltx_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_ltx_inference.py) for the inference args.
  > (2) For LTX-Video and English language Wan2.1 users, you need prompt extension to unlock the full model performance.
  Please follow the [instruction of Wan2.1](https://github.com/Wan-Video/Wan2.1?tab=readme-ov-file#2-using-prompt-extension) and set `--use_prompt_extend` while running inference.
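
For context on the workflow these README excerpts describe: user inputs are first preprocessed into `src_video`, `src_mask`, and `src_ref_images` with `vace/vace_preproccess.py`, and the results are then passed to one of the inference scripts. Below is a minimal sketch of that flow. Only the preprocessing call, the default output directories, and `--use_prompt_extend` appear verbatim in this diff; the `vace_wan_inference.py` flag names (`--src_video`, `--src_mask`, `--prompt`) are assumptions to be checked against the script's own argument list.

```bash
# Sketch of the preprocess-then-infer flow described in the README excerpt.
# 1) Preprocess a source video (depth task); outputs are saved under ./proccessed/ by default.
python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4

# 2) Run VACE-Wan2.1 inference on the preprocessed inputs; results are saved under ./results/.
#    Flag names below are assumptions -- see vace/vace_wan_inference.py for the actual args.
python vace/vace_wan_inference.py \
    --src_video "<path-to-src_video-from-step-1>" \
    --src_mask "<path-to-src_mask-from-step-1>" \
    --prompt "<text prompt>" \
    --use_prompt_extend
```

For the LTX-Video variant, the README instead points to `vace/vace_ltx_inference.py`, which takes `--ckpt_path <path-to-model>` plus a text-encoder path argument.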