Diffusers · Safetensors · WanPipeline
BrianChen1129 committed b94c1f9 (verified) · parent: 26689c7

Update README.md

Files changed (1): README.md (+4 -1)
README.md CHANGED
@@ -22,7 +22,10 @@ license: apache-2.0
 - It achieves up to 2.1x speedup on a single **H100** GPU.
 - Our model is trained at **77×768×1280** resolution, but it supports generating videos at any resolution (quality may degrade).
 - We set **VSA attention sparsity** to 0.9, and training runs for **1500 steps (~14 hours)**. You can tune this value from 0 to 0.9 to balance speed and quality at inference.
-- Both [**finetuning**](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/training/finetune/Wan2.1-VSA/Wan-Syn-Data/T2V-14B-VSA.slurm) and [**inference**](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA.sh) scripts are available in the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository.
+- Finetuning and inference scripts are available in the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository:
+  - [Single-node/single-GPU debugging finetuning script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/finetune/finetune_v1_VSA.sh)
+  - [Slurm training example script](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/training/finetune/Wan2.1-VSA/Wan-Syn-Data/T2V-14B-VSA.slurm)
+  - [Inference script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA.sh)
 - Try it out on **FastVideo** — we support a wide range of GPUs, from **H100** to **4090**.
 - We use the [FastVideo 720P Synthetic Wan dataset](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x768x1280_250k) for training.
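
For quick experimentation outside the shell scripts linked in the diff, here is a minimal sketch of the FastVideo Python path. The repository-id placeholder and the comment about where VSA sparsity is configured are assumptions, not confirmed by this card; the released inference script above remains the reference entry point.

```python
# Minimal sketch: generating a video with FastVideo's Python API.
# Assumptions: "FastVideo/<this-checkpoint>" is a placeholder for this
# repository's id, and num_gpus=1 mirrors the single-H100 setup behind
# the reported ~2.1x speedup. The VSA sparsity value (tunable from 0 to
# 0.9) is set inside the released inference script, not passed here.
from fastvideo import VideoGenerator

generator = VideoGenerator.from_pretrained(
    "FastVideo/<this-checkpoint>",  # placeholder repo id
    num_gpus=1,
)

generator.generate_video(
    "A corgi surfing a wave at sunset, cinematic lighting",
    output_path="outputs/",
    save_video=True,
)
```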
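Alternatively, since the card is tagged Diffusers/WanPipeline, the checkpoint should load through Diffusers directly. The sketch below assumes the 77×768×1280 triplet means num_frames × height × width and uses a placeholder model id; it does not apply VSA sparsity, which is a FastVideo-side optimization.

```python
# Minimal sketch: loading this checkpoint with Diffusers' WanPipeline,
# assuming the card's Diffusers/WanPipeline tags mean it loads this way.
# MODEL_ID is a placeholder for this repository's id.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

MODEL_ID = "<this-repository-id>"  # placeholder

pipe = WanPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# The card reports training at 77x768x1280, read here as
# num_frames x height x width (an assumption about the triplet's order).
frames = pipe(
    prompt="A corgi surfing a wave at sunset, cinematic lighting",
    height=768,
    width=1280,
    num_frames=77,
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```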