## Model Overview

- This model is finetuned with [VSA](https://arxiv.org/pdf/2505.13389), based on [Wan-AI/Wan2.1-T2V-14B-Diffusers](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers).
- It achieves up to a 2.1x speedup on a single **H100** GPU.
- Supports generating videos at **77×768×1280** (frames × height × width) resolution.
- Both [finetuning](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/finetune/finetune_v1_VSA.sh) and [inference](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA.sh) scripts are available in the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository.
- Try it out with **FastVideo**; we support a wide range of GPUs, from **H100** to **4090**.
- We use the [FastVideo 720P Synthetic Wan dataset](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x768x1280_250k) for training.
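
To see why sparse attention matters at this resolution, here is a rough sketch of the sequence length the DiT attends over for a 77×768×1280 video. The compression factors are assumptions not stated in this card (a VAE with 4x temporal and 8x spatial compression, plus a 1×2×2 latent patch size), so treat the result as an order-of-magnitude estimate:

```python
# Rough token-count sketch for a 77x768x1280 video in the latent space.
# ASSUMED (not stated in this card): the VAE compresses 4x in time and
# 8x per spatial axis, and the DiT patchifies latents with a 1x2x2 patch.

def latent_tokens(frames, height, width,
                  t_stride=4, s_stride=8, patch=(1, 2, 2)):
    t = (frames - 1) // t_stride + 1      # temporal compression (first frame kept)
    h = height // s_stride                # spatial compression
    w = width // s_stride
    pt, ph, pw = patch                    # DiT patchification
    return (t // pt) * (h // ph) * (w // pw)

n = latent_tokens(77, 768, 1280)
print(n)  # 76800 tokens per video under these assumptions
```

At tens of thousands of tokens, full self-attention cost grows quadratically, which is the regime where a sparse attention method like VSA pays off.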
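
The reported 2.1x end-to-end speedup can be related to the attention kernel itself via Amdahl's law. The attention fraction `f` below is a hypothetical illustration, not a measured number from this card:

```python
# Amdahl's-law sketch: what attention-kernel speedup is implied by a
# 2.1x end-to-end speedup? `f` (attention's share of step time) is a
# HYPOTHETICAL value chosen for illustration only.

def end_to_end_speedup(f: float, s: float) -> float:
    """Overall speedup when a fraction f of runtime is accelerated by s."""
    return 1.0 / ((1.0 - f) + f / s)

def implied_kernel_speedup(f: float, overall: float) -> float:
    """Invert Amdahl's law: kernel speedup needed to reach `overall`."""
    return f / (1.0 / overall - (1.0 - f))

f = 0.85          # hypothetical: attention share of a 720p DiT step
overall = 2.1     # reported end-to-end speedup on a single H100
s = implied_kernel_speedup(f, overall)
print(f"implied attention speedup: {s:.2f}x")  # ~2.61x under these assumptions
```

The point of the sketch: the less of the step attention accounts for, the larger the kernel-level speedup that a given end-to-end number implies.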