VORTA: Efficient Video Diffusion via Routing Sparse Attention
TL;DR: VORTA accelerates video diffusion transformers with sparse attention and dynamic routing, achieving a speedup with negligible quality loss.
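For intuition only, here is a generic sketch (in PyTorch, not the authors' code) of what routing over attention branches can look like: a learned router produces per-head weights that mix a full-attention branch with a windowed sparse branch. All names and shapes below are illustrative assumptions, not VORTA's actual design.

```python
# Illustration only: a generic "route over attention branches" layer, not the
# VORTA implementation. A learned router produces per-head weights that mix a
# full-attention branch with a windowed branch. All names/shapes are assumptions.
import torch
import torch.nn.functional as F


def full_attention(q, k, v):
    # q, k, v: (batch, heads, seq, head_dim)
    return F.scaled_dot_product_attention(q, k, v)


def windowed_attention(q, k, v, window=256):
    # Each query attends only to keys within +/- `window` positions.
    # (Masked dense attention here for clarity; a real sparse kernel would
    # skip the masked-out computation to realize the speedup.)
    n = q.shape[-2]
    idx = torch.arange(n, device=q.device)
    mask = (idx[None, :] - idx[:, None]).abs() <= window  # (seq, seq), bool
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)


def routed_attention(q, k, v, router_logits):
    # router_logits: (batch, heads, 2) -> soft weights over the two branches.
    w = router_logits.softmax(dim=-1)
    out = torch.stack([full_attention(q, k, v), windowed_attention(q, k, v)], dim=-1)
    return (out * w[:, :, None, None, :]).sum(dim=-1)  # (batch, heads, seq, head_dim)
```

A hard (top-1) router instead of the soft mixture above would be what actually saves compute, since only the selected branch would need to be evaluated.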
Quick Start
- Download the checkpoints into the `./results` directory under the VORTA GitHub code repository.
```bash
git lfs install
git clone git@hf.co:anonymous728/VORTA
# Move the desired checkpoint into results/ (<model_name>: wan-14B or hunyuan), e.g.
mv VORTA/wan-14B results/
```
Alternative ways to download the models can be found here; a minimal `huggingface_hub` sketch is shown after this list.
- Follow the `README.md` instructions to run the sampling with speedup. 🤗
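If cloning the weights with git is inconvenient, a minimal sketch of an alternative follows, assuming the `huggingface_hub` Python package is installed and that the checkpoints live in the `wan-14B/` and `hunyuan/` folders of `anonymous728/VORTA` as in the commands above:

```python
# Minimal sketch: fetch only the wan-14B checkpoint folder from the Hub and
# place it under ./results, mirroring the git clone + mv steps above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="anonymous728/VORTA",
    allow_patterns="wan-14B/*",  # restrict the download to one checkpoint
    local_dir="results",         # files end up under results/wan-14B/
)
```

Swap the pattern for `hunyuan/*` to fetch the other checkpoint.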
Base model: Wan-AI/Wan2.1-T2V-14B-Diffusers