This is a GGUF conversion of QuantStack/Wan2.1-14B-T2V-FusionX-VACE.
All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.
Usage
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required models in the following folders (a scripted download example follows the table):
Type | Name | Location | Download |
---|---|---|---|
Main Model | Wan2.1-14B-T2V-FusionX-VACE-GGUF | ComfyUI/models/unet | GGUF (this repo) |
Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
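
If you prefer scripting the downloads instead of fetching the files by hand, here is a minimal sketch using the `huggingface_hub` Python package. The GGUF filename and the local ComfyUI path are placeholders: pick the quantization you actually want from this repo's file listing.

```python
# Minimal sketch, assuming `pip install huggingface_hub` and a local ComfyUI
# checkout. The .gguf filename below is a placeholder -- replace it with the
# quantization you pick from this repo's file list.
from huggingface_hub import hf_hub_download

comfyui_dir = "ComfyUI"  # adjust to your ComfyUI install path

# Main model (this repo) goes into ComfyUI/models/unet.
hf_hub_download(
    repo_id="QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF",
    filename="Wan2.1-14B-T2V-FusionX-VACE-Q4_K_M.gguf",  # placeholder name
    local_dir=f"{comfyui_dir}/models/unet",
)

# The text encoder (umt5-xxl-encoder) and the VAE (Wan2_1_VAE_bf16) are
# fetched the same way from their own repos -- see the links in the table
# above -- into models/text_encoders and models/vae respectively.
```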
Notes
All original licenses and restrictions from the base models still apply.
Reference
- For more information about the source model, refer to QuantStack/Wan2.1-14B-T2V-FusionX-VACE, where the model creation process is explained.
- For an overview of the available quantization types, see the GGUF quantization types documentation; a short sketch for checking which types a downloaded file uses follows this list.
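
To verify the quantization of a file you have downloaded, you can inspect it locally. The sketch below uses the `gguf` Python package (published from the llama.cpp repository); the file path is a placeholder.

```python
# Minimal sketch, assuming `pip install gguf`. Counts the quantization type
# of each tensor in a downloaded GGUF file; the path below is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("ComfyUI/models/unet/model.gguf")  # placeholder path

# Tally how many tensors use each quantization type (e.g. Q4_K, F16).
counts = Counter(tensor.tensor_type.name for tensor in reader.tensors)
for qtype, n in counts.most_common():
    print(f"{qtype}: {n} tensors")
```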