This is a GGUF conversion of QuantStack/Wan2.1-14B-T2V-FusionX-VACE.

All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.

Usage

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
|------|------|----------|----------|
| Main Model | Wan2.1-14B-T2V-FusionX-VACE-GGUF | ComfyUI/models/unet | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
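
If you prefer a scripted download, the sketch below fetches one quantized variant of the main model into the folder listed in the table using the huggingface_hub Python client. The quantization level, exact filename, and ComfyUI install path are assumptions; pick the actual .gguf file from this repository's file list.

```python
# Minimal sketch: download one quantized variant of the main model into
# ComfyUI/models/unet via huggingface_hub. The filename below is an
# assumption -- substitute the actual .gguf file you want from this repo.
from pathlib import Path
from huggingface_hub import hf_hub_download

comfyui_root = Path("ComfyUI")  # adjust to your ComfyUI install location

hf_hub_download(
    repo_id="QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF",
    filename="Wan2.1-14B-T2V-FusionX-VACE-Q4_K_M.gguf",  # hypothetical quant choice
    local_dir=comfyui_root / "models" / "unet",
)

# The text encoder and VAE go into models/text_encoders and models/vae
# the same way, downloaded from their respective repositories.
```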

ComfyUI example workflow

Notes

All original licenses and restrictions from the base models still apply.

Reference
