Wan2.1 FusionX GGUFs

Based on https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX
This is a GGUF conversion of QuantStack/Wan2.1_T2V_14B_FusionX_VACE.
All quantized versions were created from the base FP16 model using city96's conversion scripts, available in the ComfyUI-GGUF GitHub repository.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | Wan2.1_T2V_14B_FusionX_VACE-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | Safetensors |
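The folder layout above can be set up from the command line. Below is a minimal sketch: it creates the expected ComfyUI model directories and shows (commented out) how a quantized file could be fetched with `huggingface-cli download`. The `COMFYUI_DIR` variable and the example GGUF filename are assumptions for illustration; pick the actual filename from this repo's file list.

```shell
# Assumed ComfyUI install location -- adjust to your setup.
COMFYUI_DIR="${COMFYUI_DIR:-ComfyUI}"

# Create the target folders from the table above.
mkdir -p "$COMFYUI_DIR/models/unet" \
         "$COMFYUI_DIR/models/text_encoders" \
         "$COMFYUI_DIR/models/vae"

# Hypothetical download example (filename is illustrative, not a real
# file guaranteed to exist -- check the repo's file listing first):
# huggingface-cli download QuantStack/Wan2.1_T2V_14B_FusionX_VACE-GGUF \
#   <chosen-quant>.gguf \
#   --local-dir "$COMFYUI_DIR/models/unet"
```

Once the files are in place, the ComfyUI-GGUF custom node's loader will pick up the main model from `models/unet`.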
All original licenses and restrictions from the base models still apply.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.