This is a merge of the Wan-AI/Wan2.1-VACE-14B scopes into vrgamedevgirl84/Wan14BT2VFusionX. The VACE scopes were extracted from the source model and injected into the target model. The merged weights were then converted to the FP8 formats E4M3FN and E5M2 using the ComfyUI-ModelQuantizer custom node by lum3on.
## Usage
The model files can be used in ComfyUI with the WanVaceToVideo node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | Wan2.1-14B-T2V-FusionX-VACE | `ComfyUI/models/diffusion_models` | Safetensors (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | Safetensors |
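The folder layout above can be prepared from the ComfyUI install root as follows (a sketch; the commented download command and exact `.safetensors` filenames are illustrative, so check each repo's file listing for the real names):

```shell
# Create the ComfyUI model folders referenced in the table above
# (paths are relative to the ComfyUI install root).
mkdir -p ComfyUI/models/diffusion_models
mkdir -p ComfyUI/models/text_encoders
mkdir -p ComfyUI/models/vae

# Example download via the Hugging Face CLI (filenames illustrative):
# huggingface-cli download QuantStack/Wan2.1-14B-T2V-FusionX-VACE \
#   --local-dir ComfyUI/models/diffusion_models
```

After the files are in place, restart ComfyUI so the loader nodes pick up the new models.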
## Notes
All original licenses and restrictions from the base models still apply.
## Reference
- For more information about the GGUF-quantized versions, refer to QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF.
- For an overview of the Safetensors format, see the Safetensors documentation.