This is a direct GGUF conversion of bytedance-research/Phantom.

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
|------|------|----------|----------|
| Main Model | Phantom_Wan_14B | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | wan_2.1_vae | `ComfyUI/models/vae` | Safetensors |
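
If you prefer to fetch the main model from a script rather than the browser, below is a minimal sketch using `huggingface_hub`. The GGUF filename shown (`Phantom_Wan_14B-Q4_K_M.gguf`) and the ComfyUI path are assumptions; check this repo's file list for the quantization that fits your VRAM, and download the text encoder and VAE from their own repositories.

```python
# Minimal sketch: download one quantization of the main model into a ComfyUI install.
# Assumptions: the filename below exists in this repo (pick the quant you want from
# the repo's file list) and COMFYUI_DIR points at your ComfyUI checkout.
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")  # adjust to your install path

hf_hub_download(
    repo_id="QuantStack/Phantom_Wan_14B-GGUF",
    filename="Phantom_Wan_14B-Q4_K_M.gguf",   # assumed name; see the repo file list
    local_dir=COMFYUI_DIR / "models" / "unet",
)
```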

Example workflow

Notes

As this is a quantized conversion, not a finetune, all restrictions and license terms of the original model still apply.
