---
base_model:
- vrgamedevgirl84/Wan14BT2VFusioniX
base_model_relation: quantized
library_name: gguf
quantized_by: lym00
tags:
- image-to-video
- quantized
language:
- en
license: apache-2.0
---

This is a GGUF conversion of [Wan14BT2VFusioniX_Phantom_fp16.safetensors](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX/blob/main/Wan14BT2VFusioniX_Phantom_fp16.safetensors) by [@vrgamedevgirl84](https://huggingface.co/vrgamedevgirl84).

All quantized versions were created from the base FP16 model using city96's conversion scripts, available in the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) GitHub repository.

## Usage

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:

| Type         | Name                                | Location                       | Download         |
| ------------ | ----------------------------------- | ------------------------------ | ---------------- |
| Main Model   | Phantom_Wan_14B_FusionX-GGUF        | `ComfyUI/models/unet`          | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder                    | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE          | Wan2_1_VAE_bf16                     | `ComfyUI/models/vae`           | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |

[**ComfyUI example workflow**](https://huggingface.co/QuantStack/Phantom_Wan_14B_FusionX-GGUF/resolve/main/Phantom_example_workflow.json)
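The folder layout from the table above can be prepared and a model file fetched from the command line. This is a minimal sketch assuming the `huggingface-cli` tool (from `huggingface_hub`) is installed and ComfyUI lives in the current directory; the `.gguf` filename shown is illustrative only, so check this repository's file list for the actual quant you want:

```shell
# Create the expected ComfyUI model folders (no-op if they already exist)
mkdir -p ComfyUI/models/unet ComfyUI/models/text_encoders ComfyUI/models/vae

# Download a quant into the unet folder (filename is an example --
# substitute a real .gguf file from this repo's file listing):
# huggingface-cli download QuantStack/Phantom_Wan_14B_FusionX-GGUF \
#   <quant-file>.gguf --local-dir ComfyUI/models/unet
```

After restarting ComfyUI, the model should appear in the "Unet Loader (GGUF)" node provided by the ComfyUI-GGUF custom node.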

### Notes

*All original licenses and restrictions from the base models still apply.*

## Reference

- For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).