GGUF quants of https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev/ made using instructions from https://github.com/city96/ComfyUI-GGUF/

Quantized using the fork at https://github.com/mhnakif/ComfyUI-GGUF/, which produces Q4_0, Q5_0, Q8_0, and F16 GGUF quants compatible with both ComfyUI and stable-diffusion-webui-forge.
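
The conversion-then-quantize flow described in the ComfyUI-GGUF instructions can be sketched roughly as follows. This is a hedged sketch, not the exact commands used for this upload: the file names are illustrative, and the required llama.cpp patch/checkout steps are described in the ComfyUI-GGUF tools README.

```shell
# Sketch of the GGUF quantization flow (assumed layout of the
# ComfyUI-GGUF tools plus a patched llama.cpp build; paths and
# file names below are illustrative).

# 1. Convert the source safetensors UNet to an F16 GGUF file
#    using the convert script from the ComfyUI-GGUF tools.
python convert.py --src flux1-canny-dev.safetensors

# 2. Quantize the F16 GGUF with llama.cpp's llama-quantize,
#    built from a checkout patched per the ComfyUI-GGUF instructions.
./llama-quantize flux1-canny-dev-F16.gguf flux1-canny-dev-Q4_0.gguf Q4_0
./llama-quantize flux1-canny-dev-F16.gguf flux1-canny-dev-Q5_0.gguf Q5_0
./llama-quantize flux1-canny-dev-F16.gguf flux1-canny-dev-Q8_0.gguf Q8_0
```

The F16 file doubles as the highest-fidelity quant; the Q4_0/Q5_0/Q8_0 files trade quality for VRAM savings.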

Note that as of 2024-11-21, Forge does not yet support Flux inpainting or ControlNet.

Model size: 11.9B params
Architecture: flux
Available quants: 4-bit (Q4_0), 5-bit (Q5_0), 8-bit (Q8_0), 16-bit (F16)
