This GGUF file is a direct conversion of black-forest-labs/FLUX.1-Fill-dev.

Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.

Usage

The model can be used with the ComfyUI-GGUF custom node by city96.

Place the model files in ComfyUI/models/unet; see the GitHub readme for further installation instructions.
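As a minimal sketch, the placement step above amounts to creating the expected folder and moving the downloaded file into it (the .gguf filename below is illustrative, not the actual repo filename; check the repo's file list for the quantization you want):

```shell
# Create the folder ComfyUI-GGUF expects for UNet/diffusion models.
mkdir -p ComfyUI/models/unet

# Then move the downloaded quantized file there, e.g. (filename illustrative):
#   mv ~/Downloads/flux1-fill-dev-Q8_0.gguf ComfyUI/models/unet/
```

After restarting ComfyUI, the file should appear in the "Unet Loader (GGUF)" node's model dropdown.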

Interface Used

These models were quantized using EasyQuantizationGUI by rainlizard.

Model Details

Format: GGUF
Model size: 11.9B params
Architecture: flux

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

