Visual comparison of FLUX.1 [dev] outputs using BF16 and BnB & HQQ 4-bit quantization

[Image: FLUX.1 [dev] output with BF16: Baroque, Futurist, Noir styles]
[Image: FLUX.1 [dev] output with BnB 4-bit (DiT) & HQQ 4-bit (T5)]

Usage with Diffusers

To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers, transformers, bitsandbytes, and hqq libraries:

pip install git+https://github.com/huggingface/diffusers.git@599c887 # add support for `PipelineQuantizationConfig`
pip install git+https://github.com/huggingface/transformers.git@3dbbf01 # add support for hqq quantized model in diffusers pipeline
pip install -U bitsandbytes
pip install -U hqq

After installing the required libraries, you can run the following script:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "HighCWu/FLUX.1-dev-bnb-hqq-4bit",
    torch_dtype=torch.bfloat16
)

# Offload model components to the CPU when idle, or move everything to CUDA
pipe.enable_model_cpu_offload()
# pipe.to("cuda")

prompt = "Baroque style, a lavish palace interior with ornate gilded ceilings, intricate tapestries, and dramatic lighting over a grand staircase."

pipe_kwargs = {
    "prompt": prompt,
    "height": 1024,
    "width": 1024,
    "guidance_scale": 3.5,
    "num_inference_steps": 50,
    "max_sequence_length": 512,
}

image = pipe(
    **pipe_kwargs, generator=torch.manual_seed(0),
).images[0]

image.save("flux.png")
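If you want to see how much GPU memory the quantized pipeline actually uses, a minimal sketch is to read PyTorch's CUDA memory counters after generation (this is an illustrative check, not part of the checkpoint; the numbers depend on your GPU, resolution, and offloading settings):

# Peak GPU memory allocated by tensors during the run (bytes -> GiB).
# With enable_model_cpu_offload() only the currently active component
# resides on the GPU, so the peak is typically well below the BF16 footprint.
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak CUDA memory allocated: {peak_gib:.2f} GiB")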

How to generate this quantized checkpoint?

This checkpoint was created with the following script, using the "black-forest-labs/FLUX.1-dev" checkpoint:


import torch

assert torch.cuda.is_available()  # make sure CUDA is available before quantizing

from diffusers import FluxPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import HqqConfig as TransformersHqqConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16),
        "text_encoder_2": TransformersHqqConfig(nbits=4, group_size=64),
    }
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16
)

pipe.save_pretrained("FLUX.1-dev-bnb-hqq-4bit")
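
To confirm which components actually ended up quantized, one way (shown here as a sketch, assuming the quantized pipeline `pipe` from the script above is still in memory) is to count the quantized linear layer types that bitsandbytes and hqq substitute for `nn.Linear`:

from bitsandbytes.nn import Linear4bit   # bnb NF4 linear layers
from hqq.core.quantize import HQQLinear  # hqq quantized linear layers

# Count quantized layers in the DiT (transformer) and the T5 text encoder.
n_bnb = sum(isinstance(m, Linear4bit) for m in pipe.transformer.modules())
n_hqq = sum(isinstance(m, HQQLinear) for m in pipe.text_encoder_2.modules())
print(f"bnb 4-bit layers in transformer: {n_bnb}")
print(f"hqq 4-bit layers in text_encoder_2: {n_hqq}")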