---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev/blob/main/LICENSE.md
language:
- en
library_name: diffusers
base_model:
- black-forest-labs/FLUX.1-Krea-dev
pipeline_tag: text-to-image
widget:
- text: a frog holding a sign that says hello world
  output:
    url: output1.png
- text: a pig holding a sign that says hello world
  output:
    url: output2.png
- text: a wolf holding a sign that says hello world
  output:
    url: output3.png
---

# **gguf quantized version of krea**

- run it with diffusers (see example below)

```py
import torch
from transformers import T5EncoderModel
from diffusers import FluxPipeline, GGUFQuantizationConfig, FluxTransformer2DModel

# load the gguf-quantized transformer from a single file
model_path = "https://huggingface.co/calcuis/krea-gguf/blob/main/flux1-krea-dev-q2_k.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    model_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config="callgg/krea-decoder",
    subfolder="transformer"
)

# load the gguf-quantized t5 text encoder
text_encoder = T5EncoderModel.from_pretrained(
    "chatpig/t5-v1_1-xxl-encoder-fp32-gguf",
    gguf_file="t5xxl-encoder-fp32-q2_k.gguf",
    torch_dtype=torch.bfloat16
)

# assemble the pipeline with the quantized components
pipe = FluxPipeline.from_pretrained(
    "callgg/krea-decoder",
    transformer=transformer,
    text_encoder_2=text_encoder,
    torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # or pipe.to("cuda") if you have enough vram

prompt = "a pig holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=2.5,
).images[0]
image.save("output.png")
```
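The example above hard-codes the `q2_k` file; the repo also hosts other quantization levels under the same naming pattern. A minimal sketch of a helper that swaps in a different quant tag (the `krea_gguf_url` name and the assumption that every level follows the `flux1-krea-dev-<quant>.gguf` scheme are this sketch's own, so check the repo's file list before relying on it):

```py
# hypothetical helper: build the single-file url for a given quant level,
# assuming the flux1-krea-dev-<quant>.gguf naming seen in the example above
def krea_gguf_url(quant: str = "q2_k") -> str:
    repo = "https://huggingface.co/calcuis/krea-gguf"
    return f"{repo}/blob/main/flux1-krea-dev-{quant}.gguf"

# pass the result straight to FluxTransformer2DModel.from_single_file
print(krea_gguf_url("q8_0"))
```

Lower quant levels (q2_k) trade quality for memory; higher ones (q8_0) are closer to the bf16 weights but larger to load.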