---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev/blob/main/LICENSE.md
language:
- en
library_name: diffusers
base_model:
- black-forest-labs/FLUX.1-Krea-dev
pipeline_tag: text-to-image
widget:
- text: a frog holding a sign that says hello world
  output:
    url: output1.png
- text: a pig holding a sign that says hello world
  output:
    url: output2.png
- text: a wolf holding a sign that says hello world
  output:
    url: output3.png
- text: >-
    cute anime girl with massive fluffy fennec ears and a big fluffy tail
    blonde messy long hair blue eyes wearing a maid outfit with a long black
    gold leaf pattern dress and a white apron mouth open holding a fancy black
    forest cake with candles on top in the kitchen of an old dark Victorian
    mansion lit by candlelight with a bright window to the foggy forest and
    very expensive stuff everywhere
  output:
    url: workflow-embedded-demo1.png
- text: >-
    on a rainy night, a girl holds an umbrella and looks at the camera. The
    rain keeps falling.
  output:
    url: workflow-embedded-demo2.png
- text: drone shot of a volcano erupting with a pig walking on it
  output:
    url: workflow-embedded-demo3.png
tags:
- gguf-node
- gguf-connector
---
# **gguf quantized version of krea**

- run it straight with `gguf-connector`
- select a `gguf` file in the current directory to interact with by running:
```
ggc k
```
> GGUF file(s) available. Select which one to use:
>
> 1. flux-krea-lite-q2_k.gguf
> 2. flux-krea-lite-q4_0.gguf
> 3. flux-krea-lite-q8_0.gguf
>
> Enter your choice (1 to 3): _

note: try the experimental lite model with 8-step operation; it saves up to 70% of loading time

![screenshot](https://raw.githubusercontent.com/calcuis/gguf-pack/master/k4.png)

- run it with diffusers (see the inference example below; a lower-vram variant is sketched near the end of this card)
```py
import torch
from transformers import T5EncoderModel
from diffusers import FluxPipeline, GGUFQuantizationConfig, FluxTransformer2DModel

# gguf-quantized flux transformer from this repo
model_path = "https://huggingface.co/calcuis/krea-gguf/blob/main/flux1-krea-dev-q2_k.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    model_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config="callgg/krea-decoder",
    subfolder="transformer"
)
# gguf-quantized t5 text encoder
text_encoder = T5EncoderModel.from_pretrained(
    "chatpig/t5-v1_1-xxl-encoder-fp32-gguf",
    gguf_file="t5xxl-encoder-fp32-q2_k.gguf",
    torch_dtype=torch.bfloat16
)
# assemble the flux pipeline around the quantized components
pipe = FluxPipeline.from_pretrained(
    "callgg/krea-decoder",
    transformer=transformer,
    text_encoder_2=text_encoder,
    torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # or pipe.to("cuda") if you have enough vram

prompt = "a pig holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=2.5,
).images[0]
image.save("output.png")
```

## **run it with gguf-node via comfyui**
- drag **krea** to > `./ComfyUI/models/diffusion_models`
- drag **clip-l-v2** [[248MB](https://huggingface.co/calcuis/kontext-gguf/blob/main/clip_l_v2_fp32-f16.gguf)] and **t5xxl** [[2.75GB](https://huggingface.co/calcuis/kontext-gguf/blob/main/t5xxl_fp32-q4_0.gguf)] to > `./ComfyUI/models/text_encoders`
- drag **pig** [[168MB](https://huggingface.co/calcuis/kontext-gguf/blob/main/pig_flux_vae_fp32-f16.gguf)] to > `./ComfyUI/models/vae`

![screenshot](https://raw.githubusercontent.com/calcuis/comfy/master/krea.png)
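### **low-vram variant**
- if the diffusers example above still does not fit your gpu, sequential offload trades speed for memory; the sketch below is an untested variation of that example, and the q4_0 filename is an assumption (substitute whichever quant file from this repo you actually downloaded)
```py
# a minimal sketch, not a tested recipe: same setup as the example above,
# but with sequential cpu offload for gpus with very little vram
import torch
from transformers import T5EncoderModel
from diffusers import FluxPipeline, GGUFQuantizationConfig, FluxTransformer2DModel

# assumed filename; replace with the quant file you actually use from this repo
model_path = "https://huggingface.co/calcuis/krea-gguf/blob/main/flux1-krea-dev-q4_0.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    model_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config="callgg/krea-decoder",
    subfolder="transformer"
)
text_encoder = T5EncoderModel.from_pretrained(
    "chatpig/t5-v1_1-xxl-encoder-fp32-gguf",
    gguf_file="t5xxl-encoder-fp32-q2_k.gguf",
    torch_dtype=torch.bfloat16
)
pipe = FluxPipeline.from_pretrained(
    "callgg/krea-decoder",
    transformer=transformer,
    text_encoder_2=text_encoder,
    torch_dtype=torch.bfloat16
)
# moves each submodule to the gpu only while it runs:
# slower than enable_model_cpu_offload() but fits in far less vram
pipe.enable_sequential_cpu_offload()

image = pipe(
    "a wolf holding a sign that says hello world",
    height=1024,
    width=1024,
    guidance_scale=2.5,
    generator=torch.Generator("cpu").manual_seed(0),  # fixed seed for reproducibility
).images[0]
image.save("output-q4.png")
```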
### **reference**
- base model from [black-forest-labs](https://huggingface.co/black-forest-labs)
- for model merge details, see [sayakpaul](https://huggingface.co/sayakpaul/FLUX.1-merged)
- diffusers from [huggingface](https://github.com/huggingface/diffusers)
- comfyui from [comfyanonymous](https://github.com/comfyanonymous/ComfyUI)
- gguf-node ([pypi](https://pypi.org/project/gguf-node)|[repo](https://github.com/calcuis/gguf)|[pack](https://github.com/calcuis/gguf/releases))
- gguf-connector ([pypi](https://pypi.org/project/gguf-connector))