---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: >-
  A newborn <s0><s1><s2><s3><s4><s5><s6><s7><s8><s9><s10><s11><s12><s13><s14><s15><s16><s17><s18><s19><s20><s21><s22><s23><s24><s25><s26><s27><s28><s29><s30><s31><s32><s33><s34><s35><s36><s37><s38>
  baby. The baby is wearing a white beanie and is swaddled in a white blanket. The
  background is a soft, neutral white, matching the original clean studio aesthetic.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
# Flux DreamBooth LoRA - cwhuh/babyface_flux_dlora_hsfw_East_Asian
<Gallery />
## Model description
These are cwhuh/babyface_flux_dlora_hsfw_East_Asian DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
LoRA training for the text encoder was not enabled. Pivotal tuning was enabled: the concept is represented by newly inserted token embeddings (`<s0>` through `<s38>`) trained alongside the LoRA weights.
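As a minimal sketch of what pivotal tuning means here (this is not the training script itself, and it assumes the standard FLUX.1-dev subfolder layout): new placeholder tokens are added to the CLIP tokenizer, and only their embedding rows, together with the LoRA weights, are optimized while the rest of the text encoder stays frozen.
```py
# Illustrative only: how pivotal-tuning placeholder tokens are inserted.
# Assumes the standard FLUX.1-dev repo layout ("tokenizer" / "text_encoder" subfolders).
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="text_encoder")

new_tokens = [f"<s{i}>" for i in range(39)]  # <s0> ... <s38>
tokenizer.add_tokens(new_tokens)
text_encoder.resize_token_embeddings(len(tokenizer))  # new rows become the trainable embeddings
```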
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
To trigger concept `hsfw`, use `<s0><s1><s2><s3><s4><s5><s6><s7><s8><s9><s10><s11><s12><s13><s14><s15><s16><s17><s18><s19><s20><s21><s22><s23><s24><s25><s26><s27><s28><s29><s30><s31><s32><s33><s34><s35><s36><s37><s38>` in your prompt.
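Since typing 39 tokens by hand is error-prone, the trigger string can also be built programmatically (the prompt text here is just an example):
```py
# Build the hsfw trigger string <s0>...<s38> instead of typing it manually.
trigger = "".join(f"<s{i}>" for i in range(39))
prompt = f"A newborn {trigger} baby. The baby is wearing a white beanie and is swaddled in a white blanket."
```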
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cwhuh/babyface_flux_dlora_hsfw_East_Asian/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Load the FLUX.1-dev base pipeline and apply the DreamBooth LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipeline.load_lora_weights("cwhuh/babyface_flux_dlora_hsfw_East_Asian", weight_name="pytorch_lora_weights.safetensors")

# Download the pivotal-tuning embeddings. The auto-generated card pointed at a
# local checkpoint path; the file in this repo should be the bare filename.
embedding_path = hf_hub_download(repo_id="cwhuh/babyface_flux_dlora_hsfw_East_Asian", filename="babyface_flux_dlora_hsfw_East_Asian_emb.safetensors", repo_type="model")
state_dict = load_file(embedding_path)

# Register the learned token embeddings <s0>...<s38> with the CLIP text encoder.
tokens = [f"<s{i}>" for i in range(39)]
pipeline.load_textual_inversion(state_dict["clip_l"], token=tokens, text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)

# The trigger tokens must appear in the prompt for the concept to activate.
trigger = "".join(tokens)
image = pipeline(f"A newborn {trigger} baby. The baby is wearing a white beanie and is swaddled in a white blanket. The background is a soft, neutral white, matching the original clean studio aesthetic.").images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
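For example, the LoRA's influence can be scaled at inference time or fused into the base weights (a sketch; `"default_0"` is the adapter name diffusers assigns when `load_lora_weights` is called without an explicit `adapter_name`, and `0.8` is an arbitrary example scale):
```py
# Either scale the LoRA at inference time...
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])
# ...or bake it into the base weights once for slightly faster inference:
# pipeline.fuse_lora(lora_scale=0.8)
```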
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
See the diffusers code example above; both the LoRA weights and the pivotal-tuning token embeddings must be loaded for the trigger tokens to take effect.
#### Limitations and bias
This LoRA was trained on a single narrow concept (newborn East Asian baby portraits in a clean studio style, per the instance prompt), so it can be expected to reproduce that aesthetic and may not generalize to other subjects, poses, or backgrounds. It also inherits the limitations and biases of the FLUX.1-dev base model.
## Training details
[TODO: describe the data used to train the model]