---
tags:
- text-to-image
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: Qwen/Qwen-Image
license: creativeml-openrail-m
inference:
parameters:
width: 1024
height: 1024
instance_prompt: Baba
---
# Baba_my_first_lora_v1-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
You should use `Baba` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/metababa/Baba_my_first_lora_v1-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base model in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.bfloat16).to('cuda')
# Attach the LoRA weights from this repository
pipeline.load_lora_weights('metababa/Baba_my_first_lora_v1-lora', weight_name='Baba_my_first_lora_v1_000001000.safetensors')
# Generate an image using the trigger word, then save it
image = pipeline('Baba').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
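As a minimal sketch of the weighting and fusing mentioned above (the `0.8` scale is illustrative, and `fuse_lora`/`unfuse_lora` availability depends on your diffusers version):

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('metababa/Baba_my_first_lora_v1-lora', weight_name='Baba_my_first_lora_v1_000001000.safetensors')

# Fuse the LoRA into the base weights at reduced strength (0.8 is an illustrative value)
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Baba').images[0]

# Undo the fusion to restore the original base weights
pipeline.unfuse_lora()
```

Fusing bakes the LoRA into the base weights, which avoids the small per-step overhead of keeping the adapter separate; unfusing restores the original model so other adapters can be tried.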