---
base_model:
- wikeeyang/SRPO-Refine-Quantized-v1.0
- rockerBOO/flux.1-dev-SRPO
- tencent/SRPO
tags:
- srpo
- flux-dev
- flux
pipeline_tag: text-to-image
library_name: diffusers
---
|
|
|
### Flux.1-Dev SRPO LoRAs |
|
|
|
These LoRAs were extracted from **three sources**:

- the original SRPO checkpoint (Flux.1-Dev): `tencent/SRPO`
- a community checkpoint: `rockerBOO/flux.1-dev-SRPO`
- a community checkpoint (quantized/refined): `wikeeyang/SRPO-Refine-Quantized-v1.0`
|
|
|
They provide modular, lightweight adaptations that can be mixed with other LoRAs, reducing storage compared to full checkpoints and enabling fast experimentation across ranks (8, 16, 32, 64, 128); a sketch showing how to combine adapters follows the usage example below.
|
|
*Example comparison between the Flux.1-Dev baseline and the LoRA extractions*
|
|
|
Use with 🧨 diffusers:

```python
import torch
from diffusers import FluxPipeline

# Load the Flux.1-Dev base model
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Apply one of the extracted SRPO LoRAs (rank-128 file shown here)
pipe.load_lora_weights(
    "Alissonerdx/flux.1-dev-SRPO-LoRas",
    weight_name="srpo_128_base_R%26Q_model_fp16.safetensors",
)
pipe.to("cuda")

prompt = "aiyouxiketang, a man in armor with a beard and a beard"

image = pipe(
    prompt,
    num_inference_steps=28,
    guidance_scale=5.0,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("srpo_example.png")
```
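
To reproduce a baseline-vs-LoRA comparison like the one above, one simple approach is to render the same prompt and seed with and without the adapter. The sketch below reuses the `pipe` and `prompt` from the basic example and relies only on the standard `unload_lora_weights()` call; the output file names are arbitrary.

```python
# Render the same prompt/seed twice: once with the SRPO LoRA active,
# once after unloading it to recover the plain Flux.1-Dev baseline.
def render(pipeline):
    return pipeline(
        prompt,
        num_inference_steps=28,
        guidance_scale=5.0,
        generator=torch.Generator("cpu").manual_seed(0),
    ).images[0]

with_lora = render(pipe)      # SRPO LoRA still loaded
pipe.unload_lora_weights()    # remove the LoRA, restoring the base weights
baseline = render(pipe)       # unmodified Flux.1-Dev

with_lora.save("srpo_lora.png")
baseline.save("flux_dev_baseline.png")
```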
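
Because the files load through the standard `load_lora_weights()` API, they can also be blended with other Flux LoRAs via diffusers' multi-adapter support (`adapter_name` plus `set_adapters`, which requires the `peft` package). The sketch below is self-contained; `your-username/another-flux-lora` is a placeholder repo id, and the adapter weights are only a starting point to tune.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the SRPO extraction under an explicit adapter name
pipe.load_lora_weights(
    "Alissonerdx/flux.1-dev-SRPO-LoRas",
    weight_name="srpo_128_base_R%26Q_model_fp16.safetensors",
    adapter_name="srpo",
)
# Placeholder second LoRA: replace with any Flux.1-Dev-compatible adapter
pipe.load_lora_weights("your-username/another-flux-lora", adapter_name="style")

# Activate both adapters; the weights scale each LoRA's contribution
pipe.set_adapters(["srpo", "style"], adapter_weights=[1.0, 0.6])

image = pipe(
    "aiyouxiketang, a man in armor with a beard and a beard",
    num_inference_steps=28,
    guidance_scale=5.0,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("srpo_mix.png")
```

To experiment with the other ranks, swap `weight_name` for the corresponding file listed in this repository.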