LoRA weights in diffusers format?

#1
by NagaSaiAbhinay

Hello @lzyhha, thank you for releasing the weights in diffusers format. Do you have any plans to release just the LoRAs in diffusers format? It would make it easier to switch between the 384 and 512 resolutions from one generation to the next.

I ended up porting them here: NagaSaiAbhinay/VisualCloze. Would you be interested in moving them to this repo for completeness? I have tested the 384 LoRA and am still testing the 512.
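For reference, the workflow this enables is keeping one base pipeline in memory and swapping adapters per generation. A rough sketch, assuming the standard diffusers load_lora_weights/unload_lora_weights API:

import torch
from diffusers import VisualClozePipeline

# Load the base pipeline once.
pipe = VisualClozePipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Generate with the 384 LoRA.
pipe.load_lora_weights("NagaSaiAbhinay/VisualCloze", weight_name="visualcloze_384.safetensors")
# ... run a 384-resolution generation ...

# Swap to the 512 LoRA without reloading the base model.
pipe.unload_lora_weights()
pipe.load_lora_weights("NagaSaiAbhinay/VisualCloze", weight_name="visualcloze_512.safetensors")
# ... run a 512-resolution generation ...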

VisualCloze org

Hello, thank you for your interest in our work. Are you saying that the model you released at NagaSaiAbhinay/VisualCloze can be used in the following way, i.e., by loading FLUX.1-Fill-dev and then loading the LoRA on top?

import torch
from diffusers import VisualClozePipeline

# Load the base pipeline from FLUX.1-Fill-dev, then apply the LoRA on top.
pipe = VisualClozePipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("NagaSaiAbhinay/VisualCloze", weight_name="visualcloze_384.safetensors")

Yes, that's exactly how I'm loading them. I'm using your diffusers fork. I have tested both the 384 and 512 resolution LoRAs. There seem to be some slight differences, which might be coming from non-deterministic parts.

Very minor differences, but they are there.
(attached image: UntitledDiff (1).png)
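One way to rule out the stochastic parts when comparing is to fix the seed and request deterministic kernels. A sketch: the generator argument is the standard diffusers mechanism, while task_prompt, content_prompt, and image_grid are placeholders for identical inputs used in both runs:

import torch

# Prefer deterministic kernels where available (warn instead of erroring on
# ops that have no deterministic implementation).
torch.use_deterministic_algorithms(True, warn_only=True)

# Seed the sampler so both LoRAs see identical initial noise; any remaining
# difference then comes from the weights themselves or from CUDA kernels.
generator = torch.Generator(device="cuda").manual_seed(0)
result = pipe(
    task_prompt=task_prompt,        # placeholder inputs; reuse the exact same
    content_prompt=content_prompt,  # prompts and image grid for the 384 and
    image=image_grid,               # 512 runs being compared
    generator=generator,
)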

VisualCloze org

Hello, we will take a closer look and further address this issue after this busy week is over.

Sounds good! Thank you.

VisualCloze org

Hello, thank you for your attention and efforts. We have released the LoRA weights for the 384 and 512 resolutions, with an acknowledgement to you.

The weights you provided are generally correct. However, we noticed an inconsistency caused by the proj_out layer: its output dimension is 64, while our LoRA rank is 256. In this case, Diffusers sets the scaling factor to 4, whereas in our code the scaling is 1. To resolve this inconsistency, we modified the weights you provided by multiplying the LoRA weights of proj_out by 0.5; since both LoRA matrices shrink by half, the effective update is scaled by 0.25, which cancels the extra factor of 4.
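For anyone wanting to reproduce the adjustment on the original port, it amounts to a small state-dict rewrite. A sketch; the proj_out key pattern and the PEFT-style "lora_A"/"lora_B" naming are assumptions that depend on how the checkpoint was serialized:

from safetensors.torch import load_file, save_file

state_dict = load_file("visualcloze_384.safetensors")
for key, tensor in state_dict.items():
    # Multiplying both proj_out LoRA matrices by 0.5 scales their product
    # (the effective update) by 0.25, cancelling the extra factor of 4.
    if "proj_out" in key and ("lora_A" in key or "lora_B" in key):
        state_dict[key] = tensor * 0.5
save_file(state_dict, "visualcloze_384_fixed.safetensors")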

Thank you for pointing out the inconsistency! I didn’t realise that. I appreciate the acknowledgement.
