A 2.45 GB Lora?
A 2.45 GB LoRA? That can't be right. Can you make the LoRA smaller?
I used rank 256 to preserve as much detail from the original Krea model as possible - that's how it got to this size. Rank 32 was only 250-ish MB.
Didn't really expect anyone to see it - I uploaded it for myself to free up space on my hard drive :)
This was my workflow for extracting the LoRA from the Krea model - change the rank to 32 or 64 and you'll get a smaller file, though with less detail. To run it, select one of the preview nodes together with the Extract LoRA node and hit play. If Extract LoRA isn't connected, wire it to the CLIP text encode field, otherwise it doesn't compute - apparently it can't run on its own.
https://gist.github.com/ifilipis/a5077f3f4265ff517fa40384da7f69c1
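For anyone wondering why rank changes the file size so much: a LoRA extraction approximates the difference between the tuned and base weights with a low-rank factorization, so the parameter count (and file size) grows linearly with rank. Here's a minimal sketch of the idea, assuming standard SVD-based extraction per weight matrix - the function name and shapes are illustrative, not the actual node's code:

```python
import numpy as np

def extract_lora(base_w, tuned_w, rank):
    """Approximate (tuned_w - base_w) with a rank-`rank` factorization,
    the way SVD-based LoRA extraction works per layer (illustrative)."""
    delta = tuned_w - base_w
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Keep only the top `rank` singular directions; split the singular
    # values between the two factors.
    up = u[:, :rank] * s[:rank] ** 0.5          # (out, rank)
    down = s[:rank, None] ** 0.5 * vt[:rank]    # (rank, in)
    return up, down                             # delta ≈ up @ down

# Toy 1024x1024 layer: rank 256 stores 8x the parameters of rank 32,
# which is roughly why the 256 file is ~8x the size of the 32 one.
base = np.random.randn(1024, 1024).astype(np.float32)
tuned = base + 0.01 * np.random.randn(1024, 1024).astype(np.float32)

up32, down32 = extract_lora(base, tuned, 32)
params_32 = up32.size + down32.size
up256, down256 = extract_lora(base, tuned, 256)
params_256 = up256.size + down256.size
print(params_256 / params_32)  # 8.0
```

Higher rank keeps more singular directions of the weight delta, which is where the extra detail (and the extra gigabytes) comes from.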
Please, can you make a lower-rank version, like 128 or 64? 256 is too much for the majority.
Just tried making rank 64. It's too far off, both in quality and in style, so I don't see the point of scaling it down. I'd actually try making a rank 512 version instead.
why not 128?
Does it work with Kontext? I badly want it.
You can use it with Kontext, but the results are not that good.