Enable LoRAs to patch the text_encoder as well as the unet #3214
Conversation
Looks good. I appreciate the cleanup.
I think ultimately we need to come up with a way of registering modules that modify the text encoder, tokenizer, unet, etc. The code now has checks for a LoRA being present scattered across many places, with different actions taken in each one. It might be better to register a set of modules somewhere and then invoke them at the points where they may want to do something. What do you think?
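A minimal sketch of the kind of registry I have in mind (all of these names are hypothetical, not existing code):

```python
# Hypothetical sketch only; none of these classes exist in the codebase.
from typing import Protocol


class ModelPatcher(Protocol):
    """Anything that wants to modify the text encoder, tokenizer, or unet."""

    def apply(self, text_encoder, tokenizer, unet) -> None: ...
    def remove(self, text_encoder, tokenizer, unet) -> None: ...


class PatcherRegistry:
    """Collects patchers (LoRAs, textual inversions, ...) and applies them in one place."""

    def __init__(self) -> None:
        self._patchers: list[ModelPatcher] = []

    def register(self, patcher: ModelPatcher) -> None:
        self._patchers.append(patcher)

    def apply_all(self, text_encoder, tokenizer, unet) -> None:
        for p in self._patchers:
            p.apply(text_encoder, tokenizer, unet)

    def remove_all(self, text_encoder, tokenizer, unet) -> None:
        # Unwind in reverse order so nested patches restore cleanly.
        for p in reversed(self._patchers):
            p.remove(text_encoder, tokenizer, unet)
```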
It's like LoHA, but uses a Kronecker product instead of a Hadamard product: https://github.com/KohakuBlueleaf/LyCORIS#lokr
I tested it on these two LoKRs:
https://civitai.com/models/34518/unofficial-vspo-yakumo-beni
https://civitai.com/models/35136/mika-pikazo-lokr
More test cases are hard to find since it's a new format. Best tested together with #3214. I also refactored the forward function a bit.
LyCORIS also has an (IA)^3 format, but I can't find any examples in that format, and even on the LyCORIS page it's marked as experimental. So until some test examples exist, I'd prefer not to add it.
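For reference, here is the difference in how the weight delta is built, as a simplified sketch (shapes and names are illustrative, not the actual LyCORIS code):

```python
import torch

# LoHA: the weight delta is an element-wise (Hadamard) product of two low-rank products.
w1a, w1b = torch.randn(320, 4), torch.randn(4, 320)
w2a, w2b = torch.randn(320, 4), torch.randn(4, 320)
delta_loha = (w1a @ w1b) * (w2a @ w2b)   # shape (320, 320)

# LoKR: the weight delta is a Kronecker product of two much smaller factors.
k1 = torch.randn(8, 8)
k2 = torch.randn(40, 40)
delta_lokr = torch.kron(k1, k2)          # shape (8*40, 8*40) == (320, 320)
```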
@GreggHelt2 and @blessedcoolant, could one of you have a look at this PR? It fixes LoRA issues and I'm hoping to put it into a 2.3.5 release.
In general this looks good for v2.3.5. Nice to see code shrinkage in some places!
For v3.0, is the plan to port the changes in ldm/* to their new locations in backend/*?
Yep. There's quite a bit of work involved here, but I'll have it in in time for 3.0.
Load LoRAs during compel's text embedding encode pass, in case any of the requested LoRAs also want to patch the text encoder.
Also generally clean up the attention processor patching code. It's still a mess, but at least now it's a stateless mess.
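A rough sketch of the text-encoder patching idea described above: a context manager that applies the LoRA deltas only for the duration of the embedding pass. All names here are illustrative, not the actual InvokeAI API.

```python
from contextlib import contextmanager


@contextmanager
def lora_patched_text_encoder(text_encoder, loras):
    """Temporarily add LoRA weight deltas to the text encoder, restoring the originals on exit."""
    params = dict(text_encoder.named_parameters())
    originals = {}
    try:
        for lora in loras:
            for name, delta in lora.text_encoder_deltas():  # hypothetical accessor
                if name not in originals:
                    originals[name] = params[name].data.clone()
                params[name].data += lora.weight * delta
        yield text_encoder
    finally:
        for name, original in originals.items():
            params[name].data.copy_(original)


# Usage during the conditioning step (sketch):
# with lora_patched_text_encoder(pipeline.text_encoder, requested_loras):
#     conditioning = compel.build_conditioning_tensor(prompt)
```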