
Enable LoRAs to patch the text_encoder as well as the unet #3214

Merged
4 commits merged on Apr 24, 2023

Conversation

damian0815
Contributor

@damian0815 damian0815 commented Apr 16, 2023

Load LoRAs during compel's text-embedding encode pass, in case any of the requested LoRAs also want to patch the text encoder.

Also generally cleanup the attention processor patching stuff. It's still a mess, but at least now it's a stateless mess.
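The core idea of the PR, stated abstractly, is that a single LoRA file can carry low-rank weight deltas for both the text encoder and the unet, and the same patching math applies to either model. A minimal sketch of that math (function and variable names here are illustrative, not the PR's actual API):

```python
import numpy as np

def apply_lora_delta(weight, down, up, alpha=1.0):
    """Return a LoRA-patched copy of a weight matrix.

    A LoRA layer stores two low-rank factors: `down` of shape (r, in)
    and `up` of shape (out, r). The patched weight is
    W' = W + (alpha / r) * up @ down.
    """
    rank = down.shape[0]
    return weight + (alpha / rank) * (up @ down)

# The same patch applies to a text-encoder projection or a unet
# attention projection; only the layer it targets differs.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
down = rng.standard_normal((2, 8))   # rank-2 decomposition
up = rng.standard_normal((8, 2))

W_patched = apply_lora_delta(W, down, up, alpha=2.0)
assert W_patched.shape == W.shape
# The delta (W' - W) has rank at most 2:
assert np.linalg.matrix_rank(W_patched - W) <= 2
```

Because the patch is a pure function of the original weights, it can be applied and reverted statelessly, which matches the "stateless mess" cleanup described above.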

@damian0815 damian0815 changed the title Enable LoRAs patching the text_encoder as well as the unet Enable LoRAs to patch the text_encoder as well as the unet Apr 16, 2023
@lstein lstein enabled auto-merge April 18, 2023 23:09
Collaborator

@lstein lstein left a comment


Looks good. I appreciate the cleanup.
I think ultimately we need to come up with a way of registering modules that modify the text encoder, tokenizer, unet, etc. The code now checks for the presence of a LoRA in a lot of different places, taking different actions depending on the result. It might be better to register a series of modules somewhere and then apply them at the points where they might want to do something. What do you think?
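The registry pattern suggested here could look something like the following sketch (a hypothetical illustration, not code from the repository): patchers declare which component they target, and the pipeline asks the registry at well-defined points instead of scattering LoRA checks.

```python
class ModelPatcherRegistry:
    """Illustrative sketch: patchers register against a component name,
    and the pipeline applies them at fixed points, replacing scattered
    'is a LoRA present?' checks."""

    def __init__(self):
        self._patchers = {"tokenizer": [], "text_encoder": [], "unet": []}

    def register(self, component, patcher):
        self._patchers[component].append(patcher)

    def apply(self, component, model):
        # Apply each registered patcher in order; patchers are plain
        # callables taking and returning the model object.
        for patcher in self._patchers[component]:
            model = patcher(model)
        return model

registry = ModelPatcherRegistry()
registry.register("text_encoder", lambda m: m + ["lora_te"])
registry.register("unet", lambda m: m + ["lora_unet"])

# At encode time the pipeline consults the registry rather than
# checking for LoRAs directly:
patched = registry.apply("text_encoder", ["clip"])
assert patched == ["clip", "lora_te"]
```

The design choice is that components that have no interest in a given model simply register nothing for it, so the call sites stay uniform.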

lstein added a commit that referenced this pull request Apr 19, 2023
It's like LoHA, but it uses the Kronecker product instead of the Hadamard product.
https://github.com/KohakuBlueleaf/LyCORIS#lokr

I tested it on these 2 LoKRs:
https://civitai.com/models/34518/unofficial-vspo-yakumo-beni
https://civitai.com/models/35136/mika-pikazo-lokr

More tests are hard to find, as it's a new format.
Better to test with #3214.

Also refactored the forward function a bit.
LyCORIS also has an (IA)^3 format, but I can't find examples in that format, and even on the LyCORIS page it's marked as experimental. So until there are some test examples, I prefer not to add it.
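The LoHA/LoKr distinction mentioned in the commit can be sketched numerically (shapes and scaling here are illustrative; the actual LyCORIS formats add ranks, scaling factors, and decomposition details):

```python
import numpy as np

rng = np.random.default_rng(1)

# LoKr: the weight delta is a Kronecker product of two small factors,
# so a 6x6 delta is stored as a 2x2 and a 3x3 matrix.
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
delta_lokr = np.kron(A, B)          # shape (6, 6)

# LoHA, by contrast, multiplies two low-rank products elementwise
# (the Hadamard product):
W1 = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))
W2 = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))
delta_loha = W1 * W2                # shape (6, 6)

assert delta_lokr.shape == delta_loha.shape == (6, 6)
# Kronecker structure: each 3x3 block of delta_lokr is A[i, j] * B.
assert np.allclose(delta_lokr[:3, :3], A[0, 0] * B)
```

Either way the full-size delta is only materialized when patching, which is why both formats stay small on disk.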
@lstein lstein requested a review from GreggHelt2 as a code owner April 22, 2023 19:22
@lstein
Collaborator

lstein commented Apr 22, 2023

@GreggHelt2 and @blessedcoolant Could one of you have a look at this PR? It fixes LoRA issues and I'm hoping to put it into a 2.3.5 release.

Contributor

@GreggHelt2 GreggHelt2 left a comment


In general this looks good for v2.3.5. Nice to see code shrinkage in some places!
For v3.0, is the plan to port the changes in ldm/* to new locations in backend/*?

@lstein lstein merged commit 96c39b6 into invoke-ai:v2.3 Apr 24, 2023
@lstein
Collaborator

lstein commented Apr 24, 2023 via email
