[FEATURE] Lora support in 2.3 #3072
Conversation
- removed app directory (a 3.0 feature), so app tests had to go too
- fixed regular expression in the concepts lib which was causing deprecation warnings
approved from install/packaging perspective
Bias parsing, fix LoHa parsing and weight calculation
Implementation of LyCORIS (extended LoRA), which covers two formats - LoCon and LoHa ([info1](https://github.com/KohakuBlueleaf/LyCORIS/blob/locon-archive/README.md), [info2](https://github.com/KohakuBlueleaf/LyCORIS/blob/main/Algo.md)). It works, but I found two slightly different implementations of the forward function for LoHa. Both work, but I don't know which is better. The two functions generate the same images if the `self.org_module.weight.data` addition is removed from the LyCORIS implementation, but which one is right?
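For reference, here is a minimal sketch of the two forward-pass variants being compared, under the assumption of a plain linear layer; the factor and function names are invented for illustration and are not the actual LyCORIS or InvokeAI code:

```python
# Illustrative sketch only; assumes a Linear org_module and hypothetical
# low-rank factors w1_a, w1_b, w2_a, w2_b. Not the actual LyCORIS/InvokeAI code.
import torch.nn.functional as F

def loha_delta(w1_a, w1_b, w2_a, w2_b, scale=1.0):
    # LoHa weight delta: element-wise (Hadamard) product of two low-rank products
    return (w1_a @ w1_b) * (w2_a @ w2_b) * scale

def forward_variant_a(x, org_module, w1_a, w1_b, w2_a, w2_b, scale=1.0):
    # Variant A: fold the delta into the original weight, then do one linear pass
    weight = org_module.weight.data + loha_delta(w1_a, w1_b, w2_a, w2_b, scale)
    return F.linear(x, weight, org_module.bias)

def forward_variant_b(x, org_module, w1_a, w1_b, w2_a, w2_b, scale=1.0):
    # Variant B: run the original module unchanged, then add the delta's contribution
    return org_module(x) + F.linear(x, loha_delta(w1_a, w1_b, w2_a, w2_b, scale))

# For a linear layer the two are mathematically equivalent:
#   (W + dW) @ x + b  ==  (W @ x + b) + dW @ x
```

Under that assumption, the difference comes down to whether the original weight is applied once by adding `org_module.weight.data` (variant A) or by calling the original module (variant B); doing both would count it twice, which matches the observation that the images agree once the extra weight addition is removed.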
… into feat/lora-support-2.3
add line to docs
I can't find anywhere the
Really good question! The hooks are inserted into the diffusers pipeline at pipeline load time via the KohyaLoraManager's init routine and never removed. The hooks remain attached to the pipeline as it is moved in and out of VRAM. In the event that the pipeline is purged from the CPU RAM cache, all the LoRA objects are deleted and ultimately garbage collected. Tracing the code, it doesn't seem to me that there is a good reason to clear the hooks. If no LoRA condition is present, the code that the hooks point to is a no-op. Clearing the hooks after a generation is completed looks easy to do, but reinitializing them when a LoRA condition is requested seems dicey, as there are several layers of code to go through. My feeling is that it is safe to keep the hooks attached. However, I will do some generations with and without the LoRA manager installed to ensure that the presence of the hooks doesn't itself affect the generated image, memory consumption, or performance.

ADDENDUM: Just compared with and without the LoRA infrastructure. I observed no measurable difference in image quality, generation speed, or memory consumption.
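As a rough illustration of why an idle hook is harmless, here is a minimal sketch, with invented names rather than the real KohyaLoraManager API, of a forward hook that stays permanently attached but returns the output unchanged when no LoRA has been requested:

```python
# Hypothetical sketch of a permanently attached forward hook that is a no-op
# while no LoRA is active. Names are illustrative, not the KohyaLoraManager API.
import torch.nn as nn

class LoraHookSketch:
    def __init__(self, module: nn.Module):
        self.applied_loras = []                  # populated only when a LoRA condition is requested
        module.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        if not self.applied_loras:               # nothing loaded: leave the output untouched
            return output
        x = inputs[0]
        for lora in self.applied_loras:          # add each LoRA's scaled low-rank contribution
            output = output + lora.multiplier * lora.up(lora.down(x))
        return output
```

When `applied_loras` is empty, the hook adds only a single truthiness check per call, which is consistent with the observation above that the idle infrastructure has no measurable effect.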
- Remove unused (and probably dangerous) `unload_applied_loras()` method
- Remove unused `LoraManager.loras_to_load` attribute
- Change default LoRA weight to 0.75 when using the WebUI to add a LoRA to the prompt
Found a problem with the
ldm/invoke/conditioning.py (Outdated)

    should_use_lora_manager = True
    if model.peft_manager:
        should_use_lora_manager = model.peft_manager.should_use(positive_prompt.lora_weights)
        if not should_use_lora_manager:
            model.peft_manager.set_loras(positive_prompt.lora_weights)
Unsure how beneficial this is. Has anyone even tested a peft-formatted LoRA?
Indeed not. Where do you even find a peft-formatted LoRA to test with?
I've never heard of anyone training them outside of random tests. Unfortunately, the example linked from huggingface/peft is disabled, and I cannot find any SD PEFT LoRA anywhere.
https://github.com/invoke-ai/InvokeAI/pull/3072/files#r1160306518 or similar should resolve that issue.
This is superseded by my latest commit plus the new compel version.
I Approve My Own Changes 👍
I ran some tests and it seems good. If one or more of you can run some final tests, I think we need to get another party to approve this PR, because I don't think @lstein's, @damian0815's, or my own approval counts now that we have all contributed to this. We can merge this up and fix any pending bugs in another PR. If it's clean, it's good for a release.
keturn was the remaining code owner, but he has been offline for some time. I went ahead and merged on the basis of the other reviews.
NOTE: This PR works with `diffusers` models only. As a result, InvokeAI now converts all legacy checkpoint/safetensors files into diffusers models on the fly. This introduces a bit of extra delay when loading legacy models. You can avoid this by converting the files to diffusers either at import time or after the fact.

Instructions:

- Download LoRA .safetensors files of your choice and place them in `INVOKEAI_ROOT/loras`. Unlike the draft version of this PR, the file names can now contain underscores and hyphens. Names with arbitrary unicode characters are not supported.
- Add `withLora(lora-file-basename,weight)` to your prompt. The weight is optional and defaults to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present, are sketched below.
- Multiple `withLora()` prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. The last version of the syntax, which uses the default weight of 1.0, is waiting on the next version of the Compel library to be released and may not work at this time.

In my limited testing, I found it useful to reduce the CFG to avoid oversharpening. I also got better results when running the LoRA on top of the model on which it was based during training.
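As a rough illustration of the syntax described above (the prompt wording, weights, and the second LoRA name are invented; only the `sushi` basename comes from the description), prompts might look like:

```
a plate of sushi on a rustic wooden table withLora(sushi,0.8)
a plate of sushi on a rustic wooden table withLora(sushi,0.6) withLora(food-photography,0.9)
a plate of sushi on a rustic wooden table withLora(sushi)
```

The last form relies on the default weight of 1.0 and, per the note above, may not work until the next Compel release.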
Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. You will get a nasty stack trace. This needs to be cleaned up.

You can specify an alternative `loras` directory by passing the `--lora_directory` option to `invokeai`. Documentation can be found in docs/features/LORAS.md.
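For example, assuming the flag takes a path argument, launching with an alternative directory might look like:

```
invokeai --lora_directory /path/to/my/loras
```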
Note that this PR incorporates the unmerged 2.3.3 PR code (#3058) and bumps the version number up to 2.3.4a0.
A zillion thanks to @felorhik, @neecapp and many others for this implementation. @blessedcoolant and I just did a little tidying up.