[FEATURE] Lora support in 2.3 #3072

Merged: 117 commits from feat/lora-support-2.3 into v2.3 on Apr 7, 2023
Conversation

@lstein (Collaborator) commented Mar 30, 2023

NOTE: This PR works with diffusers models only. As a result InvokeAI is now converting all legacy checkpoint/safetensors files into diffusers models on the fly. This introduces a bit of extra delay when loading legacy models. You can avoid this by converting the files to diffusers either at import time, or after the fact.
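For reference, a minimal offline-conversion sketch (not part of this PR; it assumes a diffusers release that provides `StableDiffusionPipeline.from_single_file`, and the paths are placeholders):

```python
# Hypothetical offline conversion of a legacy checkpoint/safetensors file into a
# diffusers folder, so InvokeAI does not have to convert it on the fly.
# Paths and dtype are placeholder assumptions, not values from this PR.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "/path/to/legacy-model.safetensors",
    torch_dtype=torch.float16,
)
pipe.save_pretrained("/path/to/converted-diffusers-model")
```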

Instructions:

  1. Download LoRA .safetensors files of your choice and place them in INVOKEAI_ROOT/loras. Unlike the draft version of this PR, the file names can now contain underscores and hyphens. Names with arbitrary unicode characters are not supported.

  2. Add withLora(lora-file-basename,weight) to your prompt. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:

family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)

Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. The last version of the syntax, which uses the default weight of 1.0, is waiting on the next version of the Compel library to be released and may not work at this time.
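To make the syntax concrete, here is a rough, hypothetical sketch of how withLora() fragments could be extracted from a prompt. It is illustrative only, not the parser used by this PR or by Compel; the regex and function name are assumptions.

```python
import re

# Hypothetical parser for withLora(name) / withLora(name, weight) fragments.
# Not the actual implementation; shown only to illustrate the syntax above.
LORA_RE = re.compile(r"withLora\(\s*([\w.-]+)\s*(?:,\s*([0-9.]+))?\s*\)")

def extract_loras(prompt: str):
    """Return (prompt_without_lora_fragments, [(lora_name, weight), ...])."""
    loras = [(name, float(weight) if weight else 1.0)   # weight defaults to 1.0
             for name, weight in LORA_RE.findall(prompt)]
    cleaned = LORA_RE.sub("", prompt).strip()
    return cleaned, loras

print(extract_loras("family sitting at dinner table eating sushi withLora(sushi, 0.75)"))
# ('family sitting at dinner table eating sushi', [('sushi', 0.75)])
```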

In my limited testing, I found it useful to reduce the CFG scale to avoid oversharpening. I also got better results when running the LoRA on top of the base model it was trained against.

Don't try to load a SD 1.x-trained LoRA into a SD 2.x model, and vice versa. You will get a nasty stack trace. This needs to be cleaned up.

  3. You can change the location of the loras directory by passing the --lora_directory option to `invokeai`.

Documentation can be found in docs/features/LORAS.md.

Note that this PR incorporates the unmerged 2.3.3 PR code (#3058) and bumps the version number up to 2.3.4a0.

A zillion thanks to @felorhik, @neecapp and many others for this implementation. @blessedcoolant and I just did a little tidying up.

felorhik and others added 30 commits February 18, 2023 05:29
Rewrite lora manager with hooks
- removed app directory (a 3.0 feature), so app tests had to go too
- fixed regular expression in the concepts lib which was causing deprecation warnings

@ebr (Member) left a comment:

approved from install/packaging perspective

Sergey Borisov and others added 9 commits April 5, 2023 17:59
Bias parsing, fix LoHa parsing and weight calculation
Implementation of LyCORIS (extended LoRA), which comes in two formats: LoCon and
LoHa ([info1](https://github.com/KohakuBlueleaf/LyCORIS/blob/locon-archive/README.md),
[info2](https://github.com/KohakuBlueleaf/LyCORIS/blob/main/Algo.md)).

It works, but I found two slightly different implementations of the forward
function for LoHa. Both work, but I don't know which is better.

The two functions generate the same images if the `self.org_module.weight.data`
addition is removed from the LyCORIS implementation, but which one is right?
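For context, a rough sketch (using assumed tensor names, not LyCORIS code) of the two LoHa forward variants described in the commit message above; they are mathematically equivalent, which is why they produce the same images once the extra weight addition is accounted for:

```python
import torch
import torch.nn.functional as F

# Assumed shapes: w1a, w2a are (out, rank); w1b, w2b are (rank, in); scale = alpha / rank.
def loha_delta(w1a, w1b, w2a, w2b, scale):
    # LoHa weight update: Hadamard product of two low-rank factorizations
    return (w1a @ w1b) * (w2a @ w2b) * scale

def forward_residual(x, org_linear, w1a, w1b, w2a, w2b, scale):
    # Variant 1: keep the original layer untouched and add the LoHa term to its output
    return org_linear(x) + F.linear(x, loha_delta(w1a, w1b, w2a, w2b, scale))

def forward_merged(x, org_linear, w1a, w1b, w2a, w2b, scale):
    # Variant 2 (LyCORIS-style): fold self.org_module.weight.data into the weight first
    weight = org_linear.weight.data + loha_delta(w1a, w1b, w2a, w2b, scale)
    return F.linear(x, weight, org_linear.bias)
```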
@damian0815 (Contributor):

I can't find anywhere the LoraModuleWrapper's clear_hooks() method is called - this seems to be not ok, is this ok?

@lstein (Collaborator, Author) commented Apr 6, 2023

I can't find anywhere the LoraModuleWrapper's clear_hooks() method is called - this seems to be not ok, is this ok?

Really good question! The hooks are inserted into the diffusers pipeline at pipeline load time via the KohyaLoraManager's init routine and never removed. The hooks remain attached to the pipeline as it is moved in and out of VRAM. In the event that the pipeline is purged from CPU RAM cache, all the LoRA objects are deleted and ultimately garbage collected.

Tracing the code, it doesn't seem to me that there is a good reason to clear the hooks. If no LoRA condition is present, the code that the hooks point to is a no-op. Clearing the hooks after a generation completes looks easy to do, but reinitializing them when a LoRA condition is requested seems dicey, as there are several layers of code to go through.

My feeling is that it is safe to keep the hooks attached. However, I will do some generations with and without the LoRA manager installed to ensure that the presence of the hooks doesn't itself affect the generated image, memory consumption, or performance.

ADDENDUM: Just compared with and without the LoRA infrastructure. I observed no measurable difference in image quality, generation speed or memory consumption.
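To illustrate why idle hooks are harmless, here is a minimal sketch (hypothetical names, not the PR's KohyaLoraManager code) of a forward hook that only changes a module's output when a LoRA has actually been loaded:

```python
import torch

class LoRAModuleWrapperSketch:
    """Hypothetical illustration: the hook is a no-op unless LoRA layers
    have actually been loaded for this module."""

    def __init__(self, module: torch.nn.Module):
        self.applied_loras = {}                      # name -> (multiplier, lora_layer)
        module.register_forward_hook(self._forward_hook)

    def _forward_hook(self, module, inputs, output):
        if not self.applied_loras:                   # no withLora() fragment loaded anything
            return output                            # returning the output unchanged is a no-op
        x = inputs[0]
        for multiplier, lora_layer in self.applied_loras.values():
            output = output + multiplier * lora_layer(x)
        return output
```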

- Remove unused (and probably dangerous) `unload_applied_loras()` method
- Remove unused `LoraManager.loras_to_load` attribute
- Change default LoRA weight to 0.75 when using WebUI to add a LoRA to prompt.
@lstein lstein requested a review from StAlKeR7779 April 6, 2023 20:37
@Void2258 commented Apr 6, 2023

Found a problem with the .blend function. It's trying to work with loras even without loras present.

("girl","cat").blend(1,1) (simplest possible test for blending) gives
Traceback (most recent call last):
File "F:\Invoke_AI\.venv\lib\site-packages\invokeai\backend\invoke_ai_web_server.py", line 1353, in generate_images self.generate.prompt2image(
File "F:\Invoke_AI\.venv\lib\site-packages\ldm\generate.py", line 524, in prompt2image uc, c, extra_conditioning_info = get_uc_and_c_and_ec(
File "F:\Invoke_AI\.venv\lib\site-packages\ldm\invoke\conditioning.py", line 67, in get_uc_and_c_and_ec should_use_lora_manager = model.peft_manager.should_use(positive_prompt.lora_weights)
AttributeError: 'Blend' object has no attribute 'lora_weights'

Comment on lines 65 to 69:

    should_use_lora_manager = True
    if model.peft_manager:
        should_use_lora_manager = model.peft_manager.should_use(positive_prompt.lora_weights)
    if not should_use_lora_manager:
        model.peft_manager.set_loras(positive_prompt.lora_weights)
Unsure how beneficial this is; has anyone even tested a peft-formatted LoRA?

@lstein (Collaborator, Author) replied:
Indeed not. Where do you even find a peft formatted LoRA to test with?

Reply:
I've never heard of anyone training them outside of random tests. Unfortunately, the example linked from huggingface/peft is disabled, and I cannot find any SD PEFT LoRA anywhere.

@neecapp commented Apr 6, 2023

> Found a problem with the .blend function. It's trying to work with loras even without loras present. (traceback omitted; quoted in full above)

https://github.com/invoke-ai/InvokeAI/pull/3072/files#r1160306518 or similar should resolve that issue.
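A guard along these lines (a hypothetical sketch, not necessarily the change in the linked review comment) would avoid the AttributeError, since Blend objects carry no lora_weights attribute:

```python
def resolve_lora_weights(positive_prompt, peft_manager):
    """Hypothetical helper: fall back to an empty list for prompt objects
    (such as Blend) that have no lora_weights attribute."""
    lora_weights = getattr(positive_prompt, "lora_weights", [])
    if peft_manager and peft_manager.should_use(lora_weights):
        peft_manager.set_loras(lora_weights)
        return True
    return False
```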

@damian0815 (Contributor):

> Found a problem with the .blend function. It's trying to work with loras even without loras present. (traceback omitted; quoted in full above)
>
> https://github.com/invoke-ai/InvokeAI/pull/3072/files#r1160306518 or similar should resolve that issue.

This is superseded by my latest commit plus the new Compel version.

@damian0815 (Contributor) left a comment:

I Approve My Own Changes 👍

@blessedcoolant (Collaborator):

I ran some tests and it seems to be good. If one or more of you can run some final tests, I think we need to get another party to approve this PR, because I don't think @lstein's, @damian0815's, or my own approval counts now that we have all contributed to this.

We can merge this and fix any pending bugs in another PR. If it's clean, it's good for a release.

@lstein lstein merged commit 6d1f8e6 into v2.3 Apr 7, 2023
@lstein lstein deleted the feat/lora-support-2.3 branch April 7, 2023 13:37
@lstein (Collaborator, Author) commented Apr 7, 2023

keturn was the remaining code owner, but he has been offline for some time. I went ahead and merged on the basis of the other reviews.
