
Added additional parameters for mixing multiple LoRAs through SVD, added ability to mix LoRAs through concatenation #817

Merged: 5 commits into huggingface:main on Aug 16, 2023

Conversation

kovalexal (Contributor)

Hi!

This PR introduces the following changes:

  1. Fixes an error during SVD decomposition when Conv2d layers are present in LoRAs (caused by the lack of a fan_in_fan_out attribute on the Conv2d LoRALayer):

     ```python
     # Layers that store their weight as (fan_in, fan_out) need the delta transposed
     if target.fan_in_fan_out:
         delta_weight = delta_weight.T
     ```

  2. Adds the ability to combine multiple adapters by concatenating their A and B matrices. This is very fast compared to SVD decomposition (on my GPU, SVD takes 30+ seconds while concatenation takes under a second) and is mathematically equivalent to applying the LoRAs successively (see the sketch after this list). Be aware that the resulting LoRA's rank is the sum of the input ranks, so when it grows too large, SVD decomposition should be preferred.
  3. Adds the ability to specify a target rank for the SVD decomposition, so one can balance inference speed against loss of precision.
  4. Adds the ability to pass additional parameters, such as driver, through to torch.linalg.svd during decomposition.
  5. Adds the ability to disable clamping of matrix values after SVD decomposition. In my experiments, clamping these values for Stable Diffusion LoRAs changes the results compared to the Automatic1111 webui.
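To make the concatenation equivalence from point 2 concrete, here is a minimal standalone check (shapes and variable names are illustrative, not taken from this PR):

```python
import torch

# Two LoRAs on the same layer: delta_i = B_i @ A_i, with ranks r1 and r2.
r1, r2, d_in, d_out = 4, 8, 32, 64
A1, B1 = torch.randn(r1, d_in), torch.randn(d_out, r1)
A2, B2 = torch.randn(r2, d_in), torch.randn(d_out, r2)

# Applying both LoRAs adds their deltas: B1 @ A1 + B2 @ A2.
# Stacking A row-wise and B column-wise reproduces that sum exactly,
# at rank r1 + r2 and without any SVD.
A_cat = torch.cat([A1, A2], dim=0)  # (r1 + r2, d_in)
B_cat = torch.cat([B1, B2], dim=1)  # (d_out, r1 + r2)
assert torch.allclose(B1 @ A1 + B2 @ A2, B_cat @ A_cat, atol=1e-5)
```

And a hedged usage sketch of the merge API. The keyword names below follow the parameters this PR adds to `add_weighted_adapter`; check the signature in src/peft/tuners/lora.py for the exact spelling. The adapter names are hypothetical.

```python
# Assumes `model` is a PeftModel with two LoRA adapters already loaded,
# here called "style" and "character" (hypothetical names).

# Fast path: concatenate the A and B matrices. The merged rank is the
# sum of the input ranks, so prefer SVD when that sum grows too large.
model.add_weighted_adapter(
    adapters=["style", "character"],
    weights=[0.7, 0.3],
    adapter_name="merged_cat",
    combination_type="cat",
)

# SVD path: decompose the weighted sum of deltas down to a chosen rank.
model.add_weighted_adapter(
    adapters=["style", "character"],
    weights=[0.7, 0.3],
    adapter_name="merged_svd",
    combination_type="svd",
    svd_rank=64,          # target rank: trade precision for speed
    svd_driver="gesvd",   # forwarded to torch.linalg.svd
    svd_clamp=None,       # None disables post-SVD clamping
)
```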

You can see my previous results in #643.

webui output:
[image: webui_output]

SVD-decomposed mixture of 2 LoRAs, clamped:
[image: peft_diffusers_clamped]

SVD-decomposed mixture of 2 LoRAs, unclamped:
[image: peft_diffusers_unclamped]

@pacman100, your review is kindly appreciated.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@BenjaminBossan (Member) left a comment

Thanks a lot for the fixes and the new method, this looks pretty solid.

I'm not an expert on this topic (@pacman100 will know better), but the implementation looks quite good. However, I would like to see the unit tests extended to cover the new combination type.

5 review threads on src/peft/tuners/lora.py (all resolved).
@pacman100 (Contributor) left a comment

Thank you so much @kovalexal for adding the concat method from our previous discussions and improving upon the svd method, you rock 🔥. LGTM! 🤗

@BenjaminBossan (Member) left a comment

Looks good, thanks a lot for addressing my comments.

@BenjaminBossan merged commit 438b16b into huggingface:main on Aug 16, 2023 (11 checks passed).