Ryanontheinside/fix/perm merge #166
Conversation
Signed-off-by: RyanOnTheInside <7623207+ryanontheinside@users.noreply.github.com>

fixup community support

Signed-off-by: RyanOnTheInside <7623207+ryanontheinside@users.noreply.github.com>
Tested the following:
__all__ = ["PermanentMergeLoRAStrategy"]

def convert_community_lora_to_peft_format(
Is there a specific name for this format? And are there many alternative formats in the community or just this one?
Not that I know of or could find. I considered naming the function after the keys, since we go from lora_unet_blocks_0_cross_attn_k.alpha -> diffusion_model.blocks.0.cross_attn.k.lora_A.weight, but I'm not sure what other formats we may come across. A better name than 'community LoRA' may have been 'non-PEFT'.
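For readers following along, the key renaming being discussed can be sketched roughly as below. This is not the PR's implementation: the exact rules (prefix handling, which compound module names keep their underscores) are assumptions for illustration only.

```python
# Hypothetical sketch of converting a non-PEFT ("community") LoRA key like
# "lora_unet_blocks_0_cross_attn_k.alpha" to a PEFT-style module path like
# "diffusion_model.blocks.0.cross_attn.k". The compound-name set and prefix
# mapping are illustrative assumptions, not taken from the PR.
KNOWN_COMPOUNDS = {"cross_attn", "self_attn"}  # assumed module names

def normalize_key(key: str) -> str:
    base = key.rsplit(".", 1)[0]          # drop trailing suffix such as ".alpha"
    base = base.removeprefix("lora_unet_")
    parts = base.split("_")
    merged = []
    i = 0
    while i < len(parts):
        # re-join compound module names that legitimately contain "_"
        if i + 1 < len(parts) and f"{parts[i]}_{parts[i + 1]}" in KNOWN_COMPOUNDS:
            merged.append(f"{parts[i]}_{parts[i + 1]}")
            i += 2
        else:
            merged.append(parts[i])
            i += 1
    return "diffusion_model." + ".".join(merged)
```

For example, `normalize_key("lora_unet_blocks_0_cross_attn_k.alpha")` yields `diffusion_model.blocks.0.cross_attn.k`, after which the `.lora_A.weight` / `.lora_B.weight` suffixes can be appended.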
converted_state[f"{normalized_key}.lora_A.weight"] = lora_A
converted_state[f"{normalized_key}.lora_B.weight"] = lora_B_scaled
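The `lora_B_scaled` name above suggests the per-module alpha is folded into the B matrix. A minimal sketch of that folding, assuming the common convention that the effective update is `(alpha / rank) * (B @ A)` (the helper name and plain-list representation are illustrative, not from the PR):

```python
# Illustrative sketch: fold alpha/rank into lora_B so the converted
# checkpoint needs no separate ".alpha" entry. Assumed convention:
# delta_W = (alpha / rank) * (lora_B @ lora_A).
def scale_lora_B(lora_B, alpha, rank):
    scale = alpha / rank
    return [[value * scale for value in row] for row in lora_B]
```

For instance, with `alpha=8` and `rank=16` every entry of `lora_B` is halved, and with `alpha == rank` the matrix passes through unchanged.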
temp_fd, temp_path = tempfile.mkstemp(
No need to fix in this PR, but noting that we have some repeated overhead here: every time we load a LoRA in a non-PEFT-compliant format, we load the state dict, re-format the keys, and write the converted state dict to a temp file. In the future it may be worth caching the converted state dict on disk so the conversion runs only once per file.
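One way to sketch that caching idea (all names here are hypothetical, and `pickle` stands in for whatever loader/serializer the project actually uses, e.g. safetensors): key the cache by a hash of the source file so repeated loads of the same non-PEFT LoRA skip re-conversion entirely.

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

# Hypothetical sketch, not the project's API: cache converted state dicts
# on disk, keyed by a content hash of the source checkpoint.
CACHE_DIR = Path(tempfile.gettempdir()) / "lora_convert_cache"

def cached_convert(src_path: Path, convert_fn):
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(src_path.read_bytes()).hexdigest()
    cache_file = CACHE_DIR / f"{digest}.pkl"
    if cache_file.exists():
        # cache hit: skip loading + key re-formatting entirely
        return pickle.loads(cache_file.read_bytes())
    converted = convert_fn(pickle.loads(src_path.read_bytes()))
    cache_file.write_bytes(pickle.dumps(converted))
    return converted
```

The second call for the same file returns the cached result without invoking `convert_fn` again; invalidation falls out for free because an edited source file hashes to a new cache entry.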
We can actually eliminate IO entirely using functionality I previously implemented but overlooked.
WDYT?
Fix: Enable community LoRA support with multi-adapter compatibility
TESTED:
- Single + multi LoRA, with and without alternate adapter formats, for longlive and streamdiffusion
- Single LoRA support for Krea (examples of alternate adapter format unknown, not explicitly tested)