
Conversation

@ryanontheinside
Collaborator

Fix: Enable community LoRA support with multi-adapter compatibility

  • Supports community LoRA formats (lora_up/down, lora_unet_* prefixes)
  • Preserves pre-existing PEFT adapters during merge (e.g., LongLive performance LoRA)
  • Auto-converts incompatible formats to a PEFT-compatible structure (a minimal sketch of this conversion appears below)
  • Uses PEFT's merge_and_unload for all adapters simultaneously
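
For reference, a minimal sketch of the kind of key conversion described above, assuming community keys of the form <module>.lora_down.weight / <module>.lora_up.weight with an optional <module>.alpha; the function name and the alpha handling are illustrative rather than the PR's actual code:

```python
def convert_community_keys(state_dict):
    """Convert lora_down/lora_up (+ alpha) entries into PEFT's lora_A/lora_B layout.

    Illustrative sketch only: real community checkpoints vary, and this is not
    the PR's actual implementation.
    """
    converted = {}
    modules = {
        key[: -len(".lora_down.weight")]
        for key in state_dict
        if key.endswith(".lora_down.weight")
    }
    for module in sorted(modules):
        lora_A = state_dict[f"{module}.lora_down.weight"]  # community lora_down == PEFT lora_A
        lora_B = state_dict[f"{module}.lora_up.weight"]    # community lora_up   == PEFT lora_B
        alpha = state_dict.get(f"{module}.alpha")
        if alpha is not None:
            # Fold the alpha/rank scaling into lora_B so PEFT's own scaling can stay neutral
            # (assumes the adapter config is created with lora_alpha equal to the rank).
            rank = lora_A.shape[0]
            lora_B = lora_B * (float(alpha) / rank)
        converted[f"{module}.lora_A.weight"] = lora_A
        converted[f"{module}.lora_B.weight"] = lora_B
    return converted
```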

TESTED:
  • Single and multi LoRA, with and without alternate adapter formats, for longlive and streamdiffusion
  • Single LoRA support for Krea (examples of an alternate adapter format unknown, so not explicitly tested)

Signed-off-by: RyanOnTheInside <7623207+ryanontheinside@users.noreply.github.com>

fixup community support

Signed-off-by: RyanOnTheInside <7623207+ryanontheinside@users.noreply.github.com>
@yondonfu
Contributor

yondonfu commented Nov 20, 2025

Tested the following:

  • 1 LoRA permanent merge with streamdiffusionv2
  • 1 LoRA permanent merge with longlive
  • 2 LoRA permanent merge with longlive
  • 2 LoRA runtime PEFT with different scales with longlive
  • 1 LoRA permanent merge with krea
  • 1 LoRA runtime PEFT with different scales with krea

__all__ = ["PermanentMergeLoRAStrategy"]


def convert_community_lora_to_peft_format(
Contributor

Is there a specific name for this format? And are there many alternative formats in the community or just this one?

Collaborator Author

Not that I know of or could find. I considered naming the function after the keys, since we go from lora_unet_blocks_0_cross_attn_k.alpha -> diffusion_model.blocks.0.cross_attn.k.lora_A.weight, but I'm not sure what other formats we may come across. A better name than 'community LoRA' may have been 'non-PEFT'.
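
For context: the underscore-flattened names cannot be recovered by blindly replacing every underscore with a dot, because module names such as cross_attn contain underscores themselves. One way to disambiguate, sketched below under the assumption that the target model is available, is to index the model's own module names; the prefixes and function name here are illustrative and not necessarily what the PR does:

```python
def build_module_key_map(model, source_prefix="lora_unet_", target_prefix="diffusion_model."):
    """Map flattened community module names back to real dotted module paths.

    Illustrative sketch: indexes the model's actual module names so that
    "lora_unet_blocks_0_cross_attn_k" resolves to "diffusion_model.blocks.0.cross_attn.k"
    even though cross_attn keeps its own underscore.
    """
    return {
        source_prefix + name.replace(".", "_"): target_prefix + name
        for name, _ in model.named_modules()
    }
```

A community key like lora_unet_blocks_0_cross_attn_k.alpha would then be split into its module part and suffix, with the module part looked up in this map.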

converted_state[f"{normalized_key}.lora_A.weight"] = lora_A
converted_state[f"{normalized_key}.lora_B.weight"] = lora_B_scaled

temp_fd, temp_path = tempfile.mkstemp(
Contributor

No need to fix in this PR, but noting that we have some repeated overhead here: every time we load a LoRA in a non-PEFT-compliant format, we load the state dict, reformat the keys, and write the converted state dict to a temp file. In the future, it may be worth writing the converted state dict to disk so the conversion only happens once.
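
A rough illustration of that caching idea, assuming safetensors checkpoints and reusing the convert_community_keys sketch from above; the sidecar filename and function names are assumptions, not the repo's conventions:

```python
import os

from safetensors.torch import load_file, save_file


def load_converted_lora(path):
    """Load a LoRA checkpoint, converting non-PEFT key layouts once and caching the result on disk."""
    cache_path = os.path.splitext(path)[0] + ".peft.safetensors"
    if os.path.exists(cache_path):
        # Conversion already done on a previous run; skip the reload, key rewrite, and rewrite to disk.
        return load_file(cache_path)
    state = load_file(path)
    converted = convert_community_keys(state)  # hypothetical conversion helper sketched earlier
    save_file(converted, cache_path)
    return converted
```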

Collaborator Author

We can actually eliminate IO entirely using functionality I previously implemented but overlooked.

#171

WDYT?

@yondonfu yondonfu merged commit 6312357 into main Nov 20, 2025
5 checks passed
@yondonfu yondonfu deleted the ryanontheinside/fix/perm-merge branch November 20, 2025 19:28
