Can you load LoRAs with lmd, or lmd_plus? #13

Open
KristianMischke opened this issue Dec 3, 2023 · 0 comments
Comments

@KristianMischke

I've been loving this tool, and I'd like to use it with some LoRAs I've created.

Is it possible to:

  • load LoRAs based on runwayml/stable-diffusion-v1-5 when generating with lmd?
  • load LoRAs based on CompVis/stable-diffusion-v1-4 when generating with lmd_plus?
  • finetune or create LoRAs based on longlian/lmd_plus? When I try to load the checkpoint for fine-tuning, I get the error below (see the first sketch after this list):
    Unexpected key(s) in state_dict: "position_net.null_positive_feature", "position_net.null_position_feature", 
    "position_net.linears.0.weight", "position_net.linears.0.bias", "position_net.linears.2.weight", "position_net.linears.2.bias", 
    "position_net.linears.4.weight", "position_net.linears.4.bias", "down_blocks.0.attentions.0.transformer_blocks.0.fuser.alpha_attn", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.alpha_dense", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.linear.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.linear.bias", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.attn.to_q.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.attn.to_k.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.attn.to_v.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.attn.to_out.0.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.attn.to_out.0.bias", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.ff.net.0.proj.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.ff.net.0.proj.bias", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.ff.net.2.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.ff.net.2.bias", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.norm1.weight", 
    "down_blocks.0.attentions.0.transformer_blocks.0.fuser.norm1.bias"
    ...
    
    It seems the GLIGEN-specific modules (position_net and the fuser attention layers) are what break loading for training.
  • So far my attempts at loading a LoRA for lmd_plus with the community pipeline on Hugging Face (https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#llm-grounded-diffusion) have been unsuccessful: I get 'UNet2DConditionModel' object has no attribute 'attn_processors' after loading a LoRA on the pipeline. It seems that loading a LoRA clears out the GLIGEN attention processors, which causes issues downstream (see the second sketch below for what I've tried). Is there a way to preserve them? Does that even make sense? I'm not super familiar with how the GLIGEN attention processors work, or whether updating the UNet layers with LoRAs would break the grounded attention.
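
For the fine-tuning question, this is roughly what I've been trying: loading the lmd_plus UNet weights into a vanilla SD-1.5 UNet and dropping the GLIGEN-specific keys. The checkpoint path is a placeholder, and I'm only assuming those keys can safely be dropped for a plain fine-tune (which presumably loses the grounding ability), so treat this as a sketch rather than a working recipe:

```python
import torch
from diffusers import UNet2DConditionModel

# Vanilla SD-1.5 UNet as the fine-tuning base.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# UNet state dict from longlian/lmd_plus (the file path is illustrative).
state_dict = torch.load("lmd_plus_unet.bin", map_location="cpu")

# Drop the GLIGEN-only keys (position_net and the per-block fusers)
# that a vanilla UNet doesn't have; assumption: the remaining keys line up.
filtered = {
    k: v
    for k, v in state_dict.items()
    if "position_net" not in k and ".fuser." not in k
}

result = unet.load_state_dict(filtered, strict=False)
print("missing:", len(result.missing_keys),
      "unexpected:", len(result.unexpected_keys))
```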

So I guess: is it possible? Are there any examples of it that I can reference? Any additional help would be appreciated.
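
And for reference, here's the workaround I've been experimenting with on the LoRA side: snapshotting the UNet's attention processors before load_lora_weights and restoring them afterwards. This assumes the pipeline's UNet exposes the standard diffusers attn_processors API (the error above suggests it may not in my setup), and I don't know whether restoring the processors preserves the LoRA effect, so this is an experiment, not a confirmed fix:

```python
import torch
from diffusers import DiffusionPipeline

# The community LLM-grounded diffusion pipeline from the README linked above.
pipe = DiffusionPipeline.from_pretrained(
    "longlian/lmd_plus",
    custom_pipeline="llm_grounded_diffusion",
    torch_dtype=torch.float16,
)

# Snapshot the processors the pipeline set up (GLIGEN included, I assume).
saved_processors = dict(pipe.unet.attn_processors)

# "my-sd15-lora" is a placeholder for a LoRA trained on SD-1.4/1.5.
pipe.load_lora_weights("my-sd15-lora")

# Try to restore the original processors; whether this keeps the LoRA
# weights active depends on how load_lora_weights injected them.
pipe.unet.set_attn_processor(saved_processors)
```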
