
load_lora_weights fails to load a custom LoRA #6392

@akameswa

Description


Describe the bug

Base model: segmind/tiny-sd
LoRA: akameswa/lcm-lora-tiny-sd

Error:
Calling load_lora_weights raises the following error.
ValueError: Target modules {'base_model.model.up_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q', ...} not found in the base model. Please check the target modules and try again.
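
For context, a minimal sketch of the failing call (the exact setup is in the attached Colab; the pipeline-loading details here are assumed):

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("segmind/tiny-sd")
pipe.load_lora_weights("akameswa/lcm-lora-tiny-sd")  # raises the ValueError above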

Steps taken:

  1. Checked whether the target modules are missing from the base model. They are in fact present (see the sketch after this list).
  2. Tried unet.load_attn_procs instead. The generated images are bad and blurry.
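
A sketch of the check from step 1, assuming pipe is the loaded tiny-sd pipeline. Note that the names in the error carry PEFT's base_model.model. prefix, while the raw UNet reports them without it:

names = [n for n, _ in pipe.unet.named_modules()]
# Prints True: the module exists under its unprefixed name.
print("up_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q" in names)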

Fix:
Loading the adapter directly with PEFT worked:

from peft import PeftConfig, PeftModel

# Assumes pipe is the loaded tiny-sd pipeline and adapter_id = "akameswa/lcm-lora-tiny-sd"
config = PeftConfig.from_pretrained(adapter_id)
lora_unet = PeftModel.from_pretrained(pipe.unet, adapter_id)
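
A hypothetical follow-up for inference on top of that workaround. PEFT injects the LoRA layers into pipe.unet's submodules in place, so the pipeline itself should remain callable; LCM-LoRA inference conventionally uses the LCM scheduler with few steps and low guidance (prompt and step count below are placeholders):

from diffusers import LCMScheduler

# Swap in the LCM scheduler and sample with few steps, as is usual for LCM-LoRA.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
image = pipe("a cute corgi", num_inference_steps=4, guidance_scale=1.0).images[0]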

Training:
Followed the LCM-LoRA training tutorial with
MODEL_NAME='segmind/tiny-sd' and DATASET_DIR='akameswa/improved_aesthetics_6.5plus_webdataset'
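
For reference, a rough sketch of the launch command, assuming the diffusers LCM-LoRA distillation script from the tutorial; flag names are assumptions based on that script's example invocation and should be checked against the tutorial:

export MODEL_NAME='segmind/tiny-sd'
export DATASET_DIR='akameswa/improved_aesthetics_6.5plus_webdataset'

# Sketch only: flag names assumed from the tutorial's example command.
accelerate launch train_lcm_distill_lora_sd_wds.py \
  --pretrained_teacher_model=$MODEL_NAME \
  --train_shards_path_or_url=$DATASET_DIR \
  --mixed_precision=fp16 \
  --resolution=512 \
  --lora_rank=64 \
  --output_dir=lcm-lora-tiny-sd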

Additional comments:
Did not face any issues during training. The training log is attached.

Note:
Please refer to the attached Colab; it walks through the steps taken, the errors, and the results.

Reproduction

https://colab.research.google.com/drive/135v4cBVFu_zxXhcyao5lVZFlfscde0n0?usp=sharing

Logs

No response

System Info

  • diffusers version: 0.25.0.dev0
  • Platform: Windows-10-10.0.22635-SP0
  • Python version: 3.10.13
  • PyTorch version (GPU?): 2.1.2+cu121 (True)
  • Huggingface_hub version: 0.19.4
  • Transformers version: 4.36.2
  • Accelerate version: 0.25.0
  • peft version: 0.7.0
  • xFormers version: 0.0.23.post1

Who can help?

@sayakpaul @patrickvonplaten @DN6


Labels: bug (Something isn't working), peft, stale (Issues that haven't received updates), training
