
LoraLoaderMixin.load_lora_weights() empties state_dict passed as input param. #7054

@slep0v

Description


Describe the bug

The case happens when LoRAs are loaded beforehand and stored as state dicts for optimization purposes (lazy loading, re-use, etc.) in production.

After pipe.load_lora_weights(adapter_state_dict, adapter_name=adapter_name) is called, adapter_state_dict becomes an empty dict and can't be reused to apply the LoRA in later calls.

The current workaround is to pass adapter_state_dict.copy(), but I don't think this should happen, and it can confuse others as it confused me (it took quite some time to debug the issue).
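A minimal sketch of the workaround, assuming a pipeline named pipe as in the reproduction below; lora_cache is a hypothetical cache of pre-loaded state dicts:

from safetensors.torch import load_file

# Hypothetical cache: state dicts loaded once and kept for re-use.
lora_cache = {"lcm": load_file("./test_lora.safetensors")}

# Pass a shallow copy so load_lora_weights empties the copy, not the cached dict.
pipe.load_lora_weights(lora_cache["lcm"].copy(), adapter_name="lcm")

# lora_cache["lcm"] still holds the LoRA tensors and can be applied again later.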


Also, I don't know if this is intended, but you can call set_adapters with an adapter name that was never loaded into the model, and it will quietly do nothing; but when you call get_active_adapters, you will see that adapter name there, which is also a bit misleading.
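A minimal sketch of that behavior, assuming the pipeline from the reproduction below and a hypothetical adapter name "ghost" that was never loaded:

pipe.set_adapters(["ghost"])   # "ghost" was never loaded; no error, quietly does nothing
pipe.get_active_adapters()     # >>> ['ghost'], reported as active anyway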

Reproduction

  1. Download any .safetensors LoRA first
import wget
wget.download("https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/resolve/main/pytorch_lora_weights.safetensors", "test_lora.safetensors")
  2. Run the reproduction code
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler
from safetensors.torch import load_file

pipe = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

lcm_lora = load_file("./test_lora.safetensors")  # state dict downloaded in step 1
pipe.load_lora_weights(lcm_lora, adapter_name="lcm")

print(lcm_lora)  # >>> {} -- the input state dict was emptied in place

This is enough to reproduce the bug, but you can also check the follow-on behavior:

2.1)

pipe.delete_adapters(["lcm"])
pipe.load_lora_weights(lcm_lora, adapter_name="lcm")  # lcm_lora is {} now, so this silently loads nothing!
pipe.get_list_adapters()  # empty
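For contrast, a sketch of the same delete-and-reload cycle with the copy workaround applied; the state dict has to be re-read first, since the earlier call already emptied it:

lcm_lora = load_file("./test_lora.safetensors")              # re-read the emptied dict
pipe.load_lora_weights(lcm_lora.copy(), adapter_name="lcm")  # the copy keeps lcm_lora intact
pipe.delete_adapters(["lcm"])
pipe.load_lora_weights(lcm_lora.copy(), adapter_name="lcm")  # re-loading now works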

Logs

No response

System Info

  • diffusers version: 0.25.0
  • Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
  • Python version: 3.10.6
  • PyTorch version (GPU?): 2.0.1+cu118 (True)
  • Huggingface_hub version: 0.19.4
  • Transformers version: 4.37.2
  • Accelerate version: 0.24.1
  • xFormers version: not installed
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No
  • PEFT version: 0.8.2

Who can help?

@sayakpaul @patrickvonplaten
