LORA Unsupported Layers continuation from #6368 #8831

@AbhinavGopal

Description

Describe the bug

Following #6368, I tried the snippet suggested there for pruning the unsupported layers, but instead of dropping only the unsupported keys it prunes the entire state dict.

Reproduction

from requests import get
from diffusers import StableDiffusionXLPipeline, LCMScheduler
import safetensors.torch  # the torch submodule must be imported explicitly

sdxl_pipeline_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_url = "https://civitai.com/api/download/models/247778?type=Model&format=SafeTensor"
lora_path = "./lcm-turbo-mix.safetensors"

# Download the LoRA checkpoint in streaming chunks.
with open(lora_path, "wb") as fh:
  data = get(lora_url, stream=True)
  for chunk in data.iter_content(chunk_size=8192):
    fh.write(chunk)

state_dict = safetensors.torch.load_file(lora_path, device="cpu")
# Pruning snippet based on #6368 -- after this line the dict is empty.
state_dict = {k: w for k, w in state_dict.items() if k in ["input_blocks", "middle_block", "output_blocks"]}


pipe = StableDiffusionXLPipeline.from_pretrained(
  sdxl_pipeline_id,
  variant="fp16"
)


pipe.load_lora_weights(state_dict)

pipe.enable_model_cpu_offload()
image = pipe(
  prompt="a horse, highly detailed, 4k, professional",
  negative_prompt="blurry",
  num_inference_steps=8,
).images[0]

print(state_dict)  # prints "{}" -- every key was pruned

Logs

Loading pipeline components...: 100%|███████████████████████████████████████████████████████| 7/7 [00:22<00:00,  3.15s/it]
100%|███████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:18<00:00,  2.32s/it]
{}
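The empty dict falls out of the filter itself: `k in ["input_blocks", ...]` tests for exact string equality, but the keys in a LoRA state dict are full parameter names (e.g. something like `lora_unet_input_blocks_4_1_...lora_down.weight`), so no key ever equals a bare block name and everything is dropped. A minimal sketch of the difference, using hypothetical toy keys (real key names come from the downloaded .safetensors file):

```python
# Toy state dict with hypothetical LoRA-style key names.
state_dict = {
    "lora_unet_input_blocks_4_1_proj_in.lora_down.weight": 1,
    "lora_unet_middle_block_1_proj_in.lora_down.weight": 2,
    "lora_unet_output_blocks_0_1_proj_in.lora_down.weight": 3,
    "lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_down.weight": 4,
}

blocks = ["input_blocks", "middle_block", "output_blocks"]

# Exact-match filter from the reproduction: no full key equals a bare
# block name, so the result is empty.
exact = {k: w for k, w in state_dict.items() if k in blocks}
print(exact)  # {}

# Substring-based filter: keeps keys whose name *contains* a block name,
# dropping only the text-encoder key here.
substr = {k: w for k, w in state_dict.items() if any(b in k for b in blocks)}
print(sorted(substr.values()))  # [1, 2, 3]
```

If the intent in #6368 was to keep only UNet-block weights, a substring (or `str.startswith`-style prefix) test like the second filter is presumably what was meant.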

System Info

  • 🤗 Diffusers version: 0.29.2
  • Platform: Linux-5.10.219-208.866.amzn2.x86_64-x86_64-with-glibc2.31
  • Running on a notebook?: No
  • Running on Google Colab?: No
  • Python version: 3.11.9
  • PyTorch version (GPU?): 2.3.1+cu121 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.23.4
  • Transformers version: 4.42.3
  • Accelerate version: 0.32.1
  • PEFT version: 0.11.1
  • Bitsandbytes version: not installed
  • Safetensors version: 0.4.3
  • xFormers version: 0.0.27
  • Accelerator: NVIDIA L4, 23034 MiB VRAM
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Who can help?

@yiyixuxu @sayak

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working), stale (Issues that haven't received updates)
Type: No type
Projects: No projects
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests
