During inference, is it possible to load multiple LoRA weights on top of the frozen model? Something like:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.unet.load_attn_procs("a.bin")
pipe.unet.load_attn_procs("b.bin")  # intended to stack on top of a.bin
pipe.to("cuda")
```
This doesn't seem to work with the current API design (the second `load_attn_procs` call appears to replace the attention processors set by the first rather than stacking on top of them). Is there a workaround for doing this?
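Conceptually, what I'm after is something like the following (an untested sketch, not the diffusers API; the function name, the generic `(up, down)` pairs, and the `scale` parameter are all my own placeholders): since each LoRA contributes an additive delta `scale * (up @ down)` to a frozen weight, two adapters could in principle be applied by summing both deltas onto the base matrix:

```python
import torch

def merge_loras(base_weight, loras, scale=1.0):
    """Fold several LoRA (up, down) pairs into one base weight matrix.

    Each LoRA adds a low-rank delta `scale * (up @ down)`; applying two
    adapters at once is just summing both deltas onto the frozen weight.
    """
    merged = base_weight.clone()
    for up, down in loras:
        merged += scale * (up @ down)
    return merged

# Toy example: two rank-4 adapters on a 16x16 weight
torch.manual_seed(0)
w = torch.randn(16, 16)
lora_a = (torch.randn(16, 4), torch.randn(4, 16))
lora_b = (torch.randn(16, 4), torch.randn(4, 16))

merged = merge_loras(w, [lora_a, lora_b], scale=0.5)

# Because the deltas are additive, the merge order doesn't matter
merged_swapped = merge_loras(w, [lora_b, lora_a], scale=0.5)
assert torch.allclose(merged, merged_swapped)
```

In other words, I'd be happy with any workaround that achieves this additive effect on the UNet's attention weights, even if it means manipulating the loaded state dicts by hand.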
Thanks