It seems the attentions from GLIGEN are messing up the training
I've been loving this tool and have been wanting to use it with some LoRAs I've created. Is it possible to use:

- a LoRA trained on runwayml/stable-diffusion-v1-5 when generating with lmd?
- a LoRA trained on CompVis/stable-diffusion-v1-4 (or longlian/lmd_plus) when generating with lmd_plus?
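For reference, this is roughly what I'm doing. The checkpoint and custom pipeline name follow the diffusers community README linked below; the LoRA path is a placeholder for my own file:

```python
import torch
from diffusers import DiffusionPipeline

# Load the LMD+ community pipeline, as described in the diffusers
# community README linked below.
pipe = DiffusionPipeline.from_pretrained(
    "longlian/lmd_plus",
    custom_pipeline="llm_grounded_diffusion",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Naive attempt: load a LoRA I trained against runwayml/stable-diffusion-v1-5.
# ("path/to/my_lora" is a placeholder.) This is where things go wrong for me.
pipe.load_lora_weights("path/to/my_lora")
```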
So far, my attempts at loading a LoRA for lmd_plus with the pipeline on Hugging Face (https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#llm-grounded-diffusion) have been unsuccessful. I get 'UNet2DConditionModel' object has no attribute 'attn_processors' after I load LoRAs on the pipeline. It seems that loading a LoRA clears out the attention processors from GLIGEN, which causes issues downstream. Is there a way to preserve them? Does that even make sense? I'm not super familiar with how the GLIGEN attention processors work, or whether updating the UNet layers with LoRAs would mess up the attentions.

So I guess: is it possible, and are there any examples of it that I can reference? Any additional help would be appreciated.
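One idea I had (completely untested, and I'm not sure it's even coherent with how GLIGEN works) is to snapshot the UNet's attention processors before loading the LoRA and restore them afterwards. A minimal sketch, assuming attn_processors is still readable before the LoRA load:

```python
# Untested sketch: copy the GLIGEN attention processors before the LoRA
# load wipes them, then put them back afterwards.
gligen_procs = dict(pipe.unet.attn_processors)  # snapshot before loading

pipe.load_lora_weights("path/to/my_lora")  # placeholder path

# Restore the GLIGEN processors so the grounded attention still runs; I
# don't know whether this plays nicely with the LoRA-patched layers.
pipe.unet.set_attn_processor(gligen_procs)
```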