[Tencent Hunyuan Team] Add LoRA Inference Support for Hunyuan-DiT #8468
Conversation
yiyixuxu
left a comment
Looks good to me!
I think we don't need to pass cross_attention_kwargs down if we don't use it.
    temb: Optional[torch.Tensor] = None,
    image_rotary_emb=None,
    skip=None,
    cross_attention_kwargs: Optional[Dict[str, Any]] = None,
Suggested change (remove this line):
    cross_attention_kwargs: Optional[Dict[str, Any]] = None,
    encoder_hidden_states=encoder_hidden_states,
    image_rotary_emb=image_rotary_emb,
    skip=skip,
    cross_attention_kwargs=cross_attention_kwargs,
Suggested change (remove this line):
    cross_attention_kwargs=cross_attention_kwargs,
Agree with this!
    temb=temb,
    encoder_hidden_states=encoder_hidden_states,
    image_rotary_emb=image_rotary_emb,
    cross_attention_kwargs=cross_attention_kwargs,
Suggested change (remove this line):
    cross_attention_kwargs=cross_attention_kwargs,
@@ -0,0 +1,13 @@
import torch
We will remove this file before merge, no?
In the future, maybe it's easier to just post the testing example in the PR description :)
Yes. Additionally, I think this should be turned into a proper test suite: test_lora_layers_hunyuan_dit.py. Just including a SLOW test is sufficient for the time being.
Here is an example: diffusers/tests/lora/test_lora_layers_sd.py, line 205 (at d457bee):
    class LoraIntegrationTests(unittest.TestCase):
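For reference, a minimal sketch of what such a slow test could look like; the checkpoint ID, LoRA repo routing, prompt, and assertion below are placeholders modeled on the SD integration tests, not the final test:

```python
# Hypothetical sketch of tests/lora/test_lora_layers_hunyuan_dit.py, modeled
# on LoraIntegrationTests. The prompt and the final assertion are placeholders.
import gc
import unittest

import torch

from diffusers import HunyuanDiTPipeline
from diffusers.utils.testing_utils import require_torch_gpu, slow


@slow
@require_torch_gpu
class HunyuanDiTLoRAIntegrationTests(unittest.TestCase):
    def tearDown(self):
        gc.collect()
        torch.cuda.empty_cache()

    def test_hunyuan_dit_lora(self):
        pipe = HunyuanDiTPipeline.from_pretrained(
            "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_lora_weights("XCLiu/hunyuandit-lora-test")  # LoRA repo from this PR

        image = pipe(
            "a photo of a cat",
            num_inference_steps=25,
            generator=torch.manual_seed(0),
        ).images[0]
        # The real test should compare an output slice against stored reference values.
        self.assertEqual(image.size, (1024, 1024))
```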
    else 128
)

self.unet_name = 'transformer'  # to support load_lora_weights
Hmm, this feels a little weird to me. But okay for now. I will attempt to refactor this to harmonize. Cc: @yiyixuxu
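For context, a rough sketch of what the attribute buys us, assuming the generic LoRA-loading machinery resolves its target module by name (an illustration, not the loader's actual code):

```python
import torch
from diffusers import HunyuanDiTPipeline

pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
)
# Assumption: with unet_name set to "transformer" in __init__, the loader
# resolves getattr(pipe, pipe.unet_name), i.e. the HunyuanDiT2DModel,
# instead of looking for a `unet` attribute as in the SD pipelines.
print(type(getattr(pipe, pipe.unet_name)).__name__)  # HunyuanDiT2DModel
pipe.load_lora_weights("XCLiu/hunyuandit-lora-test")
```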
sayakpaul
left a comment
Thank you for adding this so quickly. Also glad that it required minimal changes on your end :-)
I have left some comments on the implementation.
Would it make sense to also add a note about the LoRA support (with an example) in the pipeline doc? WDYT?
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for the comments! I'll update the commit. My question is: if we don't add cross_attention_kwargs, how do we support lora_scale? I will provide a doc update after all the changes.
I think the comment was about not propagating cross_attention_kwargs down into the blocks, since it isn't used there.
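For illustration, the pattern used elsewhere in diffusers reads the scale from cross_attention_kwargs once at the model's top level and applies it globally with the PEFT helpers, so the inner blocks never need the dict. A toy sketch of that pattern (not this PR's actual code):

```python
import torch
from diffusers.utils import USE_PEFT_BACKEND, scale_lora_layers, unscale_lora_layers


class ToyTransformer(torch.nn.Module):
    """Minimal stand-in showing where the scale handling can live."""

    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(8, 8)

    def forward(self, hidden_states, cross_attention_kwargs=None):
        # Read the LoRA scale once at the top level; the inner blocks
        # never need to see cross_attention_kwargs at all.
        lora_scale = (cross_attention_kwargs or {}).get("scale", 1.0)
        if USE_PEFT_BACKEND:
            scale_lora_layers(self, lora_scale)  # weight every LoRA layer
        out = self.proj(hidden_states)
        if USE_PEFT_BACKEND:
            unscale_lora_layers(self, lora_scale)  # restore original weights
        return out
```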
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Added LoRA support to the HunyuanDiT pipeline. Currently it can only support lora_scale=1. You may test the PR with test_hunyuandit_lora.py. A pre-trained LoRA model is uploaded here: https://huggingface.co/XCLiu/hunyuandit-lora-test. Please change YOUR_LORA_PATH to the directory where you store the downloaded LoRA file. The generated image should be:

@yiyixuxu @sayakpaul Please have a look! Thank you so much!
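For readers reproducing this, a sketch of what test_hunyuandit_lora.py plausibly contains, based on the description above; only the repo IDs come from the PR, while the prompt, seed, and output filename are assumptions:

```python
import torch
from diffusers import HunyuanDiTPipeline

pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")

# YOUR_LORA_PATH: directory holding the LoRA file downloaded from
# https://huggingface.co/XCLiu/hunyuandit-lora-test
pipe.load_lora_weights("YOUR_LORA_PATH")

image = pipe(
    "a photo of a cat",  # placeholder prompt
    generator=torch.manual_seed(0),
).images[0]
image.save("hunyuandit_lora.png")
```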