Conversation

@gnobitab (Contributor)

This PR adds LoRA support to the HunyuanDiT pipeline. Currently it only supports lora_scale=1.

You may test the PR with test_hunyuandit_lora.py. A pre-trained LoRA model is uploaded here: https://huggingface.co/XCLiu/hunyuandit-lora-test

Please change YOUR_LORA_PATH to the directory where you stored the downloaded LoRA file.
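For reference, here is a minimal sketch of what such a test script might look like. The base checkpoint id, prompt, and sampling settings below are assumptions for illustration and are not necessarily what test_hunyuandit_lora.py uses; load_lora_weights on HunyuanDiTPipeline is the capability this PR adds.

import torch

from diffusers import HunyuanDiTPipeline

# Base checkpoint id is an assumption; use the HunyuanDiT Diffusers checkpoint you normally load.
pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")

# Point this at the directory holding the LoRA weights downloaded from
# https://huggingface.co/XCLiu/hunyuandit-lora-test
pipe.load_lora_weights("YOUR_LORA_PATH")

# Prompt and settings are placeholders; the PR currently supports lora_scale=1 only.
image = pipe(
    "a cute cat", num_inference_steps=50, generator=torch.manual_seed(0)
).images[0]
image.save("hunyuandit_lora_sample.png")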

The generated image should be:
[sample image: img_0]

@yiyixuxu @sayakpaul Please have a look! Thank you so much!

@yiyixuxu (Collaborator) left a comment:

Looks good to me!
I think we don't need to pass cross_attention_kwargs down if we don't use it.

temb: Optional[torch.Tensor] = None,
image_rotary_emb=None,
skip=None,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
Suggested change from a Collaborator (remove this line):
cross_attention_kwargs: Optional[Dict[str, Any]] = None,

encoder_hidden_states=encoder_hidden_states,
image_rotary_emb=image_rotary_emb,
skip=skip,
cross_attention_kwargs=cross_attention_kwargs,
Suggested change from a Collaborator (remove this line):
cross_attention_kwargs=cross_attention_kwargs,

Comment (Member):

Agree with this!

temb=temb,
encoder_hidden_states=encoder_hidden_states,
image_rotary_emb=image_rotary_emb,
cross_attention_kwargs=cross_attention_kwargs,
Suggested change from a Collaborator (remove this line):
cross_attention_kwargs=cross_attention_kwargs,

@@ -0,0 +1,13 @@
import torch
Comment (Collaborator):
We will remove this file before merge, no?
In the future, maybe it's easier to just post the testing example in the PR description :)

Comment (Member):

Yes. Additionally, I think this should be turned into a proper test suite: test_lora_layers_hunyuan_dit.py. Just including a SLOW test is sufficient for the time being.

Here is an example:

class LoraIntegrationTests(unittest.TestCase):
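For reference, a rough sketch of what a slow test in test_lora_layers_hunyuan_dit.py could look like. The class name, base checkpoint id, prompt, and final assertion are illustrative placeholders rather than the real test:

import gc
import unittest

import torch

from diffusers import HunyuanDiTPipeline
from diffusers.utils.testing_utils import require_torch_gpu, slow


@slow
@require_torch_gpu
class HunyuanDiTLoRAIntegrationTests(unittest.TestCase):
    def tearDown(self):
        # Free GPU memory between tests.
        gc.collect()
        torch.cuda.empty_cache()

    def test_hunyuandit_lora(self):
        pipe = HunyuanDiTPipeline.from_pretrained(
            "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
        ).to("cuda")
        # LoRA checkpoint from the PR description.
        pipe.load_lora_weights("XCLiu/hunyuandit-lora-test")

        image = pipe(
            "a cute cat", num_inference_steps=25, generator=torch.manual_seed(0)
        ).images[0]
        # A real test would compare an image slice against reference values.
        self.assertEqual(image.size, (1024, 1024))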

@yiyixuxu yiyixuxu requested a review from sayakpaul June 11, 2024 05:59
else 128
)

self.unet_name = 'transformer' # to support load_lora_weights
Comment (Member):

Hmm, this feels a little weird to me. But okay for now. I will attempt to refactor this to harmonize. Cc: @yiyixuxu

@sayakpaul (Member) left a comment:

Thank you for adding so quickly. Also glad that it required minimal changes on your end :-)

I have left some comments on the implementation.

Would it make sense to also add a note about the LoRA support (with an example) in the pipeline doc? WDYT?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@gnobitab (Contributor, Author)

Thanks for the comments! I'll update the commit.

My question is: if we don't add cross_attention_kwargs, how do we control lora scale?

I will provide a doc update after all the changes.

@sayakpaul (Member)

I think the comment was about not propagating cross_attention_kwargs throughout the codebase. Just the Transformer block is fine, similar to:

scale_lora_layers(self, lora_scale)
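For context, a rough sketch of the pattern used elsewhere in diffusers: read the scale once at the model boundary, scale the PEFT layers, run the blocks without forwarding cross_attention_kwargs, then unscale. The helper name run_with_lora_scale is made up for illustration; in the actual models this logic lives inline in the transformer's forward.

from typing import Any, Dict, Optional

import torch

from diffusers.utils import USE_PEFT_BACKEND, scale_lora_layers, unscale_lora_layers


def run_with_lora_scale(
    transformer: torch.nn.Module,
    cross_attention_kwargs: Optional[Dict[str, Any]] = None,
    **model_inputs,
):
    # Pop the LoRA scale once here instead of threading cross_attention_kwargs
    # through every transformer block.
    kwargs = dict(cross_attention_kwargs or {})
    lora_scale = kwargs.pop("scale", 1.0)

    if USE_PEFT_BACKEND:
        # Temporarily multiply all PEFT/LoRA adapter outputs by lora_scale.
        scale_lora_layers(transformer, lora_scale)
    try:
        # The blocks themselves never see cross_attention_kwargs.
        output = transformer(**model_inputs)
    finally:
        if USE_PEFT_BACKEND:
            # Restore the original adapter scaling.
            unscale_lora_layers(transformer, lora_scale)
    return output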

@sayakpaul (Member)

@gnobitab please take note of #8670. It is going to help with this support too. IMO, it might be better to wait a little while until that PR gets merged. Once done, we will surely revisit this.

@github-actions (bot)

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Sep 14, 2024
@yiyixuxu yiyixuxu closed this Nov 16, 2024