
Conversation

@Wang-Xiaodong1899 (Contributor)

What does this PR do?

The UNetSpatioTemporalConditionModel class was designed for Stable Video Diffusion (SVD), so its default num_attention_heads should be [5, 10, 20, 20], as the SVD config specifies. Users who initialize UNetSpatioTemporalConditionModel directly via its constructor, rather than loading it with from_pretrained() and the pretrained config, may not realize they have created a model with a different number of attention heads. Because the head count does not change the shapes of the attention projection weights, loading pretrained weights into such a model raises no warning or error, and the attention output is silently wrong.

We should avoid this situation entirely; this PR fixes the default to match the SVD config.
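The failure mode is subtle because the head count only changes how the channel axis is partitioned inside the attention computation, not the shapes of any weight tensors, so a state-dict load succeeds without complaint. A minimal pure-Python sketch of this (illustrative only, not the diffusers implementation; q, k, and v are taken directly from the input instead of learned projections):

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(x, num_heads):
    """Toy multi-head self-attention over x (q = k = v = x), illustrative only."""
    seq, dim = len(x), len(x[0])
    hd = dim // num_heads  # head dimension: same channels, different split
    out = [[0.0] * dim for _ in range(seq)]
    for h in range(num_heads):
        lo, hi = h * hd, (h + 1) * hd
        q = [row[lo:hi] for row in x]  # this head's slice of the channels
        scores = [[sum(a * b for a, b in zip(q[i], q[j])) / math.sqrt(hd)
                   for j in range(seq)] for i in range(seq)]
        w = [softmax(r) for r in scores]  # softmax is per head
        for i in range(seq):
            for d in range(hd):
                out[i][lo + d] = sum(w[i][j] * q[j][d] for j in range(seq))
    return out

x = [[0.1, 0.9, -0.3, 0.5], [0.7, -0.2, 0.4, 0.0]]
# Same input, same "weights", same output shape -- but different head counts
# partition the channels differently, so the results silently disagree.
print(attention(x, num_heads=2) == attention(x, num_heads=1))  # prints False
```

The same reasoning applies to the real model: a 1280-channel block's to_q/to_k/to_v projections are (1280, 1280) regardless of whether the block is configured for 10 or 20 heads, so shape checks during weight loading cannot catch the mismatch, and only a correct constructor default protects users who bypass the pretrained config.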

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@DN6 (Collaborator) left a comment


Good catch 👍🏽

@sayakpaul (Member)

@DN6 good to merge?

@yiyixuxu yiyixuxu merged commit 6f2b310 into huggingface:main Mar 9, 2024
@Wang-Xiaodong1899 Wang-Xiaodong1899 deleted the fix_unet_spatio_temporal_condition_num_attention_heads branch January 15, 2025 09:43
