
In Diffusion Models: why is 'Positional Encoding' not used in the self-attention layers? #4

Open
lionking6792 opened this issue Mar 5, 2024 · 1 comment

Comments


lionking6792 commented Mar 5, 2024

Thanks for this nice hands-on walkthrough of diffusion models.
I'm curious why 'Positional Encoding' (as used in ViT, the vanilla Transformer, etc.) is not used in the self-attention layers here.
Is there a reason for this, and can we be sure the self-attention in the DDPM U-Net still maintains its (pixel-wise) positional information?

@explainingai-code (Owner)

Hi,

This DDPM code was created by replicating the blocks from the official Stable Diffusion implementation, with the goal of building a minimalistic diffusion model purely for understanding purposes (and then progressing to Stable Diffusion from there).

As far as I could see in the official repo, the authors don't add any positional encoding prior to the attention in their transformer block.
That was the only reason I didn't add it here either, and since I was getting decent results without it, I never ended up adding it.

I don't think the authors mention anywhere in the Stable Diffusion repo why that is the case, and the same question has been asked there as well. So to answer why it's not added: I honestly don't know. Maybe adding positional information would indeed lead to significantly better results. Or maybe the padding in the ResNet convolutions already gives the network enough of a positional signal in its feature-map representations that it manages to do fine without any explicitly provided positional encoding.
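As a quick toy illustration of that last point (my own example, not code from either repo): a zero-padded convolution applied to a perfectly constant input already produces border activations that differ from interior ones, which is one way position can leak into the features.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
x = torch.ones(1, 1, 8, 8)  # constant input: carries no positional signal itself

with torch.no_grad():
    y = conv(x)
    # Corner and interior activations differ, because the corner "sees"
    # padded zeros while the interior does not.
    print(y[0, 0, 0, 0].item(), y[0, 0, 4, 4].item())
```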

If you want to experiment with adding positional information to the attention block, you can replicate what is done in the diffusers library here, where they simply add sinusoidal embeddings prior to every attention block.
Do let me know what results you get if you decide to try it out, or if you find more information on this.
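If it helps, here is a rough, hypothetical sketch of that idea (my own code, not the blocks from this repo and not the actual diffusers implementation); the module structure, group-norm groups, and head count are assumptions, and it assumes the channel count is even and divisible by the number of groups/heads:

```python
import math
import torch
import torch.nn as nn


def sinusoidal_embedding_1d(length: int, dim: int) -> torch.Tensor:
    """Standard fixed 1D sinusoidal embedding of shape (length, dim); dim assumed even."""
    position = torch.arange(length).unsqueeze(1).float()                      # (length, 1)
    div_term = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    emb = torch.zeros(length, dim)
    emb[:, 0::2] = torch.sin(position * div_term)
    emb[:, 1::2] = torch.cos(position * div_term)
    return emb


class AttentionWithPosEnc(nn.Module):
    """Self-attention over a (B, C, H, W) feature map, with sinusoidal
    positional embeddings added to the flattened tokens beforehand."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.GroupNorm(8, channels)          # assumes channels % 8 == 0
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Flatten the spatial dimensions into a sequence of H*W tokens.
        tokens = self.norm(x).reshape(b, c, h * w).transpose(1, 2)             # (B, HW, C)
        # Add fixed sinusoidal positional embeddings over the HW positions.
        pos = sinusoidal_embedding_1d(h * w, c).to(tokens.device, tokens.dtype)
        tokens = tokens + pos.unsqueeze(0)
        out, _ = self.attn(tokens, tokens, tokens)
        # Fold back to (B, C, H, W) and apply a residual connection.
        out = out.transpose(1, 2).reshape(b, c, h, w)
        return x + out


# Quick shape check on a dummy feature map.
block = AttentionWithPosEnc(channels=64)
feat = torch.randn(2, 64, 16, 16)
print(block(feat).shape)  # torch.Size([2, 64, 16, 16])
```

This just adds fixed embeddings over the flattened H*W positions; you could also swap in learned or 2D-aware embeddings if you want to compare.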
