
Zero initialization of convolutions #3

Open
nicolas-dufour opened this issue Nov 25, 2022 · 1 comment

Comments

@nicolas-dufour

Hi,
I have observed that the code carefully initializes certain convolutions to zero.
Do you have a reference for this design decision?

Thanks!

@FutureXiang

Hi, I am also confused about the weight initialization in different implementations.

Each implementation has its own initialization style

  • In the official DDPM repo, the convs before residual connections and the final conv are initialized with zeros, while the other convs are initialized with zero-mean uniform distributions.
  • In the ADM guided-diffusion repo, the convs before residual connections and the final conv are also initialized with zeros, while the others use the PyTorch default.
  • In the Score-Based SDE repo, the implementation covers both the DDPM-style and NCSN-style initializations.
  • In this repo, I think the scheme is similar to the Score-Based SDE one, but it is still different from the three codebases mentioned above (a sketch of the common zero-init pattern is below).
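
For concreteness, here is a minimal PyTorch sketch of the zero-initialization pattern these repos share (the class and function names are my own, not taken from any of the codebases): the conv that closes each residual branch and the final output conv get zero weights and biases, so every residual block starts out as an identity mapping and the network initially predicts zero.

```python
import torch
import torch.nn as nn

def zero_init(module: nn.Module) -> nn.Module:
    """Zero out the weights and bias of a conv/linear layer in place."""
    nn.init.zeros_(module.weight)
    if module.bias is not None:
        nn.init.zeros_(module.bias)
    return module

class ResBlock(nn.Module):
    """Toy residual block: only the last conv is zero-initialized,
    so the block is an identity function at the start of training."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)              # default init
        self.conv2 = zero_init(nn.Conv2d(channels, channels, 3, padding=1))   # zero init

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        h = self.conv2(h)   # outputs zeros at initialization
        return x + h        # residual connection -> identity at initialization

# The final projection back to image space is also zero-initialized,
# so the whole network outputs zeros before the first gradient step.
final_conv = zero_init(nn.Conv2d(128, 3, 3, padding=1))
```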

My experiments and observations

Recently, I tried to train diffusion models (DDPM, DDIM, EDM, ...) with the original basic UNet (35.7M parameters) on CIFAR-10. Here are some observations:

  • I can successfully reproduce the FIDs reported by DDPM and DDIM without any custom weight initialization. All parameters are initialized with the PyTorch default.
  • However, my optimal learning rate differs from the official one (1e-4 vs. 2e-4). When I tried the official one (2e-4), the FID got far worse.
  • I trained the EDM model with my no-custom-initialization 35.7M mini network and my learning rate, and the results are reasonable (better than DDIM).
  • However, when I trained with the EDM-proposed 10e-4 learning rate, the FID got far worse. To confirm this, I replaced networks.py with mine and ran the official EDM code; the FID was still bad.

Seemingly, the mathematical part of a diffusion model (training objective + sampler) can be decoupled as an independent component, but the neural network (and its initialization) may be strongly coupled with the other hyper-parameters (?).

I wonder if this is really the case, and why the initialization and hyper-parameters matter so much.
