
FID score of CelebA-HQ 256x256 #30

Open
jychoi118 opened this issue Nov 3, 2021 · 1 comment

@jychoi118

I'm quite confused about the FID of CelebA-HQ.

The NCP-VAE and VAEBM papers report it as 40.26, while the recent LSGM paper reports 29.76.

Was NVAE further improved after the publication of NCP-VAE and VAEBM?

@arash-vahdat (Contributor) commented Nov 7, 2021

In NCP-VAE and VAEBM, we trained new NVAEs from scratch using a Gaussian image decoder (i.e., for p(x|z)). This was primarily because VAEBM needs to backpropagate through the images generated by the decoder, which is easy to formulate for a Gaussian decoder using the reparameterization trick. NCP-VAE did not need this decoder type, as it is formulated entirely in the latent space. But at the time, we didn't know about the implications of choosing a Gaussian decoder.
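The reparameterization trick mentioned above can be sketched in a few lines. This is a minimal illustration, not NVAE's actual code; the shapes and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian_decoder(mu, log_sigma, eps):
    # Reparameterized sample from p(x|z) = N(mu, sigma^2):
    # x = mu + sigma * eps, with eps ~ N(0, I) drawn outside the graph.
    # x is then a smooth function of the decoder outputs (mu, log_sigma),
    # so gradients from a loss on x (e.g., an energy function in VAEBM)
    # can flow back into the decoder.
    return mu + np.exp(log_sigma) * eps

# toy "decoded image": batch of 2 RGB 8x8 images (shapes are illustrative)
mu = rng.standard_normal((2, 3, 8, 8))
log_sigma = np.full((2, 3, 8, 8), -1.0)
eps = rng.standard_normal(mu.shape)

x = sample_gaussian_decoder(mu, log_sigma, eps)

# Finite-difference check: with eps held fixed, dx/dmu = 1 elementwise,
# which is what makes backprop through the sampled image well-defined.
h = 1e-6
dx_dmu = (sample_gaussian_decoder(mu + h, log_sigma, eps) - x) / h
assert np.allclose(dx_dmu, 1.0, atol=1e-4)
```

In an autograd framework the same idea is usually expressed as a reparameterized (pathwise) sample, so no manual derivative is needed.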

In the original NVAE paper, and later in LSGM, we used a discretized logistic mixture distribution in the decoder. You can read about this distribution in this paper. When writing the LSGM paper, we went back and computed FID for the original publicly available NVAE checkpoints, and we were also surprised to see that they obtain a lower FID (29.76) than the NVAEs trained for NCP-VAE and VAEBM (~40).
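For concreteness, here is a simplified single-channel sketch of the discretized logistic mixture likelihood. The real PixelCNN++-style decoder additionally handles the edge bins at ±1 with open tails and conditions the G and B means on the sampled R value; the names and values below are purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_mixture_logp(x, pi, mu, log_s, bin_size=1 / 127.5):
    # Log-likelihood of one pixel x in [-1, 1] under a K-component
    # mixture of discretized logistics. Each component integrates the
    # logistic density over the pixel's quantization bin:
    #   CDF(x + bin/2) - CDF(x - bin/2)
    # (simplified: the edge bins at -1 and 1 are not special-cased here)
    s = np.exp(log_s)
    cdf_plus = sigmoid((x + bin_size / 2 - mu) / s)
    cdf_minus = sigmoid((x - bin_size / 2 - mu) / s)
    probs = np.clip(cdf_plus - cdf_minus, 1e-12, None)
    return np.log(np.sum(pi * probs))

# two-component mixture, first component centered on the observed value
x = 0.25
pi = np.array([0.7, 0.3])       # mixture weights
mu = np.array([0.25, -0.5])     # component means
log_s = np.array([-3.0, -3.0])  # log scales

logp = discretized_logistic_mixture_logp(x, pi, mu, log_s)
```

Because the likelihood is assigned to discrete bins rather than a density on continuous values, it matches the quantized nature of 8-bit pixel intensities much more closely than a Gaussian density does.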

Here is why we think the discretized logistic mixture improves the FID score: it is a better statistical model of pixel intensities and captures simple conditional dependencies between the RGB channels, whereas the Gaussian decoder predicts the three channels independently. Our experiments show that the discretized logistic mixture requires encoding less information in the latent space to reconstruct input images, which in turn leaves fewer holes in the prior distribution. We believe this is why the FID score improves with this decoder.

I hope this clarifies the confusion. If you have any further questions, please let me know here.
