
Cannot reproduce the FID results of provided pre-trained models (LSUN - Churches & CelebA) #213

Closed
hieuphung97 opened this issue Jan 2, 2023 · 11 comments


@hieuphung97

Has anyone had issues re-evaluating the provided pre-trained models on the LSUN-Churches and CelebA datasets?
I cannot get the FIDs reported in the paper.
In fact, my results are far from the reported ones (LSUN-Churches: 4.02 in the paper vs. 11.5 here; CelebA: 5.11 in the paper vs. 17.4 here).
[screenshots: FID evaluation logs for lsun-church and celeba]

The only difference I noticed is that I sampled only 10k images for each case instead of the 50k used in the paper (to save time). I don't know whether the number of samples has such a significant impact.
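
(For reference, FID fits Gaussians to Inception features of the real and generated sets, so with only 10k samples the covariance estimate of the generated set is noisy, and FID is known to be biased upward at small sample sizes; this is why papers typically report it at 50k:)

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```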

@dongzhuoyao
Member

I think a larger number of samples, like 50k, could further drop the FID, maybe by ~30%?

@ader47

ader47 commented Mar 23, 2023

Hi @hieuphung97, have you reproduced the results reported in the paper? I have sampled 35k images using the provided pre-trained model on lsun_churches, calculated the FID via torch-fidelity, and got 15.89.
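
(For anyone following along, a minimal torch-fidelity call looks like the sketch below; the directory names are placeholders, and the real set must be pre-processed to the same 256×256 resolution as the samples.)

```python
# Minimal sketch, assuming two folders of same-resolution PNGs/JPGs.
import torch_fidelity

metrics = torch_fidelity.calculate_metrics(
    input1='lsun_churches_samples',  # generated images (placeholder path)
    input2='lsun_churches_val',      # real images (placeholder path)
    cuda=True,
    fid=True,
)
print(metrics['frechet_inception_distance'])
```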

@ThisisBillhe

Hi @hieuphung97, I wonder if you are using the CelebA or CelebA-HQ dataset? I got an FID of 27 on CelebA-HQ.

@ader47

ader47 commented Apr 19, 2023

> Hi @hieuphung97, I wonder if you are using the CelebA or CelebA-HQ dataset? I got an FID of 27 on CelebA-HQ.

Is your dataset for calculating FID pre-processed correctly? It's quite important.

@ThisisBillhe

> > Hi @hieuphung97, I wonder if you are using the CelebA or CelebA-HQ dataset? I got an FID of 27 on CelebA-HQ.
>
> Is your dataset for calculating FID pre-processed correctly? It's quite important.

I use pytorch-fid, which directly takes path-to-dataset-folder as an argument... so it should not be a pre-processing issue.
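
(pytorch-fid can be driven from Python as well as from the command line; a minimal sketch with placeholder folder names:)

```python
# Minimal sketch, assuming pytorch-fid is installed; roughly equivalent to
# `python -m pytorch_fid <real_dir> <fake_dir>` on the command line.
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

fid = calculate_fid_given_paths(
    ['celeba_hq_256', 'celeba_hq_samples'],  # real and generated dirs (placeholders)
    batch_size=50,
    device='cuda' if torch.cuda.is_available() else 'cpu',
    dims=2048,  # standard InceptionV3 pool3 features
)
print(fid)
```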

@ader47

ader47 commented Apr 19, 2023

Are the images in your dataset 256×256?

@ThisisBillhe

> Are the images in your dataset 256×256?

Yes... I downloaded the CelebA-HQ dataset from here. But I would greatly appreciate it if you could provide the URL of your dataset; I'd give it a try.
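
(A common pre-processing for 256×256 FID reference sets is a short-side resize followed by a center crop; a minimal sketch, and an assumption about what the repo's data pipeline does -- check the dataset config you actually evaluated with:)

```python
# Minimal sketch: short-side resize + center crop to 256x256.
from PIL import Image

def preprocess(path, size=256):
    img = Image.open(path).convert('RGB')
    scale = size / min(img.size)  # img.size is (w, h)
    img = img.resize((round(img.width * scale),
                      round(img.height * scale)), Image.BICUBIC)
    left = (img.width - size) // 2
    top = (img.height - size) // 2
    return img.crop((left, top, left + size, top + size))
```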

@ThisisBillhe

> Hi @hieuphung97, have you reproduced the results reported in the paper? I have sampled 35k images using the provided pre-trained model on lsun_churches, calculated the FID via torch-fidelity, and got 15.89.

Update: I also got an FID of ~17 on the CelebA dataset, similar to @hieuphung97's result.

@hieuphung97
Author

Hi @ader47 @ThisisBillhe,
The issue is solved by using the correct inference config and the correct number of generated images:

  • DDIM steps: 200
  • eta: 0
  • Number of samples: 50k

@ThisisBillhe I use the CelebA-HQ dataset.
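
(A minimal sketch of what that config corresponds to with the repo's DDIMSampler; `model` is assumed to be an already-loaded LatentDiffusion checkpoint, and the latent shape is a placeholder -- take it from the model's config. The repo's scripts/sample_diffusion.py exposes the same knobs as CLI flags.)

```python
# Minimal sketch: deterministic DDIM sampling (eta=0) with 200 steps,
# assuming `model` is a loaded LatentDiffusion instance from this repo.
import torch
from ldm.models.diffusion.ddim import DDIMSampler

sampler = DDIMSampler(model)
with torch.no_grad():
    latents, _ = sampler.sample(
        S=200,              # DDIM steps, as in the config above
        batch_size=16,
        shape=[3, 64, 64],  # latent (C, H, W) -- placeholder, model-dependent
        eta=0.0,            # deterministic DDIM
        verbose=False,
    )
    imgs = model.decode_first_stage(latents)          # back to pixel space
    imgs = torch.clamp((imgs + 1.0) / 2.0, 0.0, 1.0)  # map [-1, 1] -> [0, 1]
```

(Repeat until 50k images are written to disk, then compute FID against the pre-processed reference set.)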

@ader47

ader47 commented Apr 24, 2023

Thank you so much :)

@Alii-Ganjj

Alii-Ganjj commented Aug 15, 2023

> Hi @ader47 @ThisisBillhe,
> The issue is solved by using the correct inference config and the correct number of generated images:
>
> • DDIM steps: 200
> • eta: 0
> • Number of samples: 50k
>
> @ThisisBillhe I use the CelebA-HQ dataset.

Hi @hieuphung97,
I am trying to reproduce the results for LSUN-Churches. I used the same settings you mentioned, but I got an FID of 17.58, which is far from 4.02. How did you solve this?

Details of what I did: I sampled 50k images with the settings you mentioned. Then I wrote a script that instantiates two dataloaders, with the same pre-processing as the original ones, for the real and fake image directories. I pass these two to torch-fidelity, as the paper suggests.
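
(One pitfall worth ruling out: when passing dataset objects rather than directories, torch-fidelity expects torch Datasets, not DataLoaders, and each item should be a uint8 CHW tensor in [0, 255]; normalized float tensors will skew the score. A minimal sketch with assumed folder names:)

```python
# Minimal sketch: wrapping an image folder as a Dataset that yields
# uint8 CHW tensors, which is what torch-fidelity's feature extractor expects.
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
import torch_fidelity

class ImageFolderDataset(Dataset):
    def __init__(self, root):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root)
            if f.lower().endswith(('.png', '.jpg', '.jpeg'))
        )
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, i):
        img = Image.open(self.paths[i]).convert('RGB')          # assumed 256x256
        arr = torch.from_numpy(np.array(img, dtype=np.uint8))   # HWC uint8
        return arr.permute(2, 0, 1).contiguous()                # -> CHW uint8

metrics = torch_fidelity.calculate_metrics(
    input1=ImageFolderDataset('samples_50k'),        # placeholder dirs
    input2=ImageFolderDataset('lsun_churches_ref'),
    cuda=True, fid=True,
)
print(metrics['frechet_inception_distance'])
```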
