
Question about FID score #6

Closed
dcahn12 opened this issue Jan 13, 2022 · 4 comments

Comments

dcahn12 commented Jan 13, 2022

Hi, thank you for your great work!

I have a question about how to evaluate the FID score on your generated images.
I tried to reproduce the FID score using your pre-trained DuCo-StoryGAN weights, but I couldn't reproduce the results shown in Table 1 of your paper.

Could you elaborate on how to reproduce your FID score?

Thanks!
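
For context, what I am computing is the standard Fréchet Inception Distance between the Inception activation statistics of real and generated frames. A minimal, generic sketch of that formula (not the repo's eval_vfid.py; the function and array names are my own) is:

```python
# Generic FID sketch, assuming precomputed Inception activations of shape (N, 2048)
# for the real and generated images (this is not the repository's eval_vfid.py).
import numpy as np
from scipy import linalg

def frechet_distance(real_acts, fake_acts, eps=1e-6):
    mu_r, mu_f = real_acts.mean(axis=0), fake_acts.mean(axis=0)
    cov_r = np.cov(real_acts, rowvar=False)
    cov_f = np.cov(fake_acts, rowvar=False)
    diff = mu_r - mu_f
    # Matrix square root of the covariance product; add a small jitter if it is singular.
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if not np.isfinite(covmean).all():
        offset = np.eye(cov_r.shape[0]) * eps
        covmean, _ = linalg.sqrtm((cov_r + offset) @ (cov_f + offset), disp=False)
    covmean = covmean.real
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_f) - 2.0 * np.trace(covmean))
```

Any gap between my number and Table 1 would then presumably come from the checkpoint, the data split, or the reference images rather than the formula itself.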

@hyeonjinXZ

Hi,
I see you evaluated the FID score.

  1. Could you take a look at my issue How to run eval_vfid.py code? #7 and leave a comment?
  2. What FID score did you get?

adymaharana (Owner) commented Jan 25, 2022

Hi @dcahn12 , thank you for bringing this issue to my notice. I had posted the wrong checkpoint in the repository, and the inference/evaluation scripts were also using the 'val' mode instead of 'test' in some places. I have updated the scripts (see the FID script to make sure you have the latest one), the link to the best checkpoint, and the test predictions in the StoryViz repository. Apologies for the inconvenience.
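
For anyone re-running the numbers, the split fix amounts to making sure the evaluation scripts load the test split rather than val; a hypothetical sketch of that kind of switch (flag and split names are illustrative, not the actual repo code):

```python
# Hypothetical illustration of the val -> test fix; the flag name and defaults
# are assumptions, not the repository's actual identifiers.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--split', choices=['train', 'val', 'test'], default='test',
                    help='data split used for inference/evaluation (was effectively val before)')
args = parser.parse_args()
print(f'Running inference/evaluation on the {args.split} split')
```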

@Hyeonjin1989 please cross-check your eval_vfid script against the one available in the StoryViz repo. I have started looking into the FID issue with this repository and will update accordingly asap. Thank you for your patience!

@hyeonjinXZ

@adymaharana Thank you for your reply! I think this repo's evaluation scripts are the same as the ones you changed in StoryViz, so I suspect there is another issue in this repo. I hope this repo gets updated as well!
Thank you for all your effort!

@adymaharana (Owner)

@dcahn12 I spoke too soon. With the help of @Hyeonjin1989, I discovered that there was an issue with the FID calculations; see the latest update in the Readme. I have fixed it now - the fix involved adding a single line to the eval_vfid.py script to avoid pulling reference images from a hard-coded directory on my local system. With the correct reference images, the FID scores are much higher. I apologize to you and @Hyeonjin1989 for the confusion.
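
For clarity, the change is of this flavor (a hypothetical sketch; the actual one-line fix lives in eval_vfid.py, and the directory/argument names below are assumptions):

```python
# Hypothetical sketch of the one-line fix described above; path and directory
# names are assumptions, not the repository's actual values.
import os

def reference_image_dir(data_dir):
    # Before: the reference (ground-truth) images came from a directory
    # hard-coded to my local machine, e.g. '/local/path/to/test_frames'.
    # After: derive it from the user-supplied data root so FID is computed
    # against the correct reference images.
    return os.path.join(data_dir, 'test_frames')

print(reference_image_dir('./data'))  # example usage with an assumed data root
```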

@Hyeonjin1989 please validate your trained checkpoint against the FID scores now available in the Readme. Kindly ignore the FID values in the paper for now.

@dcahn12 dcahn12 closed this as completed Feb 8, 2022