How to get D&C score #4
Hi @ygjwd12345, thanks for your interest. We sampled 19 times for two reasons: 1. The D&C score is sensitive to the number of images in our setting, so we sampled multiple times; 2. We wanted to make sure these scores (LPIPS for diversity, and D&C for fidelity and diversity) come from the same test results.
Would you mind telling me how to sample 19 times?
Meanwhile, according to the original paper (The Unreasonable Effectiveness of Deep Features as a Perceptual Metric) and the official GitHub repository (https://github.com/richzhang/PerceptualSimilarity), a smaller LPIPS should be better. I also wonder how the real images are used to measure LPIPS, FID, and D&C.
If there are more than 100 images, how do you sample them? Why not test all images directly?
Please refer to the BicycleGAN paper, where LPIPS is used to evaluate the diversity of the results. A larger score means a larger difference between the generated results.
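For reference, the BicycleGAN-style diversity score averages a perceptual distance over all unordered pairs of outputs generated for the same input (19 samples give 171 pairs). A minimal sketch, using a plain L1 distance as a stand-in for the LPIPS network so it runs without the `lpips` package (in practice you would pass the official model's distance as `dist_fn`):

```python
import itertools
import numpy as np

def mean_pairwise_distance(samples, dist_fn):
    """Average dist_fn over all unordered pairs of generated samples."""
    pairs = list(itertools.combinations(range(len(samples)), 2))
    return float(np.mean([dist_fn(samples[i], samples[j]) for i, j in pairs]))

# Stand-in distance; replace with the LPIPS model for real measurements.
l1_dist = lambda a, b: float(np.abs(a - b).sum())
```

With 19 generated samples per test image, the per-image score is the mean over the 171 pairwise distances, then averaged over all test images.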
Please refer to BicycleGAN and MUNIT for how to get multiple and diverse results.
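For context, BicycleGAN's `get_z_random` simply draws a Gaussian latent code, and diverse outputs come from repeated forward passes with fresh codes. A rough numpy sketch of that idea (the actual implementation uses `torch.randn` and the trained generator; `sample_diverse` and its arguments are hypothetical names for illustration):

```python
import numpy as np

def get_z_random(batch_size, nz, rng=None):
    # Gaussian latent code, mirroring BicycleGAN's get_z_random with 'gauss'
    rng = rng if rng is not None else np.random.default_rng()
    return rng.standard_normal((batch_size, nz)).astype(np.float32)

def sample_diverse(generator, image, n_samples=19, nz=8):
    # One input image -> n_samples outputs, each driven by a fresh random z
    return [generator(image, get_z_random(1, nz)) for _ in range(n_samples)]
```

Running this once per test image yields the 19 samples per image used for both the LPIPS and D&C evaluations.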
However, there is no get_z_random in your code, and "sh ./scripts/test_fid.sh" cannot produce it.
According to "Similar to LPIPS, we first sampled 19 pairs for each image and then used the code at https://github.com/mseitzer/pytorch-fid to extract the features of real and generated images. Finally, we fed these 1900 generated samples and real features with 2048-dimensions to the PRDC function provided in https://github.com/clovaai/generative-evaluation-prdc to calculate these scores."
So do you run the test 19 times to generate enough samples?
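For anyone reproducing the quoted pipeline: after extracting 2048-dimensional features for real and generated images, density and coverage are computed from the k-NN radii of the real features. A self-contained numpy sketch of that step, assuming Euclidean distances and a hypothetical `nearest_k=5` (use the official `compute_prdc` from the clovaai repository for reported numbers):

```python
import numpy as np

def compute_dc(real_features, fake_features, nearest_k=5):
    """Density & Coverage (Naeem et al., 2020), minimal numpy version."""
    def pairwise_dist(a, b):
        # Euclidean distances between all rows of a and all rows of b
        return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

    real_dists = pairwise_dist(real_features, real_features)
    # Radius of each real sample's k-NN ball (column 0 is the self-distance)
    radii = np.sort(real_dists, axis=1)[:, nearest_k]
    cross = pairwise_dist(real_features, fake_features)  # (N_real, N_fake)
    # Density: mean number of real k-NN balls containing each fake sample, / k
    density = (cross < radii[:, None]).sum(0).mean() / nearest_k
    # Coverage: fraction of real balls containing at least one fake sample
    coverage = (cross.min(axis=1) < radii).mean()
    return float(density), float(coverage)
```

Here the 1900 generated features (19 per image) and the real features would be passed as `fake_features` and `real_features` respectively.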