about evaluation #38
Comments
We actually follow the evaluation protocol of DALL-E. Since the 30,000 captions are sampled at random, I think the difference is normal. It is possible that DM-GAN performs better on our sampled sub-dataset. Maybe I should change the reported number to the official one, thank you.
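To illustrate the sampling step described above: a minimal sketch of drawing 30,000 captions uniformly at random for evaluation. The `sample_eval_captions` helper and the toy caption list are hypothetical, not code from this repository; the point is only that a random subset introduces sampling variance in the metric.

```python
import random

def sample_eval_captions(captions, k=30000, seed=None):
    """Draw k captions uniformly at random (without replacement).

    Because the subset is random, a metric computed on it can drift
    slightly from the number reported on the official full split.
    Passing a seed makes the draw reproducible.
    """
    rng = random.Random(seed)
    return rng.sample(captions, k)

# Hypothetical usage with a toy caption pool:
captions = ["caption %d" % i for i in range(40000)]
subset = sample_eval_captions(captions, k=30000, seed=0)
assert len(subset) == 30000
```

Fixing the seed (or publishing the sampled caption IDs) would let others reproduce the exact evaluation subset and remove this source of variance.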
Thank you for your quick reply.
@FrankCast1e Hi, after comparing the details with previous works, we found that our sampling is slightly different from theirs, but it should be equivalent from the evaluation point of view:

@FrankCast1e
Sorry, I'm confused.
@FrankCast1e Hi,
Hi, thanks a lot.
@FrankCast1e, you can email me at the address given in the paper.
OK, an email has been sent.
Hi, sorry to bother you again. Have you received my mail? Looking forward to hearing from you.
Hi,
How did you get an FID of 26.0 on MS-COCO using DM-GAN? The official result reported in https://github.com/MinfengZhu/DM-GAN is 26.55.
I ran DM-GAN myself and got a similar result (26.54), not 26.0.
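For context on why such numbers can differ: FID is the Fréchet distance between two Gaussians fitted to Inception features, ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^{1/2}), so it depends on the moments of whichever sample set is used. A minimal sketch of the univariate special case, where the matrix square root reduces to a scalar one (`fid_1d` is a hypothetical helper, not code from either repository):

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    # FID between 1-D Gaussians N(mu1, var1) and N(mu2, var2):
    # (mu1 - mu2)^2 + var1 + var2 - 2 * sqrt(var1 * var2).
    # The real metric uses multivariate Inception features, where the
    # last term is a matrix square root, but the structure is the same.
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

# Identical distributions are at distance 0; the score grows with the
# gap in means and variances. Different random 30k subsets of the
# reference set shift the fitted moments and hence the reported FID.
assert fid_1d(0.0, 1.0, 0.0, 1.0) == 0.0
assert fid_1d(1.0, 1.0, 0.0, 1.0) == 1.0
```

This is why a score computed on a random 30,000-caption subset (26.0 here) and the official full-split number (26.55) can legitimately disagree by a few tenths.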