Large differences in experimental results when BATCH_SIZE = 16 and EPOCH=500 #9
Comments
Sorry about that; there were some typos in evaluator.py. We have already fixed them.
Thanks for the reply. I am sure my code is up to date.
I have trained for 1500 epochs with a batch size of 16, and I get an FID of 12.9409 compared with the 5.9 reported in the paper. Is there any reason for such a difference? Were all the other parameters in the config files the ones used to train the model reported in the paper? Thanks :)
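For reference, the FID being compared here is the standard Fréchet distance between Gaussians fitted to real and generated motion features. Below is a minimal sketch of that formula, assuming precomputed feature means and covariances; the function name and interface are illustrative and not taken from this repo's evaluator.py:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu_real, cov_real, mu_gen, cov_gen):
    """Frechet (FID-style) distance between two Gaussians N(mu, cov)."""
    diff = mu_real - mu_gen
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(cov_real @ cov_gen, disp=False)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_real + cov_gen - 2.0 * covmean))
```

Because the score depends on the estimated means and covariances of the feature sets, differences in the number of generated samples or in the feature extractor can shift it noticeably.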
I am figuring it out. I will contact you as soon as possible.
@tr3e Any news on the issue? I have trained a model with the same configuration as the one in your repo (except the batch size).
But these are my results using your evaluation script:
As you can observe, the results are far from the ones reported in the paper. I am doing ongoing research using your dataset, and in order to make a fair comparison we need to be able to replicate your results. Hope you find out what's going on :)
Hello!
========== MM Distance Summary ==========
(results omitted)
We suggest that you update to the newest code and kindly increase the batch size.
@tr3e I am still unable to replicate the results. Can you provide a contact method so we can discuss this without filling up the issue thread?
Me too.
My email is lianghan@shanghaitech.edu.cn :)
Hi, I found that the MMDist here is lower than what is presented in the paper. When I reproduce your work, as well as my own model, the MMDist is always around 4. Is there a mistake in the calculation?
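For context, MM Dist in this line of text-to-motion evaluation is commonly computed as the mean Euclidean distance between paired text embeddings and generated-motion embeddings. A minimal sketch under that assumption, using precomputed (N, D) embedding arrays rather than this repo's actual evaluator.py internals:

```python
import numpy as np

def mm_dist(text_emb: np.ndarray, motion_emb: np.ndarray) -> float:
    """Mean Euclidean distance between paired text and generated-motion embeddings.

    Both arrays are (N, D); row i of each is assumed to come from the same sample.
    """
    return float(np.linalg.norm(text_emb - motion_emb, axis=1).mean())
```

If the pairing, normalization, or embedding network differs from the one used for the paper's numbers, the resulting MM Dist can shift by margins of this size.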
Thanks for sharing your great work!
I trained the model myself following your README guidelines, but set BATCH_SIZE = 16 and EPOCH = 500 due to a lack of computing resources. In this setting, my trained model performs much worse than the evaluation results presented in the paper. I am wondering whether the exact same training settings are essential to reach performance similar to the paper's model. Also, could you kindly release the checkpoint that was trained exclusively on the training set? That would be really helpful for me!
Thanks for your time and patience!