
the problem about eval.py #34

Closed
LearningJack opened this issue Oct 28, 2021 · 2 comments

Comments

@LearningJack

In the eval.py code, why did you comment out net_ig.eval()? So net_ig.eval() is never called. But normally when we test, we load the model and then call model.eval().

@odegeasslbc
Owner

Feel free to uncomment it and check out the difference in results. It's a little involved why I comment out .eval(): basically, the running mean and running std in the BatchNorm layers are not saved properly when I do EMA optimization of the Generator. It's common in many other projects not to use eval mode on models trained as GANs.
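The mismatch can be sketched in plain Python (no PyTorch here; all names and the `beta` value are illustrative): an EMA copy of the generator averages only the learnable parameters, while BatchNorm's running mean/std are buffers updated by forward passes, so the EMA copy's buffers can stay stale and disagree with its averaged weights once `.eval()` switches BatchNorm over to those buffers.

```python
# Illustrative sketch of the EMA/BatchNorm issue (not PyTorch).
# EMA averages the generator's *parameters*; BatchNorm's
# running_mean / running_var are *buffers*, updated only by forward
# passes, so an EMA copy that never runs forward keeps stale buffers.

def ema_update(ema_params, params, beta=0.999):
    """In-place exponential moving average over learnable parameters."""
    for name, value in params.items():
        ema_params[name] = beta * ema_params[name] + (1 - beta) * value

# Learnable parameters: covered by the EMA update.
params     = {"conv.weight": 1.0}
ema_params = {"conv.weight": 0.0}

# BatchNorm buffers: the live model's stats move, the EMA copy's don't.
buffers     = {"bn.running_mean": 5.0}
ema_buffers = {"bn.running_mean": 0.0}  # never touched by ema_update

for _ in range(10):
    ema_update(ema_params, params)

print(ema_params["conv.weight"])       # drifts toward 1.0 as expected
print(ema_buffers["bn.running_mean"])  # still 0.0: mismatch under .eval()
```

In train mode BatchNorm uses the current batch's statistics instead of the stale buffers, which is why leaving `.eval()` commented out can produce better samples from an EMA generator.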

@yangyu615

What's the difference between optimizerG.state_dict() and netG.state_dict()? Thank you!
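In short: `netG.state_dict()` holds the network's learned weights (what you need for inference), while `optimizerG.state_dict()` holds the optimizer's own state, such as momentum buffers and hyperparameters (what you need to resume training exactly where you left off). A minimal sketch in plain Python (not PyTorch; the tiny model and SGD-with-momentum classes are illustrative):

```python
# Illustrative sketch (not PyTorch): model state vs. optimizer state.

class TinyModel:
    def __init__(self):
        self.weights = {"fc.weight": 0.5}

    def state_dict(self):
        # Inference needs only the learned weights.
        return dict(self.weights)

class TinySGDMomentum:
    def __init__(self, model, lr=0.1, momentum=0.9):
        self.model = model
        self.lr, self.momentum = lr, momentum
        self.velocity = {k: 0.0 for k in model.weights}

    def step(self, grads):
        for k, g in grads.items():
            self.velocity[k] = self.momentum * self.velocity[k] + g
            self.model.weights[k] -= self.lr * self.velocity[k]

    def state_dict(self):
        # Resuming training needs momentum buffers + hyperparameters.
        return {"velocity": dict(self.velocity),
                "lr": self.lr, "momentum": self.momentum}

netG = TinyModel()
optimizerG = TinySGDMomentum(netG)
optimizerG.step({"fc.weight": 1.0})

print(netG.state_dict())        # just the weight (moved from 0.5 toward 0.4)
print(optimizerG.state_dict())  # velocity buffer, lr, momentum
```

Saving both dicts in a checkpoint lets you restore the optimizer's momentum on resume; saving only the model's is enough for evaluation.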
