tutorial: adversarial training seems slow. maybe i'm wrong #5
Comments
Without looking at the code, I bet we're missing a stop_gradient somewhere.
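For context on why a stop_gradient would matter here: the adversarial perturbation is computed from the gradient of the loss with respect to the *input*, and that perturbation should be treated as a constant when training. A minimal NumPy sketch of the FGSM-style step, using a hypothetical logistic-regression model and made-up data (not this project's code):

```python
import numpy as np

# Hypothetical linear model: loss = log(1 + exp(-y * w.x))
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # fixed weights (made-up)
x = rng.normal(size=4)   # one input example (made-up)
y = 1.0                  # its label

def loss(x):
    return np.log1p(np.exp(-y * w.dot(x)))

def grad_x(x):
    # Gradient of the loss w.r.t. the input x (not the weights):
    # d/dx log(1 + exp(-y w.x)) = -y * sigmoid(-y w.x) * w
    s = 1.0 / (1.0 + np.exp(y * w.dot(x)))
    return -y * s * w

# FGSM step: perturb the input in the direction that increases the loss.
# In a TF graph, x_adv would be wrapped in stop_gradient so that training
# does not backpropagate through the perturbation's own gradient computation.
eps = 0.25
x_adv = x + eps * np.sign(grad_x(x))

assert loss(x_adv) > loss(x)  # the adversarial loss exceeds the clean loss
```

For a linear logit the sign step is exactly the worst-case perturbation in the L-infinity ball, so the adversarial loss is strictly higher whenever the weights are nonzero.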
3.7 seconds per 100 batches for naive training.
52 seconds per 100 batches for adversarial training.
In pylearn2, my result with adversarial training takes 3 sec per full epoch.
In pylearn2, without adversarial training, my code runs in 1 sec per full epoch.
Naive training is one forward and one backward pass; adversarial training should add roughly one more forward-backward to generate the adversarial examples, so it should cost about 2x, not the ~14x (52 / 3.7) we're seeing.
Whoa, actually something is seriously weird.
You are right: the issue was due to my naive implementation, which redefined the adversarial loss in the TF graph at each iteration (i.e., per batch). I fixed it by introducing a new function that adds the loss to the graph once and returns the TF variable to be evaluated at each iteration: d7a95d3
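The slowdown pattern described above (defining ops inside the training loop, so the static graph grows every iteration) can be shown framework-agnostically. A minimal sketch with a hypothetical `Graph` class standing in for a TF graph; this is an illustration of the antipattern and the fix, not the project's actual code:

```python
# Minimal stand-in for a static computation graph that can only grow:
# every add_loss() call appends new (duplicate) nodes, as tf.* calls
# inside a training loop would in graph-mode TensorFlow.
class Graph:
    def __init__(self):
        self.ops = []

    def add_loss(self):
        self.ops.append("adv_loss")
        return len(self.ops) - 1  # handle to the newly created op

# Buggy pattern: the loss is redefined every iteration, so the graph
# grows linearly and each run has ever more nodes to manage.
graph = Graph()
for step in range(100):
    loss_op = graph.add_loss()  # new nodes created per batch
assert len(graph.ops) == 100

# Fixed pattern (per commit d7a95d3): build the loss once, then reuse
# the returned handle at every iteration.
graph = Graph()
loss_op = graph.add_loss()  # graph construction happens once
for step in range(100):
    pass  # evaluate loss_op each step; no new ops are created
assert len(graph.ops) == 1
```

In real TF 1.x code, calling `graph.finalize()` before the training loop makes this class of bug fail loudly instead of silently slowing down.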
We should benchmark it and make sure the runtime is now correct.