Multi GPU support #20
Issue explained: line 27 of the file, at commit cc0c4be.
This results in wrong dimensionality during distributed training, as batch_size is actually divided by the number of GPUs or replicas during .fit(). I have been thinking for a while about changes in this function, but nothing worked. This is what I tried:
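Roughly the following, a sketch rather than the exact snippet (the original attempt is not preserved here; `batch_size_var` and the reshape are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative reconstruction of the attempt: hold the batch size in a
# tf.Variable and let the Lambda close over it. This pins the layer to one
# fixed batch size, which is exactly what breaks under MirroredStrategy.
batch_size_var = tf.Variable(64, trainable=False, dtype=tf.int32)
reshape = layers.Lambda(lambda x: tf.reshape(x, (batch_size_var, -1)))
```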
Using a tf.Variable inside a lambda is a bad idea; if you can suggest something better, let me know.
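One common alternative, sketched here under the assumption that the Lambda only needs the batch dimension (this is not necessarily what the fix in PR #25 does): read the batch size from the tensor's runtime shape instead of from a Variable.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Read the per-replica batch size from the tensor at run time, so the layer
# keeps working when MirroredStrategy splits the global batch across GPUs.
def reshape_dynamic(x):
    per_replica_batch = tf.shape(x)[0]  # dynamic, resolved per replica
    return tf.reshape(x, (per_replica_batch, -1))

reshape = layers.Lambda(reshape_dynamic)
```

Because tf.shape is evaluated per batch on each replica, no fixed batch size is baked into the graph.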
Fixed.
PR for fix: #25
Using MirroredStrategy for distributed training results in an error
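For reference, a minimal reproduction sketch (the model and data are placeholders, not the project's own): with a global batch size of 64 and 2 replicas, .fit() feeds 32 examples to each GPU, so any layer that hard-codes 64 sees the wrong dimension.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Build and compile inside the strategy scope so variables are mirrored.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])
    model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((640, 32))
y = tf.random.normal((640, 10))
# batch_size=64 is the *global* batch; each replica receives 64 // n_replicas.
model.fit(x, y, batch_size=64, epochs=1)
```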