
Fix multi-gpu bug #2

Merged: 1 commit merged into chrvt:master on Feb 1, 2022

Conversation

tremblerz (Contributor)

Whenever there is more than one GPU, the code invokes first_batch, which goes on to call self.model(). While the other call sites of self.model() pass on forward_kwargs, the implementation of first_batch does not, which leads to problems. I have patched it by simply passing on forward_kwargs wherever necessary. Tested with different numbers of GPUs.
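
For context, here is a minimal sketch of the kind of change described above. The trainer class, method names, and the DataParallel wrapping are assumptions for illustration only, not the repository's actual code:

```python
import torch
import torch.nn as nn

class Trainer:
    """Illustrative trainer; the structure here is assumed, not taken from the repo."""

    def __init__(self, model, forward_kwargs=None):
        self.model = model
        self.forward_kwargs = forward_kwargs or {}
        # With more than one GPU the model gets wrapped, and the multi-GPU
        # path runs the first batch once up front before regular training.
        if torch.cuda.device_count() > 1:
            self.model = nn.DataParallel(self.model)

    def first_batch(self, x):
        # Before the fix: self.model(x), so forward_kwargs were silently dropped
        # on the multi-GPU path. After the fix: forward the same keyword
        # arguments that every other call site already passes.
        with torch.no_grad():
            return self.model(x, **self.forward_kwargs)

    def training_step(self, x):
        # Other call sites already pass forward_kwargs.
        return self.model(x, **self.forward_kwargs)
```

The fix is the same at every affected call site: wherever self.model() is invoked on the multi-GPU path, forward the same forward_kwargs that the single-GPU path already passes.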

@tremblerz (Contributor, Author)

Fixes #1

chrvt merged commit 025b473 into chrvt:master on Feb 1, 2022