
[Fix Bug] Crash when training model on multiple GPUs #63

Merged
merged 2 commits into master from fix_async_mx_bug on Sep 29, 2019
Conversation

wkcn (Owner) commented Sep 29, 2019

This PR fixes the bug that crashes training a model on multiple GPUs. #62

The reason is that the NDArrays are released before the asynchronous function runs.

However, this PR allocates the memory before the function runs rather than while it runs. I will try to find a better solution.
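The crash pattern described above is a lifetime bug: nothing holds a reference to the arrays between scheduling and execution, so they can be freed first. A minimal Python sketch of the general pattern, not the actual MobulaOP code; `Buffer` and `launch_async` are hypothetical names standing in for an NDArray and the async dispatch:

```python
import threading

class Buffer:
    """Stands in for an NDArray whose memory is freed when it is released."""
    def __init__(self, data):
        self.data = data

def launch_async(fn, *buffers):
    # Capture strong references to the buffers in the closure, so they
    # cannot be released before the asynchronous function runs.
    def task():
        fn(*buffers)
    t = threading.Thread(target=task)
    t.start()
    return t

results = []
buf = Buffer([1, 2, 3])
t = launch_async(lambda b: results.append(sum(b.data)), buf)
del buf   # the closure still holds a reference, so the data survives
t.join()
print(results)  # [6]
```

Pre-allocating the memory before the function runs, as this PR does, is another way to sidestep the release; holding references until the scheduled function completes would avoid the up-front allocation.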

coveralls commented Sep 29, 2019

Pull Request Test Coverage Report for Build 687

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 85.395%

Totals:
  • Change from base Build 680: 0.0%
  • Covered Lines: 1526
  • Relevant Lines: 1787

💛 - Coveralls

@wkcn wkcn merged commit b1f989d into master Sep 29, 2019
@wkcn wkcn deleted the fix_async_mx_bug branch September 29, 2019 11:32