Fix mp serialization for integer nn.Parameter on CUDA #56529
Conversation
💊 CI failures summary and remediations
As of commit 49a95e7 (more details on the Dr. CI page): 2 failures not recognized by patterns.
This comment was automatically generated by Dr. CI.
Thanks for the fix
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
test/test_multiprocessing.py (outdated)

@@ -833,12 +833,21 @@ def test_cuda_parameter_sharing(self):
    @unittest.skipIf(NO_MULTIPROCESSING_SPAWN, "Disabled for environments that \
                     don't support multiprocessing with spawn start method")
    def test_integer_parameter_serialization(self):
        iparam = torch.nn.Parameter(torch.tensor(0, dtype=torch.int64), requires_grad=False)
        for device in ['cpu', 'cuda']:
Test errors are real; you can skip this test if CUDA IPC is not available, like the tests above. Also, a nit: prefer tuples ('cpu', 'cuda') over lists.
@ngimel oops! My mistake, thanks for the advice.
I split the test in two: one for CPU and one for CUDA. The first runs regardless of CUDA IPC availability; the second is skipped if CUDA is not available.
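The CPU/CUDA split described here can be sketched with the standard library alone. Everything below is a hypothetical stand-in, not PyTorch's actual test code: `Param` imitates an integer `nn.Parameter`, `CUDA_AVAILABLE` stands in for a check like `torch.cuda.is_available()`, and a pickle round trip stands in for sending the parameter through a spawned subprocess.

```python
import pickle
import unittest

# Assumption for this sketch: no GPU present, so the CUDA variant is skipped.
CUDA_AVAILABLE = False


class Param:
    """Stand-in for an integer nn.Parameter: a value plus a requires_grad flag."""

    def __init__(self, value, requires_grad=False):
        self.value = value
        self.requires_grad = requires_grad


class TestIntegerParameterSerialization(unittest.TestCase):
    def _roundtrip(self, device):
        # In the real test this would cross a process boundary; a pickle
        # round trip exercises the same serialize/deserialize path shape.
        p = Param(0, requires_grad=False)
        q = pickle.loads(pickle.dumps(p))
        self.assertEqual(q.value, p.value)
        self.assertFalse(q.requires_grad)

    def test_integer_parameter_serialization_cpu(self):
        self._roundtrip('cpu')

    @unittest.skipIf(not CUDA_AVAILABLE, "CUDA not available")
    def test_integer_parameter_serialization_cuda(self):
        self._roundtrip('cuda')


suite = unittest.defaultTestLoader.loadTestsFromTestCase(
    TestIntegerParameterSerialization)
result = unittest.TestResult()
suite.run(result)
```

Running the suite with `CUDA_AVAILABLE = False` executes the CPU test and records the CUDA test as skipped, which is exactly the behavior the review asked for.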
Codecov Report
@@ Coverage Diff @@
## master #56529 +/- ##
==========================================
+ Coverage 77.54% 77.78% +0.24%
==========================================
Files 1923 1923
Lines 190853 190854 +1
==========================================
+ Hits 147996 148457 +461
+ Misses 42857 42397 -460
Summary: Fixes pytorch#56342
Pull Request resolved: pytorch#56529
Reviewed By: albanD
Differential Revision: D27896094
Pulled By: ngimel
fbshipit-source-id: fe817781eb7139ea57c78acfd56e7c11b61eb4ed
Fixes #56342
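For background on the bug being fixed: autograd only supports floating-point (and complex) dtypes, so constructing a Parameter with the default `requires_grad=True` raises for an integer tensor. As I understand the fix, the multiprocessing rebuild path had to thread the original `requires_grad` flag into the Parameter constructor instead of overwriting it afterwards. The sketch below is PyTorch-free: `FakeTensor` and `FakeParameter` are hypothetical stand-ins that only mimic the relevant dtype rule, not torch's real classes.

```python
class FakeTensor:
    """Stand-in for a tensor: just a dtype string and a requires_grad flag."""

    def __init__(self, dtype):
        self.dtype = dtype              # e.g. "int64" or "float32"
        self.requires_grad = False


class FakeParameter(FakeTensor):
    """Stand-in for nn.Parameter: gradients are only allowed on float dtypes."""

    def __init__(self, data, requires_grad=True):
        if requires_grad and not data.dtype.startswith("float"):
            raise RuntimeError(
                "Only Tensors of floating point dtype can require gradients")
        super().__init__(data.dtype)
        self.requires_grad = requires_grad


def rebuild_buggy(data, requires_grad):
    # Buggy pattern: construct with the constructor's default
    # (requires_grad=True), then overwrite the flag. For an int64 tensor the
    # constructor has already raised before the flag can be corrected.
    p = FakeParameter(data)
    p.requires_grad = requires_grad
    return p


def rebuild_fixed(data, requires_grad):
    # Fixed pattern: pass the serialized flag through the constructor, so an
    # integer parameter with requires_grad=False rebuilds cleanly.
    return FakeParameter(data, requires_grad)
```

With this sketch, `rebuild_buggy(FakeTensor("int64"), False)` raises even though the caller never asked for gradients, while `rebuild_fixed` succeeds, mirroring the serialization failure reported in #56342 and its resolution.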