Update on "Better handing of Autograd+Fork errors."
Fixes: #32835
Fixes: #5834

This cannot be combined with CUDA's implementation, as each requires its own `std::once_flag` as well as a different `forked_autograd_child` function. The CUDA version delegates to the Python module, while autograd uses `TORCH_CHECK` to report the error to both Python and C++.
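
For reference, the pattern described above — a `pthread_atfork` child handler installed exactly once via `std::once_flag`, with `TORCH_CHECK` surfacing the error on the next autograd call in the forked child — looks roughly like the sketch below. This is an illustration, not the actual engine source; `forked_autograd_child` and `TORCH_CHECK` come from the description above, and the other names are made up.

    // Sketch of the fork-tracking pattern described above (illustrative,
    // not the actual engine code).
    #include <atomic>
    #include <mutex>
    #include <pthread.h>

    static std::once_flag fork_handler_flag;      // illustrative name
    static std::atomic<bool> in_bad_fork{false};  // illustrative name

    // Child-side fork handler: runs in the child right after fork().
    static void forked_autograd_child() {
      in_bad_fork = true;
    }

    // Called the first time the autograd engine spins up worker threads;
    // std::call_once guarantees the handler is registered exactly once.
    static void track_fork() {
      std::call_once(fork_handler_flag, [] {
        pthread_atfork(/*prepare=*/nullptr, /*parent=*/nullptr,
                       /*child=*/forked_autograd_child);
      });
    }

    // Any later autograd entry point in the forked child can then fail
    // with an error visible from both Python and C++, e.g.:
    //   TORCH_CHECK(!in_bad_fork,
    //       "Unable to handle autograd's threading in combination with "
    //       "fork-based multiprocessing. ...");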

Differential Revision: [D20144024](https://our.internmc.facebook.com/intern/diff/D20144024)

[ghstack-poisoned]
VitalyFedyunin committed Feb 27, 2020
1 parent 2f5a90b commit a3c88af
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions test/test_multiprocessing.py
@@ -356,15 +356,15 @@ def test_inherit_tensor(self):
p.join(1)
self.assertEqual(t, torch.ones(5, 5) * 3, 0)

@unittest.skipIf(IS_WINDOWS, "Test need to use fork multiprocessing")
@unittest.skipIf(IS_WINDOWS, "Test needs to use fork multiprocessing")
def test_autograd_errors(self):
ctx = mp.get_context('fork')
simple_autograd_function()
with self.assertRaisesRegex(RuntimeError, r'Unable to handle autograd'):
with ctx.Pool(3) as pool:
pool.map(simple_autograd_function, [1, 2, 3])

@unittest.skipIf(NO_MULTIPROCESSING_SPAWN, "Test need to use spawn multiprocessing")
@unittest.skipIf(NO_MULTIPROCESSING_SPAWN, "Test needs to use spawn multiprocessing")
def test_autograd_fine_with_spawn(self):
ctx = mp.get_context('spawn')
simple_autograd_function()
