RuntimeError: context has already been set (multiprocessing) #3492
Comments
When using …
I have the same problem, and the solution provided by @apaszke doesn't work for me.
Hi @pancho111203, you might have other files in your project which also have a call to `set_start_method`. Regards
Even inside the …
I found a solution, which is to use a context object in multiprocessing, and then replace … (see the sketch below).
Setting 'spawn' multiple times in the same process causes a RuntimeError. To avoid this we can pass force=True, but according to the following issue that causes leaking of semaphores, so I decided to use try-except to avoid the RuntimeError instead. See also: pytorch/pytorch#3492
For me the problem was that I did a … which explicitly sets the start method. After calling this you will be unable to change the start method (reproduced in the sketch below). Minor bug, if you could even call it that.
yep!!!
I still see this issue in 2022. Apparently some other modules, namely …, set the context before mine does.

Environment: Torch version == 1.12.1, Python 3.9.13, Ubuntu+WSL, conda.

A quick workaround is to import torch and set start_method first (sketched below), but you may consider reopening and fixing this issue.
context does not get messed up; seems to be a known bug: pytorch/pytorch#3492
My solution: when using …
Solved the problems of "Cannot re-initialize CUDA in forked subprocess" and "context has already been set (multiprocessing)" by …
I use the spawn start method to share CUDA tensors between processes. It returns a wrong result and shows an error.