Minimal reproduction of the startup behavior:
```python
import torch
from boxx import *

mp = torch.multiprocessing.get_start_method(allow_none=True)
print(mp)
```
I expected to get `None` or `spawn`, but the actual result was `fork`.
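One possible explanation (a sketch, using the stdlib `multiprocessing` module that `torch.multiprocessing` wraps): `get_start_method(allow_none=True)` returns `None` only while no start method has been resolved yet. Any earlier call to `get_start_method()` without `allow_none=True`, for example inside an imported library, resolves and caches the platform default (`fork` on Linux), after which `allow_none=True` reports that cached value instead of `None`:

```python
import multiprocessing as mp

# In a fresh interpreter where nothing has touched the start method yet,
# allow_none=True reports that no method has been fixed:
print(mp.get_start_method(allow_none=True))  # None in a fresh process

# Calling get_start_method() without allow_none=True resolves and caches
# the platform default context ('fork' on Linux, 'spawn' on Windows/macOS):
mp.get_start_method()

# From now on, allow_none=True no longer returns None:
print(mp.get_start_method(allow_none=True))  # the platform default
```

So if anything imported before the reproduction (the `boxx` wildcard import is one candidate, though that is an assumption) queries the start method, the later `allow_none=True` check will see `fork` rather than `None`.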
Background:
```python
self.sampler = DataLoader.Sampler()
mp_start_method = torch.multiprocessing.get_start_method(allow_none=True)
if mp_start_method is None:
    torch.multiprocessing.set_start_method('spawn')
```
I expect CUDA tensors to be shared between different processes.
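Since the goal is sharing CUDA tensors (which requires the `spawn` start method), a more robust pattern than the `if mp_start_method is None` guard above is to either pass `force=True` to `set_start_method`, or request a spawn context per use so it cannot conflict with whatever another library already fixed. A minimal stdlib sketch (the same API `torch.multiprocessing` exposes; the worker and queue names are illustrative):

```python
import multiprocessing as mp

def worker(q):
    # In a real use case this process would receive/share CUDA tensors;
    # here it just reports back so the sketch stays runnable without torch.
    q.put("ok")

if __name__ == "__main__":
    # Option 1: force the global start method even if it was already fixed.
    # mp.set_start_method("spawn", force=True)

    # Option 2 (used here): a per-use context, which leaves the global
    # default untouched and cannot raise "context has already been set":
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # "ok"
    p.join()
```

The per-context approach avoids the race in the original guard, where the check for `None` passes but another import later fixes the method to `fork` anyway.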
A bizarre bug, I have no clue at all 🤣