This was originally discovered in #714 - setting `num_workers` above 0 causes Chemprop to hang on Windows (both in GitHub Actions and locally) and on macOS in GitHub Actions.
See these two replies to a highly relevant issue on the PyTorch forum - we may need to refactor our calls to train, or just disallow parallel dataloading based on platform:
Seems like the default of 8 is a remnant of v1. I don't think it's a bad change to use the torch default, and if we reintroduce caching (#697) then there's really no need to parallelize dataloading. FWIW I think this is due to differences in parallelism implementations across platforms because of the Python GIL: Linux uses fork(), which is significantly faster to spin up and wind down than the spawn() used on Windows/macOS.
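If we go the "disallow parallel dataloading based on platform" route, a minimal sketch could key off the multiprocessing start method rather than hard-coding platform names (the `safe_num_workers` helper name is hypothetical, not existing Chemprop code):

```python
import sys


def safe_num_workers(requested: int) -> int:
    """Return `requested` on fork-based platforms, else 0.

    fork() is the default start method only on Linux; Windows and
    macOS default to spawn(), where num_workers > 0 was observed
    to hang in CI (#714).
    """
    return requested if sys.platform.startswith("linux") else 0
```

The result would then be passed straight through as the `num_workers` argument of `torch.utils.data.DataLoader`, so Linux keeps parallel dataloading while spawn-based platforms silently fall back to the main-process loader.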