Neg n_workers are now the same as zero #4019
Conversation
FWIW, scikit-learn and joblib use a different convention: -1 means "use the same number of workers as CPU cores", -2 means "use num_cores - 1", and so on (see the joblib docs).
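A rough sketch of that mapping, assuming the documented n_cpus + 1 + n_jobs rule for negative values; effective_workers is a hypothetical helper written for illustration, not part of joblib or scikit-learn:

```python
import os

def effective_workers(n_jobs: int) -> int:
    """Illustrate the joblib/scikit-learn n_jobs convention.

    Positive values are used as-is; negative values count back from the
    number of CPUs: -1 -> all CPUs, -2 -> all CPUs but one, i.e.
    n_cpus + 1 + n_jobs per the joblib documentation.
    """
    n_cpus = os.cpu_count() or 1
    if n_jobs < 0:
        # Clamp to at least one worker, purely for this illustration.
        return max(n_cpus + 1 + n_jobs, 1)
    return n_jobs

print(effective_workers(-1))  # all CPUs, e.g. 8 on an 8-core machine
print(effective_workers(-2))  # all CPUs but one, e.g. 7 on an 8-core machine
```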
Taking into account @kmike's comment, I feel like it'd be best to raise an error instead.
Changed it to a ValueError.
torch/utils/data/dataloader.py (outdated)

@@ -219,6 +219,9 @@ def __init__(self, loader):
             # prime the prefetch loop
             for _ in range(2 * self.num_workers):
                 self._put_indices()
+        elif self.num_workers < 0:
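For context, a minimal sketch of how such a branch could sit relative to the prefetch loop shown in the hunk above; _ExampleLoaderIter and the error message are illustrative assumptions, not the actual PyTorch source:

```python
class _ExampleLoaderIter:
    """Illustrative stand-in for the iterator class touched by this diff."""

    def __init__(self, loader):
        self.num_workers = loader.num_workers
        if self.num_workers > 0:
            # ... spawn worker processes, set up index/result queues ...
            # prime the prefetch loop
            for _ in range(2 * self.num_workers):
                self._put_indices()
        elif self.num_workers < 0:
            # The guard discussed in this PR: reject (or, in the original
            # version of the patch, treat like zero) a negative worker count
            # instead of failing later inside the worker-spawning code.
            raise ValueError('num_workers cannot be negative; '
                             'use num_workers=0 to disable multiprocessing.')

    def _put_indices(self):
        # Placeholder for the real index-queueing logic.
        pass
```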
Force-pushed from 1675c26 to cb44b81.
Thank you!
I foolishly passed the kwarg num_workers=-1 to a DataLoader, expecting that it would default to serial processing (I correctly recalled that it should be a non-positive number, but failed to remember that it must be zero to get this behavior). Normally this would be fine, but the error message it spat out when I did this was very confusing:
What made matters worse is that I had recently put a print statement in that file to test something, so I thought I had broken pytorch, but I couldn't figure out where the difference was. After spending too much time searching for cached pyc files, I finally realized my mistake.
I wanted to submit a patch so that it would at least raise a ValueError. However, I think it is an even simpler change to have it accept negative numbers: this patch makes setting num_workers to any non-positive number behave the same as zero.
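To make the failure mode concrete, here is a small usage sketch. It assumes a DataLoader that applies the ValueError check discussed above, so the exact message (and whether it is raised at construction or at iteration) may differ:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10, dtype=torch.float32))

# num_workers=0 loads batches in the main process, i.e. serial loading.
serial_loader = DataLoader(dataset, batch_size=2, num_workers=0)
print(len(list(serial_loader)))  # 5 batches of 2 items each

# num_workers=-1 is the mistake described above: instead of failing deep in
# the worker-spawning code with a confusing error, the DataLoader should
# reject it up front (or, under the original version of this patch, treat it
# like 0).
try:
    DataLoader(dataset, batch_size=2, num_workers=-1)
except ValueError as err:
    print(err)
```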