
fix distributed training after DataLoader change (device parameter) #85

Merged 1 commit on Feb 10, 2020

Conversation

jaidmin (Contributor) commented Feb 10, 2020

Fixes the distributed training issue introduced with the change from a CUDA callback to a device parameter on DataLoader, reported in #57 and in the forums.

The fix is simply passing the device parameter from the DataLoader to the DistributedDL when it is initialized.

Also fixes this warning:

```
/home/jaidmin/Software/Devel/forks/fastai2/fastai2/learner.py:30: UserWarning: You are setting an attribute (dl) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.dl` if you would like to change it in the learner.
  warn(f"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you would like to change it in the learner.")
```

If this behavior is intended, let me know and I'll change the PR.

sgugger (Contributor) commented Feb 10, 2020

Looking good, thanks!
