For now I fixed this with `# type: ignore`, but I'm curious whether there is a better solution.
I do think, however, that using `self.device` is a bit strange (since PyTorch Lightning automatically moves tensors to the GPU), and it should perhaps be avoided where possible.
If it's just a typing problem, it's okay to ignore it, since the typing ecosystem in Python isn't perfect yet. Is there a specific location where this is happening and causing an issue?
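If you do suppress the error, a scoped ignore with an explicit mypy error code is usually preferable to a bare `# type: ignore`, since it keeps other checks on that line active. A minimal sketch follows; `LitModel` is a hypothetical stand-in for a `pl.LightningModule` (so the example runs without the Lightning dependency), and `arg-type` is an assumed error code — use whichever code mypy actually reports:

```python
import torch


class LitModel(torch.nn.Module):
    """Hypothetical stand-in for a pl.LightningModule."""

    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(2, 2)

    @property
    def device(self) -> torch.device:
        # Lightning exposes a similar property; defined here so the
        # example is self-contained.
        return next(self.parameters()).device

    def make_mask(self, n: int) -> torch.Tensor:
        # If mypy flags this line, scope the ignore to the reported
        # error code rather than silencing everything on the line:
        return torch.ones(n, device=self.device)  # type: ignore[arg-type]
```

Running `mypy --strict` will still catch unrelated mistakes on that line, which a bare ignore would hide.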
As far as I am aware, PyTorch Lightning only makes sure that tensors from the DataLoader and the LightningModule are on the same device during training. Thus, we still need to specify the device for any new tensors created during calculations. That said, relying on the `device` attribute of the LightningModule or LightningDataset is often unnecessary, because we can usually infer the device from other tensors already in scope.
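The pattern described above can be sketched as follows. The helper name and shapes are hypothetical; the point is that the new tensor takes its device (and dtype) from a tensor already in scope rather than from `self.device`:

```python
import torch
from torch import Tensor


def append_pad_column(logits: Tensor) -> Tensor:
    """Hypothetical helper that creates a new tensor mid-computation.

    The zeros tensor inherits device and dtype from `logits`, so it
    lands on the right device whether training runs on CPU or GPU,
    without ever touching self.device.
    """
    pad = torch.zeros(
        logits.shape[0], 1, device=logits.device, dtype=logits.dtype
    )
    return torch.cat([logits, pad], dim=1)
```

Because the device comes from the input, this function also works unchanged outside a LightningModule, e.g. in plain inference code.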