
Transforms tutorial fails if GPU available #763

Closed
adamjstewart opened this issue Sep 7, 2022 · 1 comment · Fixed by #767
Labels
documentation (Improvements or additions to documentation), transforms (Data augmentation transforms)
Milestone
0.3.1

Comments

@adamjstewart
Collaborator

Description

The transforms tutorial crashes if you run it on a system like Google Colab where a GPU is present.

Steps to reproduce

Run the tutorial on Google Colab. The following cell:

%%timeit -n 1 -r 1
_ = transforms_gpu(batch_gpu)

fails with the following error message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-15-75fa7a521613> in <module>
----> 1 get_ipython().run_cell_magic('timeit', '-n 1 -r 1', '_ = transforms_gpu(batch_gpu)\n')

8 frames
<decorator-gen-53> in timeit(self, line, cell, local_ns)

<magic-timeit> in inner(_it, _timer)

<ipython-input-3-61c1063eeb18> in forward(self, inputs)
     13         # Batch
     14         if x.ndim == 4:
---> 15             x = (x - self.min[None, ...]) / self.denominator[None, ...]
     16         # Sample
     17         else:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
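For context, here is a minimal sketch of how this mismatch can arise (the class and attribute names are taken from the traceback above; the actual tutorial code may differ). Module.to(device) moves parameters and buffers but not plain tensor attributes, so the normalization statistics stay on the CPU while the batch is on the GPU:

import torch
from torch import nn

class MinMaxNormalize(nn.Module):
    """Sketch of a normalization transform whose statistics are plain attributes."""

    def __init__(self, mins: torch.Tensor, maxs: torch.Tensor) -> None:
        super().__init__()
        # Plain tensor attributes are NOT moved by Module.to(device)
        self.min = mins
        self.denominator = maxs - mins

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Batch
        if x.ndim == 4:
            return (x - self.min[None, ...]) / self.denominator[None, ...]
        # Sample
        return (x - self.min) / self.denominator

transforms_gpu = MinMaxNormalize(torch.zeros(1, 1, 1), torch.full((1, 1, 1), 255.0)).to("cuda")
batch_gpu = torch.rand(2, 1, 8, 8, device="cuda")
_ = transforms_gpu(batch_gpu)  # RuntimeError: found at least two devices, cuda:0 and cpu!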

Version

releases/v0.3

@adamjstewart added the documentation and transforms labels on Sep 7, 2022
@adamjstewart added this to the 0.3.1 milestone on Sep 7, 2022
@calebrob6
Member

FWIW this was caused by the fix from a few days ago. https://github.com/microsoft/torchgeo/pull/756/files#diff-30f150fdd744014d8396edf56eec96f7285e9f26375f39cc182b1dc4d07bc178

Previously, because everything was incorrectly on the CPU anyway, we didn't see this conflict.
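One way to avoid the mismatch (a sketch only; not necessarily how #767 fixes it) is to register the statistics as buffers, so that Module.to(device) moves them together with the rest of the module:

import torch
from torch import nn

class MinMaxNormalize(nn.Module):
    """Same transform, with the statistics registered as buffers."""

    def __init__(self, mins: torch.Tensor, maxs: torch.Tensor) -> None:
        super().__init__()
        # Buffers are moved by Module.to(device), unlike plain tensor attributes
        self.register_buffer("min", mins)
        self.register_buffer("denominator", maxs - mins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Batch
        if x.ndim == 4:
            return (x - self.min[None, ...]) / self.denominator[None, ...]
        # Sample
        return (x - self.min) / self.denominator

transforms_gpu = MinMaxNormalize(torch.zeros(1, 1, 1), torch.full((1, 1, 1), 255.0)).to("cuda")
_ = transforms_gpu(torch.rand(2, 1, 8, 8, device="cuda"))  # statistics are now on cuda:0 as well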
