There is only one thing left to do: turn our tensor into a GPU tensor. That is what [to()](https://bit.ly/32Mgxjc) is good for. It sends a tensor to the specified device.
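As a minimal sketch of that call (the `"cuda"` device name assumes a CUDA-capable build of PyTorch; the code falls back to the CPU otherwise):

```python
import torch

# Use the GPU if one is available; otherwise stay on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.tensor([1.0, 2.0, 3.0])
x_dev = x.to(device)  # returns a tensor living on the target device
print(x_dev.device)
```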
Hi Dan,
I love your book and tutorials! May I kindly ask: does the to() method copy the data into the device (GPU or CPU) memory directly?
The reason I am asking is that you mentioned before that torch.as_tensor(x_train) shares the underlying data with the original NumPy array, but when we used torch.as_tensor(x_train).to(device), I found that the x_train data won't change.
Do I understand it correctly?
Best,
Jun
You got it absolutely right: once you send data to the GPU, it needs to be copied there.
For CPU tensors, the data is stored in the computer's RAM, where it can be accessed by both NumPy and PyTorch; the underlying data is shared between them.
But the moment you send data to the GPU, it is copied into the GPU's memory, and it is no longer shared with NumPy.
NumPy does not support GPUs, which is why we have to use .cpu() to bring the tensor back to main RAM before turning it into a NumPy array.
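A small sketch of both behaviors (the GPU part is guarded, since it only runs on a CUDA-capable machine):

```python
import numpy as np
import torch

x_train = np.array([1.0, 2.0, 3.0])

# A CPU tensor created with as_tensor shares memory with the NumPy array,
# so an in-place change to the tensor is visible through the array.
t_cpu = torch.as_tensor(x_train)
t_cpu[0] = 10.0
print(x_train[0])  # -> 10.0

# Sending the tensor to the GPU copies the data into GPU memory;
# changes to the GPU tensor no longer propagate back to the array.
if torch.cuda.is_available():
    t_gpu = torch.as_tensor(x_train).to("cuda")
    t_gpu[0] = 99.0
    print(x_train[0])  # still 10.0: the GPU tensor is a separate copy
```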