
Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same #52

Closed
andriilitvynchuk opened this issue Jul 24, 2019 · 7 comments

Comments

@andriilitvynchuk

Hi there! Thanks for your great repo, but I ran into some difficulties while trying to run inference on device ('cuda:0'). My code:
device = torch.device('cuda:0')
tfms = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = tfms(Image.open('img.png')).unsqueeze(0)
img.to(device)
model.to(device)
model.eval()
with torch.no_grad():
    outputs = model(img)
And the error thrown:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

@cardoso-neto

I'm pretty sure Tensor.to(device) is not an in-place operation, and you're discarding the cuda tensor by using it as one.

@andriilitvynchuk
Author

So is it impossible to run on GPU, or did I miss something?

@cardoso-neto

Are you serious?
Your tensor's the problem.

Did you even try img = img.to(device), as I told you?

@andriilitvynchuk
Author

Sorry, I didn't understand your answer at first. Thank you very much!

@HoiM

HoiM commented Aug 14, 2019

The problem is that you didn't move the input data to the GPU. You can try something like this:
batch = torch.from_numpy(image_batch).type(torch.cuda.FloatTensor)
batch_features = model.extract_features(batch)

@P-DX

P-DX commented Sep 25, 2020

Hi, did you solve this problem?
Could you share the solution with me?

@andriilitvynchuk
Author

I just wrote:

img = img.to(device)

As written above, sending a tensor to a device is not an in-place operation; it returns a new tensor, which you must assign back.
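The out-of-place behavior of Tensor.to can be checked even without a GPU; a minimal sketch, using a dtype conversion to stand in for the device move (both go through the same .to call):

```python
import torch

t = torch.zeros(2, 3)             # a float32 CPU tensor
u = t.to(torch.float64)           # .to returns a NEW tensor...
assert t.dtype == torch.float32   # ...the original is left untouched
assert u.dtype == torch.float64
# Same story for devices: img.to(device) alone discards the result;
# you must rebind the name, i.e. img = img.to(device)
```

Note that nn.Module.to is different: it moves the model's parameters in place, which is why model.to(device) in the original snippet worked without reassignment while img.to(device) did not.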
