Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_mm #2951
Comments
Thanks for filing. Can you share the command you ran for context? Assigning to @klshuster and @EricMichaelSmith. I think the patch above is probably enough.
I am having trouble reproducing on my end; could you please share the exact command you ran, @curehabit?
I think the key is `--no-cuda`.
I tried with and without.
Sorry, I found I had commented out projects/personality_captions/interactive lines 278-279 because I want to run with CUDA: `# opt['no_cuda'] = True`. Then I ran the command
and here is my config (some path information hidden with XXX) [ optional arguments: ] and the error information:
@klshuster You have more experience with personality_captions than I do - do you think the proposed fix in the PR description would be enough? I just made a PR for this at #2963 and tested it out - for both CUDA enabled and disabled, I get a broken image, so I'm not sure if this fix is sufficient.
This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.
Fix was merged a long time ago; please reopen if the issue persists.
Bug description
Hi, when using the GPU to run the personality_captions project, this error may occur in projects/personality_captions/transresnet/modules:402-407:
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_mm
Reproduction steps
Set `no_cuda = False` in the config and run the personality_captions project.
Expected behavior
The variables `context_encoding` and `candidates_encoded[img_index]` should be on the same GPU device.
Additional context
I changed the code like this and it works:

```python
scores = torch.mm(
    candidates_encoded[img_index].to(context_encoding.device)
    if not one_cand_set
    else candidates_encoded.to(context_encoding.device),
    context_encoding.transpose(0, 1),
)
```
Thanks