
[BUG] openAI-Clip demo failed on cuda machine #21

Closed
mestrona-3 opened this issue Mar 11, 2024 · 1 comment
Labels
assigned We're actively working on this issue and hope to provide an update soon

Comments

@mestrona-3

On AI Hub Models Slack, Hu Eric shared that the openAI-Clip demo failed for him. https://qualcomm-ai-hub.slack.com/archives/C06LT6T3REY/p1709470194079099

Kory took an initial look; the failure appears to be related to CUDA availability. This issue is being filed to track the originally reported bug. (A hedged sketch of the generic device-mismatch workaround follows the traceback below.)

(qai_hub) a19284@njai-ubuntu:~/workspace/qai-hub-clip$ python -m qai_hub_models.models.openai_clip.demo
Traceback (most recent call last):
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/qai_hub_models/models/openai_clip/demo.py", line 98, in
main()
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/qai_hub_models/models/openai_clip/demo.py", line 72, in main
predictions = app.predict_similarity(images, text).flatten()
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/qai_hub_models/models/openai_clip/app.py", line 64, in predict_similarity
image_features = self.image_encoder(image)
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/qai_hub_models/models/openai_clip/model.py", line 134, in forward
image_features = self.net.encode_image(image)
File "/home/a19284/.qaihm/models/openai_clip/v1/openai_CLIP_git/clip/model.py", line 341, in encode_image
return self.visual(image.type(self.dtype))
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/a19284/.qaihm/models/openai_clip/v1/openai_CLIP_git/clip/model.py", line 224, in forward
x = self.conv1(x)  # shape = [*, width, grid, grid]
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/a19284/mambaforge/envs/qai_hub/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper___slow_conv2d_forward)
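
For reference, this is the standard PyTorch device-mismatch error: the CLIP weights end up on cuda:0 while the preprocessed image tensor stays on the CPU. Below is a minimal sketch of the generic workaround, not the actual fix that later landed in Model Zoo; the helper name run_on_model_device is hypothetical and not part of qai_hub_models.

import torch

def run_on_model_device(model, *tensors):
    # Hypothetical helper (not part of qai_hub_models): move the inputs to
    # whatever device the model's parameters live on before calling forward,
    # so weights and activations agree (the traceback shows cpu vs cuda:0).
    device = next(model.parameters()).device
    moved = [t.to(device) for t in tensors]
    with torch.no_grad():
        return model(*moved)

# Alternatively, pin everything to the CPU for the demo, e.g.
# model = model.to("cpu") and image = image.to("cpu") before predict_similarity.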

@mestrona-3 mestrona-3 changed the title from "[BUG] openAI-Clip demo failed" to "[BUG] openAI-Clip demo failed on cuda machine" on Mar 14, 2024
@mestrona-3 mestrona-3 added the "assigned" label (We're actively working on this issue and hope to provide an update soon) on Mar 21, 2024
@mestrona-3
Author

Closing this as it has been fixed in the latest release (yesterday) of Model Zoo.
