Question about ExternalSource and GPU #2052
Thank you for clarifying. So, if I do some preprocessing on the GPU in my ExternalSource iterator, I need to move the data back to the CPU before it reaches the ExternalSource operator, which then moves it back to the GPU? That adds an expensive round trip between GPU and CPU. Will PyTorch tensors on the GPU also be supported by ExternalSource in the near future?
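For readers hitting the same round trip: a minimal sketch of the device-to-host step described above. The helper name `to_numpy_host` is illustrative, not part of the DALI API; it assumes a sample is either a torch tensor (possibly on a CUDA device) or something `np.asarray` can consume.

```python
import numpy as np

def to_numpy_host(x):
    """Coerce one sample to a contiguous CPU numpy array.

    Hypothetical helper, not a DALI API. If the object looks like a torch
    tensor (has .detach and .cpu), copy it to host memory first; otherwise
    assume np.asarray can handle it directly.
    """
    if hasattr(x, "detach") and hasattr(x, "cpu"):
        # Device-to-host copy for torch tensors; this is the expensive
        # round trip discussed in the comment above.
        x = x.detach().cpu().numpy()
    return np.ascontiguousarray(np.asarray(x))
```

A batch fed to the pipeline would then be `[to_numpy_host(s) for s in batch]`.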
Thank you. Will this functionality be in DALI versions for CUDA 10.1? I understand PyTorch does not support CUDA 10.2 yet.
Hello,
I am using:
CUDA release 9.1, V9.1.85
DALI 0.21.0
I am trying to use an ExternalSource that does its processing on the GPU. The documentation says ExternalSource supports both CPU and GPU, but I get this error when I run my pipeline, which suggests GPU is not supported:
```
File "/home/kikoaumond/.local/lib/python3.7/site-packages/nvidia/dali/pipeline.py", line 447, in feed_input
    inp = Tensors.TensorListCPU(data, layout)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. nvidia.dali.backend_impl.TensorListCPU(arg0: buffer, arg1: str)
```
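The error indicates that, in this version, `feed_input` constructs a `TensorListCPU` from a host buffer plus a layout string. A minimal sketch of preparing a batch so that each sample exposes the buffer protocol; the pipeline call at the end is commented out, and the names `pipe` and `"source"` are hypothetical.

```python
import numpy as np

# A batch of samples preprocessed elsewhere; plain Python lists stand in
# for whatever the external source produced.
batch = [[[0.1, 0.2], [0.3, 0.4]],
         [[0.5, 0.6], [0.7, 0.8]]]

# TensorListCPU(arg0: buffer, arg1: str) needs host memory exposing the
# buffer protocol, so coerce every sample to a contiguous numpy.ndarray.
host_batch = [np.ascontiguousarray(np.asarray(s, dtype=np.float32))
              for s in batch]

# pipe.feed_input("source", host_batch, layout="HW")  # hypothetical names
```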
Moreover, the method documentation for feed_input (see below) says that "In case of GPU external sources, this (data) must be a numpy.ndarray", which is confusing. Do you mean a CuPy array? I don't see how you can have a NumPy array on the GPU.
Can you please clarify?
Thank you