
Unable to get a Vulkan tensor using to('vulkan') #56006

Open
skyline75489 opened this issue Apr 14, 2021 · 1 comment
Labels
module: vulkan oncall: mobile Related to mobile support, including iOS and Android

Comments

@skyline75489
Contributor

Am I doing something wrong, or is this expected?

>>> torch.is_vulkan_available()
True
>>> a = torch.rand(1,3)
>>> a.to('vulkan')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\skyline\.conda\envs\pytorch-build-py37\lib\site-packages\torch\_tensor.py", line 203, in __repr__
    return torch._tensor_str._str(self)
  File "C:\Users\skyline\.conda\envs\pytorch-build-py37\lib\site-packages\torch\_tensor_str.py", line 406, in _str
    return _str_intern(self)
  File "C:\Users\skyline\.conda\envs\pytorch-build-py37\lib\site-packages\torch\_tensor_str.py", line 381, in _str_intern
    tensor_str = _tensor_str(self, indent)
  File "C:\Users\skyline\.conda\envs\pytorch-build-py37\lib\site-packages\torch\_tensor_str.py", line 242, in _tensor_str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
  File "C:\Users\skyline\.conda\envs\pytorch-build-py37\lib\site-packages\torch\_tensor_str.py", line 90, in __init__
    nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
NotImplementedError: Could not run 'aten::abs.out' with arguments from the 'Vulkan' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::abs.out' is only available for these backends: [CPU, BackendSelect, Named, InplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

CPU: registered at aten\src\ATen\RegisterCPU.cpp:9269 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: fallthrough registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:11 [kernel]
InplaceOrView: registered at ..\torch\csrc\autograd\generated\InplaceOrViewType_1.cpp:2056 [kernel]
AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradMLC: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_4.cpp:8844 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_4.cpp:9609 [kernel]
Autocast: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:250 [backend fallback]
Batched: registered at ..\aten\src\ATen\BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]

The main issue seems to be:

Could not run 'aten::abs.out' with arguments from the 'Vulkan' backend. 

I checked and found no abs.out implementation in the 'Vulkan' backend.
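To confirm that the failure happens during printing rather than in the device transfer itself, the two steps can be guarded separately. This is a minimal sketch (`try_vulkan_roundtrip` is a hypothetical helper, not part of PyTorch); it only exercises the Vulkan path when `torch.is_vulkan_available()` returns True and falls back to CPU otherwise:

```python
import torch

def try_vulkan_roundtrip():
    """Transfer a tensor and report which step, if any, fails."""
    a = torch.rand(1, 3)
    device = "vulkan" if torch.is_vulkan_available() else "cpu"
    t = a.to(device)  # the transfer itself completes without error
    try:
        repr(t)       # formatting dispatches ops like aten::abs.out
        return "repr ok"
    except NotImplementedError:
        return "repr failed: op not implemented for this backend"
```

On a build without the missing Vulkan kernels this would return the failure string, matching the traceback above.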

@skyline75489 skyline75489 changed the title Unable to get a Vulkan tensor using to('vulkan) Unable to get a Vulkan tensor using to('vulkan') Apr 14, 2021
@skyline75489
Contributor Author

OK, I get what's happening now. The Vulkan backend is actually working:

>>> import torch
>>> a = torch.rand(1,3)
>>> a
tensor([[0.1951, 0.5828, 0.5201]])
>>> t = a.to('vulkan')
>>> t.device
device(type='vulkan')

But the missing operators cause an error when __str__ (or __repr__?) is invoked, which makes the tensor very difficult to work with.
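Until the formatting ops get Vulkan kernels, one workaround is to copy the tensor back to CPU before inspecting it. A sketch (`safe_repr` is a hypothetical helper, not part of PyTorch, and it assumes `.cpu()` is implemented for the backend in question):

```python
import torch

def safe_repr(t):
    """Return a printable representation of a tensor, copying it to
    CPU first so the formatting ops run on a backend that has them.
    (Hypothetical helper, not part of PyTorch.)"""
    if t.device.type != "cpu":
        t = t.cpu()  # assumes the backend implements the CPU copy
    return repr(t)
```

With this, something like `print(safe_repr(a.to('vulkan')))` would show the values of the CPU copy instead of raising NotImplementedError.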

@albanD albanD added oncall: mobile Related to mobile support, including iOS and Android and removed triage review labels Apr 26, 2021