
Register jagged_index_select to CUDA and CPU backend for inference #1880

Closed
wants to merge 1 commit

Conversation


@wpc commented Jul 15, 2023


Summary:
We have a new model that uses the jagged_index_select op, and it fails when we script the model and run it on the inference server with the following error:
```
NotImplementedError: Could not run 'fbgemm::jagged_index_select' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'fbgemm::jagged_index_select' is only available for these backends: [BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

BackendSelect: fallthrough registered at fbcode/caffe2/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at fbcode/caffe2/aten/src/ATen/core/PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at fbcode/caffe2/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at fbcode/caffe2/aten/src/ATen/FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at fbcode/caffe2/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at fbcode/caffe2/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at fbcode/caffe2/aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at fbcode/caffe2/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at fbcode/caffe2/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradOther: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradCPU: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradCUDA: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradHIP: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradXLA: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradMPS: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradIPU: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradXPU: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradHPU: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradVE: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradLazy: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradMTIA: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradPrivateUse1: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradPrivateUse2: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradPrivateUse3: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradMeta: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
AutogradNestedTensor: registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 [autograd kernel]
Tracer: registered at fbcode/caffe2/torch/csrc/autograd/TraceTypeManual.cpp:296 [backend fallback]
AutocastCPU: fallthrough registered at fbcode/caffe2/aten/src/ATen/autocast_mode.cpp:383 [backend fallback]
AutocastCUDA: fallthrough registered at fbcode/caffe2/aten/src/ATen/autocast_mode.cpp:250 [backend fallback]
FuncTorchBatched: registered at fbcode/caffe2/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at fbcode/caffe2/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at fbcode/caffe2/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at fbcode/caffe2/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at fbcode/caffe2/aten/src/ATen/functorch/TensorWrapper.cpp:201 [backend fallback]
PythonTLSSnapshot: registered at fbcode/caffe2/aten/src/ATen/core/PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at fbcode/caffe2/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at fbcode/caffe2/aten/src/ATen/core/PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at fbcode/caffe2/aten/src/ATen/core/PythonFallbackKernel.cpp:157 [backend fallback]
```
This diff registers the jagged_index_select forward path with the CUDA and CPU backends so that the op can be used in inference, where dispatch may land directly on the device backend rather than on the Autograd keys that currently hold the only registered kernels.
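
For reference, the snippet below is a minimal, hypothetical sketch (not the actual diff) of how a forward kernel can be registered for the CPU and CUDA dispatch keys using PyTorch's `TORCH_LIBRARY_IMPL` macro. The function names `jagged_index_select_forward_cpu` / `jagged_index_select_forward_cuda` and their signatures are placeholders, not the real FBGEMM symbols.

```cpp
#include <torch/library.h>

#include <vector>

namespace fbgemm_gpu {

// Hypothetical forward kernels; the real implementations live under
// fbgemm_gpu/src/jagged_tensor_ops/. Declarations only, for illustration.
std::vector<at::Tensor> jagged_index_select_forward_cpu(
    const at::Tensor& values,
    const at::Tensor& lengths,
    const at::Tensor& indices);

std::vector<at::Tensor> jagged_index_select_forward_cuda(
    const at::Tensor& values,
    const at::Tensor& lengths,
    const at::Tensor& indices);

} // namespace fbgemm_gpu

// The error above lists only Autograd* kernels for fbgemm::jagged_index_select,
// so a call that dispatches straight to the CPU or CUDA key finds nothing to
// run. Registering forward kernels on those keys closes that gap.
TORCH_LIBRARY_IMPL(fbgemm, CPU, m) {
  m.impl("jagged_index_select", fbgemm_gpu::jagged_index_select_forward_cpu);
}

TORCH_LIBRARY_IMPL(fbgemm, CUDA, m) {
  m.impl("jagged_index_select", fbgemm_gpu::jagged_index_select_forward_cuda);
}
```

With kernels registered on the CPU and CUDA keys, a scripted model calling `torch.ops.fbgemm.jagged_index_select` in an inference-only context should dispatch to the device kernel instead of raising the NotImplementedError shown above.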

Differential Revision: D47462912

fbshipit-source-id: a0cc7dd641dd99ed911ee6114c21495a39f90b44

netlify bot commented Jul 15, 2023

Deploy Preview for pytorch-fbgemm-docs canceled.

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 907839c |
| 🔍 Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/64b2f458d2d5780008039c38 |

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D47462912

@facebook-github-bot
Contributor

This pull request has been merged in 72c6035.
