It seems that RTX 30xx cards are only compatible with TensorRT version 7.2.1 and newer. For some of our products we are currently unable to upgrade to the latest version of TensorRT for various reasons and are running 7.1.3 (20.09 release) and 7.0.0 (20.01 release). Considering that the 20.09 release is only a few months old, is there any way to enable support for the 30xx cards on the 7.1 and 7.0 versions?
If not, could someone outline the technical reasons for the incompatibility of the new GPU generation with 7.1 and 7.0?
The actual error I'm receiving when trying to build an engine with an RTX 3060 Ti on TensorRT 7.1.3 (same for anything <= 7.1.3) with driver 455.45.01 on Ubuntu 20.04:
Creating builder
Creating model
[12/19/2020-21:29:35] [W] [TRT] Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
[12/19/2020-21:29:36] [E] [TRT] ../rtSafe/cuda/caskUtils.cpp (98) - Assertion Error in trtSmToCask: 0 (Unsupported SM.)
main: /workspace/xxx/main.cpp:27: int main(int, char**): Assertion `engine != nullptr' failed.
Environment
TensorRT Version: 7.1.3
GPU Type: RTX 3060 Ti (probably 30xx in general, 8.6 compute capability)
Nvidia Driver Version: 455.45.01
CUDA Version:
CUDNN Version:
Operating System + Version: Ubuntu 20.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Triton Inference Server <= 20.09 shipping with TensorRT <= 7.1.3, NGC container
Hello @philipp-schmidt , thanks for reporting.
The technical reason for the incompatibility is that TRT ships dedicated optimizations for each GPU architecture, and at the time we developed 7.0 there was no sm_86 (RTX 30xx) support. Could you elaborate on why you cannot upgrade to 7.2?
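This per-architecture specialization can be pictured as a lookup of precompiled kernel tables keyed by SM version: a build that predates an architecture simply has no entry for it, which is roughly what the `trtSmToCask: 0 (Unsupported SM.)` assertion reflects. A toy sketch only; the table contents are invented for illustration and are not TensorRT's actual internals:

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical kernel-table lookup: keys are the SM versions this
// build shipped tuned kernels for. An architecture released after
// the build has no entry, so engine construction must fail.
const std::map<int, std::string> kKernelTables = {
    {70, "volta_kernels"},   // V100
    {75, "turing_kernels"},  // RTX 20xx
    {80, "ampere_kernels"},  // A100
    // no {86, ...} entry: RTX 30xx support only arrived in TRT 7.2.1
};

std::string kernelsForSm(int sm) {
    auto it = kKernelTables.find(sm);
    if (it == kKernelTables.end())
        throw std::runtime_error("Unsupported SM: " + std::to_string(sm));
    return it->second;
}
```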
Hi, thanks for replying. We currently face a series of bugs with recent versions of Triton Inference Server, which forces us to use an older version. That version happens to be compiled against an older TensorRT, so at the moment we cannot deploy our product on the newest generation of GPUs. I don't expect a "fix" in TensorRT; I was just curious about the reasons, given TensorRT's fast pace regarding backwards compatibility. Thanks for the technical explanation. Considering this closed.