CUDA:11.4.315
cuDNN:8.6.0.166
TensorRT:8.5.2.2
Jetpack:5.1.2
I tried deploying the 23.06 version of the image, but it failed.
Is it not possible to deploy using containers?
I tried another deployment method, but ran into many issues while installing the environment dependencies according to the documentation. I believe container deployment is a simpler and faster approach. https://github.com/triton-inference-server/server/blob/r23.06/docs/user_guide/jetson.md
When I use the environment above (JetPack 5.1.2), this image works normally.
Now, with:
CUDA:10.2.300
cuDNN:8.2.1.32
TensorRT:8.0.1.6
Jetpack:4.6
I tried using nvcr.io/nvidia/tritonserver:24.02-py3-igpu, but it failed with the error: unable to load shared library libnvcudla.so.
I want to know whether there is a version correspondence between the Triton container releases and JetPack versions.
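For context, the igpu container images are built against a specific L4T userspace, so the image tag has to match the host's JetPack/L4T release; an image built for a newer JetPack generally cannot load driver libraries such as libnvcudla.so from a JetPack 4.6 host. A minimal sketch for checking the host-side versions before picking a tag (assumes a Debian-based JetPack install; file paths and package names may vary by release):

```shell
# Inspect the host versions that determine which Triton image tag fits.
# L4T release maps to a JetPack version, e.g. R32.6.1 ~ JetPack 4.6,
# R35.4.1 ~ JetPack 5.1.2 (guarded so this is a no-op off-Jetson).
[ -f /etc/nv_tegra_release ] && cat /etc/nv_tegra_release
# CUDA toolkit version, if nvcc is on PATH
command -v nvcc >/dev/null && nvcc --version
# cuDNN / TensorRT package versions on Debian-based JetPack images
dpkg -l 2>/dev/null | grep -E 'libcudnn|nvinfer' || true
```

The reported L4T release can then be compared against the container release notes for the tag in question before pulling it.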