Generate .so shared library for Cuda 10.1 #31
Comments
Can you send the output log from when you try to run YOLOv4? Are you generating a new engine from the model in CUDA 10.1?
The file exists and the paths are correct. Thanks!
Hi, you need to compile nvdsinfer_custom_impl_Yolo for CUDA 10.1 to generate a new libnvdsinfer_custom_impl_Yolo.so. Do make with this command.
You need to generate model.engine for your CUDA and TensorRT version too.
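The value passed in `CUDA_VER` has to match the toolkit actually installed on the build machine, since the Makefile uses it to locate the CUDA install. A minimal sketch of how that guard behaves, assuming it works roughly like this (the function name and messages are illustrative, not the repo's actual Makefile):

```shell
#!/bin/sh
# Hedged sketch (not the repo's actual Makefile): the nvdsinfer_custom_impl_Yolo
# build reads CUDA_VER to pick the toolkit directory and fails fast when it is
# unset. This function mimics that guard.
build_yolo_plugin() {
  if [ -z "$CUDA_VER" ]; then
    echo "CUDA_VER is not set" >&2
    return 1
  fi
  # The real build would invoke nvcc/g++ against this prefix.
  echo "would compile against /usr/local/cuda-$CUDA_VER"
}

CUDA_VER=10.2 build_yolo_plugin
```

If `CUDA_VER` names a toolkit that is not installed (for example 10.1 on a Nano that ships 10.2), the compile step will not find the headers and libraries it needs.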
Thank you for checking. I tried to generate the .so for 10.1, but it didn't work: nano@nano:/opt/nvidia/deepstream/deepstream-5.0/sources/yolo$ CUDA_VER=10.1 make -C nvdsinfer_custom_impl_Yolo
Then you run
nano@nano:/etc/ld.so.conf.d$ sudo vi cuda.conf
cuda.conf:
nano@nano:/opt/nvidia/deepstream/deepstream-5.0/sources/yolo$ CUDA_VER=10.1 make -C nvdsinfer_custom_impl_Yolo
make: Entering directory '/opt/nvidia/deepstream/deepstream-5.0/sources/yolo/nvdsinfer_custom_impl_Yolo'
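For context, `/etc/ld.so.conf.d/cuda.conf` only tells the dynamic linker where to find the CUDA shared libraries; it does not change which toolkit is installed. On a stock Jetson with CUDA 10.2 it typically contains a single library path. The exact path below is an assumption about the JetPack aarch64 layout and should be checked against the local install:

```
# /etc/ld.so.conf.d/cuda.conf (assumed JetPack layout; verify locally)
/usr/local/cuda-10.2/targets/aarch64-linux/lib
```

After editing this file, run `sudo ldconfig` so the linker cache picks up the change.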
I think you have a problem with your CUDA installation; try reinstalling CUDA.
Thanks! Summary: The AWS VM has CUDA 10.1 and the DeepStream SDK is not installed there, so I couldn't run your solution in Docker. I tried your cuda.conf solution on my Nano to generate the .so file for CUDA 10.1 and it didn't work. Do you think I have a CUDA installation problem on the Jetson Nano (CUDA 10.2) or on AWS (CUDA 10.1)?
You can't generate the .so lib for CUDA 10.1 on the Nano, because it uses CUDA 10.2. I don't know how the AWS VM works, but you need to install CUDA 10.1/10.2 (if not installed yet) and install the DeepStream SDK (local/Docker) to run the model, compiling the .so lib and the model engine directly on AWS against the installed CUDA version.
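Before compiling on the AWS VM, it helps to confirm which toolkit version the build will see. A small sketch of parsing the release number out of `nvcc --version`-style output; the sample string stands in for the real command's output (its exact format is an assumption), and on a real machine you would capture it with `nvcc --version | grep release`:

```shell
#!/bin/sh
# Sample line in the format `nvcc --version` prints on a CUDA 10.1 install.
sample="Cuda compilation tools, release 10.1, V10.1.243"

# Extract "10.1" so it can be passed to make as CUDA_VER.
ver=$(printf '%s\n' "$sample" | sed -n 's/.*release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
echo "detected CUDA $ver"
```

The extracted value is what you would pass as `CUDA_VER` to the make command from earlier in the thread, so the .so and the model engine are built against the toolkit that is actually present.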
Hi,
I am using a Jetson Nano and am able to generate the .so shared library for pre-trained and custom YOLOv4 models, and they work perfectly. The CUDA version on my Jetson Nano is 10.2.
I am using the NVIDIA DeepStream Docker image with the YOLOv4/YOLOv4-tiny models on the Jetson Nano and it works just fine.
I also use Docker on an AWS VM for better performance. The YOLOv3/YOLOv3-tiny models work without issue, but when I tried the YOLOv4 model with your solution, it didn't work. I checked my AWS VM's CUDA version and it is 10.1. I do not have DeepStream installed on the VM, so I couldn't generate the .so shared library for CUDA 10.1.
I believe the problem comes from the different CUDA versions, because Docker runs there without any issue for YOLOv3.
If it is a version issue, how could I generate the .so lib file for CUDA 10.1, or is there any other way to get around the issue?
Your help would be appreciated!