Segmentation Fault when launching the server with custom built TensorRT plugins #2227
Comments
Could you use gdb and share a backtrace for the segfault?
@CoderHam, thanks for the quick response. Below is the backtrace from gdb for the segfault.
I'm not really familiar with C/C++/gdb, so I'm not sure if I'm giving you the right information. I did roughly the steps sketched below.
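A minimal way to capture such a backtrace inside the Triton server container (the binary name `tritonserver` and the paths here are placeholders, not necessarily the exact session I ran):

```bash
# inside the Triton server container; install gdb if it is missing
apt-get update && apt-get install -y gdb

# run the server under gdb, preloading the custom plugin library
gdb --args tritonserver --model-repository=/ubuntu/model_repository
# (gdb) set environment LD_PRELOAD /ubuntu/libtestplugins.so
# (gdb) run
# ... wait for the SIGSEGV ...
# (gdb) bt
```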
Please let me know if it's not what you want. Thanks.
@zmy1116 Can you share the model you are using along with the plugin shared library?
@CoderHam It does not really matter what you put in the model repository; the error occurs even if none of the models in the repository uses the plugin. From the output above you can see that the error happens before any model is loaded. That said, I have tested with a repository containing only the dummy example from the Triton server repo. As you can see, that model does not use the custom plugin. However, when starting the Triton server with the plugin, the segmentation fault still occurs. Thanks
@zmy1116 I tried loading your shared library with
@CoderHam thanks for the directions. Actually, none of my built operations seem to work with it. In our current production environment we run TensorRT models directly in Python; I just do roughly the following.
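A minimal sketch of that Python workflow (the plugin path and engine path below are placeholders, not the exact production code):

```python
import ctypes
import tensorrt as trt

# Load the custom plugin library so its plugin creators register themselves
ctypes.CDLL("/ubuntu/libtestplugins.so")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Initialize the built-in TensorRT plugins as well
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Deserialize the serialized engine and create an execution context
with open("/ubuntu/model.plan", "rb") as f:
    engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
```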
@zmy1116 I hit the same issue. Have you found a solution?
@tianq01 It appears that this specific problem does not exist in the 20.12 release (of both the TensorRT and Triton containers).
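For reference, the corresponding 20.12 NGC containers can be pulled like this (image names as published on NVIDIA NGC):

```bash
docker pull nvcr.io/nvidia/tritonserver:20.12-py3
docker pull nvcr.io/nvidia/tensorrt:20.12-py3
```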
Description
I want to serve a TensorRT model with custom-built plugins on Triton Server. The server generates a segmentation fault immediately on launch.
I can confirm that the TensorRT model plan and the plugin are built correctly; we are currently using this TensorRT model in our production environment.
I have successfully set up other TensorRT models that do not require custom plugins on Triton server, so I think the problem is isolated to custom plugins.
I can reproduce the issue with the example detectionLayerPlugin from the NVIDIA TensorRT repo.
Triton Information
To Reproduce
I use the example plugin detectionLayerPlugin from the NVIDIA TensorRT repo https://github.com/NVIDIA/TensorRT/tree/master/plugin to reproduce a custom plugin that causes the issue.
To facilitate your test, I created a repo with all the necessary files:
https://github.com/zmy1116/triton_server_custom_plugin_issue
So I basically put the following files under the folder plugin:
https://github.com/NVIDIA/TensorRT/blob/master/plugin/detectionLayerPlugin/detectionLayerPlugin.cpp
https://github.com/NVIDIA/TensorRT/blob/master/plugin/detectionLayerPlugin/detectionLayerPlugin.h
https://github.com/NVIDIA/TensorRT/blob/master/plugin/common/plugin.h
https://github.com/NVIDIA/TensorRT/blob/master/plugin/common/checkMacrosPlugin.cpp
https://github.com/NVIDIA/TensorRT/blob/master/plugin/common/checkMacrosPlugin.h
https://github.com/NVIDIA/TensorRT/blob/master/plugin/common/kernels/maskRCNNKernels.cu
https://github.com/NVIDIA/TensorRT/blob/master/plugin/common/kernels/maskRCNNKernels.h
To build the plugin, inside the TensorRT container, it's the standard build (sketched below).
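A rough sketch of such a build (the exact include paths, compiler flags, and the output name libtestplugins.so are assumptions; they depend on how the files are laid out in the checkout):

```bash
# inside the TensorRT container; compile the plugin sources into a shared library
nvcc -std=c++14 -shared -Xcompiler -fPIC \
    plugin/detectionLayerPlugin.cpp \
    plugin/checkMacrosPlugin.cpp \
    plugin/maskRCNNKernels.cu \
    -Iplugin -I/usr/local/cuda/include \
    -lnvinfer -lcudart \
    -o libtestplugins.so
```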
To launch the Triton server, within the Triton server container, assume the model repository is at /ubuntu/model_repository and the plugin is at /ubuntu/libtestplugins.so.
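The launch then looks roughly like this, using LD_PRELOAD, which is the usual way to expose a TensorRT plugin library to Triton (paths are the placeholders above):

```bash
# preload the custom plugin library so TensorRT can find its plugin creators
LD_PRELOAD=/ubuntu/libtestplugins.so tritonserver --model-repository=/ubuntu/model_repository
```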
In the model_repository, just put any model so that the Triton server will launch; the model does not need to call the custom plugin, since the error occurs before any model is loaded.
Expected behavior
The server should launch normally; instead, the segmentation fault shows up immediately.
Please let me know if you need any additional information and I will get back to you ASAP.
Thank you