Hello,
I want to perform inference with multiple models in my design on Jetson devices.
I came across this issue here, but it only addresses the scenario of multiple inputs for a single model.
I found that the Triton server can load multiple models into the GPU simultaneously, but the `triton_node` only accepts a single `const std::string model_name_`.
I have the following questions, and I would be grateful if someone could answer them:
1- Should I create multiple TensorRT or Triton server engines in my ROS environment? Is that even possible or recommended? (See the composable-node launch sketch in the UPDATE below.)
In another scenario, I would like to switch between different models at runtime.
2- Since the model_name is provided as a parameter, is it possible to switch between different models at runtime without shutting down the DNN node? If not, what is the proper way to switch between models? A sketch of what I have in mind follows below.
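To illustrate question 2, here is a minimal rclpy sketch of the kind of runtime switching I mean. It assumes a hypothetical node that watches its `model_name` parameter and reloads the engine on change; I don't know whether the actual Isaac ROS `triton_node` or TensorRT node supports this, which is exactly what I'm asking:

```python
import rclpy
from rclpy.node import Node
from rcl_interfaces.msg import SetParametersResult


class SwitchableDnnNode(Node):
    """Hypothetical DNN node that reloads its engine when `model_name` changes."""

    def __init__(self):
        super().__init__('switchable_dnn_node')
        self.declare_parameter('model_name', 'peoplesemsegnet')
        self.model_name = self.get_parameter('model_name').value
        self.load_model(self.model_name)
        # React to: ros2 param set /switchable_dnn_node model_name <other_model>
        self.add_on_set_parameters_callback(self.on_params)

    def on_params(self, params):
        for p in params:
            if p.name == 'model_name' and p.value != self.model_name:
                self.load_model(p.value)  # swap engines without restarting the node
                self.model_name = p.value
        return SetParametersResult(successful=True)

    def load_model(self, name):
        # Placeholder: a real node would (de)allocate the TensorRT engine here,
        # or ask a running Triton server to load/unload the named model.
        self.get_logger().info(f'Loading model: {name}')


def main():
    rclpy.init()
    rclpy.spin(SwitchableDnnNode())


if __name__ == '__main__':
    main()
```

Switching would then be a single `ros2 param set /switchable_dnn_node model_name <other_model>` call, with no node restart.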
UPDATE:
Following this link and this link, it seems that running TensorRT engines in separate processes creates two CUDA contexts, which get scheduled in a time-sliced fashion and increase inference time. So should I create a composable node out of all the TensorRT engines, so that they run in multithreading mode rather than as multiple processes? Something like the launch sketch below is what I mean.
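To make the composable-node idea concrete, here is a minimal launch sketch that puts two TensorRT engines into one `component_container_mt` process so they share a single CUDA context. The package, plugin, and parameter names are my assumptions based on isaac_ros_dnn_inference, and the tensor I/O parameters and topic remappings are omitted for brevity:

```python
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    # Both engines live in ONE process, so they share a single CUDA context
    # instead of two time-sliced contexts from separate processes.
    container = ComposableNodeContainer(
        name='dnn_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',  # multithreaded executor
        composable_node_descriptions=[
            ComposableNode(
                package='isaac_ros_tensor_rt',
                plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
                name='unet_inference',
                parameters=[{'engine_file_path': '/models/unet.plan'}],
            ),
            ComposableNode(
                package='isaac_ros_tensor_rt',
                plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
                name='stereo_inference',
                parameters=[{'engine_file_path': '/models/stereo.plan'}],
            ),
        ],
        output='screen',
    )
    return LaunchDescription([container])
```

If I understand the links above correctly, running both components in one process under a multithreaded executor means only one CUDA context, which should avoid the time-sliced scheduling.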
But in the NVIDIA GitHub here, they simply run two launch files (stereo and unet) as separate nodes. Wouldn't that cause the same time-slicing problem?!