Tensorrt engine instead of tflite for faster inference on nvidia xavier. #723
MediaPipe supports TensorFlow model inference on Nvidia GPUs; see the instructions at https://github.com/google/mediapipe/blob/master/mediapipe/docs/gpu.md#tensorflow-cuda-support-and-setup-on-linux-desktop. For TensorRT, I think we need to do two more things
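For reference, the CUDA path described in the linked gpu.md boils down to roughly the following. This is a sketch only: the exact paths, flags, and build target vary by MediaPipe version and CUDA installation, and the target name below is illustrative.

```shell
# Point Bazel at the local CUDA installation (paths are illustrative;
# adjust them to your CUDA version and distro layout).
export TF_CUDA_PATHS=/usr/local/cuda,/usr/lib/x86_64-linux-gnu,/usr/include

# Build a desktop example with TensorFlow CUDA support enabled.
# --config=cuda is defined in MediaPipe's .bazelrc.
bazel build -c opt --config=cuda --spawn_strategy=local \
    mediapipe/examples/desktop/object_detection:object_detection_tensorflow
```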
We are closing this issue for now due to lack of activity.
@jiuqiant
If one manually adds some lines so that the CUDA argument gets processed, the failure will be due to an undefined CUDA local config.
Where do we add these lines?
Reopened at #2000.
The mediapipe_plus project can help you use the TensorRT interface to accelerate MediaPipe inference.
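Independent of any particular wrapper project, a common route to a TensorRT engine on Xavier is to export the model to ONNX and convert it with `trtexec`, which ships with TensorRT/JetPack. The file names below are illustrative:

```shell
# Convert an ONNX export of the model into a serialized TensorRT engine.
# --fp16 enables half precision, which Xavier's GPU supports natively.
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting engine file can then be loaded by TensorRT's runtime API in place of the TFLite interpreter.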
So what is the status of TensorRT support?
Hi @jiuqiant, does MediaPipe still support
Hi, is it possible to support TensorRT instead of TFLite?