
TensorRT engine instead of TFLite for faster inference on NVIDIA Xavier. #723

Closed
yshvrdhn opened this issue May 20, 2020 · 9 comments

@yshvrdhn

Hi, is it possible to support TensorRT instead of TFLite?

@jiuqiant
Collaborator

MediaPipe supports TensorFlow model inference on NVIDIA GPUs. See the instructions at https://github.com/google/mediapipe/blob/master/mediapipe/docs/gpu.md#tensorflow-cuda-support-and-setup-on-linux-desktop. For TensorRT, I think we need to do two more things:

  1. Install libnvinfer via the package manager (a sketch follows below).
  2. Add --action_env TF_NEED_TENSORRT=1 to the bazel build command. For example,
$ bazel build -c opt --config=cuda --spawn_strategy=local \
    --define no_aws_support=true --copt -DMESA_EGL_NO_X11_HEADERS \
    --action_env TF_NEED_TENSORRT=1 \
    mediapipe/examples/desktop/object_detection:object_detection_tensorflow
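A minimal sketch of step 1, assuming an Ubuntu host with NVIDIA's apt repository configured; exact package names (libnvinfer7 vs. libnvinfer8, etc.) depend on the installed TensorRT version, and on Jetson devices such as Xavier the TensorRT libraries usually come preinstalled with JetPack:

$ sudo apt-get update
$ sudo apt-get install libnvinfer-dev libnvinfer-plugin-dev   # headers and libraries; names vary by TensorRT version
$ dpkg -l | grep nvinfer                                      # verify the TensorRT libraries are visible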

@jiuqiant jiuqiant self-assigned this May 21, 2020
@jiuqiant
Collaborator

jiuqiant commented Jun 2, 2020

We are closing this issue for now due to lack of activity.

@jiuqiant jiuqiant closed this as completed Jun 2, 2020

@AndreV84

AndreV84 commented May 7, 2021

@jiuqiant
Any ideas?

 bazel build -c opt --config=cuda --spawn_strategy=local \
>     --define no_aws_support=true --copt -DMESA_EGL_NO_X11_HEADERS \
>     --action_env TF_NEED_TENSORRT=1 \
>     mediapipe/examples/desktop/object_detection:object_detection_tensorflow
Starting local Bazel server and connecting to it...
WARNING: ignoring LD_PRELOAD in environment.
ERROR: Config value 'cuda' is not defined in any .rc file
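For context, --config=cuda only resolves if a build:cuda config is defined in a .bazelrc file that Bazel actually reads. A sketch of the kind of .bazelrc entries the GPU setup doc linked above describes (based on the usual TensorFlow CUDA config; exact contents may differ across MediaPipe/TensorFlow versions):

    # This config refers to building with CUDA available.
    build:using_cuda --define=using_cuda=true
    build:using_cuda --action_env TF_NEED_CUDA=1
    build:using_cuda --crosstool_top=@local_config_cuda//crosstool:toolchain

    # This config refers to building CUDA op kernels with nvcc.
    build:cuda --config=using_cuda
    build:cuda --define=using_cuda_nvcc=true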

@AndreV84

AndreV84 commented May 7, 2021

If I manually add some lines so that the --config=cuda argument gets processed, the build then fails because the local CUDA config is not defined.

cuda_configure(name = "local_config_cuda")

Where do we add this line? It is from the file .cache/bazel/_bazel_nvidia/ff4425722229fc486cc849b5677abe3f/external/org_tensorflow/third_party/gpus/cuda_configure.bzl:

"""Detects and configures the local CUDA toolchain.

Add the following to your WORKSPACE FILE:

```python
cuda_configure(name = "local_config_cuda")

Args:
name: A unique name for this workspace rule.
"""

The error comes up after adding these lines:

    # This config refers to building CUDA op kernels with nvcc.
    build:cuda --config=using_cuda
    build:cuda --define=using_cuda_nvcc=true
    build:cuda_configure(name = "local_config_cuda")

and the build then fails with:

    //mediapipe/examples/desktop/object_detection:object_detection_tensorflow depends on @org_tensorflow//tensorflow/core:direct_session in repository @org_tensorflow which failed to fetch. no such package '@local_config_cuda//cuda': Repository command failed
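As the docstring quoted above indicates, cuda_configure is a workspace rule, so it belongs in the WORKSPACE file rather than in .bazelrc (a build:cuda_configure(...) line is not valid .bazelrc syntax). A minimal sketch of the WORKSPACE snippet, assuming @org_tensorflow is already defined there and loading the rule from the cuda_configure.bzl path quoted above; whether this alone resolves the @local_config_cuda fetch failure is not confirmed in this thread:

```python
# In WORKSPACE (not .bazelrc); sketch only.
load("@org_tensorflow//third_party/gpus:cuda_configure.bzl", "cuda_configure")

cuda_configure(name = "local_config_cuda")
```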

@AndreV84

AndreV84 commented May 7, 2021

Reopened at #2000.

@kevinwhzou

kevinwhzou commented Nov 3, 2021

The mediapipe_plus project can help you use the TensorRT interface to accelerate MediaPipe inference.
https://github.com/houmo-ai/mediapipe_plus

@AndreV84

So what is the status of TensorRT support?
@jiuqiant

@Gaozhongpai

MediaPipe supports TensorFlow model inference on Nvidia GPUs. See the instruction at https://github.com/google/mediapipe/blob/master/mediapipe/docs/gpu.md#tensorflow-cuda-support-and-setup-on-linux-desktop. For TensorRT, I think we need to do two more things

  1. install libnvinfer via package manager
  2. add --action_env TF_NEED_TENSORRT=1 to the bazel build command. For example,
$ bazel build -c opt --config=cuda --spawn_strategy=local \
    --define no_aws_support=true --copt -DMESA_EGL_NO_X11_HEADERS \
    --action_env TF_NEED_TENSORRT=1 \
    mediapipe/examples/desktop/object_detection:object_detection_tensorflow

Hi @jiuqiant, does MediaPipe still support TF_NEED_TENSORRT? Do I need to convert TFLite models to TensorRT models?


6 participants