What are the advantages of TRTorch? #34

Closed · dancingpipi opened this issue Apr 1, 2020 · 2 comments
Labels: question (Further information is requested)

Comments

@dancingpipi

I used to use torch2trt to convert PyTorch modules. Could you explain the advantages of TRTorch over torch2trt?

If a model contains ops that TensorRT doesn't support, can TRTorch still convert it to an engine?
Or does it run the ops TensorRT supports with TensorRT, and the rest with LibTorch?

I really appreciate your great work. If you can answer my doubts, I will be very grateful.

@narendasan (Collaborator)

I think the main differences between torch2trt and TRTorch are their approaches to conversion and their focus.

TRTorch is designed to be a robust path from PyTorch and TorchScript to TensorRT, supporting C++ (via LibTorch) and eventually Python (via PyTorch), with the end goal of being integrated into PyTorch itself as an inference backend for deployment scenarios.

torch2trt is great for experimentation and prototyping: it's lightweight, well suited to applications that will remain in Python, and it currently has broader layer support.

Under the hood, TRTorch compiles standalone TorchScript code (no Python dependency) to TensorRT and wraps it in a module, whereas torch2trt monkey-patches PyTorch's Python functions to emit TensorRT layers as they run, using that trace to construct the engine, which is returned as a module.
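
For a concrete picture of the TRTorch side, here is a minimal C++ sketch. It assumes the `trtorch::CompileGraph` / `trtorch::ExtraInfo` API from the README of this era, a fixed-shape `ExtraInfo` constructor, and a serialized module at `"model.ts"`; exact names may differ between releases.

```cpp
#include "torch/script.h"
#include "trtorch/trtorch.h"

int main() {
  // Load a standalone TorchScript module (no Python needed).
  torch::jit::script::Module mod = torch::jit::load("model.ts");
  mod.to(at::kCUDA);
  mod.eval();

  // Assumed: ExtraInfo constructed from the expected input shapes.
  std::vector<std::vector<int64_t>> dims = {{1, 3, 224, 224}};
  auto spec = trtorch::ExtraInfo(dims);

  // Compile to a TensorRT engine wrapped back up as a TorchScript module.
  auto trt_mod = trtorch::CompileGraph(mod, spec);

  // Use it exactly like any other module.
  auto in = torch::randn({1, 3, 224, 224}, {at::kCUDA});
  auto out = trt_mod.forward({in});
}
```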

In terms of advantages right now, TRTorch is the better fit if you want to deploy in C++, and torch2trt is the better fit if you want to deploy in Python. In the future, hopefully you will be able to do both with TRTorch without having to leave PyTorch; until then, torch2trt is really useful for quickly adding capabilities that TRTorch doesn't have yet.

Right now our approach to ops TensorRT doesn't support is to have users segment their models by backend. For example, you may have your backbone running in TensorRT but later layers still running in LibTorch. You would have a module for each: compile the backbone, then link it to your later LibTorch layers. We are discussing other strategies for handling unsupported TRT ops, but right now this is the solution.
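
As an illustration of that segmentation, here is a hedged C++ sketch under the same assumed API: a hypothetical `backbone.ts` (TensorRT-supported ops only) is compiled with TRTorch, while a hypothetical `head.ts` stays in LibTorch, and the two stages are chained by hand.

```cpp
#include "torch/script.h"
#include "trtorch/trtorch.h"

int main() {
  // Hypothetical split: backbone is TRT-convertible, head is not.
  auto backbone = torch::jit::load("backbone.ts");
  auto head = torch::jit::load("head.ts");
  backbone.to(at::kCUDA);
  head.to(at::kCUDA);

  // Only the backbone goes through TRTorch.
  std::vector<std::vector<int64_t>> dims = {{1, 3, 224, 224}};
  auto trt_backbone = trtorch::CompileGraph(backbone, trtorch::ExtraInfo(dims));

  // Link the two stages manually in the forward path:
  // TensorRT runs the backbone, LibTorch runs the rest.
  auto in = torch::randn({1, 3, 224, 224}, {at::kCUDA});
  auto features = trt_backbone.forward({in});
  auto out = head.forward({features});
}
```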

@dancingpipi (Author)

Thanks for your reply!

Now I understand it. Looking forward to your subsequent great work~

@narendasan added the question label on Apr 24, 2020