
Serve TensorRT or torch2trt model #1243

Closed
pallashadow opened this issue Sep 13, 2021 · 11 comments
Labels
enhancement New feature or request

Comments

@pallashadow

pallashadow commented Sep 13, 2021

TensorRT can decrease latency dramatically on some models, especially when batch size = 1.

torch2trt is a PyTorch-to-TensorRT converter that uses the TensorRT Python API. It can convert a model to TensorRT in one line of code and run it with PyTorch inputs/outputs. See https://github.com/NVIDIA-AI-IOT/torch2trt.
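The one-line conversion described above could be sketched roughly as follows. This is a hedged sketch, not part of the issue: running the conversion requires torch, torch2trt, and a CUDA device, so the imports are guarded, and the `.torch2trt` file name is a convention assumed here (it matters only to the handler trick further down), not something torch2trt mandates.

```python
# Hedged sketch of torch2trt's documented entry point: torch2trt(model, [x]).
# The guarded import lets the sketch degrade where torch/torch2trt are absent.
try:
    import torch
    from torch2trt import torch2trt
    HAVE_TORCH2TRT = True
except ImportError:
    HAVE_TORCH2TRT = False

def convert_and_save(model, example_input, out_path="model.torch2trt"):
    # Trace the model with one example batch and serialize the TRT engine.
    model_trt = torch2trt(model, [example_input])  # the one-line conversion
    torch.save(model_trt.state_dict(), out_path)
    return model_trt
```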

I am wondering:

  1. Is there any risk in serving a TensorRT or torch2trt model with TorchServe?
  2. Will there be official support for serving TensorRT models?

Describe the solution

It seems that TorchServe can serve a torch2trt model pretty well, simply by rewriting the handler like this:

import logging

import torch
from torch2trt import TRTModule
from ts.torch_handler.base_handler import BaseHandler

logger = logging.getLogger(__name__)

class Yolov5FaceHandler(BaseHandler):
    def initialize(self, context):
        serialized_file = context.manifest["model"]["serializedFile"]
        if serialized_file.split(".")[-1] == "torch2trt":
            # serializedFile ends with .torch2trt instead of .pt:
            # swap in the torch2trt loader before the base class loads the model
            self._load_torchscript_model = self._load_torch2trt_model
        super().initialize(context)

    def _load_torch2trt_model(self, torch2trt_path):
        logger.info("Loading torch2trt model")
        model_trt = TRTModule()
        model_trt.load_state_dict(torch.load(torch2trt_path))
        return model_trt

Describe alternative solutions

Maybe this feature could be added to ts/torch_handler/base_handler.py?
Or there could be a new example handler for it.
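The base-handler alternative could amount to a small extension-to-loader dispatch. A minimal sketch, assuming a change along these lines in ts/torch_handler/base_handler.py; the LOADERS table and pick_loader name are hypothetical, not existing TorchServe API:

```python
import os

# Hypothetical mapping from serialized-file extension to loader method name,
# defaulting to the existing TorchScript path for unknown extensions.
LOADERS = {
    ".pt": "_load_torchscript_model",
    ".torch2trt": "_load_torch2trt_model",
}

def pick_loader(serialized_file):
    # Choose a loader based on the file extension of the serialized model.
    ext = os.path.splitext(serialized_file)[1]
    return LOADERS.get(ext, "_load_torchscript_model")
```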

@pallashadow pallashadow changed the title Add supports for serving TensorRT and torch2trt model Add supports for serving TensorRT or torch2trt model Sep 13, 2021
@chauhang chauhang added the enhancement New feature or request label Sep 14, 2021
@pallashadow pallashadow changed the title Add supports for serving TensorRT or torch2trt model Serve TensorRT or torch2trt model Sep 16, 2021
@msaroufim
Member

Hi @pallashadow, this looks good to me! Would you be interested in contributing this change? I'd suggest making a change to the base handler as you suggest, and also creating a quick example in examples/TensorRT with a short README.

@pallashadow
Author

@msaroufim I'd like to. I have used torch2trt with TorchServe in a production environment for months, and it has worked well. Maybe I can try to write an example of YOLOv5 object detection with torch2trt.

@msaroufim
Member

msaroufim commented Feb 9, 2022

Let me know if you need any help! Happy to spend any amount of time to unblock you. If you only make a new example instead of changing the base handler, a PR like that can be merged immediately.

And out of curiosity which company do you work at? We're always looking to highlight production users for torchserve.

@pallashadow
Author

I created a GitHub repo using the self._load_torchscript_model override trick mentioned above. It's a production-ready demo with Yolov5_face + TorchServe + TensorRT + Docker.
https://github.com/pallashadow/yolov5face_torchserve_tensorrt

@msaroufim
Member

msaroufim commented Feb 16, 2022

I love it! Honestly, you can contribute it as-is to the examples directory. Would love to have this. And you can link your main repo back from the README in the example.

I'm also planning on adding a link to your code directly from the main TorchServe README; this is an extremely valuable contribution. https://github.com/pytorch/serve/blob/de301a55aae7894b963e9f323ae08b255434ab49/README.md

@HamidShojanazeri
Collaborator

Thanks @pallashadow, that's a great example of using TRT with TorchServe in production. As @msaroufim mentioned, it is an invaluable contribution and we would love to help get it merged.

@pallashadow
Author

pallashadow commented Feb 21, 2022

@msaroufim, I have seen #1440. I think it should be done with option 1, inheritance, because the handler needs to import torch2trt near the top of the file. I don't know how or where to import it with the other options.
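The import-at-the-top concern could be handled with a guarded module-level import in the derived handler, so the module still imports in environments without torch2trt. A hedged sketch; the flag and helper names here are illustrative, not anything from TorchServe or #1440:

```python
# Guarded optional dependency: torch2trt is only present in TRT-enabled
# environments, so the import failure is caught and recorded.
try:
    from torch2trt import TRTModule
    TORCH2TRT_AVAILABLE = True
except ImportError:
    TRTModule = None
    TORCH2TRT_AVAILABLE = False

def require_torch2trt():
    # Fail fast with a clear message if the optional dependency is missing.
    if not TORCH2TRT_AVAILABLE:
        raise RuntimeError("torch2trt is required for this handler")
```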

@msaroufim
Member

That's great feedback @pallashadow thank you!

@msaroufim
Member

msaroufim commented Mar 30, 2022

@pallashadow would you be interested in making a technical tutorial in pytorch/examples? You could go over how the integration works and talk about the performance improvements you got. Perhaps this article is good inspiration: pytorch/tutorials#1880

I don't know why I didn't do this sooner, but it would also be worth building a custom TensorRT handler.

cc: @HamidShojanazeri

@pallashadow
Author

Sorry for the late reply. Yes, I would like to do it.

@pallashadow
Author

I think it is simply a torch2trt handler, not a full TensorRT handler. torch2trt exposes the full capability of TensorRT, but it cannot handle every use case. Are you sure that is what you want?
I am no longer working on TensorRT optimization due to a recent professional change. I'm sorry, but I don't think I am the right person to carry this project; I would be glad to help if someone else takes charge.
