MMDeploy exported models on Nvidia Jetson devices #122
MMDeploy looks good, and I have a few questions about deploying the exported model on an Nvidia edge device that I could not find answered on Read the Docs or the GitHub pages. Apologies in advance if the questions are very basic in nature:

For Nvidia deployment, we will be selecting the TensorRT option. Once I convert the model and get the SDK using my Mac, will I still need to install PyTorch on the Nvidia device that I want to use for inference? In other words, my Nvidia device already has JetPack installed, which includes TensorRT. So, can I 1) simply run the converted model, 2) do I need the SDK along with the converted model, or 3) do I need the SDK, the converted model, and PyTorch plus the other installations suggested by MMPose?

I intend to use the model for pose detection using MMPose. Will there be any additional dependencies?

Update: I am planning to install MMDeploy on my Mac and then get the converted model and SDK onto my inference device, which is an Nvidia Jetson.

Comments
@RS00001 Hi, you could refer to the tutorial how_to_install_mmdeploy_on_jetsons.md for Jetson devices. BTW, we currently do not support any MMPose models in MMDeploy yet, but support is coming soon.
@RunningLeon Thank you for the help. If you don't mind, could you please clarify something for me? My intention is to generate the model and the SDK using my Mac and then get these deployed artifacts onto my Jetson Nano. Is the installation tutorial above for the device where I run inference, or for the (Mac) device where I generate the SDK and the final model for TensorRT deployment? Also, will the device where I run inference still need PyTorch installed? I am assuming that because we are converting this model to TensorRT, there will be no need to install PyTorch to run inference. Is that correct?
@RS00001 Hi, how_to_install_mmdeploy_on_jetsons.md is for installing mmdeploy on the Jetson Nano device. Once you have mmdeploy installed, you can convert the PyTorch model to ONNX and then to a TensorRT engine. With the TensorRT engine, you can do the deployment.
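For reference, the conversion is normally driven by mmdeploy's tools/deploy.py script. A minimal sketch for an MMDetection model is below; the deploy config, model config, checkpoint, and image paths are examples and should be adapted to your setup:

```bash
# Run from the root of the mmdeploy repository on the Jetson.
# The deploy config selects the TensorRT backend; the model config and
# checkpoint come from the upstream codebase (here: mmdetection).
python tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    ../mmdetection/configs/retinanet/retinanet_r50_fpn_1x_coco.py \
    checkpoints/retinanet_r50_fpn_1x_coco.pth \
    demo/demo.jpg \
    --work-dir work_dir \
    --device cuda:0 \
    --dump-info   # also writes the metadata files the SDK needs
```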
Thanks @RunningLeon
I followed the instructions and have spent the whole day today trying to install MMDeploy, which includes installing MMCV. I tried both installing from source and the pre-compiled packages. In both cases, it gets stuck at the "Building setup.py..." stage, and even after hours there is no result. Could you please help?
@RS00001 Some workarounds might be necessary due to the old Python 3.6 version. It sounds like you are running into similar issues as I did. Which package does it get stuck building? Maybe first check and list the versions of your OS and installed packages.
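The specific tool referenced here is not shown in the thread; as a stand-in, these standard commands gather the same information on a Jetson (the grep pattern is just an example):

```bash
# Report the L4T (JetPack) release installed on the Jetson.
cat /etc/nv_tegra_release

# Python interpreter and build-relevant package versions.
python3 --version
pip3 list | grep -iE 'torch|mmcv|onnx|tensorrt'
```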
In (very) broad terms, I did the following (some of these steps might not be relevant to you). The steps below are still very much a WIP; maybe they can be useful for the mmdeploy documentation.
@tehkillerbee A big thank you for the detailed steps! I will give them a go. BTW, I think you mentioned in your initial comment that the inference performance with TensorRT may not match that of PyTorch even on the AGX. We are planning to deploy on a Nano or maybe a Xavier NX, which are smaller devices. Any suggestions for improving the inference performance on those? Also, if we find a way to build a higher version of Torch and TorchVision along with Python 3.8, will that help? The reason we are going with Torch 1.10 is that it is the highest version I found on the Nvidia forum here: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048
@RS00001 Well, the TensorRT backend is much faster than PyTorch on both my PC and my Jetson AGX. However, the detections themselves appear different, somewhat distorted, and the bounding boxes are also shifted slightly; I have created issue #127. Maybe a bug in mmdeploy? It could also be an incompatibility with the version of TensorRT 7 bundled with the old JetPack 4.5 I am using, so I will try to upgrade to JetPack 4.6 ASAP. I never experienced this issue when deploying TensorRT using mmdetection-to-tensorrt by @grimoire.
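On the earlier question about squeezing more performance out of a Nano or Xavier NX: one common lever is converting with an fp16 TensorRT deploy config instead of the fp32 default. A rough sketch is below; the fp16 config name is an assumption, so verify it exists in your mmdeploy checkout:

```bash
# Same conversion as before, but with an fp16 TensorRT deploy config.
# fp16 roughly halves memory traffic and is well supported on Jetson GPUs.
python tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt-fp16_dynamic-320x320-1344x1344.py \
    ../mmdetection/configs/retinanet/retinanet_r50_fpn_1x_coco.py \
    checkpoints/retinanet_r50_fpn_1x_coco.pth \
    demo/demo.jpg \
    --work-dir work_dir_fp16 \
    --device cuda:0
```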
@tehkillerbee - got it, thanks. BTW, were you able to install it on your AGX with higher Python and PyTorch versions?
@RS00001 Sure, I used the latest(?) PyTorch 1.10.0 listed by Nvidia. I also used the latest mmdeploy, although with some modifications to build against TensorRT 7.
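For context, building mmdeploy's TensorRT custom ops from source goes through CMake. A minimal sketch, assuming JetPack's usual TensorRT location (the TENSORRT_DIR path is a placeholder, and flag names should be checked against your mmdeploy version):

```bash
# Build the TensorRT custom ops that converted models rely on.
cd mmdeploy
mkdir -p build && cd build
cmake .. \
    -DMMDEPLOY_TARGET_BACKENDS="trt" \
    -DTENSORRT_DIR=/usr/lib/aarch64-linux-gnu   # placeholder: JetPack's TensorRT install
make -j$(nproc)
```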
@tehkillerbee - did that work with Python 3.7 or later?
@RS00001 I have only tested with 3.6.
@tehkillerbee - I am also trying with 3.6 and with the steps you provided. However, I am getting stuck at Torchvision installation. It is failing with a runtime error. Let me know if you encountered anything like this...... (mmdeploy2) nano4g@nano4g-desktop:~/torchvision$ python3 setup.py install --user The above exception was the direct cause of the following exception: Traceback (most recent call last): |
@RS00001 It looks like you are running out of RAM and the build process was killed. I guess that makes sense since you are using a Jetson Nano 4GB, whereas I am using either 8GB or 32GB platforms. Try increasing your swap and see if it helps.
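For anyone hitting the same out-of-memory kill, a standard way to add swap on a Jetson is below; the 4G size is an assumption, so pick whatever your SD card or eMMC can spare:

```bash
# Create and enable a 4 GB swap file to survive memory-heavy builds.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify the new swap is active.
free -h
```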
@tehkillerbee thank you! I tried with the Xavier NX, which has 8GB of RAM, and it worked. The steps you provided earlier were super useful. Did MMDeploy work for you with MMPose models? It works for MMDetection, but I am yet to test it with MMPose. Please let me know if you had any success with that.
@RS00001 Great to hear! I have not tried MMPose models myself, so I cannot give any advice there.
Closed due to age.