convert mmpose model to ONNX for TensorRT #131
Comments
This will very likely be added in the next iteration.
That would be fantastic. I can help implement C++ inference code using TensorRT, for the bottom-up approach.
Maybe this PR open-mmlab/mmaction2#160 is helpful.
Thanks
I find that the bottom-up approach involves matrix computations.
NumPy and torch can be very fast for matrix operations, so there is usually no need for a C++ reimplementation. Could you please open a new issue with details on where the slowness is? We will try to optimize it once the bottleneck is clear.
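To illustrate the point above: vectorized NumPy operations dispatch to compiled BLAS code, so a matrix multiply written as a single expression is typically orders of magnitude faster than an equivalent pure-Python loop, with identical results. This is a generic sketch, not mmpose code; the array sizes are arbitrary.

```python
# Illustrative only: vectorized NumPy vs. a pure-Python loop.
import numpy as np

rng = np.random.default_rng(0)
n = 60
a = rng.random((n, n))
b = rng.random((n, n))

# BLAS-backed matrix multiply: one expression, runs in compiled code.
c_fast = a @ b

# Equivalent pure-Python triple loop: same result, far slower.
c_slow = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        c_slow[i, j] = sum(a[i, k] * b[k, j] for k in range(n))

assert np.allclose(c_fast, c_slow)
```

This is why profiling to find the actual bottleneck matters before reaching for a C++ rewrite: if the hot path is already a handful of vectorized matrix operations, there is little left for C++ to gain.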
Hi @khanhha,
Hello,
I would like to ask whether it is possible to convert MMPose models to ONNX format for use with TensorRT, for better real-time performance.
Thanks