Hi @cqs0925, thanks for using RTMPose. Updates to the models on the Deploee platform may lag behind the official RTMPose documentation, so you'll need to download the latest models from the RTMPose homepage. The model used in the online demo is trained on the …; you can find the … and convert them into …. BTW, the code of the online demo has also been released; you can access it here: …
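For example, if what you need are ONNX files (my assumption here, since the Deploee models are .onnx), a conversion sketch with MMDeploy's `torch2onnx` API could look like the following. Every path and config name below is a placeholder, so substitute the model config and checkpoint you actually downloaded from the RTMPose homepage:

```python
# Sketch only: export a downloaded RTMPose checkpoint to ONNX with MMDeploy.
# Argument order follows the MMDeploy model-conversion docs; all file paths
# and config names are placeholders for illustration.
from mmdeploy.apis import torch2onnx

torch2onnx(
    'demo.jpg',                                      # img: any sample image used for tracing
    'work_dir/rtmpose_onnx',                         # work_dir: where outputs are written
    'rtmpose.onnx',                                  # save_file: name of the exported model
    'pose-detection_simcc_onnxruntime_dynamic.py',   # deploy_cfg: an MMDeploy pose-detection config
    'rtmpose-m_8xb256-420e_coco-256x192.py',         # model_cfg: the mmpose config you downloaded
    'rtmpose-m.pth',                                 # model_checkpoint: the downloaded checkpoint
    'cpu',                                           # device used for the export
)
```

The `tools/deploy.py` script in the MMDeploy repo does the same conversion from the command line if you prefer that route.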
-
Hi all,
Thanks for the good work.
I successfully ran inference on a single CPU with only a minimal set of packages. However, the skeleton I extract is not as accurate as the one from the website demo (https://openxlab.org.cn/apps/detail/mmpose/RTMPose) on the example image, and I tried all of the .onnx models from the official website https://platform.openmmlab.com/deploee.
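For reference, my minimal CPU pipeline is roughly the sketch below. The file names, input size, normalization values, and SimCC split ratio are my own assumptions rather than anything from the official docs, and I use a plain resize instead of the proper top-down affine crop, which may itself explain part of the accuracy gap:

```python
# Minimal sketch: RTMPose ONNX inference on CPU with onnxruntime + OpenCV + NumPy.
# Assumptions: a 3x256x192 RGB input, ImageNet-style mean/std normalization, and
# two SimCC outputs (x then y) with a split ratio of 2.0 -- verify all of these
# against the deploy config of the model you downloaded.
import cv2
import numpy as np
import onnxruntime as ort

MODEL = "rtmpose.onnx"          # placeholder path to the downloaded model
IMAGE = "person_crop.jpg"       # a pre-cropped single-person image
INPUT_W, INPUT_H = 192, 256     # assumed input size
SIMCC_SPLIT_RATIO = 2.0         # assumed RTMPose default

sess = ort.InferenceSession(MODEL, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

img = cv2.imread(IMAGE)
h0, w0 = img.shape[:2]

# Plain resize instead of mmpose's top-down affine crop -- fine for a sanity
# check, but one reason results can differ from the official demo.
resized = cv2.resize(img, (INPUT_W, INPUT_H))
rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32)
mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
std = np.array([58.395, 57.12, 57.375], dtype=np.float32)
blob = ((rgb - mean) / std).transpose(2, 0, 1)[None]  # (1, 3, H, W)

simcc_x, simcc_y = sess.run(None, {input_name: blob})

# SimCC decoding: argmax along each axis, then rescale to original-image pixels.
xs = simcc_x[0].argmax(axis=1) / SIMCC_SPLIT_RATIO * (w0 / INPUT_W)
ys = simcc_y[0].argmax(axis=1) / SIMCC_SPLIT_RATIO * (h0 / INPUT_H)
keypoints = np.stack([xs, ys], axis=1)  # (num_keypoints, 2)
print(keypoints)
```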
Does the website demo use a different model? Will it be released?
Thanks again.