[Enhancement] Accelerate Associative Embedding inference #1099
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1099      +/-   ##
==========================================
- Coverage   83.39%   83.38%   -0.01%
==========================================
  Files         196      196
  Lines       15142    15144       +2
  Branches     2736     2736
==========================================
+ Hits        12627    12628       +1
  Misses       1823     1823
- Partials      692      693       +1
Have you tested the model inference? Could it produce the same accuracy as before?
I tested the accuracy on two different backbones. The results can be found in https://openmmlab.feishu.cn/docs/doccnZWiwgGNh3fuxcTQv49f8qg.
Lint fails. Please re-run pre-commit.
Please change the target branch of this pull request to dev-0.22.
Motivation
Improve the inference speed of bottom-up human pose estimators.
Modification
Re-implement `AssociativeEmbedding.refine` with torch instead of numpy. This accelerates tensor operations and avoids moving large tensors from GPU to CPU. `AssociativeEmbedding.parse` is also adjusted to fit the changes in `AssociativeEmbedding.refine`, as illustrated by the sketch below.
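To give a rough idea of the technique (this is not the actual mmpose implementation; the function name, signature, and the quarter-pixel shift rule are assumptions for illustration only), a keypoint refinement step can be written entirely with torch indexing so the heatmaps stay on the GPU and are never copied back to numpy:

```python
import torch


def refine_keypoints_torch(heatmaps: torch.Tensor,
                           keypoints: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: shift each keypoint by +/-0.25 px toward the
    larger neighboring heatmap value, using torch ops only (no .cpu()/numpy
    round-trip).

    Args:
        heatmaps: float tensor of shape (K, H, W), kept on its device (e.g. GPU).
        keypoints: float tensor of shape (K, 2) with (x, y) in heatmap coords.

    Returns:
        A refined copy of ``keypoints`` on the same device as ``heatmaps``.
    """
    K, H, W = heatmaps.shape
    keypoints = keypoints.to(heatmaps.device)
    refined = keypoints.clone()

    # Round to integer grid locations, clamped so that +/-1 neighbors exist.
    xs = keypoints[:, 0].round().long().clamp(1, W - 2)
    ys = keypoints[:, 1].round().long().clamp(1, H - 2)
    k_idx = torch.arange(K, device=heatmaps.device)

    # Compare left/right and up/down neighbors on the device, no Python loop:
    # +0.25 if the larger neighbor is to the right/bottom, otherwise -0.25.
    dx = (heatmaps[k_idx, ys, xs + 1] > heatmaps[k_idx, ys, xs - 1]).float() * 0.5 - 0.25
    dy = (heatmaps[k_idx, ys + 1, xs] > heatmaps[k_idx, ys - 1, xs]).float() * 0.5 - 0.25

    refined[:, 0] += dx
    refined[:, 1] += dy
    return refined
```

The point of the sketch is the design choice the PR describes: all intermediate results remain torch tensors on the original device, so the per-person, per-keypoint refinement no longer forces a GPU-to-CPU transfer of the full heatmaps.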
BC-breaking (Optional)
Use cases (Optional)
Checklist
Before PR:
After PR: