implement a batch inference #781
Labels: enhancement (New feature or request)
Comments
see #25
I would recommend using …
Closed
rollingman1 pushed a commit to rollingman1/mmpose that referenced this issue on Nov 5, 2021: * first commit * update docs * add unittest * update changelog
For those who are still stuck on this issue, here is a quick fix: "img_metas = img_metas.data[0]"
@dongrongliang Where should the fix be applied, and which version are you using?
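For anyone wondering where that one-liner fits: a minimal, self-contained sketch of why it is needed, assuming the batch was built with mmcv's `collate` (which wraps `img_metas` in a `DataContainer`). The sample dicts and the `image_file` key below are purely illustrative, not the exact mmpose pipeline output.

```python
import torch
from mmcv.parallel import DataContainer, collate

# Two toy samples shaped roughly like pose-pipeline output: an image
# tensor plus per-sample metadata wrapped in a cpu-only DataContainer.
samples = [
    dict(
        img=torch.zeros(3, 4, 4),
        img_metas=DataContainer(dict(image_file=f'img_{i}.jpg'), cpu_only=True))
    for i in range(2)
]

batch = collate(samples, samples_per_gpu=2)
print(type(batch['img_metas']))  # DataContainer, not a plain list

# The quick fix quoted above: unwrap the DataContainer so that
# len(img_metas) matches img.size(0) inside the model's forward.
img_metas = batch['img_metas'].data[0]
print(len(img_metas), batch['img'].size(0))  # 2 2
```

On the GPU path this unwrapping is normally handled by `mmcv.parallel.scatter`, so the manual `.data[0]` step would presumably go on a code path where nothing else unwraps the container.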
I would like to modify top_down_pose_tracking_demo_with_mmdet.py for batch inference.
After reading #608, I aggregate the bboxes from multiple images before collating the batch.
However, I have a question about this line of code:
mmpose/mmpose/apis/inference.py, line 299 (commit e7f929f)
Since the model is not fully utilizing the GPUs, I want to increase samples_per_gpu, but I get the error below:
assert img.size(0) == len(img_metas)
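For reference, a minimal sketch of the kind of batched call being described, assuming the per-bbox samples have already been produced by the test pipeline and that the model follows mmpose's usual `forward(img=..., img_metas=..., return_loss=False)` test interface. The function name `batched_pose_forward` is made up for illustration; the CPU branch mirrors the quick fix quoted earlier in the thread.

```python
import torch
from mmcv.parallel import collate, scatter


def batched_pose_forward(model, samples):
    """Run a single forward pass over all pipeline samples at once.

    `samples` is a list of dicts produced by the test pipeline, one per
    detected bbox (possibly aggregated from several images). Illustrative
    sketch only, not the actual mmpose API.
    """
    batch = collate(samples, samples_per_gpu=len(samples))
    if next(model.parameters()).is_cuda:
        # scatter moves tensors to the GPU and unwraps DataContainers,
        # so img.size(0) == len(img_metas) holds inside the model
        device = next(model.parameters()).device
        batch = scatter(batch, [device])[0]
    else:
        # on CPU, unwrap img_metas manually (the quick fix from above)
        batch['img_metas'] = batch['img_metas'].data[0]
    with torch.no_grad():
        result = model(
            img=batch['img'],
            img_metas=batch['img_metas'],
            return_loss=False)
    return result
```

The key point is that the `samples_per_gpu` value passed to `collate` has to match the number of samples actually handed to the model on one GPU; otherwise the collated `img` tensor and the unwrapped `img_metas` list end up with different lengths and the assertion above fires.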