Question about 2d annotation during testing in test datasets #4
Comments
For internet videos, we use the 2D keypoints detected by AlphaPose. For benchmarks, we use the annotations provided with the datasets themselves, following previous works such as ISO, etc.
Thanks for your detailed answer!
Sorry to bother you again. I still have a small question about the 2D benchmark annotations. For the benchmarks, do you use the training data of the target domain, or do you run online adaptation directly on the test data of the target domain after training on the source domain (H36M)?
Don't be sorry. Feel free to contact me using GitHub or email.
Thanks for your patience! My question is solved.
Hi @syguan96 |
Yes, I reproject the 3D skeleton of SMPL into image space.
I tried doing this, but the camera poses seemed to be very bad. How do you deal with that?
In my experience, the annotated camera poses are accurate. You can find the refined 2D poses in File 1 (see the README).
Got it, thank you very much! |
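For anyone else following this thread: the reprojection step described above can be sketched with a standard pinhole camera model. This is a minimal illustration, not the repository's actual code; the function name `reproject` and all variable names are my own, and it assumes the annotated camera pose is given as a rotation `R`, translation `t`, and intrinsics matrix `K`.

```python
import numpy as np

def reproject(joints_3d, R, t, K):
    """Project 3D joints (N, 3) in world coordinates to 2D pixel
    coordinates, given camera extrinsics (R, t) and intrinsics K."""
    cam = joints_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # apply the intrinsics matrix
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide by depth

# Toy example: identity rotation, focal length 1000 px,
# principal point at (500, 500), camera 5 m in front of the subject.
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
joints = np.zeros((1, 3))              # a single joint at the world origin

print(reproject(joints, R, t, K))      # -> [[500. 500.]]
```

A joint on the optical axis projects to the principal point, which is a quick sanity check that the extrinsics and intrinsics are being applied in the right order.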
Hello! Thanks for your great work!
I'm a bit confused about the 2D annotations used during online adaptation. Do you use the 2D ground-truth keypoints of the test set, or do you obtain the 2D keypoints by feeding the frames into an off-the-shelf 2D pose estimator?