Training pipeline of PoseNet #29
Hi, the details about PoseNet training are in the paper supplement, Section B.1. The training data are rendered on the fly. For more details, see the training code of PoseNet here, as well as the rendering + augmentation pipeline. Unfortunately, the sheep mesh is under a commercial license and we are not allowed to release it.
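For readers without access to the meshes, the "rendered on the fly" idea amounts to sampling a random root pose each iteration, rendering the canonical mesh under it, and training PoseNet to regress that pose back. A minimal sketch of the pose-sampling side, with the renderer itself omitted (all names here are hypothetical, not BANMo's actual API):

```python
import torch

def sample_root_pose(batch_size):
    """Sample random root rotations (unit quaternions) and small translations.

    These serve as ground-truth regression targets; the canonical mesh would
    then be rendered under each sampled pose to produce the training images.
    Normalizing a 4-D Gaussian sample yields a uniformly random rotation.
    """
    quat = torch.randn(batch_size, 4)
    quat = quat / quat.norm(dim=1, keepdim=True)  # normalize -> unit quaternion
    trans = 0.1 * torch.randn(batch_size, 3)      # small random translation
    return quat, trans

quat, trans = sample_root_pose(8)
```

Augmentation (random crops, color jitter, backgrounds) would be applied to the rendered images before they reach PoseNet; the sketch above only covers the pose-target side.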
Thanks for your reply! I have a couple more follow-up questions.
Hi @gengshan-y, after looking through the training code of PoseNet, I would like to ask: did you use only a single human mesh to train PoseNet for humans, and a single sheep mesh to train PoseNet for quadruped animals? Setting aside the variation across individual humans and animals, which the pretrained CSE features might handle well, how can your PoseNet predict the root pose accurately when the objects take on such varied poses?
Yes. The initial poses passed into BANMo are indeed noisy due to deformations and shape variations. BANMo updates the root poses during optimization. See here.
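The update mechanism can be pictured as storing a per-frame correction on top of the PoseNet initialization as a learnable parameter and letting gradient descent refine it alongside the rest of the model. A toy stand-in, not BANMo's actual code (the fixed target rotation here plays the role of whatever pose the reconstruction loss favors):

```python
import torch

def skew(v):
    """Map axis-angle vectors (N, 3) to skew-symmetric matrices (N, 3, 3)."""
    z = torch.zeros_like(v[:, 0])
    return torch.stack([
        torch.stack([z, -v[:, 2], v[:, 1]], dim=-1),
        torch.stack([v[:, 2], z, -v[:, 0]], dim=-1),
        torch.stack([-v[:, 1], v[:, 0], z], dim=-1),
    ], dim=-2)

# learnable per-frame correction on top of the (noisy) PoseNet initialization
delta = torch.nn.Parameter(torch.zeros(1, 3))
opt = torch.optim.Adam([delta], lr=0.05)

# hypothetical "true" residual rotation that the reconstruction loss prefers
R_target = torch.matrix_exp(skew(torch.tensor([[0.3, -0.2, 0.1]])))

for _ in range(300):
    opt.zero_grad()
    R = torch.matrix_exp(skew(delta))    # exponential map: axis-angle -> SO(3)
    loss = ((R - R_target) ** 2).sum()   # stand-in for the reconstruction loss
    loss.backward()
    opt.step()
```

The point is simply that the root pose is not frozen at the PoseNet output: it stays differentiable, so a noisy initialization can be corrected during optimization.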
Thank you for your quick response. But I would like to clarify a little bit: my question is about the function forward_warmup that you used to pre-train PoseNet, whose weights are stored in human.pth and quad.pth.
That is correct; we use only the rest shape of the sheep/human to train PoseNet.
Hi @gengshan-y . Thanks for your amazing work and for sharing the code! I see that the sheep mesh cannot be released. Can you suggest a way to train PoseNet on our custom mesh? I suppose we cannot get CSE embeddings for our custom mesh. |
Hi, I think there is no off-the-shelf solution. The most straightforward way to get vertex embedding is to follow the CSE solution, where one can label corresponding 3D keypoints on a canonical mesh, and label 2D keypoints on images, from which you can learn a vertex embedding that corresponds to pixel features. Starting from a mesh with vertex features, you would be able to train banmo's posenet following the paper. |
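The labeling recipe above can be sketched as a small matching problem: each annotated 3D keypoint (a vertex of the canonical mesh) should pick out the frozen image feature at its labeled 2D location. Everything below is a toy stand-in under stated assumptions (random features instead of a real image backbone; all names hypothetical):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_verts, dim, num_kps = 1000, 16, 50

# learnable per-vertex embedding for the canonical mesh
vert_embed = torch.nn.Parameter(0.01 * torch.randn(num_verts, dim))

# assumed annotations: keypoint i sits at vertex kp_vert[i]; pix_feat[i] is
# the frozen image feature at its labeled 2D location (random stand-ins here)
kp_vert = torch.randperm(num_verts)[:num_kps]
pix_feat = torch.randn(num_kps, dim)

opt = torch.optim.Adam([vert_embed], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    e = F.normalize(vert_embed[kp_vert], dim=1)
    f = F.normalize(pix_feat, dim=1)
    logits = e @ f.t() / 0.1  # cosine similarity with temperature 0.1
    # each keypoint's vertex embedding should match its own pixel feature
    loss = F.cross_entropy(logits, torch.arange(num_kps))
    loss.backward()
    opt.step()
```

Once the per-vertex embeddings align with pixel features in this way, the mesh has the vertex features PoseNet training needs, and the pipeline from the paper can proceed on the custom mesh.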
Hello there!
Thanks a lot for sharing your work!
I have a couple of questions:
1. What is the dataset you used to train the PoseNet for root pose initialization?
2. What is the occ from the optical flow model? It seems that it is loaded in the dataloader but is not used anywhere in training.