
About obtaining pose from point clouds #6

Closed

vtasStu opened this issue Apr 28, 2022 · 2 comments

Comments

@vtasStu

vtasStu commented Apr 28, 2022

Hi Yan Di, I see that the input of your network is P = points - points.mean(dim=1, keepdim=True), where points is obtained by back-projecting the depth map.

To my limited knowledge, two pieces of information are needed to obtain the pose (only R is discussed here): the points P = R @ P_ori after the rotation R has been applied, and the original points P_ori.

The input of the network only contains P = R @ P_ori, so how does the network recover the rotation R without knowing the original points P_ori?
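
For reference, the preprocessing described above can be sketched roughly as follows. This is not the repository's actual code; the pinhole intrinsics fx, fy, cx, cy, the placeholder batch, and the tensor shapes are assumptions made purely for illustration:

```python
import torch

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into a camera-frame point cloud (M, 3).

    fx, fy, cx, cy are hypothetical pinhole intrinsics; the repository may
    obtain or apply them differently.
    """
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h, dtype=depth.dtype),
                          torch.arange(w, dtype=depth.dtype),
                          indexing="ij")
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = torch.stack([x, y, depth], dim=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # keep only pixels with valid depth

# points: (B, N, 3) observed point clouds sampled from the back-projection.
points = torch.rand(2, 1024, 3)        # placeholder batch for illustration
# Subtracting the per-cloud mean removes the (unknown) translation, so the
# network only sees a zero-centered, rotated copy of the observed object.
P = points - points.mean(dim=1, keepdim=True)
```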

@shangbuhuan13

During training, we know the ground-truth 9D pose and use it to supervise the network to learn how to transform the observed point cloud into the canonical space.
The canonical coordinate space is pre-defined, as in the NOCS paper.
So during inference, we don't need P_ori in the canonical space.
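
For concreteness, a minimal sketch of this kind of canonical-space supervision might look like the following. It is not the actual training code of this repository; the function name, the predicted output `nocs_pred`, and the smooth-L1 loss choice are assumptions for illustration:

```python
import torch

def canonical_supervision_loss(points_obs, nocs_pred, R_gt, t_gt, s_gt):
    """Sketch of supervising per-point canonical (NOCS-style) coordinates.

    points_obs : (B, N, 3) observed camera-frame points
    nocs_pred  : (B, N, 3) canonical coordinates predicted by the network
    R_gt       : (B, 3, 3) ground-truth rotation
    t_gt       : (B, 3)    ground-truth translation
    s_gt       : (B,)      ground-truth scale

    The ground-truth 9D pose maps the observed points back into the
    pre-defined canonical space; the network is trained to reproduce that
    mapping, so no canonical model P_ori is needed at test time.
    """
    # P_canonical = R^T @ (P_obs - t) / s   (inverse of the ground-truth pose)
    centered = points_obs - t_gt.unsqueeze(1)                          # (B, N, 3)
    canonical_gt = torch.einsum("bij,bnj->bni", R_gt.transpose(1, 2), centered)
    canonical_gt = canonical_gt / s_gt.view(-1, 1, 1)
    return torch.nn.functional.smooth_l1_loss(nocs_pred, canonical_gt)
```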

@vtasStu
Author

vtasStu commented Apr 29, 2022

Thank you very much for your reply. I think a sentence from the following paper may explain my confusion:

Understanding the Limitations of CNN-based Absolute Camera Pose Regression (CVPR 2019)

We have also shown that APR (an end-to-end method) is more closely related to image retrieval approaches than to methods that accurately estimate camera poses via 3D geometry.

vtasStu closed this as completed on Apr 29, 2022