
Where are the groundtruth poses used? #5

Closed
jc3342 opened this issue Sep 8, 2021 · 1 comment

jc3342 commented Sep 8, 2021

Thanks for your work!
Maybe I missed something, but when I go through the code, in the dataset class DatasetVisibilityKittiSingle defined in DataVisibilityKitti.py, I don't see the ground truth poses self.GTs_T and self.GTs_R being used anywhere. But without ground truth, how can you train the model?
Another question: when doing inference, T_predicted, R_predicted = models[iteration](rgb, lidar), and error_t is the norm of T_predicted? I thought it should be norm(T_predicted - T_groundtruth).
I guess I am misunderstanding something, but could you please help me with that? Thanks a lot!

cattaneod (Owner) commented

Thank you for your interest in our work.

The ground truth poses are embedded in the preprocessed point clouds: in the preprocessing step, I save the local point cloud expressed in the frame of each camera. This means that the ground truth pose for each (camera, point cloud) pair is the identity.
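
Roughly, the preprocessing idea is something like this (an illustrative sketch only, not the repository's actual code; `map_points` and `T_cam_in_map` are placeholder names):

```python
import numpy as np

def to_camera_frame(map_points, T_cam_in_map):
    """Express map points in the camera frame using the ground-truth camera pose.

    Once the local cloud is saved in this frame, the pose relating it to the
    camera is the identity, so no separate ground-truth pose has to be used.
    """
    pts_h = np.hstack([map_points, np.ones((len(map_points), 1))])  # (N, 4) homogeneous
    local = (np.linalg.inv(T_cam_in_map) @ pts_h.T).T               # map frame -> camera frame
    return local[:, :3]
```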

During training/inference, I apply a random transformation H_init to the point cloud, feed it to the network (together with the image), and the network predicts a transformation H_pred. The composed pose H_init * H_pred should be as close as possible to the identity transformation. This also answers your second question: since the ground truth of the composed pose is the identity, T_groundtruth is zero, and norm(T - T_groundtruth) reduces to the norm of the composed translation itself.
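
To make this concrete, here is a minimal sketch of that setup (illustrative only, not the repository's actual code; all names are placeholders, and a perfect prediction stands in for the network):

```python
import numpy as np

def random_se3(max_t=2.0, max_deg=10.0):
    """Sample a random rigid transform H_init (rotation about z only, for brevity)."""
    rng = np.random.default_rng()
    ang = np.deg2rad(rng.uniform(-max_deg, max_deg))
    H = np.eye(4)
    H[:2, :2] = [[np.cos(ang), -np.sin(ang)],
                 [np.sin(ang),  np.cos(ang)]]
    H[:3, 3] = rng.uniform(-max_t, max_t, size=3)
    return H

H_init = random_se3()           # random decalibration applied to the point cloud
H_pred = np.linalg.inv(H_init)  # a perfect network would predict exactly this
H_composed = H_init @ H_pred    # should be as close as possible to the identity

# Since the ground truth of the composed pose is the identity (T_gt = 0),
# the translation error is just the norm of the composed translation.
error_t = np.linalg.norm(H_composed[:3, 3])
print(error_t)  # ~0 for a perfect prediction
```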

I hope my explanation is clear.
