Thanks for your work!
Maybe I missed something, but when going through the code, in the dataset class DatasetVisibilityKittiSingle defined in DataVisibilityKitti.py, I didn't see the ground-truth poses self.GTs_T and self.GTs_R being used anywhere. Without ground truth, how can the model be trained?
Another question: during inference, T_predicted, R_predicted = models[iteration](rgb, lidar), and error_t is the norm of T_predicted? I thought it should be norm(T_predicted - T_groundtruth).
I guess I'm misunderstanding something. Could you please help me with that? Thanks a lot!
The ground truths are embedded in the preprocessed point cloud, i.e., in the preprocessing step I save the local point cloud for each camera frame. This means that the ground truth for each (camera, point cloud) pair is the identity.
During training/inference, I apply a random transformation H_init to the point cloud, feed it to the network (together with the image), and the network predicts the transformation H_pred. The composed pose H_init*H_pred should be as close as possible to the identity transformation.
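The scheme above can be sketched with 4x4 homogeneous transforms. This is a minimal illustration, not code from the repo: the helper `random_se3` and all variable names are made up for the example. It shows why, with the identity as ground truth, the composed pose `H_init @ H_pred` measures the residual misalignment, and why the translation error reduces to the norm of the (residual) translation, since the ground-truth translation is zero.

```python
import numpy as np

def random_se3(max_trans=0.1, max_rot=0.05, rng=None):
    """Hypothetical helper: a random small rigid transform H_init (4x4)."""
    rng = np.random.default_rng(rng)
    t = rng.uniform(-max_trans, max_trans, 3)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.uniform(-max_rot, max_rot)
    # Rodrigues' formula for the rotation about `axis` by `angle`
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Random perturbation applied to the point cloud before feeding the network
H_init = random_se3(rng=0)

# An ideal network would predict the inverse of that perturbation
H_pred = np.linalg.inv(H_init)

# The composed pose should then be (close to) the identity ...
composed = H_init @ H_pred

# ... so the translation error is just the norm of the residual translation,
# because the ground-truth translation is the zero vector:
T_gt = np.zeros(3)
err_t = np.linalg.norm(composed[:3, 3] - T_gt)  # == norm(composed translation)
```

For an imperfect prediction, `composed` deviates from the identity, and `err_t` (together with the angle of the residual rotation) quantifies the remaining misalignment.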