I am having a lot of trouble getting the camera poses to be correct. I use the same camera poses in other TSDF implementations, and they cause no issues there.
I get the camera pose from a file:
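A minimal sketch of such a loader, assuming the pose is stored as four whitespace-separated rows in a text file (the path, layout, and function name are illustrative):

```cpp
#include <fstream>
#include <string>
#include <Eigen/Dense>

// Read a 4x4 camera pose stored as four whitespace-separated rows of four
// values each. The row-major text layout is an assumption for illustration.
Eigen::Matrix4d LoadPose(const std::string& path) {
    Eigen::Matrix4d pose = Eigen::Matrix4d::Identity();
    std::ifstream file(path);
    for (int row = 0; row < 4; ++row) {
        for (int col = 0; col < 4; ++col) {
            file >> pose(row, col);
        }
    }
    return pose;
}
```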
Which outputs the following:
```
 0.998357 -0.027082  0.050504 0.070919
 0.053046  0.103227 -0.993242 1.02161
 0.021686  0.994289  0.104494 1.5307
 0         0         0        1
```
But the call to integrate does not position the point cloud correctly for each view's camera pose:
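A sketch of the integration call with Open3D's ScalableTSDFVolume (namespaces follow recent Open3D releases and may differ in older versions; the volume parameters are illustrative, and `rgbd` and `intrinsic` are assumed to hold the current RGBD frame and pinhole intrinsics):

```cpp
#include <open3d/Open3D.h>

// Illustrative volume parameters; real values depend on scene scale.
open3d::pipelines::integration::ScalableTSDFVolume volume(
        4.0 / 512.0,  // voxel_length
        0.04,         // sdf_trunc
        open3d::pipelines::integration::TSDFVolumeColorType::RGB8);

// Integrate expects the extrinsic (world-to-camera) matrix. If `pose` is
// camera-to-world, passing it directly will place the cloud incorrectly.
volume.Integrate(*rgbd, intrinsic, pose);
```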
I noticed that in the Python sample, the pose matrix is inverted before being passed to the call to integrate, but I assume this is just because of the input format (as the C++ sample does not invert):
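A sketch of the Eigen equivalent of the Python sample's `np.linalg.inv` (assuming `pose` is a camera-to-world matrix; Open3D's Integrate takes a world-to-camera extrinsic, which is why the Python sample inverts):

```cpp
// If the trajectory file stores camera-to-world poses, the world-to-camera
// extrinsic expected by Integrate is the inverse.
Eigen::Matrix4d extrinsic = pose.inverse();
volume.Integrate(*rgbd, intrinsic, extrinsic);
```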
I have tried transforming the pose matrix programmatically using Eigen to try to correct the orientation (even though no rotation should be necessary between the views), but nothing I do seems to correct the orientation of the point cloud. Once the orientation is corrected using affine rotation transformations, the point clouds for two different views are misaligned. The transformation should not introduce any translation. An example rotation transformation:
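A sketch of such a rotation-only correction with Eigen (the angle and axis are illustrative):

```cpp
#include <cmath>
#include <Eigen/Geometry>

// Rotation-only affine correction: no translation component is introduced.
// A 90-degree rotation about X is a common fix between Y-up and Z-up
// coordinate conventions; the angle and axis here are illustrative.
Eigen::Affine3d correction(
        Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitX()));
Eigen::Matrix4d corrected_pose = correction.matrix() * pose;
```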
Unfortunately, I can't share the screen captures, but any advice on this would be much appreciated! I would really like to use Open3D for my project, but I've been spending a lot of time working around things like this, which hasn't been a problem with other TSDF implementations. I fully realize how hard it is to standardize input across datasets!
Hi @nigeljw1. It is hard to understand the issue from the description alone. It may be caused by inaccurate camera trajectories. Can you share a minimal example or code so that I can help?