I am having a lot of trouble getting the camera poses to be handled correctly. I use the same camera poses in other TSDF implementations, and they cause no issues.
I get the camera pose from a file:
Which outputs the following:
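(The original snippet and its output aren't reproduced here, but for reference, a minimal sketch of loading a 4x4 pose from a file might look like the following. The on-disk format is an assumption: 16 whitespace-separated values, row-major, and the function name `load_pose` is a placeholder, not the actual code from the project.)

```python
# Hypothetical sketch: load a 4x4 camera pose from a text file.
# The file format is an assumption (16 whitespace-separated values,
# row-major); the real loading code is not shown in the post.
import numpy as np

def load_pose(path):
    """Read a 4x4 camera pose matrix from a text file."""
    values = np.loadtxt(path)      # accepts 4 rows x 4 cols or 16 flat values
    pose = values.reshape(4, 4)
    # Sanity check: the bottom row of a homogeneous rigid transform
    # should be [0, 0, 0, 1].
    assert np.allclose(pose[3], [0.0, 0.0, 0.0, 1.0]), "not a homogeneous transform"
    return pose
```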
But the call to integrate does not position the point cloud correctly for each view's camera pose:
I noticed that in the Python sample, the pose matrix is inverted before being passed to integrate, but I assumed this was just because of the input format (the C++ sample does not invert):
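One possible reading of that inversion: trajectory files commonly store camera-to-world poses, while a TSDF integrate call typically expects the extrinsic matrix, i.e. world-to-camera, so the inversion is a convention change rather than a format quirk. A sketch of that conversion (variable names are my own; for a rigid transform this closed form matches `np.linalg.inv`):

```python
# Sketch of the pose-convention issue (my assumption, not confirmed by
# the post): trajectory files usually store camera-to-world poses, while
# a TSDF integrate() call typically expects world-to-camera (extrinsic).
import numpy as np

def world_to_camera(pose_c2w):
    """Invert a rigid camera-to-world pose to get the extrinsic matrix.

    For a rigid transform [R | t], the inverse is [R^T | -R^T t]. This is
    what np.linalg.inv returns too; the closed form just avoids a general
    matrix inversion.
    """
    R = pose_c2w[:3, :3]
    t = pose_c2w[:3, 3]
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = R.T
    extrinsic[:3, 3] = -R.T @ t
    return extrinsic
```

If the C++ path is being fed camera-to-world poses where an extrinsic is expected (or vice versa), every view's cloud would be placed wrongly in a consistent way, which matches the symptom described.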
I have tried transforming the pose matrix programmatically using Eigen to try to correct the orientation (even though no rotation should be necessary between the views), but nothing I do seems to correct the orientation of the point cloud. Once the orientation is corrected using affine rotation transformations, the point clouds for two different views are misaligned. The transformation should not be doing any translation. An example rotation transformation:
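(The Eigen snippet isn't reproduced here; a rotation-only correction of this kind might look like the following NumPy stand-in. The axis and angle are illustrative guesses, not the author's actual transform.)

```python
# Hypothetical sketch of a rotation-only pose correction (NumPy stand-in
# for the Eigen code mentioned above; the axis and angle are assumptions).
import numpy as np

def rotation_x(angle_rad):
    """4x4 homogeneous rotation about the X axis with zero translation."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.eye(4)
    R[1, 1], R[1, 2] = c, -s
    R[2, 1], R[2, 2] = s, c
    return R

def correct_pose(pose_c2w, angle_rad=np.pi):
    # Pre-multiplying rotates the pose about the *world* origin, so even
    # though the correction itself carries no translation, it still moves
    # the camera position unless the camera sits at the origin -- one way
    # two "corrected" views can end up misaligned relative to each other.
    return rotation_x(angle_rad) @ pose_c2w
```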
Unfortunately, I can't share the screen captures, but any advice on this would be much appreciated! I would really like to use Open3D for my project, but I've been spending a lot of time working around issues like this that haven't been a problem with other TSDF implementations. I fully realize how hard it is to standardize input across datasets!