I noticed that in the Python sample, the pose matrix is inverted before being passed into the call to `integrate`, but I assume this is just an input-format difference, since the C++ sample does not invert:
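If I understand the convention correctly (this is my assumption, not something I've confirmed in the docs), the inversion converts a camera-to-world pose into the world-to-camera extrinsic that the integration call expects. Roughly, with a made-up pose:

```python
import numpy as np

# Hypothetical 4x4 camera pose (camera-to-world); the translation
# values are made up purely for illustration.
pose = np.eye(4)
pose[:3, 3] = [0.5, 0.0, 1.0]

# If integrate() expects a world-to-camera extrinsic, the Python
# sample's inversion would amount to:
extrinsic = np.linalg.inv(pose)

print(extrinsic[:3, 3])  # translation is negated when rotation is identity
```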
I have tried transforming the pose matrix programmatically using Eigen to correct the orientation (even though no rotation should even be necessary between the views), but nothing I do seems to correct the orientation of the point cloud. And once the orientation is corrected using affine rotation transformations, the point clouds for two different views are misaligned. The transformation shouldn't be applying any translation. An example rotation transformation:
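To illustrate the kind of pure-rotation fix I mean (in C++ this was an Eigen affine transform; written here in NumPy for brevity), a 180-degree flip about the x-axis is a common guess for camera-convention mismatches — the specific axis and angle are assumptions:

```python
import numpy as np

# Hypothetical correction: rotate 180 degrees about the x-axis,
# i.e. negate the y and z axes of the camera frame.
flip_x = np.array([
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.0],
    [0.0,  0.0,  0.0, 1.0],
])

pose = np.eye(4)  # placeholder pose; real poses come from the dataset
corrected = flip_x @ pose

# The rotation block changes, but no translation is introduced:
print(corrected[:3, 3])
```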
Unfortunately, I can't share the screen captures, but any advice on this would be much appreciated! I would really like to use Open3D for my project, but I've been spending a lot of time working around things like this that haven't been a problem with other TSDF implementations. I fully realize how hard it is to standardize input across datasets!