
Camera Pose/Trajectory Extrinsic #504

Closed
nigeljw1 opened this issue Aug 13, 2018 · 2 comments
@nigeljw1 commented Aug 13, 2018

I am having a lot of trouble trying to get the camera poses to be correct. I use the same camera poses with other TSDF implementations without any issues.

I get the camera pose from a file:

// Note: this only zero-pads correctly for i < 10; larger indices would
// need proper padding, e.g. via std::setw(6) and std::setfill('0').
file_name = data_path + "frame-00000" + std::to_string(i) + ".pose.txt";
std::ifstream file_input(file_name);

Eigen::Matrix4d pose;

for (int32_t j = 0; j < 4; ++j) {
    file_input >> pose(j, 0);
    file_input >> pose(j, 1);
    file_input >> pose(j, 2);
    file_input >> pose(j, 3);
}

std::cout << pose << std::endl;

Which outputs the following:
0.998357 -0.027082 0.050504 0.070919
0.053046 0.103227 -0.993242 1.02161
0.021686 0.994289 0.104494 1.5307
0 0 0 1

But the call to integrate does not position the point cloud for the camera pose correctly for each view:

three::PinholeCameraIntrinsic intrinsics(1920, 1080, 1303.675, 1303.675, 960.0, 540.0);
volume.Integrate(*rgbd, intrinsics, pose);

I noticed that in the Python sample, the pose matrix is inverted before being passed to integrate, but I assume this is just because of the input format (as the C++ sample does not invert):

volume.integrate(rgbd, PinholeCameraIntrinsic(
                PinholeCameraIntrinsicParameters.PrimeSenseDefault),
                np.linalg.inv(camera_poses[i].pose))

I have tried transforming the pose matrix programmatically with Eigen to try to correct the orientation (even though no rotation transformation should even be necessary between the views), but nothing I do seems to correct the orientation of the point cloud. Even once the orientation is corrected using affine rotation transformations, the point clouds for two different views are misaligned. The transformation should not be doing any translation. An example rotation transformation:

Eigen::Vector3d axis_z(0.0, 0.0, 1.0);
Eigen::Affine3d rotation = Eigen::Affine3d(Eigen::AngleAxisd(M_PI, axis_z));
volume.Integrate(*rgbd, intrinsics, pose*rotation.matrix());

Unfortunately, I can't share the screen captures, but any advice on this would be much appreciated! I would really like to use Open3D for my project, but I've been spending a lot of time working around things like this that haven't been a problem with other TSDF implementations. I fully realize how hard it is to standardize the input across datasets!

@syncle (Contributor) commented Aug 16, 2018

Hi @nigeljw1. It is hard to understand the issue from the description alone. It may be due to inaccurate camera trajectories. Can you share a minimal example or code so that I can help you?

@syncle added the question label Aug 16, 2018
@syncle (Contributor) commented Sep 28, 2018

Closing the issue due to inactivity. @nigeljw1: please feel free to reopen if necessary.

@syncle closed this Sep 28, 2018