
Confusion regarding extrinsic params #4

Closed
mnauf opened this issue May 18, 2022 · 5 comments

Comments

@mnauf

mnauf commented May 18, 2022

The supplementary paper states that:

"We use checkerboard to calibrate the relative poses between different kinects in a pairwise manner. Specifically, we capture 20 pairs of RGB-D images from two kinects and then register each color image with corresponding depth image such that they have the same resolution. We then use OpenCV to extract the checkerboard corners in the color images and obtain their 3D camera coordinates utilizing the registered depth map. Finally, we perform a Procrustes registration on these ordered 3D checkerboard corners to obtain the relative transformation between two kinects. We obtain 3 pairs of relative transformation for 4 kinects and combine them to compute the transformation under a common world coordinate."
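For reference, the Procrustes registration step described above can be sketched with the Kabsch algorithm. This is a generic illustration, not the paper's actual code; the function name and numpy usage are my own:

```python
import numpy as np

def procrustes_rigid(src, dst):
    """Estimate a rotation R and translation t such that R @ p + t maps
    src points onto dst points (least-squares, Kabsch algorithm).

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. checkerboard
    corners lifted to camera coordinates via the registered depth map.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # SVD of the 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Running this on the ordered corners from two cameras would give the relative transformation between them, which the paper then chains across camera pairs.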

I was hoping that the extrinsic parameters were the pose of each camera with respect to the world coordinates of the checkerboard, but it looks like I got that wrong. Reading the supplementary paper's statement about how the extrinsic params were obtained confused me even more.

Q1: Can you please explain what the extrinsics for each camera represent in this paper? Are they relative to cam1? I ask because the rotation of cam1 is the identity matrix and its translation vector is zero. And what does "We obtain 3 pairs of relative transformation for 4 kinects and combine them to compute the transformation under a common world coordinate" mean?

{
  "rotation": [
    1.0,
    0.0,
    0.0,
    0.0,
    1.0,
    0.0,
    0.0,
    0.0,
    1.0
  ],
  "translation": [
    0.0,
    0.0,
    0.0
  ]
}

Q2: Are the depth and color images already aligned, or do I need to transform the depth-image coordinates into the color camera's coordinate system?

@xiexh20
Owner

xiexh20 commented May 18, 2022 via email

@mnauf
Author

mnauf commented May 18, 2022

Thanks @xiexh20. Does that mean if I want to transfer coordinates from cam0 to cam2, I first transform the cam0 coordinates to cam3 using the cam0 extrinsics, and then transform from the cam3 coordinate system to the cam2 coordinate system using the inverse of the cam2 extrinsics?

In other words, the cam0 extrinsics let me move from cam0 to cam3 (0 -> 3), and then the inverse of the cam2 extrinsics lets me move from cam3 to cam2 (3 -> 2).

(0 -> 3 -> 2)

Is that correct?

@xiexh20
Owner

xiexh20 commented May 18, 2022

Yes, the overall idea is correct, except that here you should do 0 -> 1 -> 2, because we use camera 1 as the world coordinate frame.
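The chaining described here can be sketched as follows. This assumes each camera's extrinsics (R, t) map points from that camera's frame into the cam1/world frame; the function names are illustrative, not taken from the repo:

```python
import numpy as np

def cam_to_world(p_cam, R, t):
    """Map (N, 3) points from a camera frame into the world (cam1) frame:
    p_world = R @ p_cam + t, written row-wise."""
    return p_cam @ R.T + t

def world_to_cam(p_world, R, t):
    """Inverse mapping: p_cam = R^T @ (p_world - t), written row-wise."""
    return (p_world - t) @ R

def cam0_to_cam2(p_cam0, R0, t0, R2, t2):
    # 0 -> 1 (world) via cam0 extrinsics, then 1 -> 2 via the inverse
    # of cam2 extrinsics
    return world_to_cam(cam_to_world(p_cam0, R0, t0), R2, t2)
```

With cam1's extrinsics being identity rotation and zero translation (as in the JSON above), `cam_to_world` for cam1 is a no-op, consistent with cam1 serving as the world frame.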

@xiexh20
Owner

xiexh20 commented May 18, 2022

The transformations between cameras are wrapped in this class: https://github.com/xiexh20/behave-dataset/blob/main/data/kinect_transform.py
You can play around with it.

@mnauf
Author

mnauf commented May 18, 2022

@xiexh20 oh yeah yeah. That's what I meant. Thanks loads
