
camera intrinsic matrix #4

Closed
zinuok opened this issue Nov 30, 2021 · 5 comments
zinuok commented Nov 30, 2021

Hello, I have a question about the values in "K.txt".

In the original VOID dataset, the intrinsic parameters provided here are:

"f_x": 514.638,
"f_y": 518.858,
"c_x": 315.267,
"c_y": 247.358,

However, in "K.txt":

```
5.471833965147203571e+02 0.000000000000000000e+00 3.176305425559989430e+02
0.000000000000000000e+00 5.565094509450176474e+02 2.524727249693490592e+02
0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
```

where, as far as I know:

```
K = [ f_x    0   c_x
        0  f_y   c_y
        0    0     1 ]
```
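For reference, the values above can be parsed directly with numpy and read off with the indexing implied by the layout of K. This is a minimal sketch using the numbers pasted in this thread; on disk you would load the per-sequence file with `np.loadtxt("K.txt")` instead:

```python
import io

import numpy as np

# The 3x3 matrix exactly as it appears in K.txt (values from this thread).
k_txt = """\
5.471833965147203571e+02 0.000000000000000000e+00 3.176305425559989430e+02
0.000000000000000000e+00 5.565094509450176474e+02 2.524727249693490592e+02
0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
"""

# np.loadtxt parses the whitespace-separated rows into a (3, 3) array.
K = np.loadtxt(io.StringIO(k_txt))

f_x, f_y = K[0, 0], K[1, 1]  # focal lengths in pixels
c_x, c_y = K[0, 2], K[1, 2]  # principal point in pixels
```

Reading it back this way also makes the discrepancy concrete: `f_x` here is about 547.2 versus the 514.6 quoted from the factory calibration.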

They are somewhat different.

Q1. Is the camera's distortion model (radtan) already applied in "K.txt"?

Q2. Why are the intrinsic parameters different across the different sequences? Did you use a different sensor setup for each sequence? (In your paper, it is written that a D435i was used for data acquisition.) If so, which intrinsics should be used in practice, e.g. for VIO?

Many thanks in advance.

@alexklwong
Owner

Ah, let me clarify. The distortion model is not included in the calibration. The intrinsic parameters, I believe, were taken from the factory calibration settings -- this was done afterwards since someone asked for it. For the actual dataset, we calibrated the sensor for each sequence, so the numbers are slightly different across the sequences. You should use the K.txt provided for each sequence when running it.

@zinuok
Author

zinuok commented Dec 1, 2021

Thank you very much for taking your valuable time to answer my questions.
Sorry to bother you, but I have one more question.

You said that "The distortion model should not be included in calibration".

Looking through your code and paper, it seems that the distortion model is not taken into account (only the inverse intrinsics are used when a feature is back-projected into 3D space).

I wonder whether the distortion model needs to be taken into account in your network.
(Or were the images already rectified using the distortion coefficients?)
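The back-projection described above, using only the inverse intrinsics under a pure pinhole model with no distortion term, can be sketched as follows (the `backproject` helper name is mine, not from the repository; the K values are rounded from this thread):

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift a pixel (u, v) with known depth to a 3D point in the camera
    frame: X = depth * K^{-1} [u, v, 1]^T (pinhole, no distortion)."""
    pixel = np.array([u, v, 1.0])
    return depth * (np.linalg.inv(K) @ pixel)

K = np.array([[547.18,   0.00, 317.63],
              [  0.00, 556.51, 252.47],
              [  0.00,   0.00,   1.00]])

# A pixel at the principal point back-projects straight down the
# optical axis, so the result should be (0, 0, depth).
point = backproject(317.63, 252.47, 2.0, K)
```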

Thank you very much

@alexklwong
Owner

That's correct, it's a rudimentary calibration model, i.e. pinhole, so we do not account for distortion in the paper. I don't recall going through a rectification process when we collected the dataset either. I think this works because there isn't anything noticeable in terms of lens distortion. On the other hand, if you were to use a fisheye lens then yes, you definitely need to undistort first. This was the case in a previous paper (https://arxiv.org/pdf/1905.08616.pdf) where we tried out TUM VI.
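For context, here is a minimal sketch of the radtan (plumb-bob) distortion model mentioned in Q1, with coefficient names following the usual (k1, k2, p1, p2) convention; the numbers are illustrative, not from the dataset. When the coefficients are near zero, the distorted coordinates collapse to the pinhole ones, which is why ignoring distortion can be a reasonable approximation for this sensor:

```python
import numpy as np

def apply_radtan(x, y, k1, k2, p1, p2):
    """Apply radial-tangential (radtan) distortion to normalized image
    coordinates (x, y) = (X/Z, Y/Z), returning distorted coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the model reduces exactly to the pinhole case.
x_d, y_d = apply_radtan(0.3, -0.2, 0.0, 0.0, 0.0, 0.0)

# With a small k1, the shift in pixels (scaled by f_x ~ 547) stays tiny,
# consistent with the "off by ~1 pixel at the extreme" remark below.
x_s, _ = apply_radtan(0.3, -0.2, 1e-4, 0.0, 0.0, 0.0)
pixel_shift = 547.0 * abs(x_s - 0.3)
```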

@zinuok
Author

zinuok commented Dec 1, 2021

Oh, I see.
I'll refer to the link.
Thanks again!

zinuok closed this as completed Dec 1, 2021
@alexklwong
Owner

In case you are interested:
IntelRealSense/librealsense#1430 (comment)
For the RealSense D400 series, images would be off by at most 1 pixel at the extremes, so there would be little to no difference.
