
Have trouble to calibrate multiple D435C #1

Closed
cdb0y511 opened this issue Jan 10, 2020 · 5 comments
@cdb0y511

Hi @puzzlepaint. I really appreciate your great work for the community.
However, I am having some trouble calibrating the D435 color cameras.
We have two D435 cameras in a fixed rig, and we move the calibration target little by little.
First camera: [image 000006]
Second camera: [image 000006]
The resolution is 1280×720 for both RGB cameras, with half window size 10 and cell length 30.
There are about 2769 pictures for each camera.
However, it ends like this.
First one:
[images: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_grid_point_locations, report_camera0_observation_directions, report_camera0_removed_outliers]

Second one:
[images: report_camera1_error_directions, report_camera1_error_magnitudes, report_camera1_errors_histogram, report_camera1_grid_point_locations, report_camera1_observation_directions, report_camera1_removed_outliers]
The second one looks weird to me. I have no problem calibrating the infrared cameras; 900 pictures converge to a proper error. Maybe it is caused by the shutter type, I guess. A lot of outliers are removed from the second camera. Am I missing some trick here?
Also, camera_tr_rig.yaml gives the relative pose of both cameras, so I can align both cameras into one unified world coordinate system for the depth projection of each camera, and the scale is meters. Am I right?

@puzzlepaint
Owner

The second camera's calibration is clearly in a bad state from which the calibration process likely cannot recover. I would guess that this issue already occurs during initialization (unless there is currently a bug in the optimization code). In addition, the direction visualizations indicate that both cameras are calibrated with extremely low fields of view, which also seems wrong.

Unfortunately, I cannot pinpoint a concrete problem or solution directly. What I would try is:

  • If these problems appear in the visualizations from the beginning, did you try to perform the calibration a second time? There is some randomness in the initialization. If the issue is due to bad initialization, it may thus be fixed by better initialization luck.
  • One could also try using a simpler camera model at first (for example, by increasing the grid cell size; or, in case you used the non-central model, by using a central model first) and see whether that converges properly. The model could then be refined in a second step.
  • Were the cameras and the target completely static while the pictures were recorded? If yes, then the shutter type should not play a role. If no, maybe the rolling shutter causes some inaccuracy, but I would not expect it to break the calibration completely.

Regarding camera_tr_rig.yaml:
Yes, that should contain the relative pose of the cameras, which could be employed to use the cameras as a stereo rig. The scale is determined by the calibration pattern's scale, and yes, I think it should be in meters. However, unless you manufactured the pattern very accurately, I would suggest treating this as an approximate value.
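To make the unified-coordinate-system idea concrete, here is a minimal sketch of applying the relative pose from camera_tr_rig.yaml to depth points. It assumes, going by the file's name, that camera_tr_rig maps rig coordinates into camera coordinates (p_cam = R · p_rig + t); the helper name and example numbers are illustrative, not from the tool, so check the actual file layout and convention for your version.

```python
import numpy as np

def rig_points_from_camera(points_cam, R_camera_tr_rig, t_camera_tr_rig):
    """Map 3D points from one camera's frame into the shared rig frame.

    Assuming camera_tr_rig maps rig -> camera (p_cam = R @ p_rig + t),
    the inverse is p_rig = R.T @ (p_cam - t).
    """
    R = np.asarray(R_camera_tr_rig, dtype=float)
    t = np.asarray(t_camera_tr_rig, dtype=float)
    # Row-vector form of R.T @ (p - t), works for a single point or an Nx3 array.
    return (np.asarray(points_cam, dtype=float) - t) @ R

# Illustrative pose: camera rotated 90 degrees about Z, offset 5 cm along X.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.05, 0.0, 0.0])

p_cam = np.array([[0.05, 0.0, 1.0]])  # a depth point 1 m in front of the camera
p_rig = rig_points_from_camera(p_cam, R, t)  # -> [[0.0, 0.0, 1.0]]
```

Depth points from both cameras, mapped this way with their respective poses, then live in the same rig frame in meters.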

@cdb0y511
Author

Thanks for the quick response.
I tried another time, restarting with new images; this time I moved the camera rig slowly instead of moving the target.
Now everything seems fine to me.
First camera:
[images: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_grid_point_locations, report_camera0_observation_directions, report_camera0_removed_outliers]
Second one:
[images: report_camera1_error_directions, report_camera1_error_magnitudes, report_camera1_errors_histogram, report_camera1_grid_point_locations, report_camera1_observation_directions, report_camera1_removed_outliers]

@puzzlepaint
Owner

Yes, this looks good indeed. Glad that it seems to have worked now.

@cdb0y511
Author

@puzzlepaint, I have an idea. Even though the printer is not very accurate, I can measure the actual target size and then change the file accordingly, e.g. pattern_resolution_17x24_segments_16_apriltag_0.yaml:


num_star_segments: 16
squares_x: 17
squares_y: 24
square_length_in_meters: 0.011882352941176469
page:
  width_mm: 210.0
  height_mm: 297.0
  pattern_start_x_mm: 4.0
  pattern_start_y_mm: 5.911764705882365
  pattern_end_x_mm: 205.99999999999997
  pattern_end_y_mm: 291.0882352941177
apriltags:
  - tag_x: 6
    tag_y: 10
    width: 4
    height: 4
    index: 0

square_length_in_meters and page would come from my measurements, and the feature positions would then be calculated from them, so the results would be closer to real-world coordinates in meters. I think the algorithm reads pattern_resolution_17x24_segments_16_apriltag_0.yaml for the real-world scale.
Is this a practical method?
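The arithmetic behind this can be sketched as follows; the helper function is illustrative, and the input values are simply the ones from the yaml above (substitute your own ruler measurements):

```python
def square_length_from_measurement(pattern_start_x_mm, pattern_end_x_mm, squares_x):
    """Derive square_length_in_meters from the measured horizontal pattern span."""
    span_mm = pattern_end_x_mm - pattern_start_x_mm
    return span_mm / squares_x / 1000.0

# 17 squares across a measured span of about 202 mm -> about 11.88 mm per square.
length = square_length_from_measurement(4.0, 205.99999999999997, 17)
```

The same check on the vertical span, (291.0882... − 5.9117...) / 24, yields the same square length, which is a quick consistency test for the measurement.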

@puzzlepaint
Owner

If it is possible to do that measurement accurately, then this seems like a practical method. I think that only the square_length_in_meters value should be relevant.

If it is not possible to measure the pattern size accurately, another method could for example be to measure a room's size instead if that is easier, use the calibrated stereo camera to reconstruct that room, and compare the measured and reconstructed size to get a correction factor.
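The correction-factor idea can be sketched like this; the measured and reconstructed room sizes below are hypothetical example values:

```python
def scale_correction_factor(measured_size_m, reconstructed_size_m):
    """Factor by which reconstructed distances should be multiplied to match reality."""
    return measured_size_m / reconstructed_size_m

# Example: a room wall measured at 5.00 m reconstructs to 4.95 m.
factor = scale_correction_factor(5.00, 4.95)

# Any reconstructed distance can then be rescaled to metric units, e.g. a 2.0 m depth:
corrected_depth = 2.0 * factor
```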
