
How to calibrate 360 camera #23

Closed
vswdigitall opened this issue Jul 24, 2022 · 59 comments
Assignees
Labels
question Further information is requested

Comments

@vswdigitall

vswdigitall commented Jul 24, 2022

Hi,

Please tell me, what is square_size?
square_length is the measured length of every square. marker_length is the marker inside each square. In my case 0.02m and 0.16m.

In your example they are 0.04m and 0.03m, with square_size 0.192m. Why?
Also, what are resolution X and Y? The frame width and height?

@BAILOOL BAILOOL self-assigned this Jul 24, 2022
@BAILOOL BAILOOL added the question Further information is requested label Jul 24, 2022
@BAILOOL
Collaborator

BAILOOL commented Jul 24, 2022

Thank you for your interest in MC-Calib. Here is the Charuco board:
image

It has three main components:

  • large black and white squares of size square_size
  • each white square contains a smaller black square inside it of size length_square
  • each of these smaller black squares contains a marker of size length_marker
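If it helps, the relationship between these parameters can be checked numerically. A short sketch, under the assumption (mine, not stated explicitly above) that length_square/length_marker only fix the marker-to-square ratio in the generated board image, while square_size sets the printed square size and the print scales uniformly:

```python
# Sketch of the board parameter relationship (assumption: the printed marker
# scales with the square, so only the marker/square ratio matters at print time).
length_square = 0.04   # square size in the generated image (arbitrary units)
length_marker = 0.03   # marker size in the generated image (arbitrary units)
square_size = 0.12     # real-world size of each printed square, in metres

marker_ratio = length_marker / length_square       # marker-to-square ratio
real_marker_size = square_size * marker_ratio      # approximately 0.09 m

print(real_marker_size)
```

So with the default 0.04/0.03 ratio and a 0.12 m printed square, each physical marker would be about 0.09 m wide.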

Does it answer your question @vswdigitall ?

@vswdigitall
Author

photo1658699425

I am right?

@BAILOOL
Collaborator

BAILOOL commented Jul 25, 2022

@vswdigitall yes, you are correct.

@vswdigitall
Author

vswdigitall commented Jul 25, 2022

Thank you.
I want to calibrate my 360 rig: 5 cameras mounted together in a pentagon form factor, plus 1 top camera. All cameras have about 10% overlap, resolution 4032x3040, f=2.8mm, IMX577 1/2.5".
Will your app calibrate the intrinsics and extrinsics of that setup?
Should I calibrate only pairs, or all cameras at once?

I need to know the transformation of every pixel from camera to camera.

@rameau-fr
Owner

Hello, I recommend you calibrate all the cameras at once since the toolbox can chain all the transformations and implicitly enforce the poses to be consistent. The application will calibrate together all the intrinsic and extrinsic parameters.

To calibrate your system, you will have two options:

  1. You can use a single large board that you can move around the cameras, in this case you might need to reduce the parameter 'min_perc_pts' to a smaller value like 0.3 (I mentioned the board should be large because you need to see a substantial part of the checkerboard on both cameras sharing the same field of view).
  2. OR you can install multiple fixed boards all around and move the camera rig (imho, you might reach better results that way).

A final comment, if you consider method 2, you will have to avoid purely planar motion (try to rotate the system in all directions).

P.S: Regarding your comment, "I need to know transformation of every pixel from camera to camera.": this can be done only assuming the translation between your cameras is negligible; in that case, you will exclusively use the rotation matrices returned by the calibration toolbox.
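The rotation-only pixel transfer described above can be sketched with plain numpy (illustrative intrinsics and rotation, not the actual calibration output): with a negligible baseline, a pixel moves between cameras through the infinite homography H = K2 · R · K1⁻¹.

```python
import numpy as np

# Sketch of rotation-only pixel transfer between two cameras
# (illustrative intrinsics; assumes negligible translation between cameras).
K1 = np.array([[1800.0, 0.0, 2016.0],
               [0.0, 1800.0, 1520.0],
               [0.0, 0.0, 1.0]])
K2 = K1.copy()

# Illustrative rotation of 10 degrees around the vertical axis.
a = np.deg2rad(10.0)
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])

H = K2 @ R @ np.linalg.inv(K1)  # infinite homography

def transfer(u, v):
    """Map pixel (u, v) in camera 1 to camera 2 under pure rotation."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(transfer(2016.0, 1520.0))  # the principal point shifts along x
```

With any measurable translation this mapping is only exact for points at infinity, which is why the toolbox's full extrinsics are still preferable when the baseline matters.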

@vswdigitall
Author

vswdigitall commented Jul 26, 2022

Thank you.

I've ordered three rigid, high-quality 80x80cm 8x8 charuco boards with different IDs. I will try the 2nd option: I will rotate and move the rig so the boards cover all cameras.
Will reply.

@vswdigitall
Author

vswdigitall commented Aug 6, 2022

Hi, I have tried a test.
168
photo1659817100

CONFIG:

%YAML:1.0

number_x_square: 6 #number of squares in the X direction
number_y_square: 6 #number of squares the Y direction
resolution_x: 4032 # horizontal resolution in pixel
resolution_y: 3040 # vertical resolution in pixel
length_square: 0.093 # parameters on the marker (can be kept as it is)
length_marker: 0.07 # parameters on the marker (can be kept as it is)
number_board: 3
boards_index: []
square_size: 0.12
number_x_square_per_board: []
number_y_square_per_board: []
square_size_per_board: []

distortion_model: 0
distortion_per_camera : []
number_camera: 3
refine_corner: 1
min_perc_pts: 0.3

cam_params_path: "/home/vsw/calib.yml"
fix_intrinsic: 1

root_path: "/home/vsw/1659813866954/"
cam_prefix: "cam"

ransac_threshold: 10
number_iterations: 1000

he_approach: 1

save_path: "/home/vsw/1659813866954/out/"
save_detection: 0
save_reprojection: 0
camera_params_file_name: ""

@vswdigitall
Author

But the result is very far from reality.
Please check which steps I can improve.

@vswdigitall
Author

Screenshot from 2022-08-06 23-52-14
Screenshot from 2022-08-06 23-52-58
Screenshot from 2022-08-06 23-53-18
Screenshot from 2022-08-06 23-53-52

@rameau-fr
Owner

Hello,

Your setup looks great. The problem might come from the pictures, the synchronization, or simply from a bug in the toolbox.
It would really help if you could share your pictures with me, so that we can figure out what is going wrong in this use case. It might help you calibrate your system and help us make the toolbox more robust ;-)!
A few additional questions that would help me:

  • are these cameras synchronized?
  • could you share the specs of these cameras, please? To verify whether the sensors are rolling shutter or global shutter, for instance.

Thank you very much for testing MC-Calib!
PS: I am currently on summer vacation until Wednesday, but I think I might be able to check by the end of this week if you can share this information.

@vswdigitall
Author

Hi, thank you for help.

Please share your email. I will send you images.

Yes, it is a hardware-synchronized rig.
The sensor is the IMX577, rolling shutter.
I also tried to move slowly to rule out rolling-shutter issues.

@vswdigitall
Author

Here are the intrinsics, computed with a checkerboard. They are the same for all cameras.

camera_matrix: !!opencv-matrix # 3x3 intrinsic matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1]
rows: 3
cols: 3
dt: d
data: [ 1862.54046, 0., 1946.57564, 0., 1864.49349, 1574.93196, 0., 0., 1. ]
distortion_vector: !!opencv-matrix # 1x5 distortion vector (Brown model here)
rows: 1
cols: 5
dt: d
data: [ -2.492228e-01, 6.307843e-02, -4.479687e-04, -1.323748e-03, -6.928991e-03 ]
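For reference, the Brown model named below this message applies a radial and tangential polynomial to normalized image coordinates. A minimal numpy-free sketch using the coefficients above, in the OpenCV ordering [k1, k2, p1, p2, k3]:

```python
# Minimal sketch of the Brown (plumb-bob) distortion model, using the
# coefficients from the message above in OpenCV order [k1, k2, p1, p2, k3].
k1, k2, p1, p2, k3 = [-2.492228e-01, 6.307843e-02,
                      -4.479687e-04, -1.323748e-03, -6.928991e-03]

def distort(x, y):
    """Apply Brown distortion to a normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

print(distort(0.0, 0.0))  # the optical centre is undistorted: (0.0, 0.0)
```

The negative k1 here pulls points toward the centre (barrel distortion), which is typical for wide lenses; as discussed later in the thread, this model is a poor fit for true fisheye optics.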

@vswdigitall
Author

I have sent 2 datasets to the alexandr.baylo email.

@rameau-fr
Owner

Good morning,

First of all, thank you again for sharing your data with us. With the data you provided, we identified a bug in the toolbox when using multiple boards of different sizes.

Specific instructions to use the toolbox

I also figured out a few mistakes in how you used the toolbox (which are probably not clear enough in the tutorial that we might need to improve for future users).
To be specific:

  1. the variables resolution_x and resolution_y are used only for the generation of the boards; they are basically the resolution of the board image generated by the code generate_charuco, so they should not be the resolution of your image. Since you have square boards, you can, for instance, use the values resolution_x: 600 and resolution_y: 600
  2. Most importantly, it seems that the generation of the boards is wrong. The Charuco indices do not correspond to the ones generated using our code 'generate_charuco'. May I know how you generated the boards? Because of this, the calibration cannot work. I recommend you use the boards created with this configuration by running ./generate_charuco ../configs/your_config_file.yml:
number_x_square: 6  #number of squares in the X direction
number_y_square: 6  
resolution_x: 600  
resolution_y: 600  
length_square: 0.04  # no need to modify!!
length_marker: 0.03   # no need to modify!!
number_board: 3 
boards_index: [ ] 
square_size: 0.12  # the size of each square in the real world

It will generate 3 boards you can use for your calibration!

  3. No need to use precalibrated intrinsics in your case; you can simply use fix_intrinsic: 0, and maybe you can acquire a slightly longer sequence where you also acquire images as if you were calibrating all cameras independently.
  4. In your previous message, you mentioned the Brown model. Unfortunately, this model is unsuitable for fisheye images, and I recommend using distortion_model: 1 to apply the Kannala model.
  5. Since you use high-resolution images with strong distortion, you can set a higher RANSAC threshold, like 10 or 20 pixels.
  6. Given your configuration, I recommend setting the parameter min_perc_pts: 0.5 to ensure the robustness of the detection

Bug in the toolbox

Regarding the bug in the toolbox side, when using multiple boards with different sizes, the toolbox failed to deal with the indexing. We determined how to deal with this problem, and the new version will be available very soon.

Calibration results using a single board

Despite all these shortcomings, I managed to figure out the indexing of one of your boards. So I calibrated your system using this single board without any prior intrinsic calibration. The configuration file I used is the following:

%YAML:1.0

######################################## Boards Parameters ###################################################
number_x_square: 6  #number of squares in the X direction
number_y_square: 6  #number of squares the Y direction
resolution_x: 500  # horizontal resolution in pixel
resolution_y: 500  # vertical resolution in pixel
length_square: 0.04  # parameters on the marker (can be kept as it is)
length_marker: 0.03   # parameters on the marker (can be kept as it is)
number_board: 4 # number of boards used for calibration (for overlapping camera 1 is enough ...)
boards_index: [1] #leave it empty [] if the board index are ranging from zero to number_board
# example of usage boards_index: [5,10] <-- only two board with index 5/10
square_size: 0.12  # size of each square of the board in cm/mm/whatever you want

############# Boards Parameters for different board size (leave empty if all boards have the same size) #################
number_x_square_per_board: [3, 6, 6, 3]
number_y_square_per_board: [7, 6, 6, 3]
square_size_per_board: [0.12, 0.12, 0.12, 0.12]

######################################## Camera Parameters ###################################################
distortion_model: 1 #0:Brown (perspective) // 1: Kannala (fisheye)
distortion_per_camera : [] #specify the model per camera, 
#leave "distortion_per_camera" empty [] if they all follow the same model (make sure that the vector is as long as cameras nb)
number_camera: 3  # number of cameras in the rig to calibrate
refine_corner: 1  # activate or deactivate the corner refinement
min_perc_pts: 0.5  # min percentage of points visible to assume a good detection

cam_params_path: "None" # "../../Images_Plan/calibrated_cameras_data.yml"  # file with cameras intrinsics to initialize the intrinsic, write "None" if no initialization available 
fix_intrinsic: 0 #if 1 then the intrinsic parameters will not be estimated nor refined (initial value needed)

######################################## Images Parameters ###################################################
root_path: "../../data/dataset1/" #"../../Images_Sim1Cam3Board/" /Images_NonOver6Cam/"
cam_prefix: "cam"

######################################## Optimization Parameters ###################################################
ransac_threshold: 20 #RANSAC threshold in pixel (keep it high just to remove strong outliers)
number_iterations: 1000 #Max number of iterations for the non linear refinement

######################################## Hand-eye method #############################################
he_approach: 0 #0: bootstrapped he technique, 1: traditional he

######################################## Output Parameters ###################################################
save_path: "../../data/dataset1/"
save_detection: 1
save_reprojection: 1
camera_params_file_name: ""

I obtained a reprojection error of ~2.4 pixels; most errors seem to come from the inaccuracy of the fisheye model (so you might improve it further with more varied viewpoints). I only used the dataset1 you provided.
It seems that your system can be calibrated with a single board, but it might be better if you could regenerate the boards, as I mentioned earlier, and reprint them accordingly to ensure better accuracy.

Thank you again very much for your support and the data you provided. It is very valuable to us. Please keep us posted regarding your project.

Calib_Test
Calib_test_error

@rameau-fr
Owner

rameau-fr commented Aug 10, 2022

Using a single board on your second sequence with the parameters min_perc_pts: 0.7 and ransac_threshold: 50, I reached under 1 pixel error (max mean frame error ~3.5 pixels). I attached the calibration results here
Calibration_sequence2.zip

@vswdigitall
Author

Thank you very much for help.

1,2. I generated 6 boards at a resolution of 2200px and will have them made. My boards are from calib.io; why did they fail (800x800, 6x6)?
3,4,5,6. OK.

Also i have questions:

  1. How many boards do I need to calibrate my setup (6 cameras, 12MP)?
  2. Is an equal number of frames from each camera important? (cam0 0,1,2,3.... cam1 0,1,2,3)

@rameau-fr
Owner

Okay, I understand better why I struggled with the indexing; for MC-Calib, the calibration boards must be generated with the toolbox itself.

For your additional questions:

  1. For your setup, a single board might even be sufficient, but the 3-board setup you already experimented with should work very well for your use case
  2. Indeed, in its current version MC-Calib requires the same number of images for each camera. It does not matter if certain images do not contain a checkerboard. The data you provided me was good in this regard!

@vswdigitall
Author

Ok, got it.

@vswdigitall
Author

Also, I always get this error in Python, and zoom does not work:

Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
    return self.func(*args)
  File "/usr/lib/python3/dist-packages/matplotlib/backends/_backend_tk.py", line 272, in motion_notify_event
    FigureCanvasBase.motion_notify_event(self, x, y, guiEvent=event)
  File "/usr/lib/python3/dist-packages/matplotlib/backend_bases.py", line 1903, in motion_notify_event
    self.callbacks.process(s, event)
  File "/usr/lib/python3/dist-packages/matplotlib/cbook/__init__.py", line 229, in process
    self.exception_handler(exc)
  File "/usr/lib/python3/dist-packages/matplotlib/cbook/__init__.py", line 81, in _exception_printer
    raise exc
  File "/usr/lib/python3/dist-packages/matplotlib/cbook/__init__.py", line 224, in process
    func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/matplotlib/backend_bases.py", line 2990, in mouse_move
    s = event.inaxes.format_coord(event.xdata, event.ydata)
  File "/usr/lib/python3/dist-packages/mpl_toolkits/mplot3d/axes3d.py", line 1194, in format_coord
    x, y, z = proj3d.inv_transform(xd, yd, z, self.M)
  File "/usr/lib/python3/dist-packages/mpl_toolkits/mplot3d/proj3d.py", line 125, in inv_transform
    iM = linalg.inv(M)
  File "<__array_function__ internals>", line 5, in inv
  File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 546, in inv
    ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
  File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 88, in _raise_linalgerror_singular
    raise LinAlgError("Singular matrix")
numpy.linalg.LinAlgError: Singular matrix

@rameau-fr
Owner

rameau-fr commented Aug 10, 2022

Thanks a lot; indeed, I faced the same problem during my last test. It is due to the matplotlib version. matplotlib now supports orthogonal views natively, so the lines:

def orthogonal_proj(zfront, zback):
    a = (zfront+zback)/(zfront-zback)
    b = -2*(zfront*zback)/(zfront-zback)
    return np.array([[1,0,0,0],
                        [0,1,0,0],
                        [0,0,a,b],
                        [0,0,0,zback]])
proj3d.persp_transformation = orthogonal_proj

should be removed, and the declaration of the figure should be changed accordingly:

#2. declare figure
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') is deprecated in recent matplotlib
ax.set_proj_type('ortho')
ax.set_title('Calibration Result')

I will try to include these changes in the next PR. I also noticed a minor display error, probably also related to this versioning issue.

@vswdigitall
Author

Confirmed, works now. Also zoom works with right mouse btn too.

@rameau-fr
Owner

I am now waiting for the other contributors to approve the next pull request, and everything should be fine (they are quite busy, so it might take up to a few days). But if you print the boards as I mentioned before, you can just use the current version of the code without trouble.
However, it seems that with fisheye cameras, certain images contain strong errors (our robust optimization can mitigate such issues to a certain extent). Instead of filtering these frames manually, if you face any problem in the future, I can modify the code to reject these frames automatically. Please keep us posted!

@vswdigitall
Author

Thank you. I've started producing the boards; when they are ready, I will try and reply.

@vswdigitall
Author

vswdigitall commented Aug 31, 2022

Hi, we have built the newly generated boards and ran tests.
A git pull was performed.

All tests with 6 boards and with 1 board failed for 3 cameras (mre always >10+9px), but for 2 cameras mre < 1px.
We tried adjusting parameters, but 3 cameras could not be calibrated.

Please help us fix calibration for more than 2 cameras.

Previous links updated:
/dataset1.zip - 6 boards 2m distance
/dataset2.zip - 6 boards 1.5m distance
/dataset3.zip - 1 board 2m distance
/calib.zip - calib configs and generated charucos

@rameau-fr
Owner

Hello!

Thank you very much for your feedback. Your images look great; I managed to calibrate your camera system directly with this configuration file --> projection error 0.35px on dataset1 (I just did not adjust the parameter square_size to your board size, so the scale is obviously wrong): https://files.catbox.moe/c2k1lq.yml

Screenshot from 2022-09-01 11-01-33

Here is the calibration result: https://files.catbox.moe/2l40n6.yml
(I repeated the calibration twice and obtained satisfying results both times, I also checked the reconstructed structure, and it looks perfect.)

It seems the calibration worked well this time. However, I have a few additional pieces of advice in case you calibrate more complex systems (non-overlapping) or if you want to be sure to have a reliable calibration:

  • These sequences are rather short; I would advise using more images since we calibrate both the intrinsic and extrinsic
  • The motion diversity can be quite problematic. In the sequence I have seen, you are almost in a pure rotational configuration around a single axis, with apparently barely any translation. If you deal with a system without overlap, it will not work at all. You should vary the rotations, and all the translation directions have to be covered in your calibration sequence. For instance, if you do not move the cameras vertically, there is no way to estimate this degree of freedom for non-overlapping systems (it is an inherent limitation of hand-eye-based camera calibration).
  • You can also try to place your boards at more diverse positions to cover the images as much as possible (it will become even more important if you need to calibrate more complex systems without overlapping)

Now it seems to me that you will be able to have a well-calibrated system very soon and that you can move forward in your project to have a full 360-surrounding view camera ;-).

Best

P.S: @BAILOOL did a wonderful job speeding up the calibration process via multi-threading. I will try to test, and merge this modification on the main branch soon!

@vswdigitall
Author

vswdigitall commented Sep 1, 2022

Hi, thank you, but I can't reproduce your result.
The config is exactly yours.
dataset1.txt

config.txt

Maybe we are using different versions of MC-Calib?
We have OpenCV 4.6.0-dev

@vswdigitall
Author

vswdigitall commented Sep 1, 2022

Also, you can remove these 3 lines. They break the system OpenCV path:

Screenshot from 2022-09-01 18-39-08

@rameau-fr
Owner

That's great! Try forcing the translation vector to be all zeros and try again, please. It will correspond to a single-viewpoint configuration, which is the commonly admitted assumption for building a panorama. If it does not work, it might be a problem with the undistortion or with the "direction" of the transformation; I will personally check on Monday if needed. I am pretty sure the calibration is okay ;-)!

@vswdigitall
Author

Ok, I will try using the board as the world origin, to measure true distances.

@vswdigitall
Author

That's great! Try forcing the translation vector to be all zeros and try again, please. It will correspond to a single-viewpoint configuration, which is the commonly admitted assumption for building a panorama. If it does not work, it might be a problem with the undistortion or with the "direction" of the transformation; I will personally check on Monday if needed. I am pretty sure the calibration is okay ;-)!

Will try.

@vswdigitall
Author

vswdigitall commented Sep 3, 2022

Here is cam.tvec = cv::Mat::zeros( 3, 1, CV_64F );
It looks better now.
proj

@vswdigitall
Author

On Monday we will do a calibration with more images and angles.

@vswdigitall
Author

vswdigitall commented Sep 3, 2022

Here, I detected the aruco board with cam0 and got a 4x4 matrix.
Then all 4x4 poses were multiplied:
cam.pose = M * cam.pose;

Now it is the world origin for all cameras.

A 2D point from cam0 was projected into 3D world space and then reprojected back onto the 2D image of each camera.

And it looks good.

Am I right?

proj
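The pose chaining described above can be sketched with homogeneous 4x4 transforms (illustrative values, not the actual calibration output; `M` stands for the board pose seen from cam0 and `cam1_pose` for the cam0-to-cam1 transform):

```python
import numpy as np

# Sketch of re-basing camera poses on a board detected by cam0
# (illustrative 4x4 poses, not real calibration values).
def make_pose(R, t):
    """Build a homogeneous 4x4 pose from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

M = make_pose(np.eye(3), [0.0, 0.0, 2.0])          # board pose seen from cam0
cam1_pose = make_pose(np.eye(3), [0.1, 0.0, 0.0])  # cam0 -> cam1 (illustrative)

# Re-express cam1 relative to the board (the new world origin),
# mirroring cam.pose = M * cam.pose from the message above:
cam1_in_world = M @ cam1_pose

print(cam1_in_world[:3, 3])  # translation becomes approximately [0.1, 0.0, 2.0]
```

Since both rotations are identity here, the translations simply add; with real poses the board rotation would also rotate the camera offsets.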

@vswdigitall
Author

vswdigitall commented Sep 4, 2022

Tell me please, does MC-Calib compute {Xc,Yc,Zc} for each camera relative to cam0 (as drawn in your Python output)?

calib

@vswdigitall
Author

vswdigitall commented Sep 4, 2022

I've measured the coordinates from the calibration, and they approximately point to the red dots on the image.
Is that right?

cam

@rameau-fr
Owner

Tell me please, does MC-Calib compute {Xc,Yc,Zc} for each camera relative to cam0 (as drawn in your Python output)?

Yes, everything is expressed in the referential of cam0

I've measured the coordinates from the calibration, and they approximately point to the red dots on the image. Is that right?

Very difficult to say; this kind of measurement will simply give you a rough idea of the quality of the calibration.

Today, I tested your calibration results by plotting epipolar lines between images, and they look quite correct to me.
Nonetheless, I encourage you to run more tests (as you did); for instance, you can also try to triangulate points, or you can reproject points onto the checkerboard, etc.
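One such sanity check can be sketched in numpy: build the essential matrix E = [t]× R from calibrated extrinsics and verify that corresponding normalized points satisfy x2ᵀ E x1 ≈ 0 (illustrative geometry below, not the actual calibration values, and assuming the convention X2 = R·X1 + t):

```python
import numpy as np

# Sketch of an epipolar-constraint check (illustrative extrinsics).
R = np.eye(3)                      # rotation cam1 -> cam2 (illustrative)
t = np.array([0.1, 0.0, 0.0])      # baseline cam1 -> cam2 (illustrative)

# Skew-symmetric matrix [t]x, so that [t]x v = t x v.
tx = np.array([[0.0, -t[2], t[1]],
               [t[2], 0.0, -t[0]],
               [-t[1], t[0], 0.0]])
E = tx @ R                         # essential matrix

# A 3D point observed by both cameras, in normalized image coordinates:
X1 = np.array([0.2, -0.3, 2.0])    # point in the cam1 frame
X2 = R @ X1 + t                    # same point in the cam2 frame
x1 = X1 / X1[2]
x2 = X2 / X2[2]

print(abs(x2 @ E @ x1))  # should be ~0 for a correct calibration
```

With real data, a consistently large residual for one camera pair would point at a bad extrinsic estimate for that pair.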

@vswdigitall vswdigitall changed the title What for square_size? How to calibrate 360 camera Sep 5, 2022
@vswdigitall
Author

vswdigitall commented Sep 6, 2022

Hi, thanks for the help.
We made a new dataset with more angles and images.
But I got an error when calibrating it:
0001708 | 2022-09-06, 17:55:38.742625 [info] - Board extraction done!
0001709 | 2022-09-06, 17:55:38.742675 [info] - Intrinsic calibration initiated
0001710 | 2022-09-06, 17:55:38.742685 [info] - Initializing camera calibration using images
0001711 | 2022-09-06, 17:55:38.742693 [info] - NB of board available in this camera :: 329
0001712 | 2022-09-06, 17:55:38.742699 [info] - NB of frames where this camera saw a board :: 73
0001713 | 2022-09-06, 17:55:42.852374 [info] - cameraMatrix : [1283.425461093044, 0, 2015.5; 0, 1283.425461093044, 1519.5; 0, 0, 1]
0001714 | 2022-09-06, 17:55:42.909568 [info] - distCoeffs : [0; 0; 0; 0]
0001715 | 2022-09-06, 17:55:42.909617 [info] - NB of board available in this camera :: 365
0001716 | 2022-09-06, 17:55:42.909627 [info] - NB of frames where this camera saw a board :: 65
0001717 | 2022-09-06, 17:55:44.727771 [info] - cameraMatrix : [1283.425461093044, 0, 2015.5; 0, 1283.425461093044, 1519.5; 0, 0, 1]
0001718 | 2022-09-06, 17:55:44.727834 [info] - distCoeffs : [0; 0; 0; 0]
0001719 | 2022-09-06, 17:55:44.727883 [info] - NB of board available in this camera :: 370
0001720 | 2022-09-06, 17:55:44.727891 [info] - NB of frames where this camera saw a board :: 74
terminate called after throwing an instance of 'cv::Exception'
  what(): OpenCV(4.5.0) /home/vsw/opencv/opencv/modules/calib3d/src/fisheye.cpp:1400: error: (-215:Assertion failed) fabs(norm_u1) > 0 in function 'InitExtrinsics'

Can you help solve this error? I uploaded the dataset to /dataset1.zip

@rameau-fr
Owner

rameau-fr commented Sep 7, 2022

Hello!
I have not seen this error before, I will try to check it today.
Thank you very much for sharing your data!

@rameau-fr
Owner

I managed to reproduce your error, and I dug a little bit into it.
The problem comes from the OpenCV fisheye calibration library we use to initialize the intrinsic parameters of the cameras. Overall, I find this OpenCV library much less robust than the "normal" calibration process, and it often leads to various kinds of errors. Admittedly, we mostly tested our toolbox using perspective cameras, so it is interesting to see the difficulties when calibrating with fisheye images.

After a quick search through the internet, it appears this bug is quite common and occurs when corners are too close to the image's border. The good news is that there is an easy fix: the corners too close to the image's borders can be removed from consideration. The bad news is that we have to implement it manually, and it might take me a bit of time before proposing a new PR (we have national holidays here, so you will have to be patient until at least the end of next week, but I will try my best to propose it asap).
Reference for the fix in python: Here

Another quick and temporary fix I tried is to use prior camera calibration parameters to avoid the OpenCV intrinsic calibration function, so in the configuration file, I activated:

cam_params_path: "../../data/dataset5/Init_Intrinsics.yml" 
fix_intrinsic: 0

and the calibration is working properly with a mean projection error inferior to 0.4px. So if you have some prior calibration of the camera's intrinsic (even an approximation of it), it should work properly!

Thank you again for your feedback, every problem you face makes MC-Calib more robust ;-)

@rameau-fr
Owner

Good news! I actually fixed it quickly; your calibration now works with these modifications. I sent PR #27 and am waiting for feedback from the other contributors. The feature will be available on main soon.

I basically remove boards with points too close to the borders of the image (empirically, with a 5% margin, to be exact); because of that, about 30 boards were removed from the initialization process in your sequence.

Thankfully, this fix: 1) only affects fisheye cameras, and 2) removes the "bad" boards only for the camera intrinsic initialization conducted by the OpenCV library; all the boards are still used in the rest of the calibration process.

Regarding your calibration, it now works smoothly with a projection error ~0.33px.
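The margin filter described above can be sketched as follows (hypothetical helper for illustration; the actual MC-Calib implementation lives in the C++ code and its details may differ):

```python
import numpy as np

# Sketch of rejecting boards whose corners fall too close to the image border,
# mirroring the 5% margin described above (hypothetical helper, not MC-Calib's API).
def board_inside_margin(corners, width, height, margin=0.05):
    """Return True if every (u, v) corner stays margin*size away from the border."""
    corners = np.asarray(corners, dtype=float)
    u, v = corners[:, 0], corners[:, 1]
    return bool(np.all((u > margin * width) & (u < (1.0 - margin) * width) &
                       (v > margin * height) & (v < (1.0 - margin) * height)))

# For a 4032x3040 image, a board fully inside the central region passes,
# while one with a corner near the left edge is rejected:
print(board_inside_margin([[300, 300], [3800, 2800]], 4032, 3040))  # True
print(board_inside_margin([[50, 300], [3800, 2800]], 4032, 3040))   # False
```

Raising `margin` (as suggested later in the thread with 0.1 or 0.2) simply shrinks the accepted central region.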

@vswdigitall
Author

Thank you, it works now.
0015375 | 2022-09-07, 14:12:30.021064 [info] - Final refinement done
0015376 | 2022-09-07, 14:12:30.021106 [info] - Save parameters
0015377 | 2022-09-07, 14:12:30.069013 [info] - mean reprojection error :: 0.382421
0015378 | 2022-09-07, 14:12:30.132400 [info] - Calibration took 1992 seconds

Now we will build full 360 setup and test. Will reply.

@vswdigitall
Author

I managed to reproduce your error, and I dug a little bit into it. The problem comes from the OpenCV fisheye calibration library we use to initialize the intrinsic parameters of the cameras. Overall, I find this OpenCV library much less robust than the "normal" calibration process, and it often leads to various kinds of errors. Admittedly, we mostly tested our toolbox using perspective cameras, so it is interesting to see the difficulties when calibrating with fisheye images.

After a quick search through the internet, it appears this bug is quite common and occurs when corners are too close to the image's border. The good news is that there is an easy fix: the corners too close to the image's borders can be removed from consideration. The bad news is that we have to implement it manually, and it might take me a bit of time before proposing a new PR (we have national holidays here, so you will have to be patient until at least the end of next week, but I will try my best to propose it asap). Reference for the fix in python: Here

Another quick and temporary fix I tried is to use prior camera calibration parameters to avoid the OpenCV intrinsic calibration function, so in the configuration file, I activated:

cam_params_path: "../../data/dataset5/Init_Intrinsics.yml" 
fix_intrinsic: 0

and the calibration is working properly with a mean projection error inferior to 0.4px. So if you have some prior calibration of the camera's intrinsic (even an approximation of it), it should work properly!

Thank you again for your feedback, every problem you face makes MC-Calib more robust ;-)

Got it. Thanks.

@vswdigitall
Author

vswdigitall commented Oct 5, 2022

Here we start calibration full 360 camera: https://youtu.be/JJ4M4WmVNCQ

@rameau-fr
Owner

Thank you so much for sharing your progress.
Your system is impressive, congratulations!

@vswdigitall
Author

Hi Rameau,
I made a cool dataset with our robo-arm system.
After 2 hours of feature extraction, it crashed on the last step. Can you help solve this new bug?
Dataset here: /etc/dataset1.zip, about 8.5 GB.
It would also be a good idea to save the keypoints after extraction, in case a crash happens.

@vswdigitall
Author

photo1665330030

@rameau-fr
Owner

Hello,
You are facing the same problem related to the OpenCV toolbox. It seems my fix did not entirely solve it. I cannot check before tomorrow, but you can also try it yourself by modifying

const float border_marging = 0.05f; // border margin tolerance

For instance, by setting a higher threshold like 0.1 or 0.2 (and recompiling the project); I believe it should solve the problem.

Regarding saving the point extraction, it is a great idea (especially for big projects like this). I will try to include this feature in the coming weeks!

Best regards

@vswdigitall
Author

Thank you. border_marging=0.2 works.

Screenshot from 2022-10-10 14-30-35

MRE=0.76

Will analyze.

@vswdigitall
Author

I think we got a very nice result.

Thank you for your excellent tool.

pano4

@rameau-fr
Owner

Congratulations!
It was our pleasure to help you during your calibration process. Thanks to your experiments, we discovered multiple shortcomings and bugs in multi-fisheye calibration; we have even released version 1.1 of MC-Calib containing these improvements.
On this good news, I will close this issue ;-)!

@mitch-galea

I think we got a very nice result.

Thank you for your excellent tool.

pano4

Hello, we are looking at developing a similar system with machine vision cameras. It's great to see that you were able to get such an accurate calibration. May I ask what algorithm you are using to generate the spherical image from the camera images? The quality looks great!

@vswdigitall
Author

I think we got a very nice result.
Thank you for your excellent tool.
pano4

Hello, we are looking at developing a similar system with machine vision cameras. It's great to see that you were able to get such an accurate calibration. May I ask what algorithm you are using to generate the spherical image from the camera images? The quality looks great!

Hi, it is OpenCV fisheye undistortion. All calibrations came from MC-Calib.
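For reference, the fisheye (Kannala-Brandt) model behind `distortion_model: 1` maps the incidence angle θ through a polynomial before scaling by the focal length. A minimal numpy sketch with illustrative coefficients (not the actual calibration output):

```python
import numpy as np

# Minimal sketch of the Kannala-Brandt fisheye mapping used for fisheye images
# (illustrative coefficients k1..k4, not real calibration values).
k = [0.01, -0.002, 0.0005, -0.0001]

def fisheye_theta_d(theta):
    """Distorted angle: theta * (1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8)."""
    t2 = theta * theta
    return theta * (1.0 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

# A ray at 45 degrees incidence lands at radius f * theta_d from the
# principal point on the sensor (f in pixels, illustrative value):
theta = np.deg2rad(45.0)
f = 1862.5
print(f * fisheye_theta_d(theta))  # radial pixel distance from the principal point
```

Undistortion inverts this mapping numerically per pixel, which is essentially what the OpenCV fisheye undistortion functions do before the images are stitched.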

@mitch-galea

Thanks for that @vswdigitall. I assume the OpenCV fisheye undistortion was used to undistort the separate images, but how did you handle the stitching and combination of the images?

@vswdigitall
Author

Thanks for that @vswdigitall. I assume the OpenCV fisheye undistortion was used to undistort the separate images, but how did you handle the stitching and combination of the images?

It is OpenCV too. The stitching_detailed example covers all the steps.
