
tutorial_chessboard_pose z-axis (extrinsic calibration) #729

Closed
mbajlo opened this issue Mar 23, 2020 · 9 comments

@mbajlo

mbajlo commented Mar 23, 2020

Hello,

I am trying to do the extrinsic camera calibration with <tutorial_handeye_calibration>, but I keep getting the error/warning <Problem in solving Hand-Eye Calibration by VVS> because the calibration cannot converge. I have already asked about this situation, but now I decided to open a new issue, because I think the problem is not in the end-effector to base transform; you can check what I have written in #659.

I am not sure what could have caused this, but I am confused by the coordinate system orientation when I get the chessboard pose with <tutorial_chessboard_pose>: the z-axis (blue axis) is oriented into the chessboard. Is this correct? Should it be this way, or should it point out of the chessboard (towards the camera)?
In the image you can see that the positive direction of the z-axis (blue) points into the chessboard; it is the cross product X*Y (the x-axis is red and the y-axis is green).
Anyway, is this the correct direction of the z-axis, and could it be the reason why the calibration can't converge?

Screenshot from 2020-03-23 20-53-56

thanks

@s-trinh
Contributor

s-trinh commented Mar 27, 2020

The coordinate system of the chessboard is irrelevant.
What you need is a coordinate system that is consistent for the chessboard across all the poses.

What is needed for hand-eye calibration is:

int calibrate(const std::vector<vpHomogeneousMatrix> &cMo, 
              const std::vector<vpHomogeneousMatrix> &rMe,
              vpHomogeneousMatrix &eMc);

that is, a list of:

  • chessboard poses in the camera frame (cMo)
  • end-effector poses in the robot base frame (rMe)

As output you will get the transformation between the end-effector and the camera frames (eMc).

I would suggest that you dump these transformations (cMo, rMe) and attach them to this issue.

@mbajlo
Author

mbajlo commented Mar 27, 2020

Hello @s-trinh,

thanks for the response. I am using tutorial-hand-eye-calibration, and in there the calibration is done with:
vpHandEyeCalibration::calibrate(cMo, wMe, eMc);
Basically the only difference is the name of the std::vector<vpHomogeneousMatrix> holding the end-effector transforms: in your example it is rMe, while in tutorial-hand-eye-calibration it is wMe.
Anyway, the cMo and wMe (rMe) transformations given by tutorial-hand-eye-calibration are:

~/Desktop/VISP_github/visp/build/tutorial/calibration$ ./tutorial-hand-eye-calibration --ndata 8
cMo size=8
wMe size=8
eMc size=16
Use fPe=pose_fPe_1.yaml, cPo=pose_cPo_1.yaml
homogeneous matrix wMe[0] (size ROWSxCOLL:4x4):
0.9999752662 -0.006157972626 0.003397996831 0.892669
0.006186558616 0.5403023883 -0.8414481896 2.24037e-05
0.003345669115 0.8414483993 0.5403271212 1.07353
0 0 0 1

homogeneous matrix cMo[0] (size ROWSxCOLL:4x4):
-0.9999988078 0.0001222254535 0.00153932259 0.111087
-0.0001001710744 -0.9998974691 0.01431926408 0.0270523
0.00154091494 0.01431909281 0.9998962892 0.190433
0 0 0 1

Use fPe=pose_fPe_2.yaml, cPo=pose_cPo_2.yaml
homogeneous matrix wMe[1] (size ROWSxCOLL:4x4):
0.7807061402 -0.2340874376 -0.5793970954 0.968709
-0.2251023587 0.7595927188 -0.6102031052 0.0909864
0.5829466963 0.6068129638 0.5403250653 1.09495
0 0 0 1

homogeneous matrix cMo[1] (size ROWSxCOLL:4x4):
-0.04538836813 -0.9989409172 0.007545853198 0.0430852
0.9989655802 -0.04536596379 0.003114296995 -0.129328
-0.002768673793 0.007679400477 0.9999666801 0.211508
0 0 0 1

Use fPe=pose_fPe_3.yaml, cPo=pose_cPo_3.yaml
homogeneous matrix wMe[2] (size ROWSxCOLL:4x4):
0.5410711792 -0.01793864227 0.8407854567 1.06497
-0.01911072427 0.999252027 0.03361795396 0.0162635
-0.8407596323 -0.03425772503 0.5403236521 1.09632
0 0 0 1

homogeneous matrix cMo[2] (size ROWSxCOLL:4x4):
0.9967624794 -0.08040203928 -0.0002679739841 -0.114733
0.08039029255 0.9966638245 -0.01409332718 -0.0688643
0.001400212222 0.01402615724 0.9999006482 0.210244
0 0 0 1

Use fPe=pose_fPe_4.yaml, cPo=pose_cPo_4.yaml
homogeneous matrix wMe[3] (size ROWSxCOLL:4x4):
0.7817979167 0.2251153059 0.5814766689 1.0256
0.2339679253 0.7585027625 -0.6082208227 -0.115011
-0.5779714762 0.611552662 0.5403261185 1.0959
0 0 0 1

homogeneous matrix cMo[3] (size ROWSxCOLL:4x4):
-0.05168126297 0.9984441542 -0.0209360445 -0.0879639
-0.9986377093 -0.05151751895 0.008286781841 0.0706921
0.007195315817 0.02133579487 0.9997464735 0.209678
0 0 0 1

Use fPe=pose_fPe_5.yaml, cPo=pose_cPo_5.yaml
homogeneous matrix wMe[4] (size ROWSxCOLL:4x4):
0.9993656285 -0.02839598582 0.02149438499 0.906502
0.03363392773 0.5540987938 -0.8317711738 0.0234654
0.01170894966 0.8319664625 0.5547023579 1.06577
0 0 0 1

homogeneous matrix cMo[4] (size ROWSxCOLL:4x4):
-0.9360567812 -0.001876593604 -0.3518439722 0.102347
-0.0241036038 -0.9972945338 0.06944515256 0.0472446
-0.3510223906 0.07348531368 0.9334790785 0.226976
0 0 0 1

Use fPe=pose_fPe_6.yaml, cPo=pose_cPo_6.yaml
homogeneous matrix wMe[5] (size ROWSxCOLL:4x4):
0.5515210477 0.009087499086 0.8341114741 1.02841
-0.0765170319 0.9962760277 0.03973940791 0.0444462
-0.8306441343 -0.08574085416 0.550162547 1.10917
0 0 0 1

homogeneous matrix cMo[5] (size ROWSxCOLL:4x4):
0.984211661 -0.131448541 0.1185271587 -0.1257
0.1597517426 0.9480399098 -0.2751358032 -0.0715238
-0.07620227696 0.289726786 0.9540710678 0.205516
0 0 0 1

Use fPe=pose_fPe_7.yaml, cPo=pose_cPo_7.yaml
homogeneous matrix wMe[6] (size ROWSxCOLL:4x4):
0.5517001137 -0.1412002306 0.822003333 1.04046
0.05288090647 0.9895041566 0.134480979 -0.0286737
-0.83236446 -0.03072489001 0.5533763519 1.11112
0 0 0 1

homogeneous matrix cMo[6] (size ROWSxCOLL:4x4):
0.9540754668 -0.2210153645 -0.2022182295 -0.076989
0.1608120273 0.9473936868 -0.2767393975 -0.0716796
0.2527439328 0.2315111464 0.9394270028 0.175573
0 0 0 1

Use fPe=pose_fPe_8.yaml, cPo=pose_cPo_8.yaml
homogeneous matrix wMe[7] (size ROWSxCOLL:4x4):
0.786690954 -0.184848113 -0.5890233595 1.01267
-0.2679590475 0.7573147494 -0.5955437173 0.0545188
0.5561612102 0.6263429935 0.5462409384 1.1264
0 0 0 1

homogeneous matrix cMo[7] (size ROWSxCOLL:4x4):
-0.05344934729 -0.9943901062 0.0912769633 0.0950597
0.9720909002 -0.07272474288 -0.2230479623 -0.112229
0.2284347806 0.07680773743 0.9705246635 0.195912
0 0 0 1

Problem in solving Hand-Eye Calibration by VVS
Calibration error(VISP_err=1): the algorithm doesn't converge

Save eMc.yaml
homogeneous matrix eMc (size ROWSxCOLL:4x4):

Output: Hand-eye calibration result: eMc estimated
0.2574766519 -0.5206150395 -0.8140428456 0.01195631143
0.3695098551 -0.7253686364 0.5807777615 0.05242570706
0.8928427861 0.4503335674 -0.005606899914 -0.06320333902
0 0 0 1
theta U (deg): -8.610970274 -112.6760876 58.75952596

I didn't know how to debug this info; maybe it will make more sense to you. Thanks in advance.

@fspindle
Contributor

In ViSP there are a couple of algorithms used to estimate the hand-eye transformation, and the last one, which uses a VVS (Virtual Visual Servoing) scheme, doesn't converge before reaching the max iteration number set to 30 here.

You are not lucky ;-) With your data it needs 52 iterations instead.

Modifying the max iteration number, I obtain the following transformation:

eMc:
0.2574433311  -0.5206185105  -0.8140511641  0.01195816774
0.3694777807  -0.7253943263  0.5807660811  0.05242369119
0.8928656678  0.450288172  -0.005609040142  -0.06320249863
0  0  0  1
theta U (deg): -8.613182802 -112.677976 58.75754851

It is not so far from the one that you obtained, which corresponds to the result given by the algorithm called before VVS.

Displaying the wMe poses as in the next image:
Screenshot from 2020-03-27 18-09-26
shows that some of them are very close, almost pairs (i.e. 0 and 4, ...).
To get better results you should consider a half sphere over the calibration grid and select 8 positions on that sphere, to ensure better conditioning for the minimization process.

@mbajlo
Author

mbajlo commented Mar 30, 2020

Hello @fspindle,

Thanks for the reply. Does it mean I can modify NB_ITER_MAX to, for example, 200, run make again, and try again?

@fspindle
Contributor

You can, but it is not the solution. You should rather select better robot positions for taking the data.
This is confirmed by displaying your cMo poses:

cMo

Don't forget also that for each position the calibration grid should be as big as possible in the image.

@mbajlo
Author

mbajlo commented Mar 31, 2020

Hello @fspindle thanks for the response.
I have made a lot of image/pose pairs; anyway, I tried to run the chessboard pose binary over them. The result usually does not converge. Most of the images look like this: the chessboard is as big as the image. Do you think I should have taken the images from the side, at an angle more parallel with the chessboard plane?

image-1
image-2
image-3
image-4
image-5
image-6
image-7
image-8

Thanks

@fspindle
Contributor

The size of the grid in the image is perfect.
You should just bring more orientation to the camera, like in the attached image where the blue axis represents the z-axis of the camera. Here I plot 10 positions of the camera covering the half sphere over the grid.
eye2hand-camera-poses

The z-axis of the grid frame doesn't matter. It can point upward or not.

@mbajlo
Author

mbajlo commented Apr 1, 2020

Hello @fspindle,

Thanks for the response. I made a half sphere over the chessboard and moved the camera over it while taking pictures and poses. The algorithm has converged and I got the transform.

Thanks a lot ;)
I will close this one ;)

@sp7412

sp7412 commented Apr 15, 2024

The size of the grid in the image is perfect. You should just bring more orientation to the camera, like in the attached image where the blue axis represents the z-axis of the camera. Here I plot 10 positions of the camera covering the half sphere over the grid eye2hand-camera-poses

The grid z-axis frame doesn't matter. It could be going upward or not.

Can I ask how you generated this figure, or what its source is?
