Question about the calibration results accuracy #53
After we get the estimated hand-eye transformation, is there any way to know the accuracy of the result?

Comments
Not directly. The ViSP library used to report the residuals of the Tsai-Lenz algorithm, which are only very loosely correlated with the calibration accuracy. This was removed recently (I guess because people trusted this value too much). I am working on a tool to check the accuracy after the calibration, but I have very limited time to do this.
My current temporary solution is to estimate multiple hand-eye transformations and average them. I planned to build a fixture to hold the marker in order to get a "ground truth" of the marker pose and cross-check it with the estimated hand-eye transformation, but physically it is still impossible to know the exact position of the marker. Would you mind sharing your idea on how to check the accuracy?
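As a rough illustration of that averaging workaround, here is a minimal sketch (not part of this project) that averages several estimates given as translation vectors plus (x, y, z, w) quaternions; the function name and the SciPy dependency are assumptions, not anything the repository provides:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def average_hand_eye_estimates(translations, quaternions):
    """Average several hand-eye estimates.

    translations: iterable of 3-vectors (metres).
    quaternions:  iterable of [x, y, z, w] quaternions.
    Returns the mean translation and the chordal-mean rotation as a quaternion.
    """
    t_mean = np.mean(np.asarray(translations, dtype=float), axis=0)
    q_mean = R.from_quat(np.asarray(quaternions, dtype=float)).mean().as_quat()
    return t_mean, q_mean
```

Note that the chordal mean of the rotations is only meaningful when the individual estimates are already close to each other, which is normally the case for repeated calibrations of the same setup.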
It is a very empirical consideration: if the marker is not moving with respect to the base of the robot, then the concatenation of the robot forward kinematics, the hand-eye calibration and the marker tracking must add up to the same geometric transformation for all poses of the robot (up to FK and tracking uncertainty). If this does not hold, either you have a problem with the robot (hopefully unlikely), or with the marker tracking (already more likely and easy to verify), or with the result of the calibration. Of course, while moving the robot the marker must stay within the field of view of the camera for the tracking to work.
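A back-of-the-envelope sketch of that consistency check, assuming 4x4 homogeneous matrices for the robot FK (`T_base_ee`), the hand-eye result (`T_ee_cam`, eye-in-hand case) and the marker tracking (`T_cam_marker`); all names are illustrative and this is not the code of the released tool:

```python
import numpy as np

def base_to_marker(T_base_ee, T_ee_cam, T_cam_marker):
    # Chain robot FK, hand-eye result and marker tracking into one transform
    # that should be (nearly) constant if the marker is fixed w.r.t. the base.
    return T_base_ee @ T_ee_cam @ T_cam_marker

def consistency_spread(samples, T_ee_cam):
    """samples: list of (T_base_ee, T_cam_marker) pairs, one per robot pose,
    all taken with the marker rigidly fixed with respect to the robot base.
    Returns the maximum deviation (metres) of the reconstructed marker
    position from its mean; a large value points at an FK, tracking or
    calibration problem."""
    positions = np.array([base_to_marker(T_be, T_ee_cam, T_cm)[:3, 3]
                          for T_be, T_cm in samples])
    return float(np.max(np.linalg.norm(positions - positions.mean(axis=0), axis=1)))
```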
Hi @waiyc, I started working on the evaluation tool. If you are interested, you can check out the development branch and try it out. Feedback is very welcome!
The tool is now released on the master branch. It automatically samples a position when the robot and the marker have stopped moving for a while (to be sure to sample a consistent state of the system) and both are visible. It then computes the average distance of each subsequent sample from the first one. If the calibration is good, this error should be of the same order of magnitude as the combined robot and tracking RMS error.
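The released tool is the reference implementation; purely for illustration, here is a sketch of the reported metric (translation part only), assuming the marker position has already been reconstructed in the robot base frame for every sample:

```python
import numpy as np

def average_distance_from_first(marker_positions_in_base):
    """marker_positions_in_base: Nx3 array, the marker position reconstructed
    in the robot base frame for each sampled (stationary) robot pose.
    Returns the mean distance of samples 2..N from the first sample."""
    p = np.asarray(marker_positions_in_base, dtype=float)
    return float(np.mean(np.linalg.norm(p[1:] - p[0], axis=1)))
```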