
I cannot finish calibration process #28

Closed

victor-robotic-24 opened this issue Oct 1, 2020 · 19 comments

@victor-robotic-24

On the last tab of the 'HandEyeCalibration' panel ('Calibrate'), I can take four different samples with manual calibration and record their joint states. But when I try to take the fifth sample, the following error message appears: "Solver failed to return a calibration". I don't know the reason for this issue.

@JStech
Contributor

JStech commented Oct 1, 2020

After taking the fifth sample, there's enough data to calculate a calibration, so the solver is run for the first time. Somehow it's failing for you, it seems.

To start with, some basic debugging questions:

  • Are you moving the robot between each sample?
  • Do you include rotations in those motions?
  • Have you set the correct sensor configuration on the Context tab? (Eye-in-hand for a wrist camera, Eye-to-hand for a static camera.)

Finally, what solver are you using? ("AX=XB Solver" drop-down on the Calibrate tab.) If you collect four samples, and then expand them all in the "Pose samples" pane on the "Calibrate" tab, are they all different? Could you share those sample values, maybe with a screenshot?

@victor-robotic-24
Author

Yes, I'm moving the robot between each sample and including rotations. Also, I set Eye-to-hand because I'm using a static camera.

I'm using the crigroup/Daniilidis1999 "AX=XB Solver". I have expanded all four samples and can see that the 'TF base-to-eff' and 'TF camera-to-target' of each sample are different.

These are my pose samples:

[screenshot: poseSample12]

[screenshot: poseSample34]

Pose sample that triggers the error message:

[screenshot: poseSample5]

@JStech
Contributor

JStech commented Oct 2, 2020

So, I OCRed these transforms and ran them through my Python implementation of the Daniilidis solver, and it seems they are invalid for an AX=XB calibration. In particular, sample 4 looks bad. (More about that below, but hopefully not too much.)

I suggest you make sure that the correct frames are selected in the Context tab, and add the TF display to RViz and give it a visual sanity check before taking samples (i.e. the target and robot poses have been updated and look like they do in real life). Checking the target detection image is also a good idea, to ensure the target is in view and has a good detection (the axis is drawn nicely in the corner of the target).

All the details are in the paper, but the short version is that the Daniilidis dual-quaternion solver takes each motion (a pair of samples), and calculates A as the transform the eye moved through and B as the transform the hand moved through. As a sanity check, regardless of the calibration transform, the rotation in A and the rotation in B will have the same angle. In your dataset, this is not true for the motions from sample 3 to sample 4, and from sample 4 to sample 5, so it looks like sample 4 is bad.
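
A minimal sketch of that sanity check (not the solver itself), assuming each sample is available as a pair of 4x4 homogeneous transforms; the function and variable names are illustrative only. Because inverting a transform does not change its rotation angle, the exact composition convention does not affect the check:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotation_angle(T):
    """Rotation angle (radians) of the rotation part of a 4x4 transform."""
    return np.linalg.norm(R.from_matrix(T[:3, :3]).as_rotvec())

def check_motion_angles(eef_tfs, cam_tfs, tol=0.05):
    """For each consecutive pair of samples, compare the rotation angle of the
    hand motion (B) against the rotation angle of the eye motion (A)."""
    for i in range(len(eef_tfs) - 1):
        B = np.linalg.inv(eef_tfs[i]) @ eef_tfs[i + 1]   # hand motion
        A = cam_tfs[i] @ np.linalg.inv(cam_tfs[i + 1])   # eye motion
        a, b = rotation_angle(A), rotation_angle(B)
        status = "OK" if abs(a - b) < tol else "MISMATCH"
        print(f"motion {i}->{i + 1}: angle(A)={a:.4f}  angle(B)={b:.4f}  {status}")
```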

@beduffy

beduffy commented Oct 2, 2020

Hi @JStech

Fantastic work on this repo, it works really well and I've been following the progress for a few months now. I'm working on applying it to both eye-in-hand and eye-in-base situations with my Dorna arm to pick up many objects. Above you mentioned you had a Python implementation of the Daniilidis solver. Is it possible for you to share this? I'd like to understand it more with reference to the paper. I see the C++ code has many PyObject references but cannot find the Python code within the repo.

@JStech
Contributor

JStech commented Oct 2, 2020

Thanks, glad it's getting some use.

My dual-quaternion solver is part of a proprietary client project, so I can't share it immediately, but it's only about 100 lines so maybe this weekend I'll rewrite it from the paper.

Here's another Python implementation, the one used in MoveIt Calibration. It has all the bells and whistles that make it very thorough and robust, but unfortunately that also makes it less readable.

(Edited to point to correct Python handeye implementation.)

@JStech
Contributor

JStech commented Oct 4, 2020

@beduffy I've uploaded a quick implementation here. It's completely untested because I don't have a dataset on hand--I guarantee there are bugs. The input it takes is just space-separated values, 14 per line:

tx ty tz qx qy qz qw tx ty tz qx qy qz qw
#   ^^ eef_tf ^^           ^^ camera_tf ^^

If you have a dataset to test it on, please post it as an issue on that repo, and I'll work out the bugs.
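
A small, hedged sketch of a loader for that line format (the names are illustrative, not taken from the repo):

```python
import numpy as np

def load_samples(path):
    """Read the 14-value-per-line format: base->eef pose, then camera->target
    pose, each as tx ty tz qx qy qz qw."""
    samples = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            v = [float(x) for x in line.split()]
            assert len(v) == 14, "expected two tx ty tz qx qy qz qw poses per line"
            eef = (np.array(v[0:3]), np.array(v[3:7]))     # base->eef
            cam = (np.array(v[7:10]), np.array(v[10:14]))  # camera->target
            samples.append((eef, cam))
    return samples
```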

@beduffy

beduffy commented Oct 4, 2020

Thanks a lot @JStech! I will be studying this. It looks much clearer than the hand_eye_calibration repo.

@victor-robotic-24
Author

> So, I OCRed these transforms and ran them through my Python implementation of the Daniilidis solver, and it seems they are invalid for an AX=XB calibration. In particular, sample 4 looks bad. (More about that below, but hopefully not too much.)
>
> I suggest you make sure that the correct frames are selected in the Context tab, and add the TF display to RViz and give it a visual sanity check before taking samples (i.e. the target and robot poses have been updated and look like they do in real life). Checking the target detection image is also a good idea, to ensure the target is in view and has a good detection (the axis is drawn nicely in the corner of the target).
>
> All the details are in the paper, but the short version is that the Daniilidis dual-quaternion solver takes each motion (a pair of samples), and calculates A as the transform the eye moved through and B as the transform the hand moved through. As a sanity check, regardless of the calibration transform, the rotation in A and the rotation in B will have the same angle. In your dataset, this is not true for the motions from sample 3 to sample 4, and from sample 4 to sample 5, so it looks like sample 4 is bad.

Is it necessary that in each pose sample taken (each joint state recorded) the full array of blue circles and the target frame is visible in the image? Or is it enough to see a good detection of the target frame? Can the robot partially cover the target and still give good calibration results?

@victor-robotic-24
Author

In the case of the eye-to-hand method, wouldn't the rotation in B be the same as the rotation in X, since the camera is static?

@JStech
Contributor

JStech commented Oct 6, 2020

You don't need to have the full frame in the image--that's the benefit of ArUco/ChArUco boards. You want to have an axis displayed on the corner of the target, like in the image and video at the bottom of this tutorial. The ArUco board shows green squares around the tags with blue dots in the corners. If you're using OpenCV 3.2 (which is standard for Ubuntu 18.04), it has buggy ArUco board pose detection, so that could be causing it.

As for eye-to-hand vs. eye-in-hand, they are both calibrated by solving an AX=XB equation. The transforms reported by the visual target detector are simply inverted in one case, and then the A transform (motion) is calculated in the same way. I hadn't taken this into account when I ran my solver on the data you posted, so I just fixed that and re-ran it. Now it's reporting more motions as having mismatched rotation angles, unfortunately.

I'm definitely getting the sense that this is the OpenCV 3.2 problem again.
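
A hedged sketch of that inversion, as an illustration of the idea rather than the actual MoveIt Calibration code: for eye-to-hand, each camera-to-target detection is inverted before the motions are formed, and everything else proceeds as in the eye-in-hand case. The function names are made up for this example.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(t, q):
    """4x4 homogeneous transform from a translation and an (x, y, z, w) quaternion."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(q).as_matrix()
    T[:3, 3] = t
    return T

def camera_side_transforms(camera_poses, eye_to_hand):
    """camera_poses: list of (translation, quaternion) camera->target detections."""
    tfs = [pose_to_matrix(t, q) for t, q in camera_poses]
    if eye_to_hand:
        # Static camera: invert the detections (target->camera) so the eye
        # motions are computed on the correct side of the kinematic chain.
        tfs = [np.linalg.inv(T) for T in tfs]
    return tfs
```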

@victor-robotic-24
Author

victor-robotic-24 commented Oct 7, 2020

OK, thank you.
Which version of OpenCV should I use instead?

@victor-robotic-24
Author

Is it possible to use this repository on Ubuntu 16.04 with ROS Kinetic?

@JStech
Contributor

JStech commented Oct 7, 2020

I know OpenCV 3.4 and newer versions work, and I think 3.3 does also. I don't know if anyone has run this on Kinetic. I've used Melodic mostly, and I just got it running on Noetic.

Another approach would be to try using a ChArUco target, which seems to have more-stable detections, even with older OpenCV versions. There might be other issues with Kinetic, though, that I'm not aware of.

@victor-robotic-24
Author

Can I send you a short video of how I'm doing the calibration process? I don't know what could be failing.

@samarth-robo
Contributor

samarth-robo commented Oct 20, 2020

@victor-robotic-24 sorry to jump into the discussion, but I faced the same problems and wanted to report some findings which might shed some light on the issue.
Are you doing eye-to-hand or eye-in-hand calibration? While doing eye-to-hand calibration myself, the solver gave the same error when the sensor frame in the Context tab is different from the header/frame_id field of the image topic selected in the Target tab (refer to this comment). I have the same situation as the screenshots there -- using a RealSense camera, I select /camera/color/image_rect_color as the image topic and camera_link as the sensor frame.
But the error went away when I selected camera_color_optical_frame as my sensor frame (which is == header/frame_id field of the image topic). And the result looked correct.
In theory this should not happen, since the transform from camera_link to camera_color_optical_frame is a constant, and the solver should take care of this.
I am using OpenCV 3.4.5, the handeye_target pose looks correct. Haven't tried any of this with eye-in-hand mode.
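
A quick, hedged way to check for the mismatch described above (assuming ROS 1/rospy; the topic and frame names below are the examples from this thread, not fixed values):

```python
import rospy
from sensor_msgs.msg import Image

rospy.init_node("frame_id_check")
# Grab one image and compare its header.frame_id against the sensor frame
# configured on the Context tab.
msg = rospy.wait_for_message("/camera/color/image_rect_color", Image)
print("image header.frame_id:", msg.header.frame_id)
print("matches configured sensor frame:",
      msg.header.frame_id == "camera_color_optical_frame")
```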

@samarth-robo
Contributor

samarth-robo commented Oct 21, 2020

> @beduffy I've uploaded a quick implementation here. It's completely untested because I don't have a dataset on hand--I guarantee there are bugs. The input it takes is just space-separated values, 14 per line:
>
> tx ty tz qx qy qz qw tx ty tz qx qy qz qw
> #   ^^ eef_tf ^^           ^^ camera_tf ^^
>
> If you have a dataset to test it on, please post it as an issue on that repo, and I'll work out the bugs.

@JStech , is this implementation somehow specific to eye-in-hand mode? I created 2 "datasets" of eye-to-hand calibration, and neither of them passed through the checks in Step 1 of that script. I checked with inverted eye motion matrices too.
One of those (calib_dset_camera_color_optical_frame_processed.txt) actually gave me a good calibration result with the GUI.
Attaching both datasets for reference.

The format of those files is the same as above, first 7 numbers base-to-eef and the last 7 numbers camera-to-target.

@victor-robotic-24
Author

> @victor-robotic-24 sorry to jump into the discussion, but I faced the same problems and wanted to report some findings which might shed some light on the issue.
> Are you doing eye-to-hand or eye-in-hand calibration? While doing eye-to-hand calibration myself, the solver gave the same error when the sensor frame in the Context tab is different from the header/frame_id field of the image topic selected in the Target tab (refer to this comment). I have the same situation as the screenshots there -- using a RealSense camera, I select /camera/color/image_rect_color as the image topic and camera_link as the sensor frame.
> But the error went away when I selected camera_color_optical_frame as my sensor frame (which is == header/frame_id field of the image topic). And the result looked correct.
> In theory this should not happen, since the transform from camera_link to camera_color_optical_frame is a constant, and the solver should take care of this.
> I am using OpenCV 3.4.5, the handeye_target pose looks correct. Haven't tried any of this with eye-in-hand mode.

I am doing eye-to-hand calibration. I select /camera/color/image_raw as the image topic in the Target tab, whose header/frame_id is "camera_color_optical_frame". This is the frame I am using as the sensor frame in the Context tab.

@samarth-robo
Contributor

samarth-robo commented Oct 24, 2020

@victor-robotic-24 OK that seems fine.

Another possible problem might be with the transforms library baldor that the underlying solver package, handeye, uses.

I am facing the same error as you when I use camera_link as my sensor frame, and looking at the debug messages it seems to be crashing in Line 158 of the to_axis_angle() function.

Looking at the same function in the popular transforms3d package, mat2axangle() sets the threshold for near-unit eigenvalues to unit_thresh=1e-5, whereas baldor uses a much stricter 1e-8. Changing this to 1e-5 solved the issue for me. I also verified that loosening the threshold does not produce wrong output.

If you also have some specific angles that might hit this threshold because of your camera setup, this might solve the problem for you as well.
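
For illustration, a hedged sketch of the kind of check involved, modeled on transforms3d's mat2axangle rather than the actual baldor source: the rotation axis is found as the eigenvector whose eigenvalue is numerically 1, and the threshold decides how close to 1 counts as a match.

```python
import numpy as np

def rotation_axis(rot, unit_thresh=1e-5):
    """Axis of a 3x3 rotation matrix: the eigenvector whose eigenvalue is ~1."""
    w, V = np.linalg.eig(rot)
    near_one = np.nonzero(np.abs(w - 1.0) < unit_thresh)[0]
    if near_one.size == 0:
        # With a very strict threshold (e.g. 1e-8), a slightly noisy rotation
        # matrix can land here; this is the failure mode described above.
        raise ValueError("no eigenvalue sufficiently close to 1")
    return np.real(V[:, near_one[-1]]).squeeze()
```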

To summarize,

  • Clone baldor into your workspace and make sure the handeye package is using this baldor, not the system-wide install. You can use workspace overlaying or whatever else you prefer.
  • Change 1e-8 to 1e-5 in lines 158 and 162 of the baldor code's transform.py.
  • Rebuild your workspace.

@JStech
Contributor

JStech commented Feb 18, 2021

This issue should be fixed by #63

JStech closed this as completed Feb 19, 2021