
Depth Camera Calibration #13

Open
jundengdeng opened this issue Mar 22, 2019 · 1 comment

@jundengdeng

commented Mar 22, 2019

Hi Andy,

Thanks for sharing the code. I'm quite interested in the depth camera calibration process; could you please share more details? For example:

  1. How did you get the workspace limit in robot coordinates?
  2. How did you get the checkerboard offset from tool?
  3. Where can I download the checkerboard picture?

Thanks in advance!

Best,

Jun

@nouyang

commented Jun 19, 2019

I'm not sure this is correct, but:

  1. Using the pendant, the limits are the X, Y, Z values displayed under the "TCP" box (the pendant displays mm; the code uses meters).
    e.g.
[[0.4, 0.75], [-0.25, 0.15], [-0.2 + 0.4, -0.1 + 0.4]]  [1]
[[min x, max x], [min y, max y], [min z, max z]]
  2. This is also just experimentally measured. I'm least certain about this part, but I think it is the translation the tool would need to make to reach the checkerboard center -- so, for example, +20 cm in X and -0.01 cm in Z. Presumably the tool center is the middle of the area between the gripper fingers.

EDIT: Wow not sure what I was thinking, but it's to the "tool center" of the robot (what is reported on the pendant / over TCP from the UR). And as to the sign of the offset -- it's really checkerboard_pos = tool_pos + offset, so define the offset appropriately. Well, that's my current belief based on inspecting the code, but maybe I will update the belief tomorrow, who knows. end edit
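To make that offset convention concrete, here's a rough numpy sketch -- the variable names and all numbers below are just illustrative placeholders, not pulled from the actual repo:

```python
import numpy as np

# [[min_x, max_x], [min_y, max_y], [min_z, max_z]] in meters, robot base frame.
# Placeholder values -- measure your own off the pendant.
workspace_limits = np.asarray([[0.4, 0.75], [-0.25, 0.15], [0.2, 0.3]])

# Offset from the reported tool center (TCP) to the checkerboard center,
# in meters, using the convention checkerboard_pos = tool_pos + offset.
checkerboard_offset_from_tool = np.array([0.20, 0.0, -0.0001])

tool_pos = np.array([0.50, -0.05, 0.25])  # TCP position as read from the robot
checkerboard_pos = tool_pos + checkerboard_offset_from_tool
print(checkerboard_pos)  # ~ [0.7, -0.05, 0.2499]
```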

The readme implies this calibration isn't so important if you're using the Intel D415 RealSense. For what it's worth, the format of the files is as follows (ignore the actual values):

EDIT: Yup, changed my mind. The calibration actually provides the pose of the camera relative to the robot frame. This way, the image from the camera, which may be looking at the workspace from the side or at an angle, can be warped/transformed so that it looks like it came from a perfect "bird's-eye" camera. end edit
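In other words (my reading of it, not the repo's exact code): a point in camera coordinates gets mapped into the robot frame by the 4x4 homogeneous transform stored in camera_pose.txt. A toy sketch with made-up numbers:

```python
import numpy as np

# A made-up camera pose: the camera looks straight down at the workspace
# from 0.7 m above the robot base, x-axis aligned with the robot's.
cam_pose = np.array([
    [1.0,  0.0,  0.0, 0.5],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.7],
    [0.0,  0.0,  0.0, 1.0],
])

# A 3D point in the camera frame, in homogeneous coordinates.
p_cam = np.array([0.1, 0.2, 0.6, 1.0])

# The same point expressed in the robot frame.
p_robot = cam_pose @ p_cam
print(p_robot[:3])  # ~ [0.6, -0.2, 0.1]
```

Once the pose is known, every depth pixel can be projected into the robot frame and re-rendered from directly above, which is where the bird's-eye heightmap comes from.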

--

Also, for starting out, an empty file named camera_depth_scale.txt is enough to suppress the errors that prevent the code from running.

real/camera_depth_scale.txt
1.012695312500000000e+00
real/camera_pose.txt
9.968040993643140224e-01 -1.695732684590832429e-02 -7.806431039047095899e-02 6.748152280106306522e-01
5.533242197034894325e-03 -9.602075096454146808e-01 2.792327374276499796e-01 -3.416026459607500732e-01
-7.969297786685919371e-02 -2.787722860809356273e-01 -9.570449528584960008e-01 6.668261082482905833e-01
0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
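Both files load cleanly with np.loadtxt. Here's a self-contained sketch with the file contents inlined as strings (values truncated) so it runs without the files on disk:

```python
import io
import numpy as np

# Inlined stand-ins for real/camera_depth_scale.txt and real/camera_pose.txt.
depth_scale_txt = "1.012695312500000000e+00\n"
cam_pose_txt = (
    "9.968e-01 -1.696e-02 -7.806e-02  6.748e-01\n"
    "5.533e-03 -9.602e-01  2.792e-01 -3.416e-01\n"
    "-7.969e-02 -2.788e-01 -9.570e-01  6.668e-01\n"
    "0.0 0.0 0.0 1.0\n"
)

depth_scale = float(np.loadtxt(io.StringIO(depth_scale_txt)))  # scalar
cam_pose = np.loadtxt(io.StringIO(cam_pose_txt))               # 4x4 matrix

print(depth_scale)     # 1.0126953125
print(cam_pose.shape)  # (4, 4)
```

Raw depth readings get multiplied by depth_scale to give metric depth.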
  3. Any 4x4 checkerboard will work. I used an online checkerboard generator and printed it out; e.g. here is one:
    4x4

[1] Note that it's possible the pendant display somehow differs from the actual TCP values -- my z-values were 0.07 on the pendant but 0.47 in Python. To debug, you can use examples/simple.py from https://github.com/SintefManufacturing/python-urx
