Refine ZED-Velodyne calibration #179

Closed
aaguiar96 opened this issue Jun 13, 2020 · 88 comments
@aaguiar96
Collaborator

Now that we have the objective function working, we should:

  • Recalibrate the ZED camera intrinsics
  • Record a new dataset
@aaguiar96 added the enhancement (New feature or request) label Jun 13, 2020
@eupedrosa
Collaborator

Following #149, @aaguiar96, the rotation the ZED camera has may be caused by that -83 in the P matrix. Try to redo the calibration now, but using the K matrix, to see if anything changes.

@aaguiar96
Collaborator Author

Following #149, @aaguiar96, the rotation the ZED camera has may be caused by that -83 in the P matrix. Try to redo the calibration now, but using the K matrix, to see if anything changes.

Ok, but then the -83 is not correct, right?
I'll go to the lab on Monday and try to recalibrate the camera and record a new dataset.

@eupedrosa
Collaborator

You do not need a new dataset, you can use the one that you have right now. The K matrix does not have that -83.
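
For context, assuming the standard sensor_msgs/CameraInfo semantics: K is the 3x3 intrinsic matrix of the raw (distorted) image, while P is the 3x4 projection matrix of the rectified image, whose fourth column holds Tx = -fx * baseline for the right camera of a stereo pair; that is where a value like the -83 lives. A minimal sketch of reading both out of a CameraInfo message:

import numpy as np
from sensor_msgs.msg import CameraInfo

def split_camera_info(msg: CameraInfo):
    K = np.array(msg.K).reshape(3, 3)  # intrinsics of the raw (distorted) image
    P = np.array(msg.P).reshape(3, 4)  # rectified projection; P[0, 3] = -fx * baseline
    return K, P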

@miguelriemoliveira
Member

Hi @aaguiar96, when you say "Record a new dataset" you mean record a new bag file, right?

@eupedrosa, if @aaguiar96 is going to the lab he could indeed record a new bag file because the one we have right now has several problems (#157)

@eupedrosa
Collaborator

I know!! I was just curious to see if we already have a working solution...

@aaguiar96
Collaborator Author

I know!! I was just curious to see if we already have a working solution...

I don't think it's working...

rviz_screenshot_2020_06_13-12_54_37
rviz_screenshot_2020_06_13-12_55_36

@aaguiar96
Collaborator Author

Hi @aaguiar96, when you say "Record a new dataset" you mean record a new bag file, right?

@eupedrosa, if @aaguiar96 is going to the lab he could indeed record a new bag file because the one we have right now has several problems (#157)

Yes, I don't know if I'll have time to solve all the issues since I have to perform the calibration, but at least I can record a longer dataset, with the pattern closer to the camera.

@eupedrosa
Collaborator

eupedrosa commented Jun 13, 2020

Without the camera I cannot judge :\ It is not visible. Can you add the tf tree to the visualization?

@aaguiar96
Collaborator Author

Without the camera I cannot judge :\ It is not visible. Can you add the tf tree to the visualization?

The camera is there.

rviz_screenshot_2020_06_13-12_55_36

@eupedrosa
Collaborator

Just to be clear, does this calibration use the K matrix or the P matrix?

@aaguiar96
Collaborator Author

Just to be clear, does this calibration use the K matrix or the P matrix?

The K matrix...

@aaguiar96
Collaborator Author

Calibration result:

Left:

image_width: 1280
image_height: 720
camera_name: narrow_stereo/left
camera_matrix:
  rows: 3
  cols: 3
  data: [652.25783,   0.     , 670.14407,
           0.     , 652.16532, 365.16427,
           0.     ,   0.     ,   1.     ]
camera_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.196334, 0.034772, -0.002054, 0.003261, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [ 0.99760051,  0.00946769, -0.06858274,
         -0.00884404,  0.99991679,  0.00939135,
          0.06866594, -0.00876227,  0.99760123]
projection_matrix:
  rows: 3
  cols: 4
  data: [672.44807,   0.     , 822.27319,   0.     ,
           0.     , 672.44807, 366.49005,   0.     ,
           0.     ,   0.     ,   1.     ,   0.     ]

Right:

image_width: 1280
image_height: 720
camera_name: narrow_stereo/right
camera_matrix:
  rows: 3
  cols: 3
  data: [672.6178 ,   0.     , 650.11811,
           0.     , 672.20899, 380.37149,
           0.     ,   0.     ,   1.     ]
camera_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.233312, 0.078484, -0.003954, 0.002391, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [ 0.99575785,  0.00898342, -0.09157295,
         -0.00981562,  0.99991449, -0.00864153,
          0.09148749,  0.00950371,  0.99576087]
projection_matrix:
  rows: 3
  cols: 4
  data: [672.44807,   0.     , 822.27319, -78.89036,
           0.     , 672.44807, 366.49005,   0.     ,
           0.     ,   0.     ,   1.     ,   0.     ]

@aaguiar96
Collaborator Author

Btw, the -78 appears again. This might be the translation of the right camera with respect to the left one...

I doubt that the factory calibration is wrong...
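
That reading checks out numerically, assuming the ROS convention Tx = -fx * baseline for the right camera's projection matrix; a back-of-the-envelope sketch with the values calibrated above:

fx = 672.44807        # focal length from the new projection matrices
Tx = -78.89036        # fourth-column entry of the right camera's P matrix
baseline = -Tx / fx   # ~0.117 m, close to the ZED's nominal 120 mm baseline
print('estimated baseline: %.3f m' % baseline)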

@eupedrosa
Collaborator

I also doubt it is wrong. But we should only care about the K matrix. However, for that to work, we need to have access to the unrectified images.

The image the ZED SDK returns, is it rectified or not? Or does it publish both?

@miguelriemoliveira
Member

miguelriemoliveira commented Jun 15, 2020 via email

@aaguiar96
Collaborator Author

I also doubt it is wrong. But we should only care about the K matrix. However, for that to work, we need to have access to the unrectified images.

The image the ZED SDK returns, is it rectified or not? Or does it publish both?

It publishes both. Tomorrow I will record both. I guess that the one we're using is rectified...
Can that be a problem for the optimization?

@aaguiar96
Collaborator Author

Also, to know if the calibration was any good can you tell us the reprojection error (printed by the ros calibration) as well as the number of images?

The calibration used 52 images for each camera. I did not save the reprojection error, so I don't know it now...

@miguelriemoliveira
Member

It publishes both. Tomorrow I will record both. I guess that the one we're using is rectified...
Can that be a problem for the optimization?

Hi @aaguiar96, you should record only the raw (unrectified) image. It is a problem for the optimization because we would be undistorting an already undistorted image.
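
For example, recording could be restricted to the unrectified streams; the topic names below are illustrative (they depend on the zed wrapper configuration, so check rostopic list for the real ones):

rosbag record /zed/zed_node/left_raw/image_raw_color /zed/zed_node/right_raw/image_raw_color /zed/zed_node/left_raw/camera_info /zed/zed_node/right_raw/camera_info /velodyne_points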

@aaguiar96
Collaborator Author

Hi @aaguiar96, you should record only the raw (unrectified) image. It is a problem for the optimization because we would be undistorting an already undistorted image.

Ok so... This could improve the result! Tomorrow I'll fix this! :)

@aaguiar96
Collaborator Author

Hi @eupedrosa

How did you overcome the blocking problem when saving a collection?
It's happening to me, even with bag_rate = 0.5 ...

Here's my command:

roslaunch agrob_calibration collect_data.launch output_folder:=$ATOM_DATASETS bag_rate:=0.5 overwrite:=true

@aaguiar96
Collaborator Author

aaguiar96 commented Jun 19, 2020

Btw, the rosbag looks really nice this time! :)

@aaguiar96
Collaborator Author

aaguiar96 commented Jun 19, 2020

Another thing, did you adjust the initial_estimate?

The robot is really misaligned with the pattern points. See here (for a frame where I am in front of the camera):
rviz_screenshot_2020_06_19-11_16_55

@eupedrosa
Collaborator

How did you overcome the blocking problem when saving a collection?

It has something to do with a difference of timestamps. I'm trying to fix it and I'll push it.

Another thing, did you adjust the initial_estimate?

Yes I did.

@aaguiar96
Collaborator Author

It has something to do with a difference of timestamps. I'm trying to fix it and I'll push it.

Ok... But do you think they are too different? They only differ in the nsecs field...

@eupedrosa
Collaborator

@aaguiar96, I solved it. You can pull the change. It was a typo in the key used in the config dictionary.

This is something fragile in our code. There is no hardening against this kind of thing.
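
A small sketch of the kind of hardening that would catch this earlier; the key names are hypothetical, the point is just to fail fast on a bad config dictionary:

REQUIRED_KEYS = {'output_folder', 'bag_rate', 'overwrite'}  # hypothetical key set

def validate_config(config):
    # fail with a clear message instead of a KeyError deep inside the collector
    missing = REQUIRED_KEYS - set(config)
    if missing:
        raise KeyError('missing config keys: %s' % ', '.join(sorted(missing)))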

@aaguiar96
Collaborator Author

New dataset taken from the most recent bag file.

29-06.zip

@aaguiar96
Collaborator Author

Hi @miguelriemoliveira and @eupedrosa

Here's the new set of bag files (link), which follows the new approach of having the pattern at ~45º.
I'll try out the optimization with them in the afternoon.

Yesterday I started thinking about that problem with the LiDAR beam clustering.
I tried to convert the 2D projection that we have of each LiDAR point on the pattern into an image and apply the Hough transform, but I'm afraid this approach would fail sometimes...

I'll try something more brute force, for example a clustering based on a distance threshold.
When I have some news I'll let you know.

@miguelriemoliveira
Member

miguelriemoliveira commented Jul 1, 2020 via email

@aaguiar96
Collaborator Author

About the labelling I also have some ideas. If you want we can discuss in the afternoon? 14h30? 15h?

14h30 is fine by me @miguelriemoliveira, deal! :)

@aaguiar96
Collaborator Author

Hi @miguelriemoliveira and @eupedrosa

I recorded a dataset with one of today's bag files.
Here it is.
dataset-01-17-2020.zip

I'll try it out now.

@aaguiar96
Collaborator Author

Here:

roslaunch agrob_calibration calibrate.launch dataset_file:=/home/andre-criis/Documents/saved_datasets/29-06/data_collected.json csf:="lambda name: int(name) < 3"

miguelriemoliveira added a commit that referenced this issue Jul 1, 2020
…ag run_calibration on the calibrate.launch which allows to run the system without running the optimization, which can then be launched in a separate window (better for debugging)
@miguelriemoliveira
Member

Hi @aaguiar96 ,

Check the latest commit. It automatically creates the labeled_data rviz marker.

@aaguiar96
Collaborator Author

Hi @aaguiar96 ,

Check the latest commit. It automatically creates the labeled_data rviz marker.

Hi @miguelriemoliveira, great, I'll pull now.

Good news, the labeling procedure is working with the solution you proposed.
Here are two images, one with the dataset I was showing yesterday in the meeting, and the other with the dataset I recorded today. In the latter, the correct labeling is clearly visible.

Thanks for the help @miguelriemoliveira :-)

rviz_screenshot_2020_07_01-18_51_28

rviz_screenshot_2020_07_01-18_52_16

@aaguiar96
Collaborator Author

@eupedrosa just so you know, @miguelriemoliveira proposed a labeling solution based on the spherical representation [r, theta, phi] of the velodyne data.
The beams are clustered using the theta component, and the limits are the maximum and minimum values of the phi component for each cluster.
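
A minimal numpy sketch of that idea (an illustration of the approach, not the actual ATOM code), assuming points is an Nx3 array of the pattern points in the velodyne frame:

import numpy as np

def label_beam_limits(points, theta_tol=0.005):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(z, np.hypot(x, y))  # elevation: roughly constant per beam
    phi = np.arctan2(y, x)                 # azimuth: varies along each beam
    # cluster points into beams: consecutive sorted elevations closer than theta_tol
    order = np.argsort(theta)
    clusters = [[order[0]]]
    for prev, curr in zip(order[:-1], order[1:]):
        if theta[curr] - theta[prev] < theta_tol:
            clusters[-1].append(curr)
        else:
            clusters.append([curr])
    # the limit points of each beam are its azimuth extremes (the pattern edges)
    limits = [(c[int(np.argmin(phi[c]))], c[int(np.argmax(phi[c]))]) for c in clusters]
    return clusters, limits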

aaguiar96 added a commit that referenced this issue Jul 1, 2020
@aaguiar96
Collaborator Author

@miguelriemoliveira, I committed the changes so that we're synchronized.
Now we just have to solve the mystery of the optimization!

@eupedrosa
Collaborator

Great! The rotated pattern seems to provide more information.

miguelriemoliveira added a commit that referenced this issue Jul 2, 2020
… up the objective_function.py before. Now its fixed and easier.
@eupedrosa
Collaborator

Any news? Should we keep this issue open?

miguelriemoliveira added a commit that referenced this issue Jul 2, 2020
@miguelriemoliveira
Member

Hi @aaguiar96 and @eupedrosa

I was working with @eupedrosa after working with @aaguiar96 and we looked into the normalization. It is not yet complete, but I think the results have improved a lot. The previous normalization had a bug in some collections, which could be catastrophic.

Also, we added the optimization of the intrinsics and the distortion components (distortion was not there before), since the images are clearly in need of some undistortion. After this, camera errors dropped from 2.5 pixels on average to below 1 pixel.
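
For reference, a hedged sketch of what the image residual can look like once the intrinsics and plumb_bob distortion are free parameters (an OpenCV-based illustration, not the actual objective_function.py):

import cv2
import numpy as np

def image_residuals(params, obj_pts, img_pts, rvec, tvec):
    # params: fx, fy, cx, cy followed by plumb_bob distortion k1, k2, p1, p2, k3
    fx, fy, cx, cy = params[:4]
    dist = np.asarray(params[4:9], dtype=np.float64)
    K = np.array([[fx, 0., cx], [0., fy, cy], [0., 0., 1.]])
    projected, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)  # applies distortion
    return (projected.reshape(-1, 2) - img_pts).ravel()  # per-point pixel residuals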

Then we ran a full agrob system calibration. Here is the result with some 8 collections:

https://youtu.be/6yydzfB6UBM

Then I ran with all 20 collections

https://youtu.be/k5B0wRdWcpo

Conclusions:

All laser data points are near the patterns, where they should be, both orthogonally and longitudinally.

  1. Point cloud avg error 0.008 meters, below 1 centimeter
  2. Image avg error 0.99 pixels, below 1 pixel.
  3. Velodyne is tilted upward as it should be (to compensate for the downward tilt that the zed has but that is not in the xacro)
  4. Velodyne z seems to be fine
  5. Velodyne is shifted to the right ... that's the only thing that still bothers me.

So to me this looks really good; the only thing left is 5. @aaguiar96, can you take (or ask someone to take) some pictures, and even measurements with a tape measure?

If the velodyne is really in the center of the camera, we can resort to the best calibration technique: grab a hammer and hit the sensor from left to right until it goes to the position estimated in the optimization :)

@aaguiar96
Collaborator Author

aaguiar96 commented Jul 2, 2020

Wow, this looks really, really great.

So, the problem was in the normalization and the intrinsics, right? How did you add the distortion components, since they were not available in the camera info?

I forgot to share, but I took some pictures of the sensors. Here:

IMG_20200615_082427
IMG_20200615_082435

So, now we just have to check and try to measure the difference between the calibrated and the real sensor configuration.
I'll ask someone tomorrow to measure the distances with a hammer. :)

Another thing, did you optimize all three sensors or just a single camera and the lidar?

@eupedrosa
Collaborator

From what I'm seeing, the calibration is working!! I'm more inclined to say the URDF is wrong, not the calibration.
@aaguiar96, I think you should focus on getting the correct pose of the velodyne on the robot and using it as the anchored sensor.

Furthermore, I think we can finally close this long issue.
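
Using the velodyne as the anchored sensor would amount to something like the line below in the calibration config (a sketch; the exact key name depends on the config version in use):

anchored_sensor: 'velodyne'   # keep the velodyne pose fixed and calibrate the other sensors w.r.t. it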

@aaguiar96
Collaborator Author

👏 good work team! :-)

Ok, I think you're right.
Closing this one, and opening another to focus on that.

@miguelriemoliveira
Member

Hi guys. I agree, we make a very good team! And we are almost there.

But I am not sure I am as optimistic as @eupedrosa ... perhaps, but I mean, from the picture we have this yellow distance

distances_velo_camera

and from the optimization we have

distances_velo_camera2

The question is: is this pose of the velodyne w.r.t. the camera consistent with reality? That will not change if we fix the velodyne instead of the camera, I think ...

@miguelriemoliveira
Member

One thing that does not make sense is the size of the ZED camera ... the CAD model seems too small. @aaguiar96, can you check it?

@eupedrosa
Collaborator

I am a little more optimistic because our eyes easily lie to us. Obviously, we should check.

@aaguiar96
Collaborator Author

The best thing to do is to measure it in RViz and physically...
I can do that on Monday! And I'll also take a look at the URDFs.

@miguelriemoliveira
Member

Ok, I hope you are right :) Another thing: is there a second camera there? It is not being used, right?

@aaguiar96
Collaborator Author

Another thing: is there a second camera there? It is not being used, right?

No. The other camera is a stereo RealSense. It has two fisheye lens cameras.
