Refine ZED-Velodyne calibration #179
Following #149, @aaguiar96, that rotation the ZED camera has may be caused by that -83 in the …
Ok, but then, the -83 is not correct, right?
You do not need a new dataset; you can use the one that you have right now. The …
Hi @aaguiar96, when you say "Record a new dataset" you mean record a new bag file, right? @eupedrosa, if @aaguiar96 is going to the lab he could indeed record a new bag file, because the one we have right now has several problems (#157).
I know!! I was just curious to see if we already have a working solution...
Yes, I don't know if I'll have time to solve all the issues since I have to perform the calibration, but at least I can record a longer dataset, with the pattern closer to the camera.
Without the camera I cannot judge :\ It is not visible. Can you add the tf tree to the visualization?
Just to be clear, this calibration uses the …
The …
Calibration result: Left: … Right: …
Btw, the -78 appears again. This might be the translation of the right camera in relation to the left one... I doubt that the factory calibration is wrong...
I also doubt it is wrong. But we should only care about the K matrix. However, for that to work, we need to have access to the unrectified images. The image the ZED SDK returns, is it rectified or not? Or does it publish both?
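As a quick check, here is a minimal sketch (the topic name is a placeholder, not necessarily the ZED wrapper's) of telling the two streams apart from the CameraInfo, since rectified streams are normally published with zeroed distortion coefficients:

```python
import rospy
from sensor_msgs.msg import CameraInfo

def check(msg):
    # Rectified streams are usually republished with all-zero distortion
    # coefficients, while the raw stream keeps the calibrated ones.
    if any(abs(d) > 1e-9 for d in msg.D):
        rospy.loginfo("non-zero D %s -> likely the raw (unrectified) stream", list(msg.D))
    else:
        rospy.loginfo("all-zero D -> likely an already rectified stream")
    rospy.signal_shutdown("done")

rospy.init_node("check_rectified")
rospy.Subscriber("/zed/left/camera_info", CameraInfo, check)  # hypothetical topic
rospy.spin()
```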
Hi,
That was an advance. Great work!
How about a comparison between the K matrix now and the K matrix we had before?
Also, to know if the calibration was any good, can you tell us the reprojection error (printed by the ROS calibration) as well as the number of images?
Recording the non-rectified is important...
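For reference, a self-contained OpenCV sketch of how the reprojection error reported by the ROS calibrator can be reproduced offline; it runs on synthetic corner detections here, but real use would feed the detected chessboard corners:

```python
import cv2
import numpy as np

# Synthetic stand-in for the per-image chessboard detections (9x6 inner
# corners, 25 mm squares); a real run would use the detector's output.
objp = np.zeros((54, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.025

K_true = np.array([[700., 0., 640.], [0., 700., 360.], [0., 0., 1.]])
D_true = np.array([-0.1, 0.02, 0., 0., 0.])  # mild radial distortion

rng = np.random.default_rng(0)
obj_pts, img_pts = [], []
for i in range(15):  # 15 synthetic views of the pattern
    rvec = rng.normal(0.0, 0.3, 3)
    tvec = np.array([0.05 * i - 0.4, 0.0, 1.0])
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K_true, D_true)
    obj_pts.append(objp)
    img_pts.append(proj.astype(np.float32))

rms, K, D, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (1280, 720), None, None)
print("RMS reprojection error: %.4f px" % rms)  # ~0 on noise-free data
print("Recovered K:\n", K)  # compare against the factory K matrix
```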
It publishes both. Tomorrow I will record both. I guess that the one we're using is rectified...
The calibration has 52 images for each camera. I did not save the reprojection error, so I don't know it now...
Hi @aaguiar96, you should record only the raw (unrectified) image. It is a problem for the optimization because we would be undistorting an already undistorted image.
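To illustrate why this matters, a small OpenCV sketch (with made-up intrinsics) of "undistorting an undistorted image": running the distortion model a second time warps the already rectified frame.

```python
import cv2
import numpy as np

K = np.array([[700., 0., 640.], [0., 700., 360.], [0., 0., 1.]])  # made-up intrinsics
D = np.array([-0.1, 0.02, 0., 0., 0.])

raw = np.random.randint(0, 255, (720, 1280, 3), np.uint8)  # stand-in for a raw frame
rect_once = cv2.undistort(raw, K, D)         # correct: raw image -> rectified image
rect_twice = cv2.undistort(rect_once, K, D)  # wrong: rectified image warped again
print("pixels changed by the second pass:", int((rect_once != rect_twice).sum()))
```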
Ok so... This could improve the result! Tomorrow I'll fix this! :)
How did you overcome the blocking problem when saving a collection? Here's my command: …
Btw, the rosbag looks really nice this time! :)
It has something to do with a difference of timestamps. I'm trying to fix it and I'll push it.
Yes, I did.
Ok... But do you think that they are too different? They just differ on the …
@aaguiar96, I solved it. You can pull the change. It was a typo in the key used in the … This is something fragile in our code. There is no hardening against this kind of stuff.
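As an aside, one way to harden against small timestamp differences when saving a collection would be an approximate-time synchronizer with an explicit slop instead of expecting identical stamps; a sketch (topic names are assumptions, not necessarily the ones used here):

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

def collect(image_msg, cloud_msg):
    dt = abs((image_msg.header.stamp - cloud_msg.header.stamp).to_sec())
    rospy.loginfo("saving collection, stamp difference = %.3f s", dt)

rospy.init_node("collector")
image_sub = message_filters.Subscriber("/zed/left/image_raw", Image)  # hypothetical
cloud_sub = message_filters.Subscriber("/velodyne_points", PointCloud2)
sync = message_filters.ApproximateTimeSynchronizer(
    [image_sub, cloud_sub], queue_size=10, slop=0.1)  # accept stamps within 100 ms
sync.registerCallback(collect)
rospy.spin()
```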
New dataset taken from the most recent bag file.
Hi @miguelriemoliveira and @eupedrosa Here's the link (<https://drive.google.com/file/d/1vJuBApSFSqjgWRJK47A7wwvQBK_ZucPd/view?usp=sharing>) to the new set of bag files that consider the new approach of having the pattern at ~45º. I'll try out the optimization with them in the afternoon. I started thinking yesterday about that problem of the LiDAR beam clustering. I tried to convert the 2D projection that we have of each LiDAR point on the pattern to an image to apply the Hough transform, but I'm afraid that this approach will fail sometimes... I'll try out something more brute force, a clustering based on a distance threshold for example. When I have some news I'll let you know.
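For illustration, a minimal numpy sketch of the brute-force distance-threshold clustering idea (the threshold value is a placeholder): walk the points in scan order and open a new cluster whenever the gap to the previous point exceeds the threshold.

```python
import numpy as np

def cluster_by_distance(points, threshold=0.2):
    """points: (N, 3) array in scan order; returns one cluster id per point."""
    labels = np.zeros(len(points), dtype=int)
    for i in range(1, len(points)):
        gap = np.linalg.norm(points[i] - points[i - 1])
        # A jump larger than the threshold starts a new cluster.
        labels[i] = labels[i - 1] + 1 if gap > threshold else labels[i - 1]
    return labels

pts = np.array([[0.00, 0, 0], [0.05, 0, 0], [0.10, 0, 0],   # one close segment
                [2.00, 0, 0], [2.05, 0, 0]])                 # far-away segment
print(cluster_by_distance(pts))  # -> [0 0 0 1 1]
```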
Hi André,
Thanks. I will download. About the labelling I also have some ideas. If you want we can discuss in the afternoon? 14h30? 15h?
Miguel
14h30 is fine by me @miguelriemoliveira, deal! :)
Hi @miguelriemoliveira and @eupedrosa, I recorded a dataset with one of today's bag files. I'll try it out now.
Here: …
…ag run_calibration on the calibrate.launch, which allows running the system without the optimization; the optimization can then be launched in a separate window (better for debugging)
Hi @aaguiar96, check the latest commit. It automatically creates the labeled_data rviz marker.
Hi @miguelriemoliveira, great, I'll pull now. Good news, the labeling procedure is working with the solution you proposed. Thanks for the help @miguelriemoliveira :-)
@eupedrosa just for you to know, @miguelriemoliveira proposed a labeling solution considering the spherical representation …
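For the record, a rough sketch of the spherical-representation idea as I understand it (the 2º vertical spacing below assumes a VLP-16-style beam layout): convert each point to range/azimuth/elevation and group points by elevation, since all points of one beam share approximately the same elevation angle.

```python
import numpy as np

def to_spherical(points):
    """points: (N, 3) xyz array -> (range, azimuth, elevation) per point."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    return r, np.arctan2(y, x), np.arcsin(z / r)

pts = np.random.randn(100, 3) * 0.3 + [5.0, 0.0, 0.0]  # stand-in for pattern points
_, _, elev = to_spherical(pts)
# With a VLP-16-style 2-degree vertical spacing, rounding the elevation to the
# nearest beam angle gives a beam label per point.
beam = np.round(np.degrees(elev) / 2.0).astype(int)
print("beam ids:", np.unique(beam))
```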
@miguelriemoliveira, I committed the changes so that we're synchronized.
Great! The rotated pattern seems to provide more information.
… up the objective_function.py before. Now it's fixed and easier.
Any news? Should we keep this issue open?
Hi @aaguiar96 and @eupedrosa. I was working with @eupedrosa after working with @aaguiar96, and we looked into the normalization. It is not yet complete, but I think the results have improved very much. The previous normalization had a bug in some collections which could be catastrophic. Also, we added the optimization of the intrinsics and the distortion components (distortion was not there before), since the images clearly have some distortion. After this, camera errors dropped from 2.5 pixels average to below 1 pixel. Then we ran a full agrob system calibration. Here is the result with some 8 collections … Then I ran with all 20 collections … Conclusions: all laser data points are near the patterns, where they should be, both orthogonally and longitudinally.
So to me this looks really good; the only thing is 5. @aaguiar96, can you take (or ask someone to take) some pictures, and even measurements with a tape measure? If the velodyne is really in the center of the camera, we can resort to the best calibration technique: grab a hammer and hit the sensor from left to right until it goes to the position estimated by the optimization :)
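For context, a minimal sketch of what bringing the distortion components into the optimization means for the camera residual: the pattern corners are projected with both K and the distortion vector D, so the optimizer is free to adjust D as well (all values below are illustrative placeholders, not the agrob calibration):

```python
import cv2
import numpy as np

K = np.array([[700., 0., 640.], [0., 700., 360.], [0., 0., 1.]])
D = np.array([-0.1, 0.02, 0., 0., 0.])  # distortion, now also a free parameter
rvec, tvec = np.zeros(3), np.array([0., 0., 1.5])

# 9x6 pattern corners in the pattern frame (10 cm squares, illustrative).
corners_3d = np.array([[x * 0.1, y * 0.1, 0.] for y in range(6) for x in range(9)],
                      np.float32)

projected, _ = cv2.projectPoints(corners_3d, rvec, tvec, K, D)
detected = projected.reshape(-1, 2) + np.random.normal(0, 0.5, (54, 2))  # fake detections
residual = np.linalg.norm(projected.reshape(-1, 2) - detected, axis=1)
print("avg reprojection error: %.2f px" % residual.mean())
```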
Wow, looks really really great. So the problem was in the normalization and intrinsics, right? How did you add the distortion components, since they were not available in the camera info? I forgot to share, but I took some pictures of the sensors. Here: … So now we just have to check and try to measure the distance between the calibration and the real sensor configuration. Another thing: did you optimize the three sensors, or just a single camera and the LiDAR?
From what I'm seeing, the calibration is working!! I'm more inclined to say the URDF is wrong, not the calibration. Furthermore, I think we can finally close this long issue.
👏 good work team! :-) Ok, I think you're right.
Hi guys. I agree, we make a very good team! And we are almost there. But I am not sure I am as optimistic as @eupedrosa... Perhaps, but I mean, from the picture we have this yellow distance … and from the optimization we have … The question is: is this pose of the velodyne w.r.t. the camera consistent with reality? That will not change if we fix the velodyne instead of the camera, I think...
One thing that does not make sense is the size of the ZED camera... the CAD model seems too small. @aaguiar96, can you check it?
I am a little more optimistic because our eyes easily lie to us. Obviously, we should check.
The best thing to do is measure it in rviz and physically...
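As a side note, a sketch of how the optimized transform could be read back programmatically for comparison against a tape measurement (the frame ids here are hypothetical placeholders):

```python
import rospy
import tf2_ros

rospy.init_node("check_extrinsics")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)  # keeps the buffer filled

t = buf.lookup_transform("zed_left_camera", "velodyne",  # hypothetical frame ids
                         rospy.Time(0), rospy.Duration(5.0))
tr = t.transform.translation
print("optimized offset: x=%.3f y=%.3f z=%.3f m" % (tr.x, tr.y, tr.z))
```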
Ok, I hope you are right :) Another thing: is there a second camera there? It is not being used, right?
No. The other camera is a stereo RealSense. It has two fisheye lens cameras.
Now that we have the objective function working, we should: …