How to collect data on my lidar_camera system? #5
Hello, can you share your method to extract corners of camera targets? Thanks a lot!
Hi, glad that you came across this package and would like to give it a try! I will try my best to answer your questions. It all depends on the accuracy/precision of calibration you need. For us, we collected multiple scenes: some of them are for training and the others are for validation. On page 6 of the paper, we compared the RMSEs. The more scenes you have, the better results you will get. To extract corners of camera targets in this repo, we manually click on the corners and write them down in getBagData.m. This package will refine the clicked corners automatically. When you write down the corners, please follow the top-left-right-bottom order. Also, when you place the boards, please make sure the left corner is taller than the right corner, as shown here. Please let me know if you have other questions!
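As a hypothetical sketch of what such a manually-clicked entry might look like in getBagData.m (the field names and pixel values below are illustrative assumptions, not the repo's exact API), with one column per corner in the top-left-right-bottom order mentioned above:

```matlab
% Hypothetical sketch -- field names and values are illustrative only.
% Each column is one manually-clicked corner in image pixel coordinates,
% ordered top, left, right, bottom.
bag_data(1).camera_target(1).corners = ...
    [340, 263, 412, 338;   % x (pixels)
     210, 283, 290, 360];  % y (pixels)
```

The package then refines these rough hand-clicked pixel locations automatically, so they only need to be approximately correct.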
Thanks, I understood! I have recorded some bags and am trying your way.
Is there any requirement for the environment? For example, does it need to be an open scene with no objects around the calibration board?
Not exactly. We tested in both outdoor and indoor scenes. However, even in a cluttered environment, please make sure that you are able to extract points on your targets. Before you use your own datasets, I would suggest you try the provided datasets and follow the instructions to see if it works on your machine. Please let me know if you have any other questions!
Cool! I will have a try on your data, thanks.
Hello, I am trying your other system (the front-end) on your data, but got errors: UMich-BipedLab/automatic_lidar_camera_calibration#2. Can you help me? Thanks!
Definitely, I will reply in that repo instead.
By the way, there is still a question: must the tag be a square object? Is a rectangle OK?
For now, due to the optimization setup, calibration targets have to be square. If you really want to use rectangular objects, it could be relaxed easily. You could change the single target_size into (h, w) if you wish. You would also need to add an extra parameter in getBagData.m. For example, in getBagData.m you may have:
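The original reply included a code snippet at this point that did not survive the page scrape. As a hypothetical sketch of the (h, w) change described above (field names and dimensions are illustrative assumptions, not the repo's exact API):

```matlab
% Hypothetical sketch -- names and sizes are illustrative only.
% Square target: a single side length in meters.
% bag_data(1).lidar_target(1).target_size = 0.8030;
% Rectangular target: split into [height, width] in meters.
bag_data(1).lidar_target(1).target_size = [0.8030, 0.6000];  % [h, w]
```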
and in computeConstraintCustomizedCost.m
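Again, the original snippet for the cost function was lost in the scrape. A hypothetical sketch of how the constraint could use separate half-height and half-width instead of a single half-side (the variable names and cost form below are assumptions, not the repo's actual implementation):

```matlab
% Hypothetical sketch -- not the repo's actual code.
% points: 2xN target points expressed in the target's own frame.
h = target_size(1) / 2;   % half-height
w = target_size(2) / 2;   % half-width
% Penalize points falling outside the [-w, w] x [-h, h] rectangle
% instead of a square of half-side target_size/2.
cost = sum(max(abs(points(1, :)) - w, 0)) + ...
       sum(max(abs(points(2, :)) - h, 0));
```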
I was about to fix this, but I am tied up with something else. I will get back to it in a bit. I am willing to accept a pull request!
Oh, I get it, great! Thank you very much!
Not a problem! Please feel free to let me know if you have other questions!
Thank you very much. I have no other questions on this issue.
Glad to hear that! I will wait for a few days and then close this issue.
Hi, I am a bit confused about these instructions. What do you mean by manually clicking on the corners? Also, what do the numbers for these corners in getBagData.m refer to? Do you just have an image and then find the pixel locations? Further information would be great. Also, regarding the order of writing down the corners: is that after the boards are placed such that the top-left corner is taller than the right corner? So after this, the first corner would be top-left, the second top-right, the third bottom-left, and the fourth bottom-right? Thanks!
Hello,
A beautiful tool for calibration!
I want to calibrate my system. I read in the paper:
For each scene, we collect approximately 10 s of synchronized data, resulting in approximately
100 pairs of scans and images.
Does that mean only about 10 s of synchronized data is needed for calibration? Do I need to change scenes?
Thanks a lot!