
How to collect data on my lidar_camera system? #5

Closed
plumewind opened this issue Mar 31, 2020 · 15 comments
Comments

@plumewind

Hello,
A beautiful tool for calibration.

I want to calibrate my own system. I read in the paper:

For each scene, we collect approximately 10 s of synchronized data, resulting in approximately
100 pairs of scans and images.

Does that mean only 10 s of synchronized data is needed for calibration? Do I need to change scenes?

Thanks a lot!

@plumewind
Author

hello,

Can you share your method for extracting the corners of the camera targets?

Thanks a lot!

@brucejk
Member

brucejk commented Mar 31, 2020

Hi,

Glad that you came across this package and would like to give it a try! I will try my best to answer your questions.

It all depends on the accuracy/precision you need from the calibration. For us, we collected multiple scenes; some of them are for training and the others are for validation. On page 6 of the paper, we compared the RMSEs. The more scenes you have, the better results you will get.

To extract corners of camera targets in this repo, we manually click on the corners and write them down in getBagData.m. This package will then refine the clicked corners automatically. When you write down the corners, please follow the top-left-right-bottom order.

Also, when you place the boards, please make sure the left corner is taller than the right corner, as shown here.

Please let me know if you have other questions!
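The manual corner-annotation step described above can be sketched as follows. This is an illustrative Python sketch, not the package's MATLAB code, and `order_corners` is a hypothetical helper: it orders four clicked pixel corners top-left-right-bottom, assuming the board is posed like a diamond (left corner taller than the right), so each extreme of the four points is a distinct corner.

```python
def order_corners(corners):
    """Order four (u, v) pixel corners as [top, left, right, bottom].

    Assumes the board is posed like a diamond so the topmost,
    leftmost, rightmost, and bottommost points are all distinct.
    Image v grows downward, so the top corner has the smallest v.
    """
    top = min(corners, key=lambda p: p[1])
    left = min(corners, key=lambda p: p[0])
    right = max(corners, key=lambda p: p[0])
    bottom = max(corners, key=lambda p: p[1])
    return [top, left, right, bottom]

# Four corners clicked in arbitrary order:
clicked = [(490, 395), (415, 470), (420, 310), (350, 380)]
print(order_corners(clicked))
# → [(420, 310), (350, 380), (490, 395), (415, 470)]
```

In the actual workflow, the ordered pixel values would then be written into getBagData.m by hand.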

@plumewind
Author

Thanks, I understand!

I have recorded some bags and am trying your method.

@plumewind
Author

Is there any requirement for the environment? For example, does the scene need to be open, with no objects around the calibration board?

@brucejk
Copy link
Member

brucejk commented Apr 1, 2020

Not exactly. We tested in both outdoor and indoor scenes. However, even in a cluttered environment, please make sure that you are able to extract points on your targets. Before you use your own datasets, I would suggest you try the provided datasets and follow the instructions to see if the package works on your machine.

Please let me know if you have any other questions!

@plumewind
Author

Cool!

I will give it a try on your data, thanks.

@plumewind
Author

Hello, I am trying your other system (the front end) on your data, but got errors:

UMich-BipedLab/automatic_lidar_camera_calibration#2

Can you help me? Thanks!

@brucejk
Member

brucejk commented Apr 2, 2020

Definitely, I will reply in that repo instead.

@plumewind
Author

By the way, there is still one question: must the target be square? Is a rectangle OK?

@brucejk
Member

brucejk commented Apr 2, 2020

For now, due to the optimization setup, calibration targets have to be square. If you really want to use rectangular targets, the constraint could be relaxed fairly easily: change the single target_size into a pair (h, w), and add the extra parameter in getBagData.m.

For example in the getBagData.m, you may have:

BagData(4).lidar_target(1).tag_size.h = 0.8051;
BagData(4).lidar_target(1).tag_size.w = 0.4;

and in computeConstraintCustomizedCost.m

cost_z = cost_z + checkCost(z_prime(i), -target_size.h/2, target_size.h/2);
cost_y = cost_y + checkCost(y_prime(i), -target_size.w/2, target_size.w/2);

I was about to fix this but I am tied up with something else; I will get back to it in a bit. I would be happy to accept a pull request!
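The relaxation described above boils down to replacing the single square bound with separate height and width bounds. A minimal Python sketch of the idea (the package itself is MATLAB; `check_cost` here is a hypothetical linear out-of-bounds penalty, not necessarily the repo's checkCost):

```python
def check_cost(value, lower, upper):
    """Penalty that is zero inside [lower, upper] and grows linearly outside."""
    if value < lower:
        return lower - value
    if value > upper:
        return value - upper
    return 0.0

def target_cost(points, h, w):
    """Sum of out-of-bounds penalties for target-frame (y, z) points,
    where a rectangular target spans [-w/2, w/2] x [-h/2, h/2]."""
    cost = 0.0
    for y, z in points:
        cost += check_cost(y, -w / 2, w / 2)  # horizontal extent: width
        cost += check_cost(z, -h / 2, h / 2)  # vertical extent: height
    return cost

# One point inside the board, one sticking out past the right edge and bottom:
pts = [(0.1, 0.3), (0.25, -0.5)]
print(target_cost(pts, h=0.8, w=0.4))
```

Setting h == w recovers the square case, which is why the maintainer describes the change as a small one.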

@plumewind
Author

Oh, I get it. Great!

Thank you very much!

@brucejk
Member

brucejk commented Apr 2, 2020

Not a problem! Please feel free to let me know if you have other questions!

@plumewind
Author

Thank you very much.

I have no other questions on this issue.

@brucejk
Member

brucejk commented Apr 2, 2020

Glad to hear that! I will wait for a few days and then close this issue.

@brucejk brucejk closed this as completed Apr 6, 2020
@anujchadha284

Hi,

I am a bit confused about these instructions:
"To extract corners of camera targets in this repo, we manually click on the corners and write them down in the getBagData.m. This package will refine the clicked corners automatically. When you write down the corners, please follow the top-left-right-bottom order."

What do you mean by manually clicking on the corners? Also, what do the numbers for these corners in getBagData.m refer to? Do you just have an image and then find the pixel locations? Further information would be great. Also, regarding the order of writing down the corners: is that after the boards are rotated such that the left corner is taller than the right corner? After this rotation, would the first corner be top left, the second top right, the third bottom left, and the fourth bottom right?

Thanks!
