[Feature-Request] Multi-cam calibration and multi-pointcloud alignment #747
Comments
Azure Kinect also has this, but it only works up to 6 meters. If Luxonis supports a longer distance, they will win.
Sweet! Yes, this would be great!
This would be a very useful feature. Between the IMU data of each camera and the ability to define the relative orientations of the cameras, the required information is already there, but an example of the appropriate way to do this would be very useful.
Plus one!
This would be a great feature for us as well.
Progress: Multi-cam calibration & spatial detection fusion
When I try to run the multi-cam calibration script, it keeps getting stuck in the still-image capture loop. Nothing is moving in the camera frame; is there anything that could be going wrong?
CC: @MaticTonin |
Start with the why: For folks that would want to use our cameras for 3D object scanning, using multiple cameras would be crucial.
Move to the what: Create a script that lets you calibrate multiple cameras looking at the same scene. This will provide the extrinsics of the cameras relative to each other, which would allow aligning the point clouds produced by the different cameras. We could then do some additional filtering of this combined point cloud.
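Once the calibration has produced extrinsics between cameras, aligning the point clouds is a single rigid transform per camera. Below is a minimal NumPy sketch of that step; the function name and the 4x4 matrix convention (`T_cam1_cam2` mapping camera 2's frame into camera 1's) are illustrative assumptions, not part of the DepthAI API.

```python
import numpy as np

def align_point_cloud(points_cam2, T_cam1_cam2):
    """Transform an (N, 3) point cloud from camera 2's frame into
    camera 1's frame using a 4x4 extrinsic matrix T_cam1_cam2."""
    # Convert to homogeneous coordinates: (N, 4)
    homo = np.hstack([points_cam2, np.ones((len(points_cam2), 1))])
    # Apply the rigid transform, then drop the homogeneous coordinate
    return (T_cam1_cam2 @ homo.T).T[:, :3]

# Hypothetical example: camera 2 is offset 0.5 m along x from camera 1
T = np.eye(4)
T[0, 3] = 0.5
pts = np.array([[0.0, 0.0, 2.0]])
print(align_point_cloud(pts, T))  # the point lands at x = 0.5 in camera 1's frame
```

The aligned clouds from all cameras can then simply be concatenated into one array before filtering.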
Move to the how:
1. Calibrating
2. Alignment
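After alignment, the "additional filtering" mentioned above could deduplicate the overlap between clouds. As a sketch of one option, here is a simple voxel-grid filter in NumPy that averages all points falling into the same voxel; the function name and voxel size are illustrative assumptions, and a library such as Open3D would normally be used instead.

```python
import numpy as np

def voxel_filter(points, voxel_size=0.01):
    """Voxel-grid filter: average all (N, 3) points that fall into the
    same voxel, merging duplicates from overlapping camera views."""
    # Quantize each point to an integer voxel index
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):
        # Centroid of every voxel's points, one axis at a time
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# Two nearby points (overlap from two cameras) plus one distant point
merged = np.array([[0.001, 0.0, 2.0], [0.002, 0.0, 2.0], [0.5, 0.0, 2.0]])
filtered = voxel_filter(merged)
print(len(filtered))  # the first two points share a voxel -> 2 points remain
```

A kd-tree based outlier removal could be layered on top of this for noisier scenes.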