
Alignment of RGB and depth in multi-camera mode #1833

Closed
manyaafonso opened this issue Jun 5, 2018 · 5 comments
@manyaafonso
Required Info
- Camera Model: D415
- Operating System & Version: Win 10
- Platform: PC
- SDK Version: 2.10.4
- Language: C

Issue Description

We are trying to acquire images simultaneously from 3 or 4 RealSense D415 cameras and would like to align the RGB and depth images (to be able to discard the background), i.e., for each camera obtain an RGB/depth image pair that is aligned pixel to pixel. However, the function rs2::align seems to work only in single-camera mode. Has anyone done this alignment for multiple cameras? Is it possible without retrieving the extrinsics?

@dorodnic
Contributor

dorodnic commented Jun 5, 2018

Hi @manyaafonso
rs2::align is not limited to a single camera.
You can create multiple pipeline objects, as in the multicam example, and apply rs2::align to the color + depth pair from each.
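The multi-pipeline approach could be sketched as follows. This is an untested outline assuming the librealsense2 C++ API (`rs2::context`, `rs2::pipeline`, `rs2::align`); stream resolutions and error handling are omitted for brevity.

```cpp
// Sketch: one pipeline per connected device, each aligned independently,
// following the structure of the SDK's multicam example.
#include <librealsense2/rs.hpp>
#include <vector>

int main()
{
    rs2::context ctx;
    std::vector<rs2::pipeline> pipelines;

    // Start one pipeline per detected device, keyed by serial number.
    for (auto&& dev : ctx.query_devices())
    {
        rs2::pipeline pipe(ctx);
        rs2::config cfg;
        cfg.enable_device(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
        cfg.enable_stream(RS2_STREAM_DEPTH);
        cfg.enable_stream(RS2_STREAM_COLOR);
        pipe.start(cfg);
        pipelines.push_back(pipe);
    }

    // Align depth into the color viewpoint; the block can be shared
    // across cameras because it is stateless per frameset.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    for (auto&& pipe : pipelines)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::frameset aligned = align_to_color.process(fs);
        rs2::depth_frame depth = aligned.get_depth_frame();
        rs2::video_frame color = aligned.get_color_frame();
        // depth and color are now pixel-aligned for this camera.
    }
    return 0;
}
```

Requires connected RealSense devices to run; without hardware the pipeline start will throw.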

@manyaafonso
Author

The problem is that we need single shots instead of a video stream. We cannot stream all cameras simultaneously at high resolution, because this uses too much bandwidth and causes dropped frames and errors. We therefore start each camera, wait for frames, and stop it, one camera at a time. We cannot use one pipeline per camera as in the multicam example, because stopping a pipeline takes about 5 seconds per camera, roughly half a minute for a 5-camera setup, which is too long for our robotic application. So we used the frame_queue interface of the individual sensors to get the last frame, which unfortunately does not support rs2::align. If you have a suggestion, I would appreciate it greatly!
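The single-shot, per-sensor capture described above could look roughly like this. This is an untested sketch; `capture_single_shot` is a hypothetical helper, not an SDK function, and the frame it returns carries no frameset pairing, which is why rs2::align cannot be applied to frames from independent queues.

```cpp
// Sketch: open one sensor into its own rs2::frame_queue, grab a single
// frame, then stop and close the sensor before moving to the next camera.
#include <librealsense2/rs.hpp>

// Hypothetical helper for one sensor/profile pair.
rs2::frame capture_single_shot(rs2::sensor& sensor,
                               const rs2::stream_profile& profile)
{
    rs2::frame_queue queue(1);   // capacity 1: keep only the latest frame
    sensor.open(profile);
    sensor.start(queue);
    rs2::frame f = queue.wait_for_frame();
    sensor.stop();
    sensor.close();
    return f;
}
```

Depth and color frames captured this way arrive as separate rs2::frame objects rather than a matched rs2::frameset, so the align block has nothing to pair them with.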

@dorodnic
Contributor

dorodnic commented Jun 5, 2018

Yes, you can create a rs2::syncer and pass it to all of the sensors instead of frame queues. You can then call wait_for_frames on this object just as you would with the pipeline, and pass the result to rs2::align.

One corner case: the syncer might decide that the pair of some frame was lost in transmission and output only one frame. The pipeline handles this for the user, but you would need to filter out any such orphan frames.
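The syncer approach, including the orphan-frame filter, might be sketched as follows. Untested outline assuming the librealsense2 C++ API; profile selection is simplified to the first matching depth/color profile per sensor, which a real application would replace with explicit resolution choices.

```cpp
// Sketch: route both sensors of one device into a shared rs2::syncer,
// align matched depth+color pairs, and skip framesets missing a partner.
#include <librealsense2/rs.hpp>
#include <vector>

int main()
{
    rs2::context ctx;
    auto devices = ctx.query_devices();
    rs2::device dev = devices[0];          // first connected camera

    rs2::syncer sync;
    std::vector<rs2::sensor> started;

    // Open a depth profile on the stereo sensor and a color profile on
    // the RGB sensor, and feed both into the same syncer.
    for (auto&& sensor : dev.query_sensors())
    {
        for (auto&& profile : sensor.get_stream_profiles())
        {
            if (profile.stream_type() == RS2_STREAM_DEPTH ||
                profile.stream_type() == RS2_STREAM_COLOR)
            {
                sensor.open(profile);
                sensor.start(sync);
                started.push_back(sensor);
                break;                     // one profile per sensor here
            }
        }
    }

    rs2::align align_to_color(RS2_STREAM_COLOR);

    for (int i = 0; i < 30; ++i)
    {
        rs2::frameset fs = sync.wait_for_frames();
        if (fs.size() < 2)
            continue;  // orphan frame: its pair was lost, so skip it
        rs2::frameset aligned = align_to_color.process(fs);
        // aligned.get_depth_frame() / aligned.get_color_frame()
        // are now a pixel-aligned pair.
    }

    for (auto&& s : started) { s.stop(); s.close(); }
    return 0;
}
```

The `fs.size() < 2` check is the orphan-frame filter described above: the pipeline does this internally, but with a raw syncer it is the caller's responsibility.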

@RealSense-Customer-Engineering
Collaborator

[RealSense Customer Engineering Team Comment]
@manyaafonso
Does the suggestion solve your problem?

@manyaafonso
Author

Thanks for the suggestion, @dorodnic .

@RealSense-Customer-Engineering we tried it, but it did not work. Since we also need to resolve some issues with the robot platform, for now we will do the alignment by measuring the camera parameters ourselves. I am therefore closing this issue.
