I am using two cameras at the same time with the multicam feature. I would like to make a live reconstruction: rather than displaying two separate views, one per camera, I want a single view showing a reconstruction of what both cameras detect. I could supply the angle difference between the two cameras to facilitate the reconstruction.
Has someone already implemented that kind of feature?
Best regards,
Emmanuel
In January 2018 at the Sundance festival, Intel demonstrated 'volumetric capture' 180 degree scanning of the human body with four hardware-synced D435s.
According to an Intel blog article about the demo, they took point cloud scans from four separate PCs (one for each camera) and used a 5th PC to sync it all together.
My transcript of Intel's recent webinar on multiple cameras includes suggestions for combining the point clouds into a single one: either calibrate the cameras with the Vicalib software, or apply an affine transform to rotate and translate the clouds in 3D space so they can then be appended together into one cloud.
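As a rough illustration of the affine-transform approach, here is a minimal NumPy sketch. It assumes you already know the extrinsics between the two cameras (here simplified to a yaw angle about the vertical axis plus a translation, as in your "angle difference" idea); the function name `merge_clouds` and the parameters are hypothetical, not part of the SDK:

```python
import numpy as np

def merge_clouds(cloud_a, cloud_b, yaw_deg, translation):
    """Rotate/translate cloud_b into cloud_a's frame, then append the clouds.

    cloud_a, cloud_b: (N, 3) arrays of XYZ points from each camera.
    yaw_deg:          known angle between the two cameras, in degrees.
    translation:      (3,) offset of camera B's origin in camera A's frame.
    """
    theta = np.radians(yaw_deg)
    # Rotation about the vertical (Y) axis by the inter-camera angle.
    rot = np.array([
        [ np.cos(theta), 0.0, np.sin(theta)],
        [ 0.0,           1.0, 0.0          ],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ])
    # Apply the rigid transform to every point of cloud_b (row vectors).
    aligned_b = cloud_b @ rot.T + np.asarray(translation)
    # Append the aligned cloud to cloud_a to get one combined cloud.
    return np.vstack((cloud_a, aligned_b))
```

In a real setup the full 6-DoF extrinsics (a 3x3 rotation plus translation, e.g. from Vicalib) would replace the single yaw angle, but the append step stays the same.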
Also please take a look at #2531. I know this issue is on Windows while ROS is (mostly) Ubuntu, but it can give you some direction (and perhaps you could even make it work with the newly announced ROS for Windows, #2465).