Hello, this is really nice work!
However, as far as I know, the LiDAR in Apple devices can only acquire 9×64 points at a time, so I wonder how you acquire the depth map in real time.
Is it generated by fusing the depth information from the LiDAR sensor with other information (such as RGB and IMU) through the `sceneDepth` API?
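For context, here is a minimal sketch (not from this repo) of how the fused depth map is requested through ARKit's `sceneDepth` frame semantic. The ARKit calls are real API; the class and method names around them are placeholders I made up:

```swift
import ARKit

// Minimal sketch: request the fused depth map via the .sceneDepth
// frame semantic (only available on LiDAR-equipped devices).
class DepthSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // .sceneDepth is unsupported on devices without a LiDAR scanner.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(configuration)
    }

    // ARKit attaches a fused depth map to every ARFrame, at camera frame rate.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let sceneDepth = frame.sceneDepth else { return }
        let depthMap = sceneDepth.depthMap  // CVPixelBuffer, Float32 metres
        print(CVPixelBufferGetWidth(depthMap), CVPixelBufferGetHeight(depthMap))  // 256 192
    }
}
```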
I'm also wondering the same thing. This article gives a nice explanation of the working principle of the Apple LiDAR: it can acquire 24×24 points from a single scan.
The resolution of a single depth image retrieved from the ARKit depth API is 256×192, so I'm wondering how Apple goes from a 24×24 LiDAR point cloud to a 256×192 depth image in real time. Is it through monocular depth estimation corrected by the LiDAR points?
The colored RGB image from the wide-angle camera and the depth readings from the LiDAR scanner are fused together using advanced machine learning algorithms to create a dense depth map that is exposed through the API.
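To make that concrete, here is a hedged sketch of inspecting the fused 256×192 map. Note that ARKit also exposes a per-pixel `confidenceMap` alongside `depthMap`, which reflects how much the fusion trusts each value; the free function below is just an illustration, not part of any shipped API:

```swift
import ARKit

// Sketch: read the fused depth map that the sceneDepth API exposes.
// depthMap is a 256x192 CVPixelBuffer of Float32 metric depth values.
func inspect(_ frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }
    let depthMap = sceneDepth.depthMap

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)    // 256
    let height = CVPixelBufferGetHeight(depthMap)  // 192
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }

    // Sample the centre pixel; the value is depth in metres along the camera ray.
    let row = base.advanced(by: (height / 2) * rowBytes)
        .assumingMemoryBound(to: Float32.self)
    print("centre depth: \(row[width / 2]) m")

    // Per-pixel confidence (.low/.medium/.high) from the ML fusion, if present.
    if let confidenceMap = sceneDepth.confidenceMap {
        print("confidence map: \(CVPixelBufferGetWidth(confidenceMap))x\(CVPixelBufferGetHeight(confidenceMap))")
    }
}
```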