Why are the depth points captured by the LiDAR sensor in the L515 fewer than the number of pixels in the RGB image? #12966
Comments
Hi @ankuraditya98 It is normal for a point cloud image not to fill up the full pixel space provided by a resolution setting and for there to be empty spaces on the image. You could try aligning depth to color so that the depth and RGB coordinates are mapped together into an aligned image with the map_to() instruction. You can then retrieve and print the coordinate values from the point cloud as vertices with the points.get_vertices() instruction. There is an L515 Python script demonstrating these functions at #11335 (comment).
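For reference, a minimal sketch of that approach with pyrealsense2 is shown below. The stream resolutions and the single-frame handling are assumptions for illustration, not taken from the thread:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.rgb8, 30)
pipeline.start(config)

pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    pc.map_to(color_frame)              # map the depth points onto the color frame
    points = pc.calculate(depth_frame)

    # One entry per depth pixel; pixels with no valid depth come back as (0, 0, 0)
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    # Normalised (u, v) texture coordinates into the color image, one per vertex
    tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
    print(verts.shape, tex.shape)
finally:
    pipeline.stop()
```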
Thanks for the quick response. I think I didn't explain the 2nd part of my query well. I have captured the LiDAR point cloud (at a resolution of 1024 x 768) and it is saved as a .ply file. I also have the corresponding RGB image (resolution 1920 x 1080) saved. I want to extract 2 things: 1) the value of depth relative to the camera (just to confirm - does the z coordinate in (x,y,z) of the saved point cloud represent the relative distance from the camera?) and 2) the (u,v) pixels that correspond to this depth value. As an example, in one of the depth captures I got 'element vertex 257866' points, so I need a mapping from 257866 (x,y,z) values to 257866 (u,v) pixel values. Note that I am working on a saved point cloud and RGB file.
Z represents distance from the camera, yes. .ply files exported from the RealSense SDK can be imported into 3D software tools such as MeshLab and Blender to do further work on the data, such as converting a point cloud into a solid object with a mesh. You can also import it into point cloud processing libraries such as PCL and Open3D. Data from a .ply cannot be imported back into the SDK once it has been exported, though. If you need to import recorded data into the SDK then exporting the data as a .bag file instead would be an appropriate choice. SDK scripts can read data from bag files as though the data is coming from a real live camera.
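As a rough illustration of the .bag workflow (the filename and the number of recorded frames here are placeholders):

```python
import pyrealsense2 as rs

# --- Record a short capture to a .bag file ---
config = rs.config()
config.enable_record_to_file("capture.bag")      # placeholder filename
pipeline = rs.pipeline()
pipeline.start(config)
for _ in range(30):                               # record roughly one second of frames
    pipeline.wait_for_frames()
pipeline.stop()

# --- Play the .bag back as though it were a live camera ---
config = rs.config()
rs.config.enable_device_from_file(config, "capture.bag")
pipeline = rs.pipeline()
pipeline.start(config)
frames = pipeline.wait_for_frames()               # same API as a real device
pipeline.stop()
```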
Hi @ankuraditya98 Do you require further assistance with this case, please? Thanks!
No, thanks for the help.
You are very welcome. Thanks very much for the update! |
@MartyG-RealSense I have a follow-up question. I am able to align the depth frame and color frame and extract the depth coordinates for all the RGB pixels using this code.
Specifically, this line captures the depth for all the pixels in an RGB image of size 1280 x 720, so I have a total of 1280 x 720 depth values in meters. I believe this depth value is in real-world coordinates, which I also verified by physically measuring the distance with a tape measure/ruler. Is there any way to also capture the real-world (x, y) coordinates? I tried using the intrinsics, but I am not sure whether this is the correct method to get world coordinates.
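For context, a depth-to-color alignment script of the kind described here generally follows the pattern below. This is a sketch rather than the exact code referred to above; the depth resolution and the test pixel are assumptions:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)        # align the depth frame to the color frame
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()

    # Depth in metres at any color pixel (u, v) of the 1280 x 720 image
    u, v = 640, 360
    dist = depth_frame.get_distance(u, v)
    print(f"Depth at ({u}, {v}): {dist:.3f} m")
finally:
    pipeline.stop()
```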
If you do not want to use pointcloud vertices to obtain the real-world XYZ like in #12966 (comment) and #4612 (comment), then you could use intrinsics as in your method above, yes. A more advanced L515 Python script that uses intrinsics to print XYZ is at #10484 (comment)
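A minimal sketch of the intrinsics route, working from an aligned depth frame as above (the pixel coordinates are placeholders):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)
try:
    frames = pipeline.wait_for_frames()
    depth_frame = align.process(frames).get_depth_frame()

    # Intrinsics of the aligned depth frame (matches the color stream after alignment)
    intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()

    u, v = 640, 360                                    # placeholder pixel
    depth = depth_frame.get_distance(u, v)             # metres
    x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth)
    print(f"Pixel ({u}, {v}) -> world coordinates ({x:.3f}, {y:.3f}, {z:.3f}) m")
finally:
    pipeline.stop()
```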
Thanks a lot for your response @MartyG-RealSense The main reason I decided to use intrinsics was that aligning depth to the colour image makes the depth frame match the colour image's dimensions of 1280x720, which means it contains around 900K depth points. But in reality the L515 was only able to extract about 200K points, as seen in the point cloud PLY file. So this leaves me with the question of which data to trust (XYZ from the PLY, or XYZ from get_distance() and the intrinsics). I understand that alignment upscales the original depth image, but doesn't that also mean it introduces errors? Let me know if I am missing something, or if there is any way to sample trustworthy depth points after alignment.
There is the potential for inaccuracy when using the rs.align method of alignment like in your script above, but the inaccuracy should be small. Accuracy is better if depth and color are aligned by mapping the two together into a pointcloud with the map_to() instruction. Pointcloud .ply files tend to be quite reliable, but when they are imported into software outside of the RealSense SDK (such as a 3D tool like MeshLab or Blender), the pointcloud image can become distorted unless the depth scale of the 3D software is set to the same depth scale that the RealSense SDK was using when saving the data to file. The depth scale of the L515 camera model is 0.000250. I would therefore recommend using get_distance() if possible.
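For completeness, the depth scale can be read from the device itself, which matters when working with raw z16 values rather than get_distance(). A sketch, assuming a connected camera:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()   # expected to be 0.000250 on the L515
print("Depth scale:", depth_scale)

# get_distance() already applies this scale and returns metres directly;
# raw z16 pixel values must be multiplied by depth_scale to convert them to metres.
pipeline.stop()
```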
Hi @ankuraditya98 Do you require further assistance with this case, please? Thanks! |
Case closed due to no further comments received. |
Issue Description
I captured a raw point cloud from the L515 today, and I observed that it can capture at a maximum resolution of 1024 x 768 = 786K points, but the number of points it actually captures is fewer than 786K and varies from scene to scene. I am just wondering what the reason behind this could be.
Also, the number of depth points is smaller than the number of RGB image pixels, so I am wondering what the best way would be to map the LiDAR point cloud to RGB pixels. In other words, I want to know the depth information for each and every RGB pixel.