Convert pointcloud to Depth image (3d to 2d) #2204
Hi @alt01
Hi Dorodnic, this case is also on the Intel Support forums. alt01 agreed with your suggestion that using rs2_project_point_to_pixel was the way to go. I wasn't able to provide an explanation for how to do this in their particular project, though. The full discussion on Intel Support is here:
I read that this function needs the intrinsic parameters of the depth camera, and I have already obtained them. I don't really understand what the "rs2_project_point_to_pixel(...)" function returns. The documentation says it converts a 3D point into a 2D image, but is it a depth image? And do I have to pass in every point (x, y, z) independently? I couldn't find an example of how to do this.
```cpp
float point[2];                 // output: pixel coordinates
float vertex[3] = { x, y, z };  // input: one 3D point, in meters
rs2_project_point_to_pixel(point, &intrinsics, vertex);
// read output pixel coordinates from point[0] and point[1]
```

You'd have to iterate over all points (x, y, z) in the point cloud. There is no simple example for this use-case because it is not a common one.
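For readers on the Python bindings, the loop described above can be sketched in pure Python. This is only a sketch assuming the plain pinhole model with no distortion (which is what rs2_project_point_to_pixel reduces to when the distortion model is "none"); the intrinsics and the point list below are made-up placeholders, not values from a real camera:

```python
# Placeholder intrinsics -- NOT real values; read yours from the stream profile
fx, fy = 615.0, 615.0    # focal lengths, in pixels
ppx, ppy = 320.0, 240.0  # principal point, in pixels

# Placeholder point cloud: (x, y, z) in meters, z pointing away from the camera
points = [(0.0, 0.0, 1.0), (0.1, -0.05, 0.8)]

pixels = []
for x, y, z in points:
    u = (x / z) * fx + ppx  # column
    v = (y / z) * fy + ppy  # row
    pixels.append((u, v))

print(pixels[0])  # the on-axis point lands on the principal point: (320.0, 240.0)
```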
Thanks @dorodnic. Are point[0] and point[1] the row and column of the pixel on the depth image corresponding to the (x, y, z) point? Because to build a depth image I would need three outputs: row, column, and value.
I applied the SDK's function rs.rs2_project_point_to_pixel() with the following inputs:
The result I got is: [13.718368530273438, -0.24103546142578125]. How should I interpret this value if it's supposed to give a 2D image pixel?
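One way to read such a result: the first value is the column (u) and the second the row (v), and they are only meaningful if they fall inside the image bounds. A slightly negative row like this one may just mean the point projects a fraction of a pixel above the frame, or that the wrong intrinsics or units were used. A small bounds check, with a hypothetical 640x480 image:

```python
def in_image(u, v, width, height):
    """True if a projected (u, v) pixel falls inside the image."""
    return 0 <= u < width and 0 <= v < height

u, v = 13.718368530273438, -0.24103546142578125
print(in_image(u, v, 640, 480))  # prints False: v is just above row 0
```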
[Realsense Customer Engineering Team Comment]
Your result should be mapped to a color pixel. I modified the Python code based on #1890 to convert from depth to 2D color. You can also refer to the article below.

```python
import pyrealsense2 as rs

# Intrinsics & Extrinsics
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics

# Depth scale - units of the values inside a depth frame,
# i.e. how to convert the value to units of 1 meter
depth_sensor = pipe_profile.get_device().first_depth_sensor()

# From pixel to 3D point
depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_value)

# From 3D depth point to 3D color point
color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)

# From color point to 2D color pixel
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)
```
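To make the three steps above concrete, here is a pure-Python sketch of what each librealsense call computes, assuming the no-distortion pinhole model. The intrinsics are placeholders and the extrinsics are identity, purely for illustration; in real code both come from the stream profiles:

```python
def deproject(intrin, pixel, depth):
    # What rs2_deproject_pixel_to_point computes (no-distortion case)
    x = (pixel[0] - intrin["ppx"]) / intrin["fx"] * depth
    y = (pixel[1] - intrin["ppy"]) / intrin["fy"] * depth
    return [x, y, depth]

def transform(extrin, p):
    # What rs2_transform_point_to_point computes: rotate, then translate
    # (the 3x3 rotation is stored column-major in librealsense extrinsics)
    r, t = extrin["rotation"], extrin["translation"]
    return [r[0]*p[0] + r[3]*p[1] + r[6]*p[2] + t[0],
            r[1]*p[0] + r[4]*p[1] + r[7]*p[2] + t[1],
            r[2]*p[0] + r[5]*p[1] + r[8]*p[2] + t[2]]

def project(intrin, p):
    # What rs2_project_point_to_pixel computes (no-distortion case)
    return [p[0] / p[2] * intrin["fx"] + intrin["ppx"],
            p[1] / p[2] * intrin["fy"] + intrin["ppy"]]

# Placeholder intrinsics and identity extrinsics, for illustration only
intrin = {"fx": 615.0, "fy": 615.0, "ppx": 320.0, "ppy": 240.0}
extrin = {"rotation": [1, 0, 0, 0, 1, 0, 0, 0, 1],
          "translation": [0.0, 0.0, 0.0]}

point = deproject(intrin, [320.0, 240.0], 1.0)  # pixel + depth -> 3D point
point = transform(extrin, point)                # identity: unchanged here
pixel = project(intrin, point)                  # 3D point -> pixel
print(pixel)  # round-trips back to [320.0, 240.0]
```

With identity extrinsics the pipeline round-trips exactly; with real depth-to-color extrinsics the output pixel lands in the color image's coordinates instead.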
[Realsense Customer Engineering Team Comment] Any update?
The solution was to project each point (x, y, z) from the point cloud
Hi all, I'm struggling with the same problem here (using pyrealsense2).
@RealSense-Customer-Engineering I've run your code, but the mapped pixel point was not what I expected. In other words, I think the code has a flaw. Or better said, somewhere, something is not right. Here is my full output:
Also, could you please re-open the issue? I don't think it's completely solved yet.
Issue Description
I obtained a depth image, an RGB image, and a point cloud (.ply) from the Intel RealSense Viewer. Then I ran a segmentation process in MATLAB, in which I deleted some points of the original point cloud, but I still have a .ply file. Now I need to convert this .ply file (the processed point cloud) into a depth image. I would like to use either MATLAB or Python.
Do you have any idea how to do this?
Original data obtained from Intel's SDK Viewer:
a) Original depth image (I also have it in grayscale).
b) Original point cloud
c) Processed point cloud, in MATLAB (also a .ply file). It is the same as the original, but with some points deleted. I need to convert this one into a depth image.
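The task described above can be sketched end to end: project every surviving point of the processed cloud with the depth camera's intrinsics and rasterize the results into a depth image. This is only a sketch with placeholder intrinsics and a tiny hard-coded point list; in a real script the points would be read from the .ply file (for example with the plyfile or open3d packages) and the intrinsics taken from the depth stream profile:

```python
# Placeholder image size and intrinsics -- substitute your camera's values
width, height = 640, 480
fx, fy, ppx, ppy = 615.0, 615.0, 320.0, 240.0

# Placeholder points standing in for the segmented .ply contents, in meters
points = [(0.0, 0.0, 1.0), (0.1, -0.05, 0.8)]

# Rasterize: depth_image[row][col] holds z in meters, 0.0 means "no data"
depth_image = [[0.0] * width for _ in range(height)]
for x, y, z in points:
    if z <= 0:
        continue  # skip invalid or behind-camera points
    col = int(round((x / z) * fx + ppx))
    row = int(round((y / z) * fy + ppy))
    if 0 <= col < width and 0 <= row < height:
        # if several points land on the same pixel, keep the nearest one
        if depth_image[row][col] == 0.0 or z < depth_image[row][col]:
            depth_image[row][col] = z
```

Pixels whose points were deleted during segmentation simply stay at 0.0, which matches how the SDK marks invalid depth.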