
Convert pointcloud to Depth image (3d to 2d) #2204

Closed · alt01 opened this issue Aug 7, 2018 · 13 comments
alt01 commented Aug 7, 2018

| Required Info | |
|---|---|
| Camera Model | Depth camera D435 |
| Firmware Version | 05.09.11.00 |
| Operating System & Version | Windows 10 |
| Platform | PC |
| SDK Version | 2.12.0 |
| Language | python |
| Segment | Robot |

Issue Description

I obtained a depth image, an RGB image, and a point cloud (.ply) from the Intel RealSense Viewer. I then ran a segmentation step in MATLAB, which deleted some points from the original point cloud, but I still have a .ply file. Now I need to convert this .ply file (the processed point cloud) into a depth image, ideally using either MATLAB or Python.

Do you have any idea how to do this?

Original data obtained from Intel's SDK Viewer:

a) Original depth image (I also have it in grayscale). [image]

b) Original point cloud. [image]

c) Processed point cloud in MATLAB (also a .ply file). It is the same as the original, but I deleted some points; this is the one I need to convert into a depth image. [image]

dorodnic (Contributor) commented Aug 7, 2018

Hi @alt01
We don't have a ready-made processing block for this task, but you could use rs2_project_point_to_pixel to project the new points onto the camera plane.

MartyG-RealSense (Collaborator) commented Aug 7, 2018

Hi Dorodnic, this case is also on the Intel Support forums. alt01 agreed with your suggestion that using rs2_project_point_to_pixel was the way to go. I wasn't able to provide an explanation for how to do this in their particular project though.

The full discussion on Intel Support is here:

https://communities.intel.com/message/558449#558449

alt01 (Author) commented Aug 7, 2018

I read that this function needs the intrinsic parameters of the depth camera, and I have already obtained them:
width: 424, height: 240, ppx: 214.063, ppy: 120.214, fx: 213.177, fy: 213.177, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]

I don't really understand what the rs2_project_point_to_pixel(...) function returns. The documentation says it converts a 3D point into a 2D image, but is it a depth image? And do I have to pass in every point (x, y, z) independently? I couldn't find an example of how to do this.

dorodnic (Contributor) commented Aug 7, 2018

```cpp
float point[2];
float vertex[3]{ x, y, z };
rs2_project_point_to_pixel(point, &intrinsics, vertex);
// read the output pixel coordinates from point[0] and point[1]
```

You'd have to iterate over all points (x, y, z) in the point cloud.

There is no simple example for this because it is not a common use-case, but rs2_project_point_to_pixel is used in some SDK tools.
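
For illustration, a minimal sketch of that loop in Python, assuming pyrealsense2 and the intrinsics quoted in this thread; `points` stands for an (N, 3) NumPy array of vertices loaded from the .ply file (the single point shown is the example used later in this thread):

```python
import numpy as np
import pyrealsense2 as rs

# Depth intrinsics quoted earlier in this thread.
intrin = rs.intrinsics()
intrin.width, intrin.height = 424, 240
intrin.ppx, intrin.ppy = 214.063, 120.214
intrin.fx, intrin.fy = 213.177, 213.177
intrin.model = rs.distortion.brown_conrady
intrin.coeffs = [0, 0, 0, 0, 0]

# (N, 3) array of (x, y, z) vertices from the segmented .ply file.
points = np.array([[-0.968, -0.582, 1.03]])

depth = np.zeros((intrin.height, intrin.width), dtype=np.float32)
for x, y, z in points:
    if z <= 0:
        continue                  # invalid / removed point
    u, v = rs.rs2_project_point_to_pixel(intrin, [x, y, z])
    col, row = int(round(u)), int(round(v))
    if 0 <= row < intrin.height and 0 <= col < intrin.width:
        depth[row, col] = z       # the pixel's value is the point's own z
```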

alt01 (Author) commented Aug 7, 2018

Thanks @dorodnic.

Are point[0] and point[1] the row and column of the pixel on the depth image corresponding to the (x, y, z) point?

Because to build a depth image I would need three outputs: row, column and value.

alt01 (Author) commented Aug 7, 2018

I applied the SDK's rs.rs2_project_point_to_pixel() function with the following inputs:

  • Depth intrinsic values: width: 424, height: 240, ppx: 214.063, ppy: 120.214, fx: 213.177, fy: 213.177, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]

  • Point to convert (from the point cloud): [-0.968, -0.582, 1.03]

The result I got is: [13.718368530273438, -0.24103546142578125]

How should I interpret this value if it's supposed to give a 2D image pixel?
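
For reference, with all distortion coefficients zero the projection reduces to the plain pinhole model, so this result can be checked by hand:

```
u = fx * (x / z) + ppx = 213.177 * (-0.968 / 1.03) + 214.063 ≈ 13.72
v = fy * (y / z) + ppy = 213.177 * (-0.582 / 1.03) + 120.214 ≈ -0.24
```

So the output is the (column, row) pixel coordinate; the slightly negative row means this particular point projects just above the top edge of the 424x240 image.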

[Realsense Customer Engineering Team Comment]
Hi @alt01,

Your result should be mapped to a color pixel. I modified the Python code based on #1890 to convert from depth to a 2D color pixel.

You can also refer to the article below:
https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0

```python
import pyrealsense2 as rs
import numpy as np

config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline = rs.pipeline()
pipe_profile = pipeline.start(config)
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

# Intrinsics & Extrinsics
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_to_color_extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)
color_to_depth_extrin = color_frame.profile.get_extrinsics_to(depth_frame.profile)
print("\n Depth intrinsics: " + str(depth_intrin))
print("\n Color intrinsics: " + str(color_intrin))
print("\n Depth to color extrinsics: " + str(depth_to_color_extrin))

# Depth scale - units of the values inside a depth frame,
# i.e. how to convert the value to units of 1 meter
depth_sensor = pipe_profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("\n\t depth_scale: " + str(depth_scale))
depth_image = np.asanyarray(depth_frame.get_data())
depth_pixel = [200, 200]  # Random pixel
depth_value = depth_image[200][200] * depth_scale
print("\n\t depth_pixel@" + str(depth_pixel) + " value: " + str(depth_value) + " meter")

# From pixel to 3D point
depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_value)
print("\n\t 3D depth_point: " + str(depth_point))

# From 3D depth point to 3D color point
color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)
print("\n\t 3D color_point: " + str(color_point))

# From color point to 2D color pixel
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)
print("\n\t color_pixel: " + str(color_pixel))
```

[Realsense Customer Engineering Team Comment]
Hi @alt01,

any update?

alt01 (Author) commented Aug 29, 2018

The solution was to project each point (x, y, z) from the point cloud back to a 2D pixel.

[Realsense Customer Engineering Team Comment]
Hi @alt01,

Is this issue resolved? You can also review #1601 for how to correctly convert from a point cloud to 2D pixels.

[Realsense Customer Engineering Team Comment]
Do you still need support on this topic? If not, we will close this.

Woodstock94 commented:

Hi all, I'm struggling with the same problem here (using pyrealsense2).
I only have the .ply file (every point has x, y, z, r, g, b info).
It is clear that I can use
`color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)`
to work on a single point, and I'm able to run this function on all my points... but how can I use its output to retrieve an actual image?
I don't know how to arrange the individual results of rs2_project_point_to_pixel into a matrix corresponding to an image.
Could you please help me figure this out?
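
For illustration, a minimal sketch of one way to assemble the projected points into images, assuming `points` (an (N, 3) float array) and `colors` (an (N, 3) uint8 array) are already loaded from the .ply into NumPy arrays and `intrin` is the target rs.intrinsics; a simple z-buffer keeps the nearest point when several land on the same pixel:

```python
import numpy as np
import pyrealsense2 as rs

def pointcloud_to_images(points, colors, intrin):
    """Splat projected points into depth and color images sized by the intrinsics."""
    depth = np.zeros((intrin.height, intrin.width), dtype=np.float32)
    rgb = np.zeros((intrin.height, intrin.width, 3), dtype=np.uint8)
    for (x, y, z), c in zip(points, colors):
        if z <= 0:
            continue  # behind the camera or invalid
        u, v = rs.rs2_project_point_to_pixel(intrin, [x, y, z])
        col, row = int(round(u)), int(round(v))
        if not (0 <= row < intrin.height and 0 <= col < intrin.width):
            continue  # projects outside the image
        # z-buffer test: keep the point closest to the camera
        if depth[row, col] == 0 or z < depth[row, col]:
            depth[row, col] = z
            rgb[row, col] = c
    return depth, rgb
```

Pixels that no point maps to stay 0 (holes), which is expected when projecting a sparse or segmented cloud.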

eyildiz-ugoe commented Aug 9, 2019

@RealSense-Customer-Engineering I've run your code, but the mapped pixel was color_pixel: [55854.78125, 143.9385223388672], which is ridiculous, since X cannot be that high. I am using a D435 with both sensors (color and depth) at 1280x720.

In other words, I think the code has a flaw; somewhere, something is not right.

Here is my full output:

```
Depth intrinsics: width: 1280, height: 720, ppx: 643.548, ppy: 367.861, fx: 652.776, fy: 652.776, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Color intrinsics: width: 1280, height: 720, ppx: 640.522, ppy: 360.351, fx: 926.419, fy: 927.058, model: Inverse Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Depth to color extrinsics: rotation: [0.999905, 0.0111846, -0.0081191, -0.0111741, 0.999937, 0.00133797, 0.00813355, -0.00124712, 0.999966]
translation: [0.0147091, -5.76127e-05, 0.000246798]

depth_scale: 0.0010000000475
depth_pixel@[200, 200] value: 0.0 meter
3D depth_point: [-0.0, -0.0, 0.0]
3D color_point: [0.01470907311886549, -5.761265492765233e-05, 0.00024679797934368253]
color_pixel: [55854.78125, 143.9385223388672]
```

Also, could you please re-open the issue? I don't think it's a completely solved one, yet.
