
Understanding differences between point clouds obtained with rs2_deproject_pixel_to_point and point clouds viewed in RealSense Viewer #11744

Closed
WRWA opened this issue Apr 28, 2023 · 3 comments

WRWA commented Apr 28, 2023

Required Info
Camera Model: D415
Firmware Version: 05.13.00.50
Operating System & Version: Linux (Ubuntu 20)
Kernel Version (Linux Only): 5.15.0
Platform: PC
SDK Version: 2.53.1
Language: Python
Segment: Robot

Issue Description

I used rs2_deproject_pixel_to_point to generate a point cloud, and there is a difference between it and what I see in the RealSense Viewer. As you can see in the photos below, the point cloud generated by the function, judging by the shape of the can opener, looks as if it has been rotated 180 degrees around the x-axis of the image compared to how it appears in the RealSense Viewer. Also, if you look at the boundaries of the can, the point cloud I obtained contains a wider margin around the can with incorrect depth values. My questions are:

  1. Is there any way to convert the point cloud so that it matches what the RealSense Viewer shows, rather than appearing rotated around the x-axis, other than changing the sign of the y and z values?
  2. Is there any way to fix the background around the can appearing flush with the can, even though I used an aligned depth frame and color frame?
    [Images: the point cloud generated with rs2_deproject_pixel_to_point compared with the point cloud shown in the RealSense Viewer]

The code I used is below:


import pyrealsense2 as rs
import numpy as np


def init_realsense():
    # Configure and start color and depth streams at matching resolution
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    profile = pipeline.start(config)

    # Align depth frames to the color stream
    align_to = rs.stream.color
    align = rs.align(align_to)

    return pipeline, profile, align

def get_frames(pipeline, align):
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)

    depth_frame = aligned_frames.get_depth_frame()
    color_frame = aligned_frames.get_color_frame()

    return depth_frame, color_frame

# roi_coord (ROI corner pixels), intrinsics, depth_frame and _img (the BGR color image)
# are set up elsewhere in my code.
test_points = []
for x in range(roi_coord[0][0], roi_coord[1][0]):
    for y in range(roi_coord[0][1], roi_coord[1][1]):
        coord = [x, y]
        depth = depth_frame.get_distance(x, y)
        if depth == 0:
            continue

        # Deproject the pixel (with its depth in metres) to a 3D point
        xyz = rs.rs2_deproject_pixel_to_point(intrinsics, coord, depth)
        pixel_color = _img[y, x]
        test_points.append([xyz[0], xyz[1], xyz[2], pixel_color[0], pixel_color[1], pixel_color[2]])

test_points = np.array(test_points)
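
For reference, because the depth frame is aligned to the color stream, the intrinsics in the snippet above would typically be the color stream's intrinsics; a minimal sketch of obtaining them from the pipeline profile (variable names illustrative):

# Sketch: intrinsics for deprojecting color-aligned depth frames.
# Depth is aligned to color, so the color stream's intrinsics are used here.
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()
intrinsics = color_profile.get_intrinsics()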

@MartyG-RealSense
Collaborator

Hi @WRWA. The two main techniques for generating an RGB-textured pointcloud with the RealSense SDK are to use align_to and rs2_deproject_pixel_to_point (as you did), or to use pc.calculate and pc.map_to to map depth and RGB together.

It is my understanding that the Viewer uses pc.calculate for its pointcloud generation, as demonstrated by the SDK's C++ example program rs-pointcloud.

https://github.com/IntelRealSense/librealsense/blob/master/examples/pointcloud/rs-pointcloud.cpp

Using rs2_deproject_pixel_to_point in combination with alignment can be less accurate than the pc.calculate method because the alignment process can introduce a small amount of inaccuracy. #4612 (comment) has an example of Python code for using pc.calculate.
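
As an illustration (not the exact code from the comment linked above), a minimal Python sketch of the pc.calculate / pc.map_to approach could look like this, with the stream settings mirroring those in your script:

import numpy as np
import pyrealsense2 as rs

# Sketch: build an RGB-textured pointcloud with pc.calculate / pc.map_to
# instead of align + rs2_deproject_pixel_to_point.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
pipeline.start(config)

pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    # Map the color frame onto the depth data, then compute the pointcloud
    pc.map_to(color_frame)
    points = pc.calculate(depth_frame)

    # Vertices are (x, y, z) in metres; texture coordinates are (u, v) into the color image
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
finally:
    pipeline.stop()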

If a pointcloud is 'wavy' like in your images - like in #1375 - then moving the camera further away from the object can help to reduce waviness. Performing a calibration of the camera can also help.

The table under the tray may not be rendering well because of its color. It is a general physics principle (not specific to RealSense) that dark grey or black surfaces absorb light, which makes it more difficult for depth cameras to read depth information from them. The darker the color shade, the more light is absorbed and so the less depth detail the camera can obtain. Casting a strong light source onto a black surface can help to bring out depth detail from that surface, though increasing illumination could reduce the quality of the tray area of the image by increasing reflections from the metal.

@WRWA
Author

WRWA commented May 2, 2023

I appreciate your help! I'll try pc.calculate.

@WRWA WRWA closed this as completed May 2, 2023
@MartyG-RealSense
Collaborator

You are very welcome. Thanks very much for the update!
