D405: Poor Depth Alignment #11329

Closed
vonHartz opened this issue Jan 18, 2023 · 10 comments

@vonHartz

Required Info
Camera Model D405
Firmware Version 05.13.00.50
Operating System & Version Ubuntu 20
Platform PC
Language Python
Segment Robot

Issue Description

It seems that for the D405, depth and RGB are poorly aligned.
For a quick reproduction, consider the following code (slightly modified from align-depth2color.py):

## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2017 Intel Corporation. All Rights Reserved.

#####################################################
##              Align Depth to Color               ##
#####################################################

# First import the library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2

# Create a pipeline
pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream
#  different resolutions of color and depth streams
config = rs.config()

# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

# found_rgb = False
# for s in device.sensors:
#     if s.get_info(rs.camera_info.name) == 'RGB Camera':
#         found_rgb = True
#         break
# if not found_rgb:
#     print("The demo requires Depth camera with Color sensor")
#     exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
profile = pipeline.start(config)

# Getting the depth sensor's depth scale (see rs-align example for explanation)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

# We will be removing the background of objects more than
#  clipping_distance_in_meters meters away
clipping_distance_in_meters = 0.5  # 0.5 meters (the original example uses 1 meter)
clipping_distance = clipping_distance_in_meters / depth_scale

# Create an align object
# rs.align allows us to perform alignment of depth frames to others frames
# The "align_to" is the stream type to which we plan to align depth frames.
align_to = rs.stream.color
align = rs.align(align_to)

# Streaming loop
try:
    while True:
        # Get frameset of color and depth
        frames = pipeline.wait_for_frames()
        # frames.get_depth_frame() is a 640x480 depth image

        # Align the depth frame to color frame
        aligned_frames = align.process(frames)

        # Get aligned frames
        aligned_depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame is a 640x480 depth image
        color_frame = aligned_frames.get_color_frame()

        # Validate that both frames are valid
        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Remove background - Set pixels further than clipping_distance to grey
        grey_color = 153
        depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

        # Render images:
        #   depth align to color on left
        #   depth on right
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        # images = np.hstack((bg_removed, depth_colormap))
        # images = bg_removed

        alpha = 0.5
        beta = 0.8

        images = cv2.addWeighted(
            bg_removed, alpha, depth_colormap, beta, 0)

        cv2.namedWindow('Align Example', cv2.WINDOW_NORMAL)
        cv2.imshow('Align Example', images)
        key = cv2.waitKey(1)
        # Press esc or 'q' to close the image window
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()

Here's an example result:

[Image "poor-alignment": color/depth overlay showing the visible misalignment]

Is there any way to mitigate this issue?

@MartyG-RealSense
Collaborator

Hi @vonHartz You should only need to comment out exit(0) rather than the whole RGB sensor checking section of lines 28-35 of align_depth2color.py in order to use that script with the D405 camera model, as described at #10445 (comment)

Please also try restoring clipping_distance to its default value of '1' in the script instead of 0.5 to see whether the results improve.

@vonHartz
Author

vonHartz commented Jan 18, 2023


Thanks for the quick reply, Marty.

Lines 28-34 do nothing else, so commenting them all out rather than only exit(0) does not change anything.
Likewise, a larger clipping_distance doesn't solve the problem either.

Instead, it seems to be a more fundamental issue of the stereo depth perception breaking down towards the edges of the image, especially for close objects.
Is there some built-in filter that could help?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 18, 2023

How close is the camera to the objects? The minimum depth sensing distance of the D405 is 7 cm. If the camera gets closer to objects than that minimum, the depth detail starts to break up, and it degrades progressively as the camera moves closer still. The minimum distance of the D405 can be reduced to 4 cm, enabling the camera to get closer to objects, by using the Disparity Shift option, demonstrated at #10963 (comment)
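
For reference, a rough sketch of changing Disparity Shift from Python through the advanced-mode depth table (the value 50 below is only an example; tune it for your scene):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
device = profile.get_device()

# The D405 exposes the rs400 advanced-mode interface
advnc_mode = rs.rs400_advanced_mode(device)
depth_table = advnc_mode.get_depth_table()
depth_table.disparityShift = 50  # 0 is the default; higher values reduce the minimum distance
advnc_mode.set_depth_table(depth_table)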

@vonHartz
Author

vonHartz commented Jan 18, 2023

The objects are ~30cm away.

I tried to play with the disparity shift, but it was already at zero and increasing it only made the results worse.

I also tried the spatial filter, as per https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb, but it made no difference. Should it be applied before or after aligning?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 18, 2023

Intel recommends applying post-processing filters before alignment in order to help avoid distortion effects such as aliasing (jagged lines). There are a few rare cases, though, where a program works much better if alignment is performed before post-processing.

Is there any improvement if you align color to depth instead of depth to color, by changing align_to = rs.stream.color to align_to = rs.stream.depth?

@vonHartz
Author

Nope, changing the alignment direction does not help either.

I did find a solution, though: the alignment gets much better if a resolution of 640x360 is used instead of 640x480.

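That is, switching the stream configuration to 640x360:

config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 360, rs.format.bgr8, 30)
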
How would I go about applying the filter before alignment?
The following does not work, as align requires a composite frame and the spatial filter does not return one:

frames = pipeline.wait_for_frames()
rs.disparity_transform(True).process(frames)
filtered_frames = spatial.process(frames)
# Align the depth frame to color frame
aligned_frames = align.process(filtered_frames)

@MartyG-RealSense
Collaborator

You may be able to exclude the spatial filter, as it typically has a long processing time in exchange for a benefit that is not very noticeable.
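
If you do still want to filter before aligning, one pattern that may work (a sketch, not tested here) is to cast each processing block's output back to a frameset with as_frameset(), so that align still receives a composite frame:

frames = pipeline.wait_for_frames()

depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)

# Each filter processes the whole frameset; cast the result back to a frameset
frames = depth_to_disparity.process(frames).as_frameset()
frames = spatial.process(frames).as_frameset()
frames = disparity_to_depth.process(frames).as_frameset()

# The filtered frameset can then be aligned as before
aligned_frames = align.process(frames)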

@MartyG-RealSense
Collaborator

Hi @vonHartz Do you have an update about this case that you can provide, please? Thanks!

@vonHartz
Author

Sure.
As I wrote above, the depth alignment is much better at a resolution of 640x360 instead of 640x480.
Also, as you noted, the filter does not help much.
Overall, though, the alignment is now satisfactory.
So I'll close this.
Thanks again.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 25, 2023

You are very welcome, @vonHartz

As a closing note, I would add the recommendation to check that the depth scale value printed by print("Depth Scale is: ", depth_scale) is 0.01 (the default depth scale of the D405 model). If it is not 0.01, try hard-coding the value with depth_scale = 0.01 instead of retrieving it at runtime.
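
A minimal example of that check:

depth_scale = depth_sensor.get_depth_scale()
if depth_scale != 0.01:
    depth_scale = 0.01  # default depth scale of the D405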
