
Setting preset options to the L515 in Python - "Wavy" result #8619

Closed
maximepollet27 opened this issue Mar 19, 2021 · 6 comments
@maximepollet27

Required Info

Camera Model: L515
Firmware Version: 01.05.04.01
Operating System & Version: Win (8.1/10)
Kernel Version (Linux Only):
Platform: PC
SDK Version: 2.42.0
Language: Python
Segment: Others

Issue Description

Hi,

I am trying to use the L515 to obtain a depth image. However, the result is always "wavy" despite the measured surface being relatively flat. This can be seen in the following picture:

[image: point cloud of the flat surface, showing the wavy result]

Because of this, I tried some of the presets, such as Short Range and Low Ambient Light, and I also tried changing the confidence threshold, but the result is always the same. Is there a way to solve this, or is it normal? I am attaching my code below so you can tell me whether I am applying the options incorrectly (once the numpy arrays are saved, I use another script with Open3D to make a .ply file similar to what is shown in the picture above).

import pyrealsense2 as rs
import numpy as np
from PIL import Image

# Declare pointcloud object, for calculating pointclouds and texture mappings
pc = rs.pointcloud()
# We want the points object to be persistent so we can display the last cloud when a frame drops
points = rs.points()

# Create pipeline and config stream
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30) # 1024*768 resolution (others are possible)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.rgb8, 30)

# pipeline.start() returns the active profile; config.resolve() is not needed here
profile = pipeline.start(config)

# Declare sensor object and set options
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.visual_preset, 3) # 5 is short range, 3 is low ambient light
depth_sensor.set_option(rs.option.confidence_threshold, 3) # 3 is the highest confidence
depth_sensor.set_option(rs.option.noise_filtering, 6)

# Create an align object to match both resolutions
# The "align_to" is the stream type to which the other stream will be aligned.
align_to = rs.stream.depth
align = rs.align(align_to)

# Get frames
frames = pipeline.wait_for_frames() # Wait until a frame is available

# Align the color frame to the depth frame
aligned_frames = align.process(frames)
depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()

# Ensure that both frames are valid
if depth_frame and color_frame:

    # Turn the depth frame into a point cloud (x, y, z per pixel) and store it in a numpy array
    point_cloud = pc.calculate(depth_frame)

    # Turn the frames into numpy arrays
    # get_vertices() returns a structured buffer; view it as an (N, 3) float array
    depth = np.asanyarray(point_cloud.get_vertices()).view(np.float32).reshape(-1, 3) # N x 3 -> e.g. (1024*768) rows of (x, y, z)
    color = np.asanyarray(color_frame.get_data()) # aligned resolution -> e.g. 1024 x 768 x (r, g, b)

    # Save arrays for future uses
    np.save('numpy_depth.npy', depth)
    np.save("numpy_color.npy", color)

    # Save the aligned color image (this is the RGB frame, not a depth render)
    im = Image.fromarray(color)
    im.save("color_image.jpg")

pipeline.stop()

Thank you in advance for your help!
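For reference, a minimal sketch of the Open3D step described above (the array contents and file names are illustrative; pixels with no depth come back from the SDK as (0, 0, 0) vertices and are worth dropping before export):

```python
import numpy as np

# Stand-in for np.load('numpy_depth.npy'): an (N, 3) array of (x, y, z) vertices.
verts = np.array([[0.0, 0.0, 0.0],   # invalid pixel (no depth)
                  [0.1, 0.2, 0.5],
                  [0.0, 0.0, 0.0],   # invalid pixel
                  [0.3, 0.1, 0.7]])

# Drop the all-zero vertices before building the point cloud.
valid = verts[~np.all(verts == 0, axis=1)]

try:
    import open3d as o3d  # only needed for the actual .ply export
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(valid)
    o3d.io.write_point_cloud("cloud.ply", pcd)
except ImportError:
    pass  # Open3D not installed; the zero-filtering above is still useful

print(valid.shape)  # (2, 3)
```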

@RealSenseSupport
Collaborator

Hi @maximepollet27

Are you seeing this only via your Python scripts?

Is there any way you can use the RealSense Viewer on your setup and see if the depth image is similar to what you're seeing here?

What kind of material is the object or flat surface in the camera's FOV?

Have you had a chance to review our documentation around the L515:
https://support.intelrealsense.com/hc/en-us/articles/360051646094-Intel-RealSense-LiDAR-Camera-L515-User-Guide
https://www.intelrealsense.com/optimizing-the-lidar-camera-l515-range/?_ga=2.164975543.909780770.1616424584-1479303487.1578600965

@maximepollet27
Author

Hi @RealSenseSupport

Thank you for your reply. Just below is a screenshot of the depth image in the RealSense Viewer. As you can see, the signal is still "wavy" but not as much.

[image: depth view in the RealSense Viewer]

When I apply the "Low ambient light" preset in the RealSense Viewer it is even better as you can see below.

[image: Viewer depth view with the Low Ambient Light preset applied]

So yes, it does seem like this only happens when I am using the python script. More specifically, it seems like it is the presets I am trying to use that are not working.

The surface is a small (40 cm × 40 cm) paper model I made to test the L515.
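As a sanity check on the preset values, here is a small sketch that resolves the preset by name instead of a raw number; it assumes pyrealsense2 2.42 exposes the rs.l500_visual_preset enum:

```python
# Prefer the named enum over magic numbers for L515 presets, so the mapping
# cannot silently drift between SDK releases. The enum rs.l500_visual_preset
# is assumed to be present in this pyrealsense2 version.
try:
    import pyrealsense2 as rs
    preset = int(rs.l500_visual_preset.low_ambient_light)
except ImportError:
    preset = 3  # numeric fallback: 3 is "Low Ambient Light" on the L515

print(preset)
# Then, with a live sensor:
# depth_sensor.set_option(rs.option.visual_preset, preset)
```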

@RealSenseSupport
Collaborator

Thanks for the images. The material of the object makes a difference, as does the ambient light in the environment. You can also look at the depth resolution, which may help. Our Depth Quality Tool (https://github.com/IntelRealSense/librealsense/releases/download/v2.42.0/Depth.Quality.Tool.exe) (or the latest SDK release) has an option to test how much IR reflectivity the L515 gets back from an object. This may also help you understand the environment and the object's reflectivity.

@maximepollet27
Author

Thank you for the advice, I will check it out. Still, why is there such a difference between the point cloud collected using the Python library and the one displayed in the RealSense Viewer?

@maximepollet27
Author

Hi @RealSenseSupport,

Do you have any updates on this issue?

@RealSenseSupport
Collaborator

Hi @maximepollet27

The RealSense Viewer applies additional filters to help visualize the point cloud. Within the settings menu for the overall Viewer, in the top right-hand corner, you will see a Performance tab. These settings can have an effect on the point cloud generated in the Viewer's 3D view. There is also an option there, "Perform Occlusion Invalidation", that may also have an effect on the point cloud viewed in the Viewer.

Overall, if you set up the exact same presets in both, meaning "No Ambient Light", "Low Ambient Light", "Max Range", or "Short Range", along with the same resolutions, the resulting point cloud should come out the same, although, as stated, the Viewer applies some post-processing filters that may not be present in the Python script.
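To illustrate the effect of those Viewer-side filters, here is a numpy-only sketch. The SDK's own blocks (rs.spatial_filter() and rs.temporal_filter()) are edge-preserving and more sophisticated; the plain moving average below just shows why unfiltered output looks wavier on a flat surface:

```python
import numpy as np

# Simulate one row of depth readings off a flat surface 0.5 m away,
# with ~1 cm of sensor noise (the "wavy" appearance).
rng = np.random.default_rng(0)
row = 0.5 + 0.01 * rng.standard_normal(1024)

# Crude spatial smoothing; rs.spatial_filter() is an edge-preserving
# refinement of this idea.
kernel = np.ones(9) / 9
smoothed = np.convolve(row, kernel, mode="same")

# Away from the boundary, the ripple shrinks by roughly 1/sqrt(9).
print(row.std() > smoothed[8:-8].std())  # True
```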

Hope this helps.
