
Acquiring multiple frames from D435 #9800

Closed
asb2111991 opened this issue Sep 28, 2021 · 10 comments
@asb2111991


Required Info
Camera Model D435
Firmware Version 05.12.15.50
Operating System & Version Win 10
Platform PC
SDK Version 2.49
Language python 3.7

Issue Description

I am capturing RGB and depth frames on a single-shot basis and processing them for an object-identification application. I have observed that the depth map sometimes contains artifacts that throw the algorithm off. I found that acquiring multiple frames in a loop lets me run a median filter and make the depth map more reliable. Is there any way to acquire 5 frames as a single burst rather than using a loop, which is time-consuming?

I am a beginner and would really appreciate it if anyone could point me to an example program. I looked into the concept of a 'frameset' and found it confusing. Any help is appreciated.

@MartyG-RealSense
Collaborator

Hi @asb2111991 The RealSense SDK's Keep() function might meet your requirements. It enables frames to be stored in memory and then processed in a batch in a single action when closing the pipeline - for example, applying post-processing and alignment to all of the stored frames and then saving them to file.

Information about using Keep() with Python can be found at #6146
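A minimal sketch of this Keep() pattern (hedged: it assumes a connected RealSense camera and is not runnable without hardware; the list name `stored` is illustrative):

```python
import pyrealsense2 as rs

# Sketch only (assumes a connected RealSense camera): capture a burst of
# framesets, keep() each one so the SDK does not recycle its memory, and
# process them in a batch after the pipeline is stopped.
pipe = rs.pipeline()
pipe.start()

stored = []
for _ in range(5):
    frameset = pipe.wait_for_frames()
    frameset.keep()        # pin this frameset in memory
    stored.append(frameset)

pipe.stop()

# The kept framesets remain valid here, so alignment / post-processing
# can be applied to all of them in one pass.
for frameset in stored:
    depth = frameset.get_depth_frame()
```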

@asb2111991
Author

Thank you so much @MartyG-RealSense. Using the keep() method, is it possible to retrieve all 5 frames, or can I only get the processed output?

@MartyG-RealSense
Collaborator

You do not have to apply post-processing, align or file-save to the memory-stored frames after the pipeline is closed if you do not need to. You should be able to access all of the stored frames.

Reading your case again though, I wonder if the artifacts that you are experiencing are due to the fact that the auto-exposure takes the first several frames to settle down after the pipeline has started. You could therefore put a few lines into your script to skip the first several frames and then capture a single frame after that to fulfill your one-shot needs.

#7932 has an example of a Python script under the Loading Code heading that demonstrates skipping the first 5 frames.

# Skip the first 5 frames to give the auto-exposure time to adjust
for _ in range(5):
    pipe.wait_for_frames()

# num_frames is defined earlier in the referenced script
for i in range(num_frames - 5):
    print(i)
    frameset = pipe.wait_for_frames()

Alternatively, you could disable the depth auto-exposure function like in #3558 (comment) as it is not necessary to skip the first several frames when using manual exposure.

sensor_dep = profile.get_device().first_depth_sensor()
sensor_dep.set_option(rs.option.enable_auto_exposure, 0)

@asb2111991
Author

Thank you @MartyG-RealSense for the answer, but I would really like to use the keep() method. In my application the external lighting (which I know has minimal impact on the depth map) is highly variable, while the accuracy requirement is very high. After much experimentation, I have found the median-filtering approach most effective. I have figured out how to use set_option and rs.option.frames_queue_size. Now I would like my code to pull multiple frames in one go and use that object for alignment and post-processing filters.
Can you help me with the code I can use to pull individual frames out after using:

frames = pipeline.wait_for_frames()
frames.keep()

Thank you

@MartyG-RealSense
Collaborator

There are a few references about using Keep() to perform actions on the data after the frames are stored. #3164 (comment) provides a simple Python example of using a Decimation post-processing filter with Keep().

An example that uses frames.keep() is also at #3121 (comment)

@asb2111991
Author

I looked at both references you cited a while ago, but they do not seem to work on the individual frames that I collect using keep(). I am starting to wonder whether it is even possible to extract a specific number of frames using keep().

Also, I was looking at frame_queue_example.py and in there I have come across the usage of

queue = rs.frame_queue(50, keep_frames=True)
pipeline.start(config, queue)
frames = queue.wait_for_frame()

How is this different from what I (and many others) am using?

pipeline.start(config)
frames = pipeline.wait_for_frames()
frames.keep()

Please advise.

@RealSenseSupport
Collaborator

RealSenseSupport commented Sep 29, 2021

acquire 5 frames as a single burst rather than using a loop

In addition to Marty's suggestion, here are other examples of post-processing filters in C/C++ and Python.

Please see if the examples below contain any information that can be applied to your use case.
https://github.com/IntelRealSense/librealsense/tree/master/examples/post-processing

The Python box-measurement example also shows how to apply spatial and temporal filters to the depth frame.
https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/box_dimensioner_multicam/realsense_device_manager.py#L53
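On the frame_queue question asked above, a hedged sketch of the queue-based pattern from frame_queue_example.py (assumes a connected camera; not verified here):

```python
import pyrealsense2 as rs

# Sketch only (assumes a connected camera). rs.frame_queue with
# keep_frames=True makes the queue call keep() on every frame it holds,
# so up to 50 framesets can be buffered without being recycled.
# With pipeline.wait_for_frames() the SDK instead hands back one frameset
# at a time and recycles it unless you call keep() yourself.
queue = rs.frame_queue(50, keep_frames=True)

pipe = rs.pipeline()
config = rs.config()
pipe.start(config, queue)   # deliver frames into the queue, not the pipeline

# Drain a burst of 5 framesets; wait_for_frame() blocks until one arrives.
burst = [queue.wait_for_frame() for _ in range(5)]

pipe.stop()
```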

@asb2111991
Author

Thank you @RealSenseSupport for jumping in to help. I have a few outputs under different settings. The code is as follows:

import pyrealsense2 as rs
import numpy as np
import matplotlib.pyplot as plt
import time

# plt.close('all')

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
profile = pipeline.start(config)

s = profile.get_device().query_sensors()[0]
s.set_option(rs.option.exposure,6000)
s.set_option(rs.option.frames_queue_size,0)

s = profile.get_device().query_sensors()[1]
s.set_option(rs.option.exposure,550)
s.set_option(rs.option.frames_queue_size,0)

spat_filter = rs.spatial_filter()       # Spatial    - edge-preserving spatial smoothing
temp_filter = rs.temporal_filter()      # Temporal   - reduces temporal noise
hole_fill = rs.hole_filling_filter()    # Hole-filling filter

# Create an align object
# rs.align allows us to perform alignment of depth frames to other frames
# The "align_to" is the stream type to which we plan to align depth frames.
align_to = rs.stream.color
align = rs.align(align_to)

##############################################################################
####################### Using 'wait_for_frames' ##############################

frames = pipeline.wait_for_frames()

# # Align the depth frame to color frame
# aligned_frames = align.process(frames)
# 
# # Get aligned frames
# depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame
# color_frame = aligned_frames.get_color_frame() # aligned RGB frame
        
depth_frame = frames.get_depth_frame() # Not-aligned_depth_frame

filtered = spat_filter.process(depth_frame)
filtered = temp_filter.process(filtered)
depth_frame_filtered = hole_fill.process(filtered)
dframe = np.asanyarray(depth_frame_filtered.get_data())

dframe[dframe > 2650] = 0
dframe[dframe < 1000] = 0
plt.matshow(dframe)
plt.title('Using wait without for-loop accumulation')
plt.savefig('op1.png')

time.sleep(0.5)

##############################################################################
####################### Using 'poll_for_frames' ##############################

frameset = pipeline.poll_for_frames()

frames1 = frameset.first_or_default(rs.stream.depth)

# # Align the depth frame to color frame
# aligned_frames = align.process(frameset)

# # Get aligned frames
# depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame
# color_frame = aligned_frames.get_color_frame() # aligned RGB frame
        
# depth_frame = frames.as_depth_frame() # Not-aligned_depth_frame

filtered = spat_filter.process(frames1)
filtered = temp_filter.process(filtered)
depth_frame_filtered = hole_fill.process(filtered)
dframe = np.asanyarray(depth_frame_filtered.get_data())

dframe[dframe > 2650] = 0
dframe[dframe < 1000] = 0
plt.matshow(dframe)
plt.title('Using poll')
plt.savefig('op2.png')

time.sleep(0.5)

##############################################################################
################ Using 'wait_for_frames' and accumulation ####################

NAcq = 4
bufferD = (np.zeros((480,640,NAcq)))

for ct1 in range(NAcq):
    # Wait for a coherent pair of frames: depth and color
    frames2 = pipeline.wait_for_frames()
    
    # # Align the depth frame to color frame
    # aligned_frames = align.process(frames)

    # # Get aligned frames
    # depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame
    # color_frame = aligned_frames.get_color_frame() # aligned RGB frame

    depth_frame = frames2.get_depth_frame() # Not-aligned_depth_frame
    
    filtered = spat_filter.process(depth_frame)
    filtered = temp_filter.process(filtered)
    depth_frame_filtered = hole_fill.process(filtered)
    bufferD[:,:,ct1] = np.asanyarray(depth_frame_filtered.get_data())
    
dframe = np.median(bufferD,axis = 2)
    
dframe[dframe > 2650] = 0
dframe[dframe < 1000] = 0
plt.matshow(dframe)
plt.title('Using wait and for-loop accumulation')
plt.savefig('op3.png')

pipeline.stop()

The outputs are three images: op1 (wait_for_frames without accumulation), op2 (poll_for_frames), and op3 (wait_for_frames with for-loop accumulation).

These images were taken under controlled conditions. While the last output (using accumulation) is the desirable outcome, acquiring multiple frames in a loop takes significantly longer. Is there any way I can avoid loops and acquire 'n' (= 5 in my case) frames with one command?
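The per-frame acquisition is bounded by the sensor frame rate, but the accumulation-and-median stage itself can be a single vectorized step. A minimal self-contained sketch of that stage, with synthetic data standing in for captured depth frames (the function name `median_depth` and the thresholds mirror the script above and are illustrative):

```python
import numpy as np

def median_depth(frames, lo=1000, hi=2650):
    """Median-combine a stack of depth maps and zero out-of-range values.

    frames: array of shape (H, W, N) -- N depth maps in millimetres.
    """
    dframe = np.median(frames, axis=2)
    dframe[(dframe < lo) | (dframe > hi)] = 0
    return dframe

# Synthetic stand-in for 4 captured 480x640 depth frames.
rng = np.random.default_rng(0)
stack = rng.integers(900, 2800, size=(480, 640, 4)).astype(float)
result = median_depth(stack)
print(result.shape)  # (480, 640)
```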

Thank you.

@MartyG-RealSense
Collaborator

Hi @asb2111991 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
