Realsense python using post processing and alignment causing either blank depth frames or RuntimeError #11246

Closed
MartinPedersenpp opened this issue Dec 21, 2022 · 15 comments

Comments

@MartinPedersenpp


Required Info
Camera Model: D435
Firmware Version: 05.13.00.50
Operating System & Version: Ubuntu 18 (L4T nvidia)
Kernel Version (Linux Only): 4.9
Platform: NVIDIA Jetson Xavier NX
SDK Version: 2.51.1
Language: python
Segment: others

I am running into some issues when trying to capture frames from my D435 camera. The setup of my stream looks like this:

import json
import time

import numpy as np
import pyrealsense2 as rs

context = rs.context()
pipeline = rs.pipeline(context)
config = rs.config()
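# camera_config holds the application's own capture settings (fps, resolution,
# auto-exposure flag); it is not part of pyrealsense2.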
fps = camera_config.fps
width, height = camera_config.resolution
rgb_width, rgb_height = 1920, 1080 # 1280, 720 for D455
config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
config.enable_stream(rs.stream.color, rgb_width, rgb_height, rs.format.rgb8, 30)
profile = pipeline.start(config)
align_to = rs.stream.color
clr_profile = profile.get_stream(rs.stream.color)
clr_profile.as_video_stream_profile().get_intrinsics()
roisensor = profile.get_device().first_roi_sensor()
roi = roisensor.get_region_of_interest()
roi.min_x, roi.max_x, roi.min_y, roi.max_y = 362, 533, 157, 276
roisensor.set_region_of_interest(roi)
roi = roisensor.get_region_of_interest()
depth_sensor, color_sensor, *_ = profile.get_device().query_sensors()
json_obj = json.load(open("custom.json", "r"))
json_str = str(json_obj).replace("'", '\"')
dev = profile.get_device()
adv_mode = rs.rs400_advanced_mode(dev)
adv_mode.load_json(json_str)
depth_sensor.set_option(rs.option.enable_auto_exposure, 1)
depth_sensor.set_option(rs.option.laser_power, 210)
depth_multiplier = 6
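# Note: 0.001 m / 6 ≈ 0.000167 m per raw depth unit, i.e. finer depth resolution
# at the cost of a smaller maximum representable range (65535 units ≈ 10.9 m).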
depth_sensor.set_option(rs.option.depth_units, 0.001000 / depth_multiplier)
depth_sensor.set_option(rs.option.emitter_always_on, 1.0)
color_sensor.set_option(rs.option.enable_auto_exposure, camera_config.enable_auto_exposure)
color_sensor.set_option(rs.option.gamma, 500) # default 300, 100-500 lower in high lighting
color_sensor.set_option(rs.option.saturation, 64) # default 64
color_sensor.set_option(rs.option.sharpness, 50) # default 50
color_sensor.set_option(rs.option.backlight_compensation, 0)
color_sensor.set_option(rs.option.enable_auto_white_balance, 1)
color_sensor.set_option(rs.option.auto_exposure_priority, 1)
threshold_filter = rs.threshold_filter(0.3, 0.7)
temp_filter = rs.temporal_filter(0.25, 20.0, 6) #default 0.1
spat_filter = rs.spatial_filter(0.30, 20.0, 4.0, 0.0)  # test settings - 1st = alpha, 2nd = delta, 3rd = magnitude, 4th = hole-filling (0 = none, 1 = 2px, 2 = 4px, 3 = 8px, 4 = 16px, 5 = unlimited)
align = rs.align(align_to)

def wait_for_exposure_stabilisation(frames_to_skip: int = 60):
    for _ in range(frames_to_skip):
        frameset = pipeline.wait_for_frames()
        frameset = align.process(frameset)
wait_for_exposure_stabilisation()

def capture_frame():
    start = time.time()
    def filter_depth_data(depth_frame):
        depth_frame = threshold_filter.process(depth_frame)
        depth_frame = rs.disparity_transform(True).process(depth_frame)
        depth_frame = spat_filter.process(depth_frame)
        depth_frame = temp_filter.process(depth_frame)
        depth_frame = rs.disparity_transform(False).process(depth_frame)
        return depth_frame
    frameset = pipeline.wait_for_frames()
    frameset = align.process(frameset)
    color_frame = frameset.get_color_frame()
    depth_frame = frameset.get_depth_frame()
    depth_frame = filter_depth_data(depth_frame)
    depth_data = np.array(depth_frame.get_data())
    frame_counter = 0
    while time.time()-start < 1.0:
        frameset = pipeline.wait_for_frames()
        frame_counter+=1
        frameset = align.process(frameset)
        depth_frame = frameset.get_depth_frame()
        depth_frame = filter_depth_data(depth_frame)
        depth_data_add = np.array(depth_frame.get_data())
        depth_data = np.where(depth_data > depth_data_add, depth_data, depth_data_add)
    color_data = np.array(color_frame.get_data())
    return color_data, depth_data

When everything is set up I have a main thread that looks like this:

while True:
    wait_for_exposure_stabilisation()
    if flag:
        capture_frame()

On a separate daemon thread, I have an input worker that looks like this:

while True:
    flag = input("Set flag? 1/0") == "1"

I use the timed while loop to smooth the depth data as much as possible by replacing each point with the farthest value seen during that second.
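For reference, the accumulation line in capture_frame is equivalent to a per-pixel maximum; because RealSense reports missing depth as 0, keeping the maximum also fills dropouts with any valid reading seen during that second:

depth_data = np.maximum(depth_data, depth_data_add)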

When I run the setup as shown here, I sometimes get empty depth frames back from the alignment, which crashes my script because no data is received.

I read here: #10716 that I should apply the post-processing before splitting the frameset and only then align the data. I tried reordering things so that the post-processing runs on the frameset first, followed by alignment and then data extraction, but when I do that I run into RuntimeError: Error occured during execution of the processing block! See the log for more info after processing a few frames in my timed loop.

Any idea how I can avoid the empty depth frames or the RuntimeError?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 21, 2022

Hi @MartinPedersenpp I would first recommend removing the following two lines from the wait_for_exposure_stabilisation code section.

frameset = pipeline.wait_for_frames()
frameset = align.process(frameset)

These two instructions are repeated further down the script, and at this point all you want to do is have the program skip the initial frames so that the auto-exposure settles down before the first frame is processed.

@MartinPedersenpp
Author

Hi @MartinPedersenpp I would first recommend removing the following two lines from the wait_for_exposure_stabilisation code section.

frameset = pipeline.wait_for_frames()
frameset = align.process(frameset)

These two instructions are repeated further down the script, and at this point all you want to do is have the program skip the initial frames so that the auto-exposure settles down before the first frame is processed.

Thanks for the feedback @MartyG-RealSense, but if I remove wait_for_frames() from wait_for_exposure_stabilisation, will the auto-exposure still get corrected? Is the pipeline fetching 30 fps all the time and only "saving" the frames that are extracted with wait_for_frames()? Also, what about the post-processing: will the spatial and temporal filters still smooth the data out with a low alpha, even if I don't pass my data through the filters?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 21, 2022

If you are able to disable auto-exposure and use manual exposure then you do not need a mechanism to skip frames, as the exposure should be correct from the first frame.

If you require auto exposure, I think that what would work for your script is to implement the skip mechanism in the way described at #9800 (comment)

The FPS can vary when using both depth and color streams. If you have auto-exposure enabled and the RGB option auto-exposure priority disabled, the SDK will try to enforce a constant FPS. A simple code snippet for disabling it in Python is at #5885 (comment)
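A minimal sketch of that, reusing the color_sensor handle from the setup code above:

color_sensor.set_option(rs.option.enable_auto_exposure, 1)   # auto-exposure stays on
color_sensor.set_option(rs.option.auto_exposure_priority, 0)  # let the SDK try to hold a constant FPS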

I would suggest removing the Spatial filter, as it can take a long time to process whilst not making a large difference to the image. Setting a low alpha for the Temporal filter such as '0.1' can reduce fluctuation in depth values but will cause the image to take longer to update. This can cause a wave effect when observing motion as the image slowly updates from one state to the next.

@MartinPedersenpp
Author

@MartyG-RealSense
I am not able to disable the auto-exposure because my camera is placed in a setting with exterior lighting and I need a somewhat uniform exposure during the entire day in all kinds of weather.

Isn't the solution in #9800 (comment) what I am already doing in wait_for_exposure_stabilisation, only without the alignment?

Is there any chance that the auto exposure priority will cause wait_for_frames() to pass empty depth frames due to slower processing?

You suggest removing the spatial filter, but again, can the filters and their long processing time cause the blank depth images? Aren't wait_for_frames(), align.process() and the post-processing filters all blocking calls that would force the script to wait for them to finish?

@MartyG-RealSense
Collaborator

#9800 (comment) is similar to your approach, though in the skip mechanism they are using pipe.wait_for_frames() and not using frameset = pipe.wait_for_frames() until the skip has completed.

As you are using a powerful Xavier model of Jetson, I would not expect processing to slow down enough to cause blank depth images. The fewer filters that are used the better though, as they are processed on the CPU instead of the camera hardware and so have a processing cost.

Whilst it is generally recommended to place align after filters, there are rare cases where aligning before filters results in significantly better performance.

wait_for_frames is a blocking function. If you use poll_for_frames() then frames are returned immediately without blocking.
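A rough sketch of both patterns with the pipeline object from your script (poll_for_frames() may return an empty frameset, so check it before use):

# Skip mechanism in the style of #9800: pull and discard frames without
# storing or aligning them until the skip has completed.
for _ in range(60):
    pipeline.wait_for_frames()

# Non-blocking alternative: returns immediately, possibly with no frames.
frames = pipeline.poll_for_frames()
if frames.size() > 0:
    depth_frame = frames.get_depth_frame()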

@MartinPedersenpp
Author

@MartyG-RealSense
Thanks for the feedback. I will try removing the alignment from the exposure stabilisation and disabling the AE priority. If that doesn't help, I will fiddle with the spatial filter to see if I can avoid the empty frames or the RuntimeError, and get back to you.

@MartyG-RealSense
Collaborator

Thanks very much @MartinPedersenpp for the update. I look forward to your next report. Good luck!

@MartinPedersenpp
Author

MartinPedersenpp commented Dec 22, 2022

Thanks very much @MartinPedersenpp for the update. I look forward to your next report. Good luck!

Unfortunately I just got empty frames again.
I am starting and stopping my script many times while implementing new steps. Is it possible that stopping or initiating the pipeline creates a bottleneck of sorts that causes the depth sensor to return empty frames?
Side note: from time to time I get notifications from Ubuntu that python3.9 has stopped working/crashed, and sometimes when I close my terminals python3.9 keeps running in the background, so I have to force-kill it to release the pipeline.

@MartyG-RealSense
Collaborator

If you are closing the pipeline then all the frames that are currently in the pipeline at the time of closure will be lost.
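Regarding the python3.9 processes that keep hold of the camera after a crash: one way to guarantee that the pipeline is released is to wrap the main loop, for example:

try:
    while True:
        wait_for_exposure_stabilisation()
        if flag:
            capture_frame()
finally:
    pipeline.stop()  # always release the camera, even after an exception or Ctrl+C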

If you are using an append instruction anywhere in your Python project then I would recommend not doing so if possible, as it can cause a RealSense application to stop providing new frames after 15 frames have been generated, as described at #946

If you are using append and it is not possible to remove it then storing the frames in memory with the SDK's Keep() instruction can be a workaround to resolve the problem: #6146
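A rough sketch of the Keep() approach (saved_frames is a hypothetical holding list; memory use grows with every kept frame, so it is best suited to short captures):

saved_frames = []
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
depth_frame.keep()                 # keep the frame in memory beyond the SDK's internal queue
saved_frames.append(depth_frame)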

@MartinPedersenpp
Author

If you are closing the pipeline then all the frames that are currently in the pipeline at the time of closure will be lost.

If you are using an append instruction anywhere in your Python project then I would recommend not doing so if possible, as it can cause a RealSense application to stop providing new frames after 15 frames have been generated, as described at #946

If you are using append and it is not possible to remove it then storing the frames in memory with the SDK's Keep() instruction can be a workaround to resolve the problem: #6146

I am not using an append function anywhere. The only thing I am doing is replacing pixels in the depth image that have a lower value than in the current frame, and then repeating that for up to a second.

Now that I am running a more stable script (fewer crashes and closures) I haven't hit any empty frames for a while, but I am not sure the problem has actually been solved.

I am/was using one or two object detection models (TensorRT engines) which are loaded onto the GPU of the Jetson at initialisation. Is it possible that the post-processing performed on the GPU sometimes gets bottlenecked because of the two models? (Any inference is performed after capturing the frames, but the models are located on the GPU from the start.)

@MartyG-RealSense
Collaborator

RealSense post-processing filters are processed on the CPU instead of the GPU.

@MartyG-RealSense
Collaborator

Hi @MartinPedersenpp Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.

@MartinPedersenpp
Author

Case closed due to no further comments received.

Sorry for not closing the issue myself, I have been on my holiday break. Thanks for the help.

@MartyG-RealSense
Collaborator

No problem at all, @MartinPedersenpp :)
