
Setting persistency index in temporal filter with pyrealsense2 has no effect #10078

Closed
amirf147 opened this issue Dec 18, 2021 · 30 comments

@amirf147 commented Dec 18, 2021

I just figured it out.
As this thread reveals

Selecting persistence mode is done via RS2_OPTION_HOLES_FILL

As for Python, the attribute is "rs.option.holes_fill".

Originally posted by @lihk11 in #1672 (comment)

Regarding pyrealsense2, I followed the above solution, but it has no effect.

```python
# assuming import pyrealsense2 as rs
temporal = rs.temporal_filter()
temporal.set_option(rs.option.holes_fill, persistency_index)
depth = temporal.process(depth)
```

From the realsense viewer I can see a huge difference when changing this value but in the python script, changing the values has no effect on the stream. I can also verify that the option was set with:

```python
print(rs.options.get_option(temporal, rs.option.holes_fill))
```

For some reason the option is not being applied, or at least I'm not seeing the result when using Python. The print does return the value of persistency_index that I set, but I see no difference in the stream. Setting the same option in the realsense viewer has an obvious result. The recording, when played with Python, looks the same as when played with the realsense viewer with the persistency index disabled.

@MartyG-RealSense (Collaborator)

Hi @amirf147 May I first ask if you have defined a value for persistency_index so that the script knows which of the persistency modes (0-8) to apply, please?

The effect that is applied with modes 1-8 is described in the official pyrealsense2 documentation link below.

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.temporal_filter.html

Intel's post-processing guide in the link below defines '0' as "Disabled - The Persistency filter is not activated and no hole filling occurs". If a float variable persistency_index was defined but a value not stored in it then I would speculate that the value of the variable would default to '0' - where the Persistency filter is not activated.

https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md#temporal-filter
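For quick reference, the persistency modes listed in the post-processing guide linked above can be summarized in a plain-Python table (this is only a summary of the documented modes, not pyrealsense2 API; the real option is set via rs.option.holes_fill on rs.temporal_filter):

```python
# Persistency modes for the temporal filter's holes_fill option,
# summarized from Intel's post-processing filters guide.
PERSISTENCY_MODES = {
    0: "Disabled - the persistency filter is not activated, no hole filling",
    1: "Valid in 8/8 - persistency activated if the pixel was valid in all of the last 8 frames",
    2: "Valid in 2/last 3",
    3: "Valid in 2/last 4",
    4: "Valid in 2/8",
    5: "Valid in 1/last 2",
    6: "Valid in 1/last 5",
    7: "Valid in 1/8",
    8: "Always on - persist indefinitely",
}

for index, meaning in sorted(PERSISTENCY_MODES.items()):
    print(index, meaning)
```

Mode 0 is the one to watch for: an uninitialized or defaulted persistency_index would silently disable the filter's hole filling.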

@amirf147 (Author) commented Dec 18, 2021

Yes, I defined a value for persistency_index and tried all the values from 0 to 8. I was also able to confirm that the value was loaded into the option with the print statement I showed. Still, no effect was observed.

@MartyG-RealSense (Collaborator)

And you have placed your filter instructions after the pipe start instruction like in #1672 (comment) ?

Would it be possible to post your full Python script into a comment, please?

@amirf147 (Author) commented Dec 18, 2021

camera_setup.py

```python
import numpy as np
import pyrealsense2 as rs

class RealSenseCamera:

    def __init__(self, ros_bag = None):

        # Used if opening the stream from a prerecorded ros .bag file;
        # holds the path to the .bag file
        self.ros_bag = ros_bag

        # Data variables that will be set with get_data()
        self.depth_frame = None
        self.color_frame = None
        self.infrared_frame = None
        self.color_intrinsics = None
        self.depth_scale = None

        # Post Processing Filter variables with default values
        # https://dev.intelrealsense.com/docs/post-processing-filters

        # Decimation filter variable
        self.decimation_magnitude = 2

        # Spatial filter variables
        self.spatial_magnitude = 5
        self.spatial_smooth_alpha = 1
        self.spatial_smooth_delta = 50
        self.spatial_holes_fill = 0

        # Temporal filter variables
        self.temporal_smooth_alpha = 0.4
        self.temporal_smooth_delta = 20
        self.persistency_index = 7

        # Holes Filling filter variable
        self.hole_filling = 1

        # Holds the depth frame after it has undergone filtering
        self.processed_depth_frame = None

        # Configure and start streams
        self.pipeline = rs.pipeline()
        config = rs.config()
        if ros_bag:
            config.enable_device_from_file(self.ros_bag)
        else:
            config.enable_stream(rs.stream.depth, rs.format.z16, 30)
            config.enable_stream(rs.stream.color, rs.format.bgr8, 30)
            config.enable_stream(rs.stream.infrared, rs.format.y8, 30)
        self.profile = self.pipeline.start(config)

        # Get depth scale
        depth_sensor = self.profile.get_device().first_depth_sensor()
        self.depth_scale = depth_sensor.get_depth_scale()

    def get_data(self, aligned_to_color = False, aligned_to_depth = False,
                 aligned_to_infrared = False):
        '''Gets the frames as numpy arrays and gets other data'''

        # Trying to align to infrared will be ignored. Just align to depth; it is
        # the same thing since LiDAR uses infrared. TODO: remove align-to-infrared option

        align_to_options = [aligned_to_color, aligned_to_depth, aligned_to_infrared]
        streams = [rs.stream.color, rs.stream.depth, rs.stream.infrared]

        # Validate that only one aligned_to_... variable is true
        if align_to_options.count(True) > 1:
            raise Exception("Can't align to more than one type of frame")

        # Determine which frame we are aligning the other frames to
        align_to = [stream for stream in streams
                    if align_to_options[streams.index(stream)]]

        frames = self.pipeline.wait_for_frames()

        if align_to:
            align_to = align_to.pop()
            align = rs.align(align_to)
            frames = align.process(frames)

        self.depth_frame = frames.get_depth_frame()
        self.color_frame = frames.get_color_frame()
        self.infrared_frame = frames.first(rs.stream.infrared)
        self.color_intrinsics = self.color_frame.profile \
                                .as_video_stream_profile() \
                                .intrinsics

    def filter_depth_data(self,
                          enable_decimation = False,
                          enable_spatial = False,
                          enable_temporal = True,
                          enable_hole_filling = True):
        '''Apply a cascade of filters on the depth frame'''

        # https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb

        depth_to_disparity = rs.disparity_transform(True)
        disparity_to_depth = rs.disparity_transform(False)

        depth = self.depth_frame

        # DECIMATION FILTER
        if enable_decimation:
            decimation = rs.decimation_filter()
            decimation.set_option(rs.option.filter_magnitude, self.decimation_magnitude)
            depth = decimation.process(depth)

        # SPATIAL FILTER
        if enable_spatial:
            spatial = rs.spatial_filter()
            depth = spatial.process(depth)

            spatial.set_option(rs.option.filter_magnitude, self.spatial_magnitude)
            spatial.set_option(rs.option.filter_smooth_alpha, self.spatial_smooth_alpha)
            spatial.set_option(rs.option.filter_smooth_delta, self.spatial_smooth_delta)
            depth = spatial.process(depth)

            spatial.set_option(rs.option.holes_fill, self.spatial_holes_fill)
            depth = spatial.process(depth)

        # TEMPORAL FILTER
        if enable_temporal:
            temporal = rs.temporal_filter()
            temporal.set_option(rs.option.holes_fill, self.persistency_index)
            depth = temporal.process(depth)

        # if enable_temporal:
        #     temporal = rs.temporal_filter(0.40, 40, 8)

        # HOLE FILLING
        if enable_hole_filling:
            hole_filling = rs.hole_filling_filter()
            hole_filling.set_option(rs.option.holes_fill, self.hole_filling)
            depth = hole_filling.process(depth)

        self.processed_depth_frame = depth
        print(rs.options.get_option(temporal, rs.option.holes_fill))

    def frame_to_np_array(self, frame, colorize_depth = False):
        # Create colorized depth frame
        if colorize_depth:
            colorizer = rs.colorizer()
            frame_as_image = np.asanyarray(colorizer.colorize(frame).get_data())
            return frame_as_image
        frame_as_image = np.asanyarray(frame.get_data())
        return frame_as_image

    def stop(self):
        self.pipeline.stop()
```

opencv_stream.py

```python
import cv2
from camera_setup import RealSenseCamera

ros_bag = "C:\\Users\\35840\\Documents\\20211217_204044.bag"

camera = RealSenseCamera(ros_bag)
apply_filter = True

try:
    while True:
        camera.get_data()  # Load the object's variables with data
        depth_frame = camera.depth_frame
        color_frame = camera.color_frame
        infrared_frame = camera.infrared_frame
        color_intrin = camera.color_intrinsics

        # Apply filtering to depth data
        if apply_filter:
            camera.filter_depth_data(enable_decimation = False,
                                     enable_spatial = False,
                                     enable_temporal = True,
                                     enable_hole_filling = False)

            depth_frame = camera.processed_depth_frame
            print('filters applied')

        depth_image = camera.frame_to_np_array(depth_frame, colorize_depth=True)

        image_to_be_shown = depth_image
        image_name = 'filtered depth'

        img = cv2.resize(image_to_be_shown, (640, 480))
        cv2.imshow(image_name, img)

        key = cv2.waitKey(1)

        # Press esc or 'q' to close the image window
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    camera.stop()
```

@amirf147 (Author) commented Dec 18, 2021

Hopefully you can understand that; for some reason not everything stayed in the code block when I pasted it. The program works without errors and displays the frames; it's just that the filter is not applied, or at least setting and changing the persistency_index variable does not result in any changes in the final frame. When playing the ros bag file with realsense-viewer, changing the persistency mode values does show significant changes to the stream. Changing it in the python code and viewing the frames through opencv does not show any effect, though.

@amirf147 (Author) commented Dec 18, 2021

Applying the other filters in the python code does work, though, or at least they have noticeable effects, which leads me to conclude that they are working. The only one not seeming to work is the temporal filter. Also, the double backslashes are present in the ros_bag path but seem to get cut out after pasting here.

@MartyG-RealSense (Collaborator)

The first observation I would make about your script is that it is recommended that align is applied after the post-processing filters. If alignment is done before post-processing, it can result in issues such as 'aliasing' distortion (jagged lines).

Also, Intel recommend that when using multiple post-processing filters, they are applied in a specific order described in the link below.

https://dev.intelrealsense.com/docs/post-processing-filters#section-using-filters-in-application-code

image

In your script, the filters are listed in this sequential order:

Decimation > Spatial > Temporal > Hole-Filling

This is the same order that Intel recommends. Although each filter's application depends on an if condition being true, the script still processes the filters sequentially from top to bottom, so even if some filters are skipped because they were not set to True, the ones that are enabled will be applied in the correct order. So that code is likely okay.
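The skip-but-keep-order behaviour can be sketched in plain Python (a toy model of the cascade, not pyrealsense2 itself; the stage names mirror the flags in the script above):

```python
# Toy model of a post-processing cascade: each stage is a plain function
# applied in Intel's recommended order. Disabled stages are simply skipped,
# which preserves the relative order of the remaining stages.
applied = []

def stage(name, frame):
    applied.append(name)  # record the order stages actually ran in
    return frame

pipeline_order = [
    ("decimation", False),    # like enable_decimation=False
    ("spatial", False),       # like enable_spatial=False
    ("temporal", True),
    ("hole_filling", True),
]

frame = "depth"
for name, enabled in pipeline_order:
    if enabled:
        frame = stage(name, frame)

print(applied)  # ['temporal', 'hole_filling'] -- order preserved
```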

If the temporal filter is being used for hole-filling and the Hole-Filling filter is applied afterwards, then it might become more obvious whether the temporal filter is having the intended hole-filling effect if the Hole-Filling filter were set to False, so that it is not duplicating the hole-filling work. Perhaps the temporal filter would not even be needed if Hole-Filling can fill the holes on its own.

@amirf147 (Author) commented Dec 21, 2021

I am wondering why it looks different in my python code than in my realsense viewer. I am not using rs.align currently, only just filtering the depth stream. These are both the same rosbag file. I am using the L515 by the way.

Enabling temporal filter in realsense viewer:
rs_viewer_temporal

Enabling only temporal filter with my python code with the same values as the realsense viewer:
python_temporal

Enabling all filters in the recommended order in my python code:
python_all_filters

Enabling temporal and spatial filter only:
python_temporal_spatial

Enabling only hole filling:
only_hole

Could it be that you have to explicitly enable noise filtering in the python code? Are there other settings that have to be enabled explicitly in the python code as well for it to look the same as in the realsense viewer? How would I go about enabling all the same settings in the python code as are enabled in the realsense viewer? Also, I wasn't sure how to enable noise filtering in the python code and whether this should be done in the same way the filters are.

I thought it was strange that in the realsense viewer just changing the persistency mode would make such a large difference, while in the python code, when the temporal filter was the only filter on, changing the persistency mode did nothing and the image continued to look like the second image for every persistency mode. This is why my original issue is that the temporal filter seems to have no effect in python, whereas in the viewer it has significant effects.

@MartyG-RealSense (Collaborator)

The RealSense Viewer applies a range of post-processing and depth colorization settings by default when launched. When creating your own application such as a Python script, these defaults for filters are not included and you must manually program them into your script yourself to more closely replicate the images produced by the Viewer.

I did experimentation with the Viewer settings to try to replicate your results. Your image with the numerous horizontal lines seemed to be replicable if applying a Spatial filter and setting its Hole Filling function to a high value such as Unlimited.

image

@qaler commented Dec 22, 2021

The first observation I would make about your script is that it is recommended that align is applied after post-processing filters are applied. If alignment is done before post-processing then it can result in issues such as 'aliasing' distortion (jagged lines).

Could you please provide a basic python script showing how to align a post-filtered depth frame, as rs.align::process() requires an rs2::frameset object and I don't know how to set the post-filtered depth frame back into the original frameset.

@amirf147 (Author) commented Dec 22, 2021

Could you please provide a basic python script about how to align post filtered depth frame as the rs.align::process() requires rs2::frameset object and I don't know how to set the post filtered depth frame into the original frameset.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # start streaming before waiting for frames
frameset = pipeline.wait_for_frames()

# Apply the post-processing filter to the whole frameset first
decimation = rs.decimation_filter()
decimation.set_option(rs.option.filter_magnitude, 4)
frameset = decimation.process(frameset).as_frameset()

# Then align the filtered frameset
align_to = rs.stream.color
align = rs.align(align_to)
frameset = align.process(frameset)

depth_frame_aligned = frameset.get_depth_frame()
```

@MartyG-RealSense (Collaborator) commented Dec 22, 2021

#2356 (comment) provides a simple Python example of applying align after a post-processing filter.

@MartyG-RealSense (Collaborator)

Hi @amirf147 Do you require further assistance with this case, please? Thanks!

@amirf147 (Author) commented Dec 29, 2021

The issue that I originally raised still stands. I've tried many different combinations of the filters with the options I mentioned above, but the temporal filter still shows no effect or difference, whether on or off, in any combination with the other filters and their options. When using the realsense viewer it is obvious that changing the temporal filter values makes huge changes to the depth stream, but when setting the same options in my python code I see no changes. So I'm wondering: is there some other filter or option that needs to be explicitly set in the python code for the temporal filter to do anything, or is the temporal filter just not working in python?

@amirf147 (Author) commented Dec 29, 2021

OK, I think I see my problem now: the filter has to be applied to a set of frames, not just one at a time... Edit: though I thought I was doing that in my code. I'm now wondering whether the previous frame is not being remembered in my code. I guess I will try a simpler one-page script and see if it works.

@MartyG-RealSense (Collaborator)

Thanks very much @amirf147 for the update - I look forward to your next report. Good luck!

@amirf147 (Author)

@MartyG-RealSense OK, it looks like the temporal filter works when I use it in a simple one-page script. So the reason it wasn't working for me has something to do with the fact that my code separated the frame creation and filtering into a separate class. I appreciate all your input, and I learned several things along the way that have been helpful in adding more features to my program. I can conclude that the temporal filter works correctly and the issue was with my code. Thanks again.
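For anyone hitting the same wall: the temporal filter keeps an internal history of previous frames, so the filter object has to live across frames. Constructing a fresh rs.temporal_filter() inside a per-frame method (as filter_depth_data above does) resets that history on every call, leaving the filter nothing to smooth against. A toy stand-in (plain Python; ToyTemporalFilter is a hypothetical exponential-moving-average model, not the librealsense implementation) shows the difference between reusing one filter instance and recreating it per frame:

```python
class ToyTemporalFilter:
    """Toy stand-in for a temporal filter: EMA over successive frames."""
    def __init__(self, alpha=0.4):
        self.alpha = alpha    # weight of the newest frame
        self.history = None   # internal state that must survive across frames

    def process(self, frame):
        if self.history is None:
            self.history = frame  # first frame passes through unchanged
        else:
            self.history = [self.alpha * f + (1 - self.alpha) * h
                            for f, h in zip(frame, self.history)]
        return self.history

frames = [[10.0, 0.0], [0.0, 10.0], [10.0, 0.0]]

# Correct: one filter instance reused across frames -> output is smoothed
filt = ToyTemporalFilter()
smoothed = [filt.process(f) for f in frames][-1]

# Bug pattern: a new filter per frame -> history resets every call,
# so the last frame comes back unchanged, i.e. "no effect"
unsmoothed = [ToyTemporalFilter().process(f) for f in frames][-1]

print(smoothed)    # blended values, roughly [7.6, 2.4]
print(unsmoothed)  # [10.0, 0.0] -- identical to the last input frame
```

The practical fix in a class like RealSenseCamera is to create the filter once in __init__ and reuse that instance in every filter_depth_data call.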

@MartyG-RealSense (Collaborator)

You are very welcome. Thanks for your detailed feedback about your success :)

@MartyG-RealSense (Collaborator)

Hi @amirf147 Do you require further assistance with this case, please? Thanks!

@amirf147 (Author) commented Jan 5, 2022

@MartyG-RealSense no, I figured out it was my code that was the problem. Thank you!

@MartyG-RealSense (Collaborator)

Okay, thanks very much for the confirmation. I will therefore close this case as a solution has been achieved. Thanks again!

@Co-stoletta

Ok I think I see my problem now, the filter has to be applied to a set of frames, not just one at a time... Edit: though i thought I was doing that in my code.. I'm now wondering if the previous frame is not remembered in my code now? I guess I will try a simpler 1 page script and see if it works.

Hi @amirf147, I'm stuck in the same situation you described: I have a sequence of filters that I want to apply to a depth frame, including a temporal filter. I was setting and applying the temporal filter as:

```cpp
rs2::temporal_filter temp_filter;
temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.4f);
temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 20.f);
temp_filter.set_option(RS2_OPTION_HOLES_FILL, 8);
...
frame = frame.apply_filter(temp_filter); // Temporal filter
```

I read from your previous answers that I should apply it to a set of frames. Do you mean like:

```cpp
auto allFrames = pipe.wait_for_frames();
allFrames = temp_filter.process(allFrames); // Temporal filter
```

I tried that and it does not seem to work. Could you please help me?

Thanks :)

@MartyG-RealSense (Collaborator) commented Feb 2, 2022

Hi @Co-stoletta It looks as though you are using C++ language. Another RealSense C++ user recently had problems with applying temporal and spatial filters as it seemed as though the filters were being destroyed. A solution that worked in their particular case was to place the filters inside a 'for' loop, as shown in #10201 (comment)

image

Intel also make use of applying a filter inside a for-loop in their official C++ post processing example program.

https://github.com/IntelRealSense/librealsense/blob/master/examples/post-processing/rs-post-processing.cpp#L126-L130

image

@Co-stoletta


Hi @MartyG-RealSense, firstly thanks for your fast answer. Regarding the first part of your comment: at the moment every frame capture (pipe.wait_for_frames()) is inside a while loop whose exit condition is whether the STOP button has been pressed (see the next lines):

```cpp
while (!isInterruptionRequested())
{
    ...
    auto allFrames = pipe.wait_for_frames();
    ...
}
```

So I think that is how it should be done. In fact, my C++ code works well; the only problem is the application of the temporal filter.

Regarding the second part of your answer, I will try it after lunch and give you a response.

Thanks! :)

@Co-stoletta

Hi @MartyG-RealSense, I wasn't able to create the vector of filters and apply it, so I'm stuck again. I hope I can solve it quickly.

Thanks.

@MartyG-RealSense (Collaborator) commented Feb 2, 2022

Does the example C++ script shared by a RealSense team member in #1658 (comment) correctly apply the temporal filter if you run the script?

@Co-stoletta commented Feb 3, 2022

Hi @MartyG-RealSense, I tried the script (with small adjustments to make it work with my code):

```cpp
rs2::pointcloud pc;
rs2::points points;

rs2::spatial_filter spat_filter;
rs2::temporal_filter temp_filter;

rs2::frameset frames;
rs2::frameset framesf;
rs2::frameset aligned_frames;

rs2::align align(rs2_stream::RS2_STREAM_COLOR);

rs2::pipeline pipe;
auto profile = pipe.start();

spat_filter.set_option(rs2_option::RS2_OPTION_FILTER_MAGNITUDE, 5.0f);
spat_filter.set_option(rs2_option::RS2_OPTION_FILTER_SMOOTH_DELTA, 50.0f);
spat_filter.set_option(rs2_option::RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.3f);

temp_filter.set_option(rs2_option::RS2_OPTION_FILTER_SMOOTH_DELTA, 100.0f);
temp_filter.set_option(rs2_option::RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.3f);

while (!isInterruptionRequested())
{
    frames = pipe.wait_for_frames();
    aligned_frames = align.process(frames);

    rs2::frame depth;
    for (auto&& f : aligned_frames)
    {
        if (f.get_profile().format() == RS2_FORMAT_Z16)
            depth = f;
    }

    depth = spat_filter.process(depth); // apply filter
    depth = temp_filter.process(depth); // apply filter
    points = pc.calculate(depth);

    auto ucharptr = reinterpret_cast<unsigned char *>
                    (const_cast<void *>(depth.get_data()));
    int w = 640;
    int h = 480;
    Frame video_frame;
    video_frame.width = w;
    video_frame.height = h;
    video_frame.data.resize(w * h /*RGB*/);
    memcpy(video_frame.data.begin(), ucharptr, size_t(w * h));

    m_cb.push(video_frame);
    emit frameReady();
}
```

As you can see, the first part is exactly what is written in #1658, plus a way to convert it into "raw" data and emit it.
What I see is posted below:
Immagine 2022-02-03 094852

I think there is some bug that I can't see.

Thanks!

EDIT: I adjusted the size of the window where I display the frames; now I see a better image, but I'm still working out which image format I should use.

EDIT2: OK, now it works, but I'm not sure the filter is really working; let me investigate.

EDIT3: OK, now it seems to work. I will try to apply the same code to my original code rather than the #1658 code. I will post updates.

EDIT4: OK, I understand where the problem is: it is the application of the filters. I need to apply the color filter, threshold filter and temporal filter, but:

  1. if I apply the 3 filters altogether I get:
    image
  2. if I apply threshold and temporal I get:
    image

So the problem is applying the color filter in combination with the temporal filter, which I implemented as:

```cpp
rs2::colorizer color_filter;
color_filter.set_option(rs2_option::RS2_OPTION_VISUAL_PRESET, 0.f); // Dynamic
color_filter.set_option(rs2_option::RS2_OPTION_COLOR_SCHEME, 0.f); // Jet
color_filter.set_option(rs2_option::RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED, 1);
```

@MartyG-RealSense do you have some advice to solve it?

Thanks!

@Co-stoletta
Copy link

@MartyG-RealSense I SOLVED IT!!

To everyone stuck on the same problem: use the code from the #1658 comment, but change the align to:

```cpp
rs2::frameset aligned_frames;
rs2::align align(rs2_stream::RS2_STREAM_DEPTH);
```

Now, with this modification and this filter application sequence:

```cpp
frame = temp_filter.process(frame);  // Temporal filter
frame = thr_filter.process(frame);   // Threshold filter
frame = color_filter.process(frame); // Color filter
```

I get:
image
(it's not easy to see from the image but the temporal filter is on, if I move my body I can clearly see it)

Thanks!

@MartyG-RealSense (Collaborator)

Thanks so much @Co-stoletta for sharing your workings and your C++ solution with the RealSense community :)

@edouardvindevogel commented Jan 10, 2024

@amirf147
Hello,
Would you mind sharing your simple one page script?
When I use the following simple code, the temporal filter also has no effect. In this code, I initialize a temporal filter, grab 10 framesets, put these 10 framesets through the filter, and align the last depth frame with the color frame.

```python
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.000001)
temporal.set_option(rs.option.holes_fill, 8)

# Grab 10 images
for x in range(10):
    frameset = pipeline.wait_for_frames()
    frameset = temporal.process(frameset).as_frameset()
    if x == 9:
        aligned_frameset = align.process(frameset)
        depth_frame = aligned_frameset.get_depth_frame()
        color_frame = aligned_frameset.get_color_frame()
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
```
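One thing worth checking in a snippet like this (a side observation, not a confirmed diagnosis of the problem above): if filter_smooth_alpha is the weight given to the newest frame, as the post-processing guide describes, then a value as small as 0.000001 makes the output track the accumulated history almost exclusively, which can look like "no effect" or like a frozen first frame. A toy exponential moving average illustrates the scale of the effect:

```python
# Toy EMA over per-pixel depth readings: alpha is the weight of the newest value.
def ema(values, alpha):
    out = values[0]
    for v in values[1:]:
        out = alpha * v + (1 - alpha) * out
    return out

readings = [100.0, 0.0, 0.0]    # first frame reads 100, then two frames of 0

print(ema(readings, 0.000001))  # ~100.0: new frames barely register
print(ema(readings, 0.4))       # 36.0: new frames blend in quickly
```

Trying a moderate alpha such as the 0.4 default would at least rule this parameter out.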
