
How to get stable depth image with align and post-processing? #2157

Closed
danielw256 opened this issue Jul 26, 2018 · 18 comments


danielw256 commented Jul 26, 2018

| Required Info | |
|---|---|
| Camera Model | D415 |
| Firmware Version | 05.09.13.00 |
| Operating System & Version | Win 10 1803 build 17134.112 |
| Platform | PC |
| SDK Version | 2.14.0 |
| Language | C++ |

I get a nice stable depth image in the RealSense Viewer with depth units set to 0.0001 and post-processing enabled. I can do the same in my code, but when I add alignment either before or after post-processing, the depth image becomes much shakier, almost as if the post-processing were not applied. Is there any way to add alignment and post-processing and still get a nice stable depth image?

baptiste-mnh (Contributor) commented Jul 27, 2018

Are you filtering the depth image or the aligned depth image?
Can you please share the part of the code where you're applying the filters?

@danielw256 (Author)

If I do alignment before post-processing, I filter the aligned depth image.
If I do alignment after post-processing, I filter the raw depth image.

I'll try to post the code soon.


danielw256 commented Jul 27, 2018

```cpp

// depth units were set to 0.0001 before this function
window app(1280, 720, "CPP - Align Example"); // Simple window handling
ImGui_ImplGlfw_Init(app, false);      // ImGui library initialization
glfwIconifyWindow(app);
rs2::colorizer c;                          // Helper to colorize depth images
texture renderer;                     // Helper for rendering images

rs2::pipeline pipe;
rs2::config conf;
conf.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_Y16, 15);
conf.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 15);

rs2::pipeline_profile profile = pipe.start(conf);

//rs2::decimation_filter dec_filter;  // Decimation - reduces depth frame density
rs2::spatial_filter spat_filter;    // Spatial    - edge-preserving spatial smoothing
rs2::temporal_filter temp_filter;   // Temporal   - reduces temporal noise
rs2::hole_filling_filter hole_filling_filter;   // Hole filling - fills small holes in the depth image
//dec_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, rpc_data.postProcessing[RS2_RS400_DECIMATE_MAGNITUDE]);
// Declare disparity transforms from depth to disparity and vice versa
const std::string disparity_filter_name = "Disparity";
rs2::disparity_transform depth_to_disparity(true);
rs2::disparity_transform disparity_to_depth(false);

// Initialize a vector that holds filters and their options
std::vector<filter_options> filters;

// The following order of emplacement will dictate the orders in which filters are applied
//filters.emplace_back("Decimate", dec_filter, rpc_data.postProcessingEnable[RS2_RS400_DECIMATE_FILTER]);
filters.emplace_back(disparity_filter_name, depth_to_disparity, rpc_data.postProcessingEnable[RS2_RS400_SPATIAL_FILTER] || rpc_data.postProcessingEnable[RS2_RS400_TEMPORAL_FILTER] || rpc_data.postProcessingEnable[RS2_RS400_HOLE_FILLING_FILTER]);
filters.emplace_back("Spatial", spat_filter, rpc_data.postProcessingEnable[RS2_RS400_SPATIAL_FILTER]);
filters.emplace_back("Temporal", temp_filter, rpc_data.postProcessingEnable[RS2_RS400_TEMPORAL_FILTER]);
filters.emplace_back("HoleFilling", hole_filling_filter, rpc_data.postProcessingEnable[RS2_RS400_HOLE_FILLING_FILTER]);

spat_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, rpc_data.postProcessing[RS2_RS400_SPATIAL_MAGNITUDE]);
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, rpc_data.postProcessing[RS2_RS400_SPATIAL_ALPHA]);
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, rpc_data.postProcessing[RS2_RS400_SPATIAL_DELTA]);

temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, rpc_data.postProcessing[RS2_RS400_TEMPORAL_ALPHA]);
temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, rpc_data.postProcessing[RS2_RS400_TEMPORAL_DELTA]);

hole_filling_filter.set_option(RS2_OPTION_HOLES_FILL, rpc_data.postProcessing[RS2_RS400_HOLE_FILLING_SELECTION]);

rs2_stream align_to = RS2_STREAM_COLOR; 

while (app) {
	rs2::frameset frameset = pipe.wait_for_frames();

	rs2::align align(align_to);
	rs2::frameset processed = align.process(frameset);
	rs2::video_frame other_frame = processed.first(align_to);
	rs2::depth_frame aligned_depth_frame = processed.get_depth_frame();
	// comment above, and uncomment below to turn off alignment before post processing
	//rs2::video_frame other_frame = frameset.get_color_frame();
	//rs2::depth_frame aligned_depth_frame = frameset.get_depth_frame();

	if (!aligned_depth_frame || !other_frame)
	{
		continue;
	}

	rs2::frame filtered = aligned_depth_frame;
	bool revert_disparity = false;
	for (auto&& filter : filters)
	{
		if (filter.is_enabled)
		{
			filtered = filter.filter.process(filtered);
			if (filter.filter_name == disparity_filter_name)
			{
				revert_disparity = true;
			}
		}
	}
	if (revert_disparity)
	{
		filtered = disparity_to_depth.process(filtered);
	}

	//rs2::align align(align_to);
	//frameset[0] = other_frame;
	//frameset[1] = filtered;
	//auto processed = align.process(frameset);
	//rs2::video_frame post_processed_image_frame = processed.first(align_to);
	//rs2::depth_frame post_processed_depth_frame = processed.get_depth_frame();
	//uncomment above, and comment below to align after post processing
	rs2::video_frame post_processed_image_frame = other_frame;
	rs2::depth_frame post_processed_depth_frame = filtered;

	if (post_processed_depth_frame && post_processed_image_frame)
	{
		float w = static_cast<float>(app.width());
		float h = static_cast<float>(app.height());

		rs2::video_frame color_frame = c(post_processed_depth_frame);
		rect altered_other_frame_rect{ 0, 0, w, h };
		altered_other_frame_rect = altered_other_frame_rect.adjust_ratio({ static_cast<float>(color_frame.get_width()),static_cast<float>(color_frame.get_height()) });

		renderer.render(color_frame, altered_other_frame_rect);

		renderer.upload(color_frame);

	}
}

```
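
For completeness, here is a minimal sketch of one way the depth units could be set to 0.0001 before this function. This is illustrative only (it assumes a single connected RealSense device), not the exact code we use:

```cpp
// Sketch only: set RS2_OPTION_DEPTH_UNITS to 0.0001 on the depth sensor
// of the first connected RealSense device (assumes exactly one device).
rs2::context ctx;
rs2::device_list devices = ctx.query_devices();
if (devices.size() > 0)
{
    for (rs2::sensor sensor : devices[0].query_sensors())
    {
        if (sensor.supports(RS2_OPTION_DEPTH_UNITS))
            sensor.set_option(RS2_OPTION_DEPTH_UNITS, 0.0001f);
    }
}
```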

@danielw256 (Author)

BTW, my working distance is about 500mm

@RealSense-Customer-Engineering (Collaborator)

[Realsense Customer Engineering Team Comment]
@danielw256

Re-instantiating the align object inside the loop slows the application down, and you will see a flicker effect in the colorized depth map. Simply move the instantiation outside the loop and you should see a consistent colorized depth map.

I also suggest moving the filters into a separate thread or a processing block. For the threaded approach you can refer to rs-post-processing.cpp; I post the processing-block version below for your reference.

```cpp
window app(1280, 720, "CPP - Align Example"); // Simple window handling
 ImGui_ImplGlfw_Init(app, false);      // ImGui library initialization
 glfwIconifyWindow(app);
 rs2::colorizer c;                          // Helper to colorize depth images
 texture renderer;                    // Helper for rendering images

 rs2::pipeline pipe;
 rs2::config conf;
 conf.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
 conf.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);

 rs2::pipeline_profile profile = pipe.start(conf);

 rs2::decimation_filter dec_filter;  // Decimation - reduces depth frame density
 rs2::spatial_filter spat_filter;    // Spatial    - edge-preserving spatial smoothing
 rs2::temporal_filter temp_filter;  // Temporal  - reduces temporal noise
 rs2::hole_filling_filter hole_filling_filter;  // Hole filling - fills small holes in the depth image
 //dec_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, rpc_data.postProcessing[RS2_RS400_DECIMATE_MAGNITUDE]);
 // Declare disparity transforms from depth to disparity and vice versa
 const std::string disparity_filter_name = "Disparity";
 rs2::disparity_transform depth_to_disparity(true);
 rs2::disparity_transform disparity_to_depth(false);

 // Initialize a vector that holds filters and their options

 rs2_stream align_to = RS2_STREAM_COLOR;
 std::vector<filter_options> filters;
 rs2::frame_queue filtered_data;
 rs2::align align(align_to);
 bool stopped = false;

 filters.emplace_back("Decimate", dec_filter);
 filters.emplace_back(disparity_filter_name, depth_to_disparity);
 filters.emplace_back("Spatial", spat_filter);
 filters.emplace_back("Temporal", temp_filter);

 std::thread processing_thread([&]() {
  rs2::processing_block frame_processor(
      [&](rs2::frameset data, // Input frameset (from the pipeline)
    rs2::frame_source& source) // Frame pool that can allocate new frames
  {
      rs2::frameset processed = align.process(data);
      rs2::video_frame other_frame = processed.first(align_to);
      rs2::depth_frame aligned_depth_frame = processed.get_depth_frame();

      if (!aligned_depth_frame || !other_frame)
      {
    return;
      }

      rs2::frame filtered = aligned_depth_frame; // Does not copy the frame, only adds a reference

      bool revert_disparity = false;
      for (auto&& filter : filters)
      {
    if (filter.is_enabled)
    {
        filtered = filter.filter.process(filtered);
        if (filter.filter_name == disparity_filter_name)
        {
      revert_disparity = true;
        }
    }
      }
      if (revert_disparity)
      {
    filtered = disparity_to_depth.process(filtered);
      }

      rs2::frameset combined = source.allocate_composite_frame({ filtered, other_frame });
      source.frame_ready(combined);
  });

  frame_processor >> filtered_data;

  while (!stopped) //While application is running
  {
      rs2::frameset fs;
      if (fs = pipe.wait_for_frames()) frame_processor.invoke(fs);
  }

 });

 while (app) {
  rs2::frameset frames = filtered_data.wait_for_frame();
  rs2::video_frame post_processed_image_frame = frames.get_color_frame();
  rs2::depth_frame post_processed_depth_frame = frames.get_depth_frame();

  if (post_processed_depth_frame && post_processed_image_frame)
  {
      float w = static_cast<float>(app.width());
      float h = static_cast<float>(app.height());

      rs2::video_frame color_frame = c(post_processed_depth_frame);
      rect altered_other_frame_rect{ 0, 0, w, h };
      altered_other_frame_rect = altered_other_frame_rect.adjust_ratio({ static_cast<float>(color_frame.get_width()),static_cast<float>(color_frame.get_height()) });

      renderer.render(color_frame, altered_other_frame_rect);
  }
 }

 stopped = true;
 processing_thread.join();

    return EXIT_SUCCESS;
```

@danielw256 (Author)

As suggested, I moved the align object outside of the loop, but I did not notice any improvement. I can post videos of the depth image if needed.

@RealSense-Customer-Engineering (Collaborator)

[Realsense Customer Engineering Team Comment]
@danielw256
Did you run my code? If not, please run it and see whether the result is still not good.

@danielw256 (Author)

Yes, I ran your code and the results are about the same. Still shaky, not stable.

@danielw256 (Author)

@RealSense-Customer-Engineering
I uploaded 3 example videos. The Intel RealSense Viewer and the No Align video are much more stable than the Align video.

@RealSense-Customer-Engineering (Collaborator)

[Realsense Customer Engineering Team Comment]
@danielw256
Can you check how much CPU headroom you still have? I tested the RealSense Viewer and did not see the jittery behavior shown in your videos. Please load my recorded .bag file into the RealSense Viewer and test it; I don't see the issue you mentioned.
https://drive.google.com/open?id=1DGY7jR7h4eJSbU-VOlmEBJFA8-6LF_DN

@danielw256 (Author)

Our computers run the RealSense Viewer with no problem. When we set the depth units to 0.0001 we get a nice, stable depth image that's a huge improvement over before. The problem is that in our custom code, when we add alignment, the old shakiness returns. If you run the code you wrote above with alignment and post-processing and depth units set to 0.0001, you should see it. There is no option in the RealSense Viewer to add alignment, correct?

I have run it on two computers with the same result. One is a Dell laptop with an Intel i7-7700HQ CPU @ 2.80 GHz (4 physical cores, 8 logical cores); with alignment and post-processing our program uses only 20% of the CPU according to Task Manager, and the RealSense Viewer uses only 11%. The other is a Gigabyte PC with an Intel i7-4790 @ 3.6 GHz.

The output from your bag file is very similar to the output from our RealSense Viewer. It's only in custom code, when adding alignment and post-processing, that the shakiness returns.

@RealSense-Customer-Engineering (Collaborator)

[Realsense Customer Engineering Team Comment]
@danielw256
Okay, I can reproduce the issue you mentioned and will ask the engineering team to fix the problem. I will update you later.


ioreed commented Sep 13, 2018

We have a similar problem. Any updates?

@RealSense-Customer-Engineering (Collaborator)

[Realsense Customer Engineering Team Comment]
Ticket being closed due to inactivity for 30+ days

@danielw256 (Author)

The CUDA code in the latest drivers produces a slightly better depth image. The best approach is to redesign your code to use the unaligned color and depth images, and then align as necessary later.
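
A minimal sketch of that approach, reusing the filter and pipeline objects declared earlier in this thread (illustrative only, not an official fix):

```cpp
// Sketch only: post-process the *unaligned* depth frame, and defer alignment
// until a color-registered depth map is actually needed.
// Assumes `app`, `pipe`, and the filter objects from the code above are in scope.
rs2::align align_to_color(RS2_STREAM_COLOR); // constructed once, outside the loop

while (app)
{
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    if (!depth) continue;

    // Run the post-processing chain on the raw depth, in the disparity domain
    rs2::frame filtered = depth_to_disparity.process(depth);
    filtered = spat_filter.process(filtered);
    filtered = temp_filter.process(filtered);
    filtered = disparity_to_depth.process(filtered);

    // Work with `filtered` (and the original color frame) directly.
    // Only if an aligned pair is required, align the original frameset here;
    // the filtered depth itself would first have to be re-composed into a
    // frameset (see the processing-block example earlier in this thread).
    // rs2::frameset aligned = align_to_color.process(frames);
}
```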
