
[ROS2] Emitter On-Off option - D435i #1657

Closed · BarzelS opened this issue Jan 31, 2021 · 30 comments

@BarzelS commented Jan 31, 2021

Hi,
I'm trying to migrate the functionality to enable the laser emitter strobe every other frame, allowing the device to output high quality depth images with the help of emitter, and along with binocular images free from laser interference:
This functionality is mentioned here:
https://github.com/ZJU-FAST-Lab/ego-planner#improved-ros-realsense-driver
But for some reason it's working only using ROS1 and when I'm trying to take the code to ROS2 it does not work.
Is it possible to release an official support for this feature? or any help on how to implement it in ROS2?
Thanks

@MartyG-RealSense
Copy link
Collaborator

MartyG-RealSense commented Jan 31, 2021

Hi @SBarzz, I believe that the equivalent librealsense function for emitter per-frame strobing would be RS2_EMITTER_ON_OFF, as demonstrated in the link below (warning: flashing image at the top of the page).

IntelRealSense/librealsense#3066

Within ROS, the emitter_on_off parameter would control it.

#1379

If this function works in ROS1 but not in ROS2, I would speculate that the ZJU-FAST system may be activating it with dynamic_reconfigure like in the link above. There are apparently differences in how ROS2 handles dynamic parameters:

IntelRealSense/librealsense#5825 (comment)

@doronhi mentions in the above link about being able to watch and modify parameter events on ROS2 using rqt with the parameter reconfigure plugin.

I wonder if you could also set emitter_on_off in a ROS2 launch file as a rosparam or in a roslaunch statement with emitter_on_off:=true
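The rosparam form suggested above could look like the following launch-file fragment (a sketch only; the /camera/stereo_module namespace is an assumption based on how other stereo-module options are set later in this thread, so match it to your own launch setup):

```xml
<rosparam> /camera/stereo_module/emitter_on_off: true </rosparam>
```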

@BarzelS (Author) commented Jan 31, 2021

I don't understand: just setting RS2_EMITTER_ON_OFF to 1 will not give me the effect that the infra images won't have the emitter pattern in them, right? (While still allowing the device to output high-quality depth images with the help of the emitter, of course.)
My question was whether you can release an official version that supports this effect, just like the ZJU lab did.
Thanks

@MartyG-RealSense (Collaborator) commented

It sounds as though what you actually want to do is remove the dot pattern from the infrared images and avoid laser speckle black dots on the depth image. You also do not want to turn the emitter off and remove the dot pattern but lose depth detail. Is that correct please?

If it is correct, some possible options may be:

* Reduce the Laser Power value to reduce the visibility of the dot pattern on the IR image whilst still keeping the projector active. This will reduce the amount of depth detail, but you may be able to fill in some of the holes by applying a post-processing filter with hole-filling capability. You may even find that a hole-filling filter fills in depth holes well enough that you do not need to reduce the dots' visibility with the Laser Power setting.

* Change camera model to a D455. Intel have been working on providing the D415's dot-pattern removal ability for the D455 via color correction, though I am not certain about the current status of development on that feature.

IntelRealSense/librealsense#7149 (comment)

* If you do not mind dots on the IR image and just want to reduce the noise they can cause on the depth image, using an LED-based external dot-pattern projector instead of the camera's built-in laser emitter should reduce the laser speckle noise.

https://dev.intelrealsense.com/docs/projectors

* Turn off the camera's built-in emitter and use a patternless external IR illuminator lamp to provide an IR light source for the camera to use.

IntelRealSense/librealsense#2000

@BarzelS (Author) commented Jan 31, 2021

> (quoting MartyG-RealSense's previous comment)

  1. I don't want to lose the depth accuracy achieved by using the emitter, while keeping the infrared images clear of the pattern. Isn't that achievable by enabling the laser emitter strobe every other frame, with the infrared capture performed in the frames where the emitter is off?
    (Just like described here:
    https://github.com/ZJU-FAST-Lab/ego-planner#improved-ros-realsense-driver)

  2. By reducing the laser power, won't I also reduce the depth accuracy?

@MartyG-RealSense (Collaborator) commented Jan 31, 2021

  1. I have had a case where somebody was using emitter on-off with D435i and just keeping the 'off' frames. It tends to be awkward in practice though, as under certain lighting conditions the dot pattern may still be visible in this mode on the 'off' frame. So I would not recommend it unless your camera is always in a location with the same lighting conditions.

  2. Reducing laser power reduces the amount of depth pixels on the image, making the depth image more sparse in detail (more holes / gaps). Factors such as the depth resolution used, environmental / lighting conditions and the particular camera model's depth error over distance (RMS error) may have more of an influence on the amount of error though in my opinion.

A circumstance where depth accuracy could be affected by reduced dot visibility is if the surfaces that the camera is observing have low or no texture. In the absence of dots being cast onto those surfaces to provide the camera with a texture source to analyze for depth information, the camera may have difficulty taking accurate depth measurements from them.

@BarzelS (Author) commented Feb 1, 2021

> (quoting MartyG-RealSense's previous comment)

  1. Can you please view these changes (from version 2.2.10) that were made by the ZJU lab:
    2.2.10...SBarzz:test_emitter_on_off
    (They attached it as a zip file, but I uploaded it so it will be easier to view.)
    According to them, these changes allow "the device to output high quality depth images with the help of emitter, and along with binocular images free from laser interference".
    It is not clear to me from the code how this is done, but from observations I made it seems that this is indeed the case: there is a slight signature of the pattern in the infra images, but the depth accuracy is much better than without the emitter at all.
    Can you please confirm it?

  2. My camera is moving and can observe low textures, so I think I need to use the emitter.
    Thanks

Update:
I think I have started to understand what they were doing: they only published every second frame, using the "seq[stream]" value; on even values they published infra and on odd values they published depth. However, with newer versions of the wrapper (ROS2) this is not the way to do it when using the align-depth option. It would be really helpful if Intel released official support for this important feature, which applies to all cases.
Thanks

I'm interested in publishing point cloud data that uses the emitter, in order to get high-quality depth accuracy.
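The every-other-frame publishing described above can be sketched as a small plain-Python simulation (no ROS; the frame dicts and list names are illustrative stand-ins, not the wrapper's real types):

```python
# Simulate the ZJU-style routing: with emitter on-off strobing enabled the
# device alternates emitter-on / emitter-off frames, and the wrapper
# publishes infra on one parity of the sequence counter and depth on the
# other.

def route_by_parity(frames):
    """Split frames into (infra_out, depth_out) by sequence-number parity.

    Assumes even seq -> emitter off (publish infra) and odd seq -> emitter
    on (publish depth). As noted later in the thread, this parity/emitter
    alignment is not actually guaranteed by the firmware.
    """
    infra_out, depth_out = [], []
    for f in frames:
        if f["seq"] % 2 == 0:
            infra_out.append(f)
        else:
            depth_out.append(f)
    return infra_out, depth_out

# Four alternating frames; emitter state happens to follow parity here.
frames = [{"seq": i, "emitter_on": bool(i % 2)} for i in range(4)]
infra, depth = route_by_parity(frames)
```

The fragility is visible in the docstring's assumption: nothing ties the emitter phase to the counter's parity, which is exactly the instability reported later in this thread.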

@MartyG-RealSense (Collaborator) commented

I shall refer the above information to @doronhi as I am not involved in development.

@doronhi Essentially, @SBarzz is asking whether it would be possible to add a feature to the ROS wrapper that replicates the changes to the wrapper made by the project in the link below. Namely, "to enable the laser emitter strobe every other frame, allowing the device to output high quality depth images with the help of emitter, and along with binocular images free from laser interference".

https://github.com/ZJU-FAST-Lab/ego-planner#improved-ros-realsense-driver

@BarzelS (Author) commented Feb 2, 2021

> (quoting MartyG-RealSense's previous comment)

Meanwhile, I want to understand the emitter on/off feature.

  1. Is it right to implement it the way ZJU did, by publishing IR when [(seq[stream] % 2) == true] and publishing depth when [(seq[stream] % 2) != true]? (2.2.10...SBarzz:test_emitter_on_off)
  2. Will I get the same effect, i.e. that the point cloud data will be calculated using the emitter, if I also publish the point cloud when [seq[stream] % 2 != true]?
    By adding this in base_realsense_node.cpp (lines 1742-1748):
                if (f.is<rs2::points>())
                {
                    if (0 != _pointcloud_publisher->get_subscription_count())
                    {
                        if (!(_seq[sip] % 2))
                        {
                            ROS_DEBUG_STREAM("Publishing point cloud on _seq[sip]: " << _seq[sip]);
                            publishPointCloud(f.as<rs2::points>(), t, frameset);
                        }
                    }
                }

Another update:
I realized that looking at the seq value is not stable, because sometimes the odd values are the ones where the pattern is on and sometimes the even values. So I found a better way, using:
(int)f.get_frame_metadata(RS2_FRAME_METADATA_FRAME_LASER_POWER_MODE);
for a specific frame.
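The metadata-based check can be sketched like this (plain Python standing in for the C++ wrapper code; the frame dicts and the laser_power_mode key are illustrative stand-ins for rs2::frame and RS2_FRAME_METADATA_FRAME_LASER_POWER_MODE):

```python
# Select frames by per-frame laser metadata instead of sequence parity.
# A frame whose laser power mode is 0 was captured with the emitter off,
# so it is the one suitable for pattern-free infrared output.

def select_frames(frames, want_emitter_on):
    """Keep frames whose laser_power_mode metadata matches the request."""
    wanted = 1 if want_emitter_on else 0
    return [f for f in frames if f["laser_power_mode"] == wanted]

# The metadata stays correct even if the on/off phase drifts against parity.
frames = [
    {"seq": 0, "laser_power_mode": 1},  # emitter on  -> use for depth
    {"seq": 1, "laser_power_mode": 0},  # emitter off -> use for infra
    {"seq": 2, "laser_power_mode": 1},
    {"seq": 3, "laser_power_mode": 0},
]
infra_frames = select_frames(frames, want_emitter_on=False)
depth_frames = select_frames(frames, want_emitter_on=True)
```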

I would love to have an opinion from you about that
Thanks

@doronhi (Contributor) commented Feb 3, 2021

@SBarzz, it sounds like a valid change request.
Currently, all frames are published and used for the pointcloud, whether they contain the projected pattern or not.
You are absolutely right to assume that using RS2_FRAME_METADATA_FRAME_LASER_POWER_MODE is the way to detect the relevant images, rather than counting on the sequential number.
I actually wonder whether a parameter emitter_on_off is necessary. Maybe checking the device's option RS2_OPTION_EMITTER_ON_OFF is enough, and selected frames should always be used.
I created a change request, tracked as DSO-16522.
I can't tell when I'll get around to implementing it, but I hope it won't be long. In the meanwhile, the code from https://github.com/ZJU-FAST-Lab/ego-planner#improved-ros-realsense-driver seems simple and useful, with the changes you mentioned (using the flag instead of the sequential number).

@MartyG-RealSense (Collaborator) commented

Hi @RoboRoboRoboRobo Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense (Collaborator) commented

Case closed due to no further comments received.

@doronhi (Contributor) commented Nov 15, 2021

Although this issue is already closed, I would like to give an update.
The change request is now closed. One of the reasons is that, as far as the firmware is concerned, the pattern of one frame with laser and one frame without is not a fixed pattern; it is possible to define other sequences.
For now, you can use the sequence_id filter that is available when setting filters:=hdr_merge:
turn stereo_module.hdr_enabled off,
turn stereo_module.emitter_on_off on,
then try sequence_id_filter.sequence_id: 1 shows the pattern, 2 doesn't, and 0 shows all frames.

I uploaded another branch, foxy-beta. Some of the parameter names are a bit different there, but it is more flexible: you can turn the infrared stream on and off at run-time, and the sequence_id filter is available regardless of the hdr_merge filter:

ros2 launch realsense2_camera rs_launch.py
ros2 param set /camera/camera enable_infra1 true
ros2 param set /camera/camera depth_module.emitter_on_off true
ros2 param set /camera/camera filter_by_sequence_id.enable true
ros2 param set /camera/camera filter_by_sequence_id.sequence_id 2

Using rqt_image_view you can now see an infrared image without the laser pattern.
Naturally, you can set all these parameters using rqt_reconfigure. It was just easier to explain this way.
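The sequence_id selection described above behaves roughly like this filter (a plain-Python sketch; on the real device the sequence id is firmware-provided frame metadata, modeled here as a dict key):

```python
def sequence_id_filter(frames, sequence_id):
    """Pass only frames matching sequence_id; 0 passes everything.

    Mirrors the convention described in the thread:
    1 = frames with the pattern, 2 = frames without, 0 = all frames.
    """
    if sequence_id == 0:
        return list(frames)
    return [f for f in frames if f["sequence_id"] == sequence_id]

# Six frames alternating between sequence ids 1 and 2.
frames = [{"sequence_id": 1 + (i % 2)} for i in range(6)]
with_pattern = sequence_id_filter(frames, 1)
no_pattern = sequence_id_filter(frames, 2)
all_frames = sequence_id_filter(frames, 0)
```

Note that, as pointed out later in the thread, the real filter applies to all streams at once, so you cannot keep id 1 for depth and id 2 for infra simultaneously with a single filter instance.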

@martinakos commented

@doronhi is there a way to do this selection in the ROS1 wrapper?

@MartyG-RealSense (Collaborator) commented Nov 19, 2021

Hi @martinakos You should at least be able to set emitter_on_off to true at runtime in ROS1 with rosrun:

rosrun dynamic_reconfigure dynparam set /camera/stereo_module emitter_on_off 1

@doronhi (Contributor) commented Nov 21, 2021

> @doronhi is there a way to do this selection in the ROS1 wrapper?

Yes, the hdr_merge filter, and alongside it the sequence_id_filter, are available in the ROS1 version.
Please note that the hdr_merge filter modifies gain and exposure values by default for both sequence ids (making one image very bright and the other very dark). Make sure you set both of them to your specific needs.

@martinakos commented Nov 29, 2021

I can't seem to get the infra1 image without the laser pattern. I'm using ROS Melodic with a launch file with the depth and IR streams enabled, and I've set these parameters in the launch file:

  <rosparam> /d435i/stereo_module/hrd_enabled: true </rosparam>
  <rosparam> /d435i/stereo_module/sequence_id: 2 </rosparam>
  <rosparam> /d435i/stereo_module/exposure/2: 7500 </rosparam>  
  <rosparam> /d435i/stereo_module/emitter_on_off: true </rosparam>
  <rosparam> /d435i/stereo_module/filter_by_sequence_id/enable: true </rosparam>
  <rosparam> /d435i/stereo_module/filter_by_sequence_id/sequence_id: 2 </rosparam>

I'm trying to visualise the infra1 and depth images in rviz, but I can still see the IR pattern in the infra1 image.
If I open dynamic reconfigure I can't see the filter_by_sequence_id parameters. Am I setting the correct parameter names?

@MartyG-RealSense (Collaborator) commented Nov 29, 2021

Hi @martinakos You could disable the IR dot pattern with emitter_enabled, as described in #1379 (comment)

I would recommend setting emitter_enabled to '0' to disable the IR dot projection, instead of '2' like the code in the above link does. '2' (Auto mode) will make the pattern non-visible, but '0' (Off) is the technically correct setting for disabling the emitter.

@martinakos commented Nov 30, 2021

I've added
<rosparam> /d435i/stereo_module/emitter_enabled: 0 </rosparam>
to my launch file (in addition to the parameters I set in my previous post), and now I can't see the IR pattern in the infra images. But I think the IR pattern is now permanently off, instead of switching on/off. I compared the depth map after this change with one of the same scene with the emitter on and another with the emitter off, and it looks like the one with the emitter off. So either the emitter is not switching on/off, or I'm seeing the depth map with the wrong sequence id (i.e. the one with the emitter off). I don't know if there is anything in rviz that would let me select which sequence id to visualise.

Another suspicious thing is that if I check the dynamic reconfigure parameters, I can't see any emitter_on_off, filter_by_sequence_id/enable or emitter_on_off/sequence.

$ rosrun dynamic_reconfigure dynparam get /d435i/stereo_module
{'laser_power': 150.0, 'groups': {'laser_power': 150.0, 'parent': 0, 'emitter_always_on': False, 'sequence_name': 0, 'global_time_enabled': True, 'sequence_size': 2, 'inter_cam_sync_mode': 0, 'sequence_id': 2, 'gain': 16, 'groups': {}, 'id': 0, 'output_trigger_enabled': False, 'exposure': 8500, 'name': 'Default', 'parameters': {}, 'emitter_enabled': 0, 'enable_auto_exposure': True, 'error_polling_enabled': True, 'hdr_enabled': False, 'state': True, 'visual_preset': 0, 'type': '', 'frames_queue_size': 16}, 'emitter_always_on': False, 'sequence_name': 0, 'enable_auto_exposure': True, 'global_time_enabled': True, 'hdr_enabled': False, 'sequence_size': 2, 'inter_cam_sync_mode': 0, 'sequence_id': 2, 'visual_preset': 0, 'error_polling_enabled': True, 'gain': 16, 'emitter_enabled': 0, 'frames_queue_size': 16, 'output_trigger_enabled': False, 'exposure': 8500}

I can, however, see the parameters that I've set in the launch file in the parameter server.

d435i:
  depth:
    image_rect_raw:
      compressed:
        format: jpeg
        jpeg_quality: 80
        png_level: 9
      compressedDepth:
        depth_max: 10.0
        depth_quantization: 100.0
        png_level: 9
      theora:
        keyframe_frequency: 64
        optimize_for: 1
        quality: 31
        target_bitrate: 800000
  infra1:
    image_rect_raw:
      compressed:
        format: jpeg
        jpeg_quality: 80
        png_level: 9
      compressedDepth:
        depth_max: 10.0
        depth_quantization: 100.0
        png_level: 9
      theora:
        keyframe_frequency: 64
        optimize_for: 1
        quality: 31
        target_bitrate: 800000
  infra2:
    image_rect_raw:
      compressed:
        format: jpeg
        jpeg_quality: 80
        png_level: 9
      compressedDepth:
        depth_max: 10.0
        depth_quantization: 100.0
        png_level: 9
      theora:
        keyframe_frequency: 64
        optimize_for: 1
        quality: 31
        target_bitrate: 800000
  realsense2_camera:
    accel_fps: 63
    accel_frame_id: d435i_accel_frame
    accel_optical_frame_id: d435i_accel_optical_frame
    align_depth: false
    aligned_depth_to_color_frame_id: d435i_aligned_depth_to_color_frame
    aligned_depth_to_fisheye1_frame_id: d435i_aligned_depth_to_fisheye1_frame
    aligned_depth_to_fisheye2_frame_id: d435i_aligned_depth_to_fisheye2_frame
    aligned_depth_to_fisheye_frame_id: d435i_aligned_depth_to_fisheye_frame
    aligned_depth_to_infra1_frame_id: d435i_aligned_depth_to_infra1_frame
    aligned_depth_to_infra2_frame_id: d435i_aligned_depth_to_infra2_frame
    allow_no_texture_points: false
    base_frame_id: d435i_link
    calib_odom_file: ''
    clip_distance: -2.0
    color_fps: 30
    color_frame_id: d435i_color_frame
    color_height: 480
    color_optical_frame_id: d435i_color_optical_frame
    color_width: 640
    confidence_fps: 30
    confidence_height: 480
    confidence_width: 640
    depth_fps: 30
    depth_frame_id: d435i_depth_frame
    depth_height: 480
    depth_optical_frame_id: d435i_depth_optical_frame
    depth_width: 848
    device_type: ''
    enable_accel: true
    enable_color: false
    enable_confidence: true
    enable_depth: true
    enable_fisheye: false
    enable_fisheye1: false
    enable_fisheye2: false
    enable_gyro: true
    enable_infra: true
    enable_infra1: true
    enable_infra2: true
    enable_pointcloud: true
    enable_pose: false
    enable_sync: false
    filters: ''
    fisheye1_frame_id: d435i_fisheye1_frame
    fisheye1_optical_frame_id: d435i_fisheye1_optical_frame
    fisheye2_frame_id: d435i_fisheye2_frame
    fisheye2_optical_frame_id: d435i_fisheye2_optical_frame
    fisheye_fps: 30
    fisheye_frame_id: d435i_fisheye_frame
    fisheye_height: 480
    fisheye_optical_frame_id: d435i_fisheye_optical_frame
    fisheye_width: 640
    gyro_fps: 200
    gyro_frame_id: d435i_gyro_frame
    gyro_optical_frame_id: d435i_gyro_optical_frame
    imu_optical_frame_id: d435i_imu_optical_frame
    infra1_frame_id: d435i_infra1_frame
    infra1_optical_frame_id: d435i_infra1_optical_frame
    infra2_frame_id: d435i_infra2_frame
    infra2_optical_frame_id: d435i_infra2_optical_frame
    infra_fps: 30
    infra_height: 480
    infra_rgb: false
    infra_width: 848
    initial_reset: true
    json_file_path: ''
    linear_accel_cov: 0.01
    odom_frame_id: d435i_odom_frame
    ordered_pc: false
    pointcloud_texture_index: 0
    pointcloud_texture_stream: RS2_STREAM_ANY
    pose_frame_id: d435i_pose_frame
    pose_optical_frame_id: d435i_pose_optical_frame
    publish_odom_tf: true
    publish_tf: true
    rosbag_filename: ''
    serial_no: 040322073861
    stereo_module:
      exposure:
        '1': 7500
        '2': 1
      gain:
        '1': 16
        '2': 16
    tf_publish_rate: 0.0
    topic_odom_in: odom_in
    unite_imu_method: linear_interpolation
    usb_port_id: ''
  stereo_module:
    emitter_enabled: 0
    emitter_on_off: true
    exposure:
      '2': 7500
    filter_by_sequence_id:
      enable: true
      sequence_id: 2
    hrd_enabled: true
    sequence_id: 2
rosdistro: melodic

@MartyG-RealSense (Collaborator) commented Nov 30, 2021

@martinakos The pattern would not be able to alternate on-off if the IR emitter has been disabled by setting emitter_enabled to '0'.

Do you mean that you were actually aiming for the pattern to alternate on-off on a per-frame basis but there never seemed to be a frame in which the pattern was in the 'off' state and instead seemed to be permanently on?

As the 400 Series cameras can use ambient light in a scene instead of the dot pattern to analyze surfaces for depth detail, you may not need to have the pattern enabled anyway so long as the scene is well lit, as the camera can make use of that light for generating depth detail.

@martinakos commented

I thought this thread was about using the emitter_on_off option to select the depth image with the IR pattern on and the stereo pair (infra1, infra2) with the IR pattern off. That's why I asked @doronhi if it was possible to do the same in ROS1; he said yes, but my configuration is not working, which is why I asked again, showing the parameters I set. So yes, I'm aiming for the pattern to alternate on-off on a per-frame basis.

@MartyG-RealSense (Collaborator) commented

@martinakos I believe that the confusion may stem from your original comment at #1657 (comment) that said "I can't seem to get the infra1 image without the laser pattern", suggesting that you desired to turn the pattern off to eliminate it from the image. I apologize if I drew the wrong conclusion from that.

@doronhi (Contributor) commented Dec 1, 2021

I am not sure it is possible to set all the parameters in the launch file. The reason is that you can't set "emitter_on_off" while "hdr_merge" is on; however, in the current wrapper, you don't have the "filter_by_sequence_id" option if "hdr_merge" is not set.
The solution is to start the node with the hdr_merge filter and then turn it off before turning the emitter_on_off parameter on.

Set the following commands in the launch file:

  <rosparam> /d435i/stereo_module/sequence_id: 2 </rosparam>
  <rosparam> /d435i/stereo_module/exposure: 7500 </rosparam>  
  <rosparam> /d435i/sequence_id_filter/sequence_id: 2 </rosparam>

Launch realsense-ros using the following command:
roslaunch realsense2_camera rs_camera.launch filters:=hdr_merge

After the node starts, turn the "hdr_enabled" option to false and then enable "emitter_on_off". Run the following command:
rosrun dynamic_reconfigure dynparam set /d435i/stereo_module "{'hdr_enabled': false, 'emitter_on_off': true}"

I should note here that I am not sure in what order the parameters are actually set. It might be safer to use two consecutive commands:

rosrun dynamic_reconfigure dynparam set /d435i/stereo_module hdr_enabled false
rosrun dynamic_reconfigure dynparam set /d435i/stereo_module emitter_on_off true

@martinakos commented

@doronhi I'm still not getting this feature working. I'll explain what I've tried:
I modified the default rs_camera.launch so that we start from a known point.
The lines I modified in the default rs_camera.launch relate to the namespace, enabling the depth and infra images, and the setup for the emitter_on_off feature:

<arg name="camera"              default="d435i"/>
<arg name="depth_width"         default="848"/>
<arg name="depth_height"        default="480"/>
<arg name="enable_depth"        default="true"/>
<arg name="infra_width"        default="848"/>
<arg name="infra_height"       default="480"/>
<arg name="enable_infra"        default="true"/>
<arg name="enable_infra1"       default="true"/>
<arg name="enable_infra2"       default="true"/>

<rosparam> /d435i/stereo_module/sequence_id: 2 </rosparam>
<rosparam> /d435i/stereo_module/exposure: 7500 </rosparam>  
<rosparam> /d435i/filter_by_sequence_id/sequence_id: 2 </rosparam>

Then I launch my modified rs_camera.launch with:

roslaunch realsense2_camera rs_camera.launch filters:=hdr_merge

and then disable hdr_enabled and enable emitter_on_off as suggested:

rosrun dynamic_reconfigure dynparam set /d435i/stereo_module hdr_enabled false
rosrun dynamic_reconfigure dynparam set /d435i/stereo_module emitter_on_off true

After following these steps, what I expect is to be able to visualise in rviz infra1/infra2 images without the laser pattern and a depth map calculated with the laser pattern. However, what I see in rviz is infra1/infra2 images with the laser pattern and a depth map, which I assume has used the laser pattern too.

Note that despite launching with filters:=hdr_merge, and being able to verify that hdr_enabled=true in the dynamic_reconfigure parameters, I can't see the filter_by_sequence_id parameter among the dynamic_reconfigure parameters. So I've also tried to set it in the launch file with these variations:

<rosparam> /d435i/stereo_module/filter_by_sequence_id/sequence_id: 2 </rosparam>
<rosparam> /d435i/realsense2_camera/filter_by_sequence_id/sequence_id: 2 </rosparam>
<rosparam> /d435i/realsense2_camera/stereo_module/filter_by_sequence_id/sequence_id: 2 </rosparam>

but still can't see it among the dynamic_reconfigure parameters. I then searched the realsense-ros GitHub repo, and the only reference to filter_by_sequence_id I found is in this issue, none in the code. So the parameter must have a different name. Then I found the sequence_id_filter parameter and tried to set it instead of filter_by_sequence_id. I tried all the following variations:

<rosparam> /d435i/sequence_id_filter/sequence_id: 2 </rosparam>
<rosparam> /d435i/realsense2_camera/sequence_id_filter/sequence_id: 2 </rosparam>
<rosparam> /d435i/stereo_module/sequence_id_filter/sequence_id: 2 </rosparam>
<rosparam> /d435i/realsense2_camera/stereo_module/sequence_id_filter/sequence_id: 2 </rosparam>

but none of them worked to show a sequence_id_filter parameter in the dynamic reconfigure either, and with no combination did I get the results I'm looking for in rviz.

What am I missing to get this feature working? Also, is visualising in rviz the correct way of testing this feature? I don't see how rviz could differentiate video frames in a stream based on sequence_id.

@doronhi (Contributor) commented Dec 1, 2021

I usually test with rqt_image_view, but I guess rviz is just fine for that too.
The "filter_by_sequence_id" filter will send only frames with the specified sequence id, so the viewer just shows everything on the topic.
Regardless of the parameters you change in the rs_camera.launch file, running the roslaunch command with filters:=hdr_merge should enable the hdr_merge filter, and you should see "/d435i/sequence_id_filter" in the output of rosrun dynamic_reconfigure dynparam get /d435i/sequence_id_filter
You can set all these options by running rosrun rqt_reconfigure rqt_reconfigure

What versions do you use? (librealsense2, realsense2-camera and firmware)

@martinakos commented

I got some progress! I just repeated the same steps (I didn't see /d435i/sequence_id_filter before because I was checking in the wrong /d435i/stereo_module namespace).
Now I can switch the sequence_id in the dynamic reconfigure while visualising the infra1 image and the depth map in rviz. When I choose sequence_id=1 I see the IR pattern in infra1 and a dense depth map; when I choose sequence_id=2 I don't see the IR pattern in infra1, but the depth map is not as dense (suggesting it was calculated without the IR pattern). See images:
With sequence_id=1: [screenshot]

With sequence_id=2: [screenshot]

How can I simultaneously visualise the infra1 without the IR pattern (as in sequence_id=2) and the depth map calculated with the IR pattern (as in sequence_id=1)? I want the infra1/infra2 without the IR pattern to run a VIO on them, and the depth map with the IR pattern to map the area.

@doronhi (Contributor) commented Dec 2, 2021

It seems like I was wrong and you can't get what you want out of the box.
Part of the reason is that the "sequence_id_filter" works on all the streams, both infra and depth.
I think the only option you have now, other than altering the code of the realsense-ros wrapper (or, maybe better, the original implementation of the filter in librealsense2), is to filter the frames in your own app. You could use the field "frame_laser_poser" inside the "metadata" topic for that, and sync based on the timestamps.
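The suggested app-side approach, filtering by per-frame laser metadata and syncing on timestamps, could be sketched like this (plain Python; messages are modeled as (timestamp, laser_on) tuples rather than real ROS messages, and the 0.05 s tolerance is an arbitrary illustrative value):

```python
# Pair each emitter-off infra frame with the nearest emitter-on depth frame
# by timestamp, as suggested for filtering in your own app.

def pair_by_timestamp(infra_msgs, depth_msgs, max_dt=0.05):
    """infra_msgs / depth_msgs: lists of (stamp_seconds, laser_on) tuples.

    Returns (infra_stamp, depth_stamp) pairs where the infra frame has the
    laser off, the depth frame has it on, and the stamps are within max_dt.
    """
    depth_on = [m for m in depth_msgs if m[1]]         # keep emitter-on depth
    pairs = []
    for stamp, laser_on in infra_msgs:
        if laser_on:
            continue                                    # skip patterned infra
        nearest = min(depth_on, key=lambda m: abs(m[0] - stamp), default=None)
        if nearest is not None and abs(nearest[0] - stamp) <= max_dt:
            pairs.append((stamp, nearest[0]))
    return pairs

# Alternating emitter states at roughly 30 fps.
infra = [(0.00, True), (0.033, False), (0.066, True), (0.100, False)]
depth = [(0.00, True), (0.033, False), (0.066, True), (0.100, False)]
matched = pair_by_timestamp(infra, depth)
```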

@martinakos commented

Oh, that's a shame. Do you have any estimate of if/when the ROS wrapper will implement a way of doing what I want, so that I can just subscribe to a depth topic (with IR pattern) and subscribe to the infra topics (without IR pattern)?
I can't find the field "frame_laser_poser" in any topic. This is the metadata I can see:

rostopic echo --noarr /d435i/infra1/image_rect_raw
---
header: 
  seq: 10013
  stamp: 
    secs: 1638444465
    nsecs: 945783377
  frame_id: "d435i_infra1_optical_frame"
height: 480
width: 848
encoding: "mono8"
is_bigendian: 0
step: 848
data: "<array type: uint8, length: 407040>"

rostopic echo --noarr /d435i/depth/image_rect_raw
---
header: 
  seq: 11832
  stamp: 
    secs: 1638444526
    nsecs: 521533728
  frame_id: "d435i_depth_optical_frame"
height: 480
width: 848
encoding: "16UC1"
is_bigendian: 0
step: 1696
data: "<array type: uint8, length: 814080>"

Maybe this field is a new addition? My versions are: RealSense ROS v2.2.24, built with LibRealSense v2.44.0, device FW version 05.12.13.50.

@martinakos commented

Oh! I see the metadata is not available until RealSense ROS v2.3.2. I'll try this version.

@doronhi (Contributor) commented Dec 5, 2021

I am sorry but currently, there are no plans to add this functionality to the wrapper.
