
Discuss RealSense rgb / depth frames and alignment in simulation #183

Closed
xEnVrE opened this issue Oct 19, 2023 · 18 comments · Fixed by #188
Assignees
Labels
domain-modeling Related to Pure Physical Modeling domain-software Related to Software prj-ergocub Related to ErgoCub Project team-fix Related to Team Fix

Comments


xEnVrE commented Oct 19, 2023

I was trying to visualize the point cloud from ergoCub1_1 in simulation with rviz2, superimposing it on the 3D model of the robot rendered according to the forward kinematics, and I noticed the following misalignment:

image

As the red arrows indicate, the point cloud is not perfectly superimposed on the model, something that 1) I was not expecting and that 2) does not happen on other robots, e.g., iCubGazeboV2_5_visuomanip.

URDF investigation

I then investigated the URDF of the robot ergoCubGazeboV1_1 and noticed the following facts:

  • a realsense frame is defined, with two child frames attached via fixed joints:
    • realsense_depth_frame
    • realsense_rgb_frame
  • the way the RGB camera plugin is configured is, to be honest, not really clear
  • the depth camera is configured similarly, but with different numbers

Proposed changes

I propose the following changes:

  • (I know that not all might agree on this) we should make sure that the RGB and depth sensors are configured to produce perfectly aligned frames. Even if the simulator can handle two different frames for the RGB and depth parts, as the real camera has a shift between the two, the final aim of the simulator should be to reproduce a real setup. On the real setup we always (I am not sure if anyone does not) activate the alignment between the RGB and depth frames at the YARP or ROS 2 level, i.e. the final images on ports/topics are already aligned.
  • simplify considerably the way the sensors are configured in the URDF by making these changes:
    • make sure that both the RGB and depth sensors refer to the frame realsense_rgb_frame
    • remove all the other pose changes
    • keep only the <pose>0.0 0.0 0.0 0.0 -1.57 1.57</pose> correction, which makes sure that the usual orientation used for cameras in robotics (y down, x right, z out of the image plane) is aligned with the convention used inside Gazebo (z up, y left, x out of the image plane)

The final configuration becomes:

<gazebo reference="realsense_rgb_frame">
  <sensor name="realsense_head_depth" type="depth">
    <always_on>1</always_on>
    <update_rate>30.000000</update_rate>
    <pose>0.0 0.0 0.0 0.0 -1.57 1.57</pose>
    <camera name="intel_realsense_depth_camera">
    (etc)
</gazebo>
<gazebo reference="realsense_rgb_frame">
  <sensor name="realsense_head_rgb" type="camera">
    <always_on>1</always_on>
    <update_rate>30.000000</update_rate>
    <pose>0.0 0.0 0.0 0.0 -1.57 1.57</pose>
    (etc)
</gazebo>
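To double check the convention argument above, the RPY correction can be verified numerically. The sketch below is illustrative only (plain Python, not repository code): it assumes the link frame follows the camera optical convention (x right, y down, z out of the image plane) and checks that the rotation 0.0 -1.57 1.57 maps the Gazebo sensor axes (x forward, y left, z up) onto it:

```python
import math

# Basic 3x3 rotation matrices about the fixed x, y, z axes.
def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def close(u, v, eps=1e-9):
    return all(abs(a - b) < eps for a, b in zip(u, v))

# SDF roll-pitch-yaw angles are extrinsic rotations about the fixed axes:
# R = Rz(yaw) @ Ry(pitch) @ Rx(roll). Here roll=0, pitch=-pi/2, yaw=pi/2
# (the URDF uses the truncated values -1.57 and 1.57).
roll, pitch, yaw = 0.0, -math.pi / 2, math.pi / 2
Rm = matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))

# The pose rotation expresses sensor-frame vectors in the link (optical) frame.
# Sensor +x (forward) ends up on optical +z (out of the image plane):
assert close(apply(Rm, [1, 0, 0]), [0, 0, 1])
# Sensor +y (left) ends up on optical -x:
assert close(apply(Rm, [0, 1, 0]), [-1, 0, 0])
# Sensor +z (up) ends up on optical -y:
assert close(apply(Rm, [0, 0, 1]), [0, -1, 0])
```

So the single pose correction really is just the change of axis convention, with no extra offset between the two sensors.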

Comparison

Using the above proposals, we get the alignment as expected:

image

Superimposition is quite nice also when closing fingers:

I would be happy to discuss the above with you in order to understand how to possibly fix the configuration for Gazebo, keeping in mind that part of this repository is generated automatically.

cc @traversaro @pattacini @Nicogene


traversaro commented Oct 19, 2023

the way the RGB camera plugin is configured is not really clear to be honest:

I think I can help a bit on this:

the sensor is referring to the link realsense instead of realsense_rgb_frame

realsense_rgb_frame is not a real link with a mass; it is just an additional mass-less link used in URDF to represent a frame. It is lumped away (i.e., removed) in the URDF --> SDF conversion. The <model>--><gazebo> reference attribute requires that a "real" link (or joint) is passed as argument (unless you are using a really recent version of sdformat, if I am not wrong). That is the reason why we need to pass realsense as the reference attribute, as that is a "real" link with a mass. On the other hand, you may ask: why do we need the realsense_rgb_frame fake link at all? Because the URDF parser used by robot_state_publisher ignores the <model>--><gazebo> and <model>--><sensor> tags, so the only way to publish the sensor pose on tf is to add it as a separate (and duplicated) fake URDF link.

Do you find that confusing? I totally agree, see https://discourse.ros.org/t/urdf-ng-link-and-frame-concepts/56 for a related detailed post.
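To make the lumping concrete, here is a toy sketch of what the URDF --> SDF conversion does to mass-less fixed-joint children (illustrative only: the mass value and the dictionary representation are made up, this is not the sdformat implementation):

```python
# Toy model: links as {name: mass}, fixed joints as (parent, child) pairs.
# The 0.1 mass for the realsense link is invented for illustration.
links = {
    "realsense": 0.1,             # a "real" link with a mass
    "realsense_rgb_frame": 0.0,   # mass-less frame links,
    "realsense_depth_frame": 0.0, # fixed-jointed to realsense
}
fixed_joints = [
    ("realsense", "realsense_rgb_frame"),
    ("realsense", "realsense_depth_frame"),
]

def lump(links, fixed_joints):
    """Merge mass-less fixed-joint children into their parent link."""
    out = dict(links)
    for parent, child in fixed_joints:
        if out.get(child) == 0.0:
            out.pop(child)  # the child link no longer exists in the SDF
    return out

# After conversion only the "real" link survives, which is why a
# <gazebo reference="realsense_rgb_frame"> may fail to resolve.
assert lump(links, fixed_joints) == {"realsense": 0.1}
```

This is why the sensor has to reference realsense even though, conceptually, it lives at realsense_rgb_frame.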

an additional pose reference is given later on

You can ignore those tags. <model>--><sensor> tags in URDF are ignored by Gazebo/SDF, which only considers <model>--><gazebo>--><sensor> tags. The <model>--><sensor> tags are part of a URDF extension that was never implemented by sdformat/Gazebo: http://wiki.ros.org/urdf/XML/sensor .

@pattacini pattacini added prj-ergocub Related to ErgoCub Project team-fix Related to Team Fix domain-modeling Related to Pure Physical Modeling domain-software Related to Software labels Oct 19, 2023
@traversaro

The <model>--><gazebo> reference attribute requires that a "real" link (or joint) is passed as argument (unless you are using a really recent version of sdformat, if I am not wrong).

Just to elaborate more on this: which version of Gazebo and sdformat did you use for your tests?

@pattacini

To be addressed after our PI16, that is, in two weeks.

/remind October 30 cc @Nicogene


octo-reminder bot commented Oct 19, 2023

Reminder
Monday, October 30, 2023 10:00 AM (GMT+01:00)

cc @Nicogene

@traversaro

  • Even if the simulator can handle two different frames for the RGB and depth parts, as the real camera has a shift between the two, the final aim of the simulator should be to reproduce a real setup. On the real setup we always (I am not sure if anyone does not) activate the alignment between the RGB and depth frames at the YARP or ROS 2 level, i.e. the final images on ports/topics are already aligned.

Related to that, we should check which configuration files we ship by default to access the realsense and what is set there.


xEnVrE commented Oct 19, 2023

Thanks @traversaro and @pattacini for your answers.

I did not know that the frame used in the reference tag had to be "real". If realsense_rgb_frame gets lumped away, it is then strange that I was able to run the simulation with reference = realsense_rgb_frame. But, as you said, maybe it depends on the version of gazebo and sdf.

Just to elaborate more on this, which version of Gazebo and sdformat you used for your tests?

The setup consists of:

  • Gazebo multi-robot simulator, version 11.13.0 (the apt package is gazebo11/unknown,now 11.13.0-1~focal amd64 [installed])

  • a world where I put ergoCubGazeboV1_1 with sdf version="1.7"

  • a module that composes the point cloud in YARP and sends it via ROS 2. That module uses the frame realsense_rgb_frame to tag the point cloud; this tag is then used within rviz2 to properly place the cloud in space

    this seems consistent with the configuration done for the RGBD sensors on ROS2 in simulation, see:

    <param name="color_frame_id">realsense_rgb_frame</param>

Related to that, we should check which configuration files we ship by default to access the realsense and what is set there.

In simulation, it seems that the alignment was assumed, i.e.:

<param name="color_frame_id">realsense_rgb_frame</param>
<param name="depth_frame_id">realsense_rgb_frame</param>

On the real robot, it seems that the alignment is enabled:
https://github.com/robotology/robots-configuration/blob/master/ergoCubSN001/sensors/realsense.xml#L12

@traversaro

Thanks @traversaro and @pattacini for your answers.

I did not know that the frame used in the reference tag had to be "real". If realsense_rgb_frame gets lumped away, it is then strange that I was able to run the simulation with reference = realsense_rgb_frame. But, as you said, maybe it depends on the version of gazebo and sdf.

Actually, I was just relying on memory, and I could not find anything on this, so I may not be recalling correctly. If attaching to a fake link works fine on the Gazebo available via apt on Ubuntu 22.04 (i.e. sdformat9 9.7.0 / Gazebo 11.10.2), it is ok for me. (Note that Open Robotics does not publish the latest Gazebo 11 version on Ubuntu 22.04, so if we want to support apt on 22.04 we need to ensure that our models are compatible with those versions.)

Related to that, we should check which configuration files we ship by default to access the realsense and what is set there.

In simulation, it seems that the alignment was assumed, i.e.:

<param name="color_frame_id">realsense_rgb_frame</param>
<param name="depth_frame_id">realsense_rgb_frame</param>

On the real robot, it seems that the alignment is enabled:
https://github.com/robotology/robots-configuration/blob/master/ergoCubSN001/sensors/realsense.xml#L12

Thanks for checking! In that case it makes sense to align the two frames also in the URDF/SDF.


xEnVrE commented Oct 19, 2023

Actually, I was just relying on memory, and I could not find anything on this, so I may not be recalling correctly. If attaching to a fake link works fine on the Gazebo available via apt on Ubuntu 22.04 (i.e. sdformat9 9.7.0 / Gazebo 11.10.2), it is ok for me.

Actually, this test was done on Ubuntu 20.04, but we also have an easy way to test it on Ubuntu 22.04; I can report back about that.


xEnVrE commented Oct 19, 2023

If attaching to a fake link works fine on the Gazebo available via apt on Ubuntu 22.04

I tried on Ubuntu 22.04 with Gazebo 11.0.2 and your suspicions were correct, @traversaro: using a fake link in the reference attribute does not work (the image was badly rotated).

Nonetheless, the actual problem is that vision and kinematics do not agree, as the point cloud reveals.

In this regard, I tried restoring the original URDF and then using the same pose parameters as the RGB sensor, i.e.,

<pose>0.00751548 -0.0115 1.73521e-08 -1.5708 3.75968e-16 -1.5708 </pose>
<camera name="intel_realsense_rgb_camera">
<pose>0 0 0 -1.57079 -1.57079 3.14159</pose>
<horizontal_fov>1.2217</horizontal_fov>

for the depth sensor, and that was enough to solve the problem; that is, we need to enforce the alignment of the RGB and depth images as discussed in the first comment of this issue.

@traversaro

Thanks! Yes, the two problems (that the sensor configuration is complex, and that the RGB and depth are not aligned while in reality they are) are distinct. I think we can keep this issue to track the second problem.

@Nicogene Nicogene self-assigned this Oct 20, 2023

octo-reminder bot commented Oct 30, 2023

🔔 @pattacini

cc @Nicogene


xEnVrE commented Oct 30, 2023

In this regard, I tried restoring the original URDF and then using the same pose parameters as the RGB sensor

I forgot to mention that maybe the <horizontal_fov> tags should also be matched, so as to have the same intrinsics on both sensors.
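This invariant (same pose and same horizontal_fov for the two sensors) can be checked mechanically. The sketch below uses only the Python standard library and embeds a minimal, made-up version of the proposed configuration rather than reading the real URDF:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for the generated URDF/SDF; values mirror the proposal
# in this thread (shared frame, shared pose, shared horizontal_fov).
snippet = """
<robot>
  <gazebo reference="realsense_rgb_frame">
    <sensor name="realsense_head_depth" type="depth">
      <pose>0.0 0.0 0.0 0.0 -1.57 1.57</pose>
      <camera><horizontal_fov>1.2217</horizontal_fov></camera>
    </sensor>
  </gazebo>
  <gazebo reference="realsense_rgb_frame">
    <sensor name="realsense_head_rgb" type="camera">
      <pose>0.0 0.0 0.0 0.0 -1.57 1.57</pose>
      <camera><horizontal_fov>1.2217</horizontal_fov></camera>
    </sensor>
  </gazebo>
</robot>
"""

root = ET.fromstring(snippet)
sensors = {s.get("name"): s for s in root.iter("sensor")}
depth = sensors["realsense_head_depth"]
rgb = sensors["realsense_head_rgb"]

# Both the sensor pose and the intrinsics must match for the depth image
# to reproject exactly onto the RGB image.
assert depth.findtext("pose") == rgb.findtext("pose")
assert depth.findtext("camera/horizontal_fov") == rgb.findtext("camera/horizontal_fov")
```

A script like this could even run in CI against the generated models, so the alignment cannot silently regress.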


Nicogene commented Nov 7, 2023

Hi @xEnVrE,

Since in the URDF there is not a single sensor that handles both streams, but rather depth and RGB are two disjoint sensors, a way to align them could be to take this

- frameName: SCSYS_HEAD_DEPTH
  linkName: realsense
  sensorName: realsense_head_depth
  sensorType: "depth"
  updateRate: "30"
  sensorBlobs:
  - |
    <camera name="intel_realsense_depth_camera">
      <pose>0 0 0 -1.57079 -1.57079 3.14159</pose>
      <horizontal_fov>1.57079</horizontal_fov>
      <distortion>
        <k1>0</k1>
        <k2>0</k2>
        <k3>0</k3>
        <p1>0</p1>
        <p2>0</p2>
        <center>319.5 239.5</center>
      </distortion>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.175</near>
        <far>3000</far>
      </clip>
    </camera>
  - |
    <visualize>false</visualize>
  - |
    <plugin name="ergocub_yarp_gazebo_plugin_depthCamera" filename="libgazebo_yarp_depthCamera.so">
      <yarpConfigurationFile>model://ergoCub/conf/sensors/gazebo_ergocub_rgbd_camera.ini</yarpConfigurationFile>
    </plugin>

Changing frameName to SCSYS_HEAD_RGB and <horizontal_fov>1.57079</horizontal_fov> to <horizontal_fov>1.2217</horizontal_fov> should mimic what happens on the real robot.
About the pose
<pose>0 0 0 -1.57079 -1.57079 3.14159</pose>

I probably copied it from the icub3 yaml, so it may be wrong, but I don't think it is the problem here.


xEnVrE commented Nov 7, 2023

I probably copied it from icub3 yaml so it can be wrong but I don't think it is the problem here.

Maybe I am wrong, but shouldn't we have the same pose for both sensors to get the alignment? Otherwise they will observe a different scene. At least, this was the way I could get the alignment in #183 (comment).


Nicogene commented Nov 7, 2023

but shouldn't we have the same pose for both sensors to get the alignment?

If I am not wrong, that pose is relative to the frameName, which is already the same for the two sensors, so changing the frame could do the trick.

I committed the URDF of ergoCubGazeboV1_1 with the depth aligned with the RGB camera here: https://github.com/icub-tech-iit/ergocub-software/tree/fix/depthAlignment. If it is correct, I will do the same for the other robots and open a PR.


xEnVrE commented Nov 7, 2023

If it is correct, I will do the same for the other robots and open a PR.

I can run some tests and let you know between today and tomorrow.


xEnVrE commented Nov 8, 2023

Hi @Nicogene,

I tested your branch and the result seems fine so far:

image

Nicogene added a commit that referenced this issue Nov 8, 2023
It is now aligned w/ the RGB frame as on the real robot.
It fixes #183