
Port DAVE DVL to Ignition #145

Closed · Tracked by #150
mabelzhang opened this issue Jan 14, 2022 · 12 comments

@mabelzhang (Collaborator) commented Jan 14, 2022

This ticket outlines the options to help us prioritize how much of the DAVE DVL to port.

The NPS DAVE DVL is based on the WHOI ds_sim DVL. There are two conceptual parts to it:

  1. Bottom tracking. This exists in the WHOI ds_sim DVL
    ds_sim DVL (master branch on DAVE's fork, I think. Double-check with NPS):
    https://github.com/Field-Robotics-Lab/ds_sim/blob/master/gazebo_src/dsros_dvl.cc
    https://github.com/Field-Robotics-Lab/ds_sim/blob/master/src/dsros_dvl_plugin.cc

    • Porting rays:
      There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?) to shoot cones out and check the object of intersection. This is done in ODE, which has a flag that does collision checking but won't enforce contact constraints. To port to Ignition, we need to see if DART supports reporting contact points without enforcing constraints.
      It is similar to how SonarSensor in Gazebo-classic is implemented, which has not been ported to Ignition. If feasible, we might want to port that upstream, then reuse the code. Another relevant sensor that might come up, RaySensor, has also not been ported.
      (Thanks @scpeters for the insights. Hope I paraphrased correctly.)
  2. Water tracking and current profiling. This is added in DAVE.
    DAVE DVL (ds_sim DVL plus water tracking and current profiling, nps_dev branch):
    https://github.com/Field-Robotics-Lab/ds_sim/blob/nps_dev/gazebo_src/dsros_dvl.cc
    https://github.com/Field-Robotics-Lab/ds_sim/blob/nps_dev/src/dsros_dvl_plugin.cc

    • Porting currents, on top of porting current profiling:
      This version of the DVL further depends on the NPS fork of the uuv_simulator repo, which adds currents (Double-check with NPS which branch).
      That means that, to port this DVL, NPS's ocean currents addition to uuv_simulator also needs to be ported, which is not trivial.

If we don't need water tracking, we only need to port item 1, the ds_sim version.

Documentation on DAVE DVL
https://github.com/Field-Robotics-Lab/dave/wiki/whn_dvl_examples
https://github.com/Field-Robotics-Lab/dave/wiki/DVL-Water-Tracking
https://github.com/Field-Robotics-Lab/dave/wiki/DVL-Seabed-Gradient
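The bottom-tracking part in item 1 reduces to casting a few tilted beams and intersecting them with the terrain. A minimal sketch of that geometry, assuming four Janus-style beams at a 30° tilt and a flat seabed (the tilt angle and beam layout are illustrative assumptions, not values taken from ds_sim):

```python
# Sketch of DVL bottom-tracking geometry: four beams tilted from the
# vehicle's down axis, each cast as a ray against a flat seabed.
# The 30 deg tilt and +-x/+-y layout are assumptions for illustration.
import math

TILT = math.radians(30.0)  # beam tilt from straight down (assumed)

def beam_directions():
    """Unit vectors for 4 beams tilted toward +x, -x, +y, -y (z points down)."""
    s, c = math.sin(TILT), math.cos(TILT)
    return [(s, 0.0, c), (-s, 0.0, c), (0.0, s, c), (0.0, -s, c)]

def slant_range(origin_z, seabed_z, direction):
    """Distance along the beam to a flat horizontal seabed, or None if missed."""
    dz = direction[2]
    if dz <= 0.0:
        return None  # beam points away from the seabed
    return (seabed_z - origin_z) / dz

def altitude(origin_z, seabed_z):
    """Vertical altitude recovered from the beam slant ranges."""
    ranges = [slant_range(origin_z, seabed_z, d) for d in beam_directions()]
    return min(r * math.cos(TILT) for r in ranges)
```

With a real terrain, `slant_range` is exactly the piece that a `physics::RayShape` or a rendering ray query would replace.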

@arjo129 (Member) commented Jan 14, 2022

For the Ray/Beam tracing we could alternatively use ign-rendering's RayQuery to query the depth of various objects.

A discussion on CPU-based ray collisions for the CPU lidar (which may be relevant to us) can be found here:
gazebosim/gz-sensors#26

@chapulina outlines the need to create a Ray shape in ign-physics.
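For reference, a RayQuery-style lookup essentially returns the nearest intersection along a ray together with the id of the object hit. A toy CPU stand-in using spheres shows the shape of that result; gz-rendering's actual RayQuery operates on the render scene's visuals, so the sphere scene here is purely illustrative:

```python
# Toy stand-in for a scene ray query: return the nearest hit along a
# ray as (object_id, distance). Spheres are used only to keep the
# intersection math self-contained.
import math

def ray_sphere(origin, direction, center, radius):
    """Smallest positive t with origin + t*direction on the sphere, or None.

    direction is assumed to be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def closest_hit(origin, direction, spheres):
    """(object_id, distance) of the nearest sphere hit, or None.

    spheres: dict of object_id -> (center, radius)."""
    hits = [(sid, t) for sid, (ctr, rad) in spheres.items()
            if (t := ray_sphere(origin, direction, ctr, rad)) is not None]
    return min(hits, key=lambda h: h[1]) if hits else None
```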

@mabelzhang (Collaborator, Author) commented Jan 14, 2022

Yeah, that ign-sensors#26 is the same ticket as the RaySensor one linked in the OP above. That sensor is the basis for some other sensor in DAVE, I think. The SonarSensor and RaySensor are different enough, though, that we might want to think about which one to use and why. The SonarSensor has some known issues too (linked from a comment in the close-the-gap ticket linked in the OP).

@arjo129 (Member) commented Jan 14, 2022

Good news is DART does have Cone shapes, which I suppose can be abused as rays: https://dartsim.github.io/dart/v6.12.1/de/d3e/classdart_1_1dynamics_1_1ConeShape.html

@chapulina (Contributor)

> For the Ray/Beam tracing we could alternatively use ign-rendering's RayQuery to query the depth of various objects.

+1 to this, I'd recommend going with the rendering approach unless there's an explicit need to use physics. Physics-based ray sensors are notably slower. The only reason I can think of to use them is to avoid the need for a GPU, but Ignition features like EGL allow us to work around that.

@braanan (Collaborator) commented Jan 14, 2022

Thanks for looking into this @mabelzhang. There's no immediate use case for water speed sensing atm, but I suspect that's something we'll want at some point. LRAUV currently only supports water mass speed measurements for a defined bin using the PD13 format, but at some point we'd like to also support full ADCP water speed via PD0. When/if we go down that route, I'd like to integrate the current readings from our existing data interface rather than supporting a new interface and adding dependencies.

It would be nice to use the DVL message types defined in https://github.com/apl-ocean-engineering/hydrographic_msgs/blob/main/acoustic_msgs/msg/Dvl.msg, but that's not a requirement.

@mabelzhang (Collaborator, Author) commented Jan 15, 2022

Re physics vs rendering: I actually ran into some glitches with the collision geometry for heightmaps, to the point that I had to disable the collision and only use the visuals. I didn't dig into it much, but it appeared that the robot was colliding with invisible things when the heightmap was far below it, though the heightmap's upper bounding box intersected the robot. I don't know if that's fixed with the new DEM feature.

+1 for using the hydrographic_msgs types. It would be a good example of early adoption. The messages were recently created as part of a community effort to standardize maritime sensor messages, and they've consulted Open Robotics about propagation and adoption. If we run into problems, we can give them feedback.
On the other hand, if we upstream the DVL, then we might think a bit about dependencies and how stable these message types are going to be, for future maintenance.

@caguero caguero mentioned this issue Jan 19, 2022
39 tasks
@caguero caguero modified the milestones: 2022 M2, 2022 M1, 2022 M3, 2022 M4 Jan 19, 2022
@scpeters (Member)

> • There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?)

yes, it looks like a physics::RayShape to me

@scpeters (Member)

> • Porting rays:
>   There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?) to shoot cones out and check the object of intersection. This is done in ODE, which has a flag that does collision checking but won't enforce contact constraints. To port to Ignition, we need to see if DART supports reporting contact point without enforcing constraints.
>   It is similar to how SonarSensor in Gazebo-classic is implemented, which has not been ported to Ignition. If feasible, we might want to port that upstream, then reuse the code. Another relevant sensor that might come up, RaySensor, has also not been ported.
>   (Thanks @scpeters for the insights. Hope I paraphrased correctly.)

yes, it looks like a physics::RayShape to me

* https://github.com/Field-Robotics-Lab/ds_sim/blob/master/gazebo_src/dsros_dvl.hh#L81

ok, as I look at it more closely, it seems that this plugin was experimenting with both the RaySensor (physics::RayShape) and SonarSensor (3D collision shape with collide_without_contact) approaches. It is currently using the RaySensor approach though there are still some vestiges of the SonarSensor approach:

> For the Ray/Beam tracing we could alternatively use ign-rendering's RayQuery to query the depth of various objects.

> +1 to this, I'd recommend going with the rendering approach unless there's an explicit need to use physics. Physics-based ray sensors are notably slower. The only reason I can think of to use them is to avoid the need for a GPU, but Ignition features like EGL allow us to work around that.

the other significant difference is that ign-rendering's RayQuery will interact with Visual objects, while physics-based ray or collide-without-contact sensors will interact with Collision objects. This is a significant factor to consider if the collision and visual shapes are not identical in a given world.

> Good news is Dart does have Cone shapes which I suppose can be abused as rays https://dartsim.github.io/dart/v6.12.1/de/d3e/classdart_1_1dynamics_1_1ConeShape.html

the collide-without-contact approach can be used with arbitrary 3D shapes, but they are not guaranteed to return the closest point to the sensor. The collision detection algorithm may return a point inside the overlapping volume, so further investigation of the narrow-phase collision algorithms may be needed.
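A toy numeric illustration of that caveat: a beam-tip shape penetrating a flat seabed admits more than one legal "contact" point, and the implied range differs depending on which one the narrow phase reports. The numbers below are made up for illustration, not any engine's actual output:

```python
# Why "a point inside the overlapping volume" matters: two equally
# legal contact points for a sphere overlapping a flat seabed imply
# different sensed ranges. Illustrative geometry only.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

sensor = (0.0, 0.0, 5.0)                # sensor 5 m above the seabed plane z = 0
center, radius = (0.0, 0.0, 0.5), 1.0   # probe sphere dips 0.5 m below the seabed

# Point of the overlap region nearest the sensor (on the seabed surface):
nearest = (0.0, 0.0, 0.0)
# Point deep inside the penetration volume (sphere's lowest point), which
# a narrow-phase algorithm could just as legally report:
deepest = (0.0, 0.0, center[2] - radius)

range_error = dist(sensor, deepest) - dist(sensor, nearest)  # sensed-range bias
```

Here `range_error` is half a meter for a 0.5 m penetration, which is why the choice of narrow-phase algorithm is worth investigating before relying on collide-without-contact ranges.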

@mabelzhang (Collaborator, Author) commented Jan 20, 2022

Thank you Steve for looking into the details!

Re RaySensor: that makes sense. For context, I remember reading in the DAVE wiki that the RaySensor approach is used for more than one custom sensor.

Here's a page from the DAVE wiki making detailed comparisons between RaySensor and SonarSensor for underwater sonars https://github.com/Field-Robotics-Lab/dave/wiki/A-Gazebo-Ray-vs-Gazebo-Sonar-comparison
"We concluded that the ray sensor could be used to calculate beam intensity while the Sonar sensor, which detects mesh collision, could not."

I definitely think porting something like this should involve a few verbal exchanges with the DAVE team, rather than us going in point blank to port it and use alternatives that they might have already looked into and decided were substandard.

@caguero caguero mentioned this issue Apr 27, 2022
15 tasks
@hidmic (Collaborator) commented Apr 30, 2022

(Wrote this yesterday, but forgot to post it). Circling back to this. @arjo129 and I had a quick sync the other day. Current plan of record is to use depth camera frames to sample distances to visuals. We can then try to find the objects within FOV along with their velocities (or try and model acoustic propagation). That'd be enough to replicate the DVL implementation in ds_sim.

I took a quick look at the Ignition Gazebo/Sensors architecture for rendering and custom sensors, in hopes we can build atop it. There's nothing special about custom sensors beyond some SDF conventions. Rendering sensors, on the other hand, do get special treatment. To build a custom, depth-camera-like sensor we would need to extract and re-purpose some of the functionality contained in the Sensors system and the RenderUtil class. Tricky, but doable.

What's still bugging me is how we are going to match points with (objects') velocities efficiently. We could perform ray queries and then reverse-look-up links by visual object IDs (which I presume is possible, but I haven't found a way yet), but I suspect that's going to be an expensive operation. I'll sleep on it.
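One way to frame that matching problem: build a visual-id to link-velocity table once per sensor update, then resolve each beam hit with a constant-time lookup instead of a per-ray reverse search. The shapes below are hypothetical; in Ignition the table would be populated from the ECM rather than from literal tuples:

```python
# Sketch: amortize the visual-to-link reverse lookup by building a
# lookup table once per update. Data shapes are hypothetical, for
# illustration of the access pattern only.

def build_velocity_table(links):
    """links: iterable of (link_velocity, [visual_ids...]) pairs."""
    table = {}
    for velocity, visual_ids in links:
        for vid in visual_ids:
            table[vid] = velocity
    return table

def beam_target_velocity(table, hit_visual_id, default=(0.0, 0.0, 0.0)):
    """Velocity of whatever the beam hit; static world (zero) otherwise."""
    return table.get(hit_visual_id, default)
```

The table costs one pass over the moving links per update, after which each of the handful of beam queries is O(1), so the per-ray cost hidmic is worried about no longer scales with the scene.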

@arjo129 (Member) commented Apr 30, 2022 via email

@braanan (Collaborator) commented May 4, 2022

7 participants