Port DAVE DVL to Ignition #145
Comments
For the ray/beam tracing we could alternatively use ign-sensors#26. There is also a discussion on CPU-based ray collisions for the CPU lidar (which may be relevant to us) in which @chapulina outlines the need to create a Ray shape in ign-physics.
Yeah, that ign-sensors#26 is the same ticket as the RaySensor linked in the OP. That sensor is the basis for some other sensor in DAVE, I think. The SonarSensor and RaySensor are different enough, though, that we might want to think about which one to use and why. The SonarSensor has some known issues too (linked from a comment in the close-the-gap ticket linked in the OP).
Good news: DART does have cone shapes, which I suppose can be abused as rays: https://dartsim.github.io/dart/v6.12.1/de/d3e/classdart_1_1dynamics_1_1ConeShape.html
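A minimal, untested sketch of what that could look like in DART: attach a narrow ConeShape to the sensor link as collision-only geometry, one per beam. The function name, dimensions, and transform below are illustrative, and whether registering only a CollisionAspect is enough to keep the cone out of contact constraint resolution is exactly the open question discussed later in this thread.

```cpp
#include <memory>

#include <dart/dart.hpp>
#include <Eigen/Geometry>

// Attach one DVL beam, approximated by a thin cone, to the sensor link.
// beamRadius/beamRange and the fixed transform are placeholder values.
dart::dynamics::ShapeNode *AttachBeamCone(
    const dart::dynamics::BodyNodePtr &_sensorLink,
    double _beamRadius, double _beamRange)
{
  auto cone = std::make_shared<dart::dynamics::ConeShape>(
      _beamRadius, _beamRange);

  // Collision aspect only: no VisualAspect, no DynamicsAspect.
  auto shapeNode =
      _sensorLink->createShapeNodeWith<dart::dynamics::CollisionAspect>(cone);

  // Point the cone down along -Z from the sensor origin; a real DVL would
  // tilt each of the four beams (e.g. ~30 degrees) in a Janus configuration.
  Eigen::Isometry3d tf = Eigen::Isometry3d::Identity();
  tf.translation() = Eigen::Vector3d(0.0, 0.0, -_beamRange / 2.0);
  shapeNode->setRelativeTransform(tf);
  return shapeNode;
}
```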
+1 to this, I'd recommend going with the rendering approach unless there's an explicit need to use physics. Physics-based ray sensors are notably slower. The only reason I can think of to use them is to avoid the need for a GPU, but Ignition features like EGL allow us to work around that.
Thanks for looking into this @mabelzhang. There's no immediate use case for water speed sensing atm, but I suspect that's something we'll want at some point. LRAUV currently only supports water mass speed measurements for a defined bin using the PD13 format, but at some point we'd like to also support full ADCP water speed via PD0. When/if we go down that route I'd like to integrate the current readings from our existing data interface rather than supporting a new interface and adding dependencies. It would be nice to use the DVL message types defined in https://github.com/apl-ocean-engineering/hydrographic_msgs/blob/main/acoustic_msgs/msg/Dvl.msg, but that's not a requirement.
Re physics vs rendering: I actually ran into some glitches with the collision geometry for heightmaps, such that I had to disable the collision and only use the visuals. I didn't dig into it much, but it appeared that the robot was colliding with invisible things even when the heightmap was far below it, though the heightmap's upper bounding box did intersect with the robot. I don't know if that's fixed with the new DEM feature. +1 for using the rendering approach.
yes, it looks like a
ok, as I look at it more closely, it seems that this plugin was experimenting with both the
the other significant difference is that
the collide-without-contact approach can be used with arbitrary 3D shapes, but it is not guaranteed to return the closest point to the sensor. The collision detection algorithm may return a point inside the overlapping volume, so further investigation of the narrow-phase collision algorithms may be needed.
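To make that concrete, here is a rough, untested sketch of querying DART's collision detector directly, outside the constraint solver, so contacts are reported but never turned into constraints. The group setup and contact limit are placeholders; the closest-point caveat above still applies to whatever contacts come back.

```cpp
#include <vector>

#include <dart/dart.hpp>

// Query contacts between a beam cone and the rest of the world without
// letting the constraint solver act on them: we invoke the collision
// detector ourselves and only read the resulting contact points.
std::vector<dart::collision::Contact> BeamContacts(
    const dart::simulation::WorldPtr &_world,
    dart::dynamics::ShapeNode *_beamShapeNode)
{
  auto detector = _world->getConstraintSolver()->getCollisionDetector();

  // One group holding just the beam shape, one holding everything else.
  auto beamGroup = detector->createCollisionGroup(_beamShapeNode);
  auto worldGroup = detector->createCollisionGroup();
  for (std::size_t i = 0; i < _world->getNumSkeletons(); ++i)
    worldGroup->addShapeFramesOf(_world->getSkeleton(i).get());
  worldGroup->removeShapeFrame(_beamShapeNode);

  dart::collision::CollisionOption option;
  option.maxNumContacts = 100u;  // arbitrary cap for this sketch
  dart::collision::CollisionResult result;
  beamGroup->collide(worldGroup.get(), option, &result);

  // These are overlap points, not necessarily the closest point to the
  // sensor along the beam.
  return result.getContacts();
}
```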
Thank you Steve for looking into the details!

Here's a page from the DAVE wiki making detailed comparisons between RaySensor and SonarSensor for underwater sonars: https://github.com/Field-Robotics-Lab/dave/wiki/A-Gazebo-Ray-vs-Gazebo-Sonar-comparison

I definitely think porting something like this should involve a few verbal exchanges with the DAVE team, rather than us going in point blank to port it and using alternatives that they might have already looked into and decided were substandard.
(Wrote this yesterday, but forgot to post it.) Circling back to this. @arjo129 and I had a quick sync the other day. Current plan of record is to use depth camera frames to sample distances to visuals. We can then try to find the objects within FOV along with their velocities (or try and model acoustic propagation). That'd be enough to replicate the DVL implementation in ds_sim.

I took a quick look at the Ignition Gazebo/Sensors architecture for rendering and custom sensors, in hopes we can build atop it. There's nothing special about custom sensors beyond some SDF conventions. Rendering sensors, on the other hand, do get special treatment. To build a custom, depth-camera-like sensor we would need to extract and re-purpose some of the functionality contained in the Sensors system (https://github.com/ignitionrobotics/ign-gazebo/blob/9927a26287cfdb7c584b9fec334c994ae09cac0f/src/systems/sensors/Sensors.cc) and the RenderUtil class (https://github.com/ignitionrobotics/ign-gazebo/blob/9927a26287cfdb7c584b9fec334c994ae09cac0f/src/rendering/RenderUtil.cc). Tricky, but doable.

What's still bugging me is how we are going to match points with (objects') velocities efficiently. We could perform ray queries and then reverse-lookup links by visual object IDs (which I presume is possible but haven't found a way yet), but I suspect that's going to be an expensive operation. I'll sleep on it.
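A rough, untested skeleton of what such a depth-camera-backed custom sensor could look like. The class and its members are made up for illustration, and the exact ign-sensors base-class hooks (Load, SetScene, Update) should be double-checked against the library before relying on this shape.

```cpp
#include <chrono>

#include <ignition/rendering/DepthCamera.hh>
#include <ignition/rendering/Scene.hh>
#include <ignition/sensors/RenderingSensor.hh>
#include <sdf/Sensor.hh>

namespace custom
{
// Depth-camera-backed DVL sensor skeleton. The Sensors system (or a copy of
// the relevant bits of it) would be responsible for creating this sensor,
// handing it the rendering scene, and calling Update() at the right rate.
class DvlSensor : public ignition::sensors::RenderingSensor
{
  // Parse beam geometry, rate, topic, etc. from the <sensor> SDF element.
  public: bool Load(const sdf::Sensor &_sdf) override
  {
    return ignition::sensors::RenderingSensor::Load(_sdf);
  }

  // Called once the rendering scene exists: create the depth camera that
  // covers the DVL beam footprint and add it to the scene.
  public: void SetScene(ignition::rendering::ScenePtr _scene) override
  {
    ignition::sensors::RenderingSensor::SetScene(_scene);
    this->depthCamera = _scene->CreateDepthCamera(this->Name());
    // ... set FOV, clip distances, resolution; attach to the sensor link.
  }

  // Sample the latest depth frame along each beam direction, turn ranges
  // (and, later, per-object velocities) into a DVL message, and publish.
  public: bool Update(const std::chrono::steady_clock::duration &_now) override
  {
    return true;  // placeholder
  }

  private: ignition::rendering::DepthCameraPtr depthCamera{nullptr};
};
}  // namespace custom
```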
As I mentioned, one option would be to apply a "velocity texture" of some form to each object. Then we could retrieve it in a single pass. I think we can solve the other problems first, then revisit this to make it fast.
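A very rough sketch of that idea (untested; assumes a map from visual names to link velocities is maintained elsewhere, e.g. from the ECM): re-color every visual with a flat material that encodes its world velocity, so one extra color pass read alongside the depth frame yields per-pixel velocities.

```cpp
#include <algorithm>
#include <map>
#include <string>

#include <ignition/math/Vector3.hh>
#include <ignition/rendering/Material.hh>
#include <ignition/rendering/Scene.hh>
#include <ignition/rendering/Visual.hh>

// Encode per-object world velocities into flat material colors. Each
// component is mapped from [-maxSpeed, maxSpeed] to [0, 1] so it survives
// the trip through an RGB render target.
void EncodeVelocities(
    const ignition::rendering::ScenePtr &_scene,
    const std::map<std::string, ignition::math::Vector3d> &_velocityByVisual,
    double _maxSpeed)
{
  for (unsigned int i = 0; i < _scene->VisualCount(); ++i)
  {
    auto visual = _scene->VisualByIndex(i);
    if (!visual)
      continue;

    auto it = _velocityByVisual.find(visual->Name());
    if (it == _velocityByVisual.end())
      continue;

    auto normalize = [&](double _v)
    {
      return std::clamp(0.5 + 0.5 * _v / _maxSpeed, 0.0, 1.0);
    };

    auto material = _scene->CreateMaterial();
    material->SetDiffuse(normalize(it->second.X()),
                         normalize(it->second.Y()),
                         normalize(it->second.Z()));
    material->SetEmissive(material->Diffuse());  // lighting-independent
    visual->SetMaterial(material);
  }
}
```

Retrieving it in a "single pass" would then just mean rendering a color frame from the same pose as the depth camera and decoding the RGB values per pixel.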
This ticket outlines the options to help us prioritize how much of the DAVE DVL to port.

The NPS DAVE DVL is based on the WHOI ds_sim DVL. There are two conceptual parts to it:

1. Bottom tracking. This exists in the WHOI ds_sim DVL.

   ds_sim DVL (master branch on DAVE's fork, I think. Double-check with NPS):
   https://github.com/Field-Robotics-Lab/ds_sim/blob/master/gazebo_src/dsros_dvl.cc
   https://github.com/Field-Robotics-Lab/ds_sim/blob/master/src/dsros_dvl_plugin.cc

   There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?) to shoot cones out and check the object of intersection. This is done in ODE, which has a flag that does collision checking but won't enforce contact constraints. To port to Ignition, we need to see if DART supports reporting contact points without enforcing constraints.

   It is similar to how SonarSensor in Gazebo-classic is implemented, which has not been ported to Ignition. If feasible, we might want to port that upstream, then reuse the code. Another relevant sensor that might come up, RaySensor, has also not been ported.

   (Thanks @scpeters for the insights. Hope I paraphrased correctly.)

2. Water tracking and current profiling. This is added in DAVE.

   DAVE DVL (ds_sim DVL plus water tracking and current profiling, nps_dev branch):
   https://github.com/Field-Robotics-Lab/ds_sim/blob/nps_dev/gazebo_src/dsros_dvl.cc
   https://github.com/Field-Robotics-Lab/ds_sim/blob/nps_dev/src/dsros_dvl_plugin.cc

   This version of the DVL further depends on the NPS fork of the uuv_simulator repo, which adds currents (double-check with NPS which branch). That means that to port this DVL, NPS's ocean currents addition to uuv_simulator also needs to be ported, which is not trivial.

If we don't need water tracking, we only need to port bullet 1, the ds_sim version.

Documentation on the DAVE DVL:
https://github.com/Field-Robotics-Lab/dave/wiki/whn_dvl_examples
https://github.com/Field-Robotics-Lab/dave/wiki/DVL-Water-Tracking
https://github.com/Field-Robotics-Lab/dave/wiki/DVL-Seabed-Gradient
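Not from ds_sim, but as background on what the bottom-tracking part ultimately has to produce: each of the four beams yields a range and an along-beam relative velocity (in simulation, for example, the relative velocity of whatever entity the beam's cone intersects, projected onto the beam axis), and the 3D bottom-track velocity then comes from a small least-squares solve. A minimal, illustrative sketch using Eigen:

```cpp
#include <array>

#include <Eigen/Dense>

// Recover the sensor-frame 3D velocity from four along-beam velocity
// measurements in a Janus-style beam arrangement (least-squares solve).
Eigen::Vector3d BottomTrackVelocity(
    const std::array<Eigen::Vector3d, 4> &_beamDirs,  // unit vectors, sensor frame
    const Eigen::Vector4d &_alongBeamVel)             // m/s, one per beam
{
  Eigen::Matrix<double, 4, 3> A;
  for (int i = 0; i < 4; ++i)
    A.row(i) = _beamDirs[i].transpose();

  // Overdetermined (4 equations, 3 unknowns): QR-based least squares.
  return A.colPivHouseholderQr().solve(_alongBeamVel);
}
```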