
GTRI based Sonar Notes and Description


Summary: The GTRI-based Sonar is similar to the ROS Melodic GPU Ray plugin presented in NPS GPU Ray-based FLS. Here we describe its port to ROS Melodic with Gazebo 9 and show its usage.

The GTRI-based Sonar model at https://github.com/Field-Robotics-Lab/gtri_based_sonar is forked from the SyllogismRXS Sonar model at https://github.com/SyllogismRXS/syllo-gazebo, ref. "A Computationally-Efficient 2D Imaging Sonar Model for Underwater Robotics Simulations in Gazebo", Georgia Tech. The fork has been modified as follows:

  • It has been ported to work with ROS Melodic and Gazebo 9.
  • The algorithm for converting the point cloud from the Gazebo Ray module into a Gazebo camera image has been merged into the Gazebo module; the image is no longer calculated and advertised by a separate ROS node.
  • Modules not related to modeling Sonar have been removed in order to simplify the repository.

Example Usage

See https://github.com/Field-Robotics-Lab/DAVE/wiki/Sonar-Models#gtri for usage and example screenshot.

.xacro Interface

Here is the xacro macro for instantiating a named Sonar instance onto a parent link:

  • Macro name: gtri_based_fls
  • Parameters: namespace, suffix, parent_link, topic, *origin (a block parameter)
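
A hypothetical instantiation of this macro might look like the sketch below; the namespace, link name, topic, and origin values are placeholders, and the macro file itself must be included beforehand. See the usage link above for a working example.

```xml
<!-- Hypothetical instantiation; the namespace, link name, topic, and origin
     values are placeholders (the macro file must be included beforehand). -->
<xacro:gtri_based_fls namespace="robot" suffix="" parent_link="base_link" topic="sonar">
  <origin xyz="1.0 0 0" rpy="0 0 0"/>  <!-- passed into the macro's *origin block -->
</xacro:gtri_based_fls>
```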

ROS Interface

Here are the topics published by this Sonar model:

| Message type | Suggested topic name | Description |
| --- | --- | --- |
| sensor_msgs::PointCloud | sonar_cloud | The point cloud calculated from Gazebo's RaySensor interface. |
| sensor_msgs::Image | sonar_image | The Sonar image calculated from the point cloud. |
| sensor_msgs::CameraInfo | sonar_image_camera_info | The height, width, and timestamp of the currently available Sonar image. |
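
A minimal roscpp listener for these topics might look like the following sketch; the topic names follow the suggestions in the table and would normally carry the robot's namespace.

```cpp
// Minimal roscpp listener for the topics above (topic names follow the
// suggestions in the table and would normally carry the robot's namespace).
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/Image.h>

void cloudCallback(const sensor_msgs::PointCloud::ConstPtr& msg)
{
  ROS_INFO("sonar_cloud: %zu points", msg->points.size());
}

void imageCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  ROS_INFO("sonar_image: %ux%u, encoding %s",
           msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_listener");
  ros::NodeHandle nh;
  ros::Subscriber cloud_sub = nh.subscribe("sonar_cloud", 1, cloudCallback);
  ros::Subscriber image_sub = nh.subscribe("sonar_image", 1, imageCallback);
  ros::spin();
  return 0;
}
```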

Composition of sensor_msgs::PointCloud

PointCloud consists of a header, a number of xyz points, and the same number of values, see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/PointCloud.html:

  • std_msgs/Header header
    • uint32 seq
    • time stamp
    • string frame_id
  • geometry_msgs/Point32[] points
    • float32 x
    • float32 y
    • float32 z
  • sensor_msgs/ChannelFloat32[] channels
    • string name
    • float32[] values
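
As a concrete illustration of this layout, here is a minimal sketch that fills a PointCloud with one xyz point per ray and a parallel channel of values. The frame id and channel name are assumptions; the plugin presumably carries its retro values in such a channel.

```cpp
// Illustrative only: fill a sensor_msgs::PointCloud following the layout above,
// with one xyz point per ray and a parallel channel of values (the frame id and
// channel name are assumptions).
#include <vector>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/ChannelFloat32.h>
#include <geometry_msgs/Point32.h>

sensor_msgs::PointCloud makeCloud(const std::vector<geometry_msgs::Point32>& points,
                                  const std::vector<float>& retro)
{
  sensor_msgs::PointCloud cloud;
  cloud.header.stamp = ros::Time::now();        // time the scan was taken
  cloud.header.frame_id = "sonar_link";         // placeholder frame id

  cloud.points = points;                        // geometry_msgs/Point32[] points

  sensor_msgs::ChannelFloat32 channel;          // sensor_msgs/ChannelFloat32[] channels
  channel.name = "intensity";                   // assumed channel name
  channel.values = retro;                       // float32[] values, same length as points
  cloud.channels.push_back(channel);
  return cloud;
}
```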

Composition of sensor_msgs::Image:

Image consists of a header, height, width, encoding information, and uint8[] data, see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Image.html:

  • std_msgs/Header header
    • uint32 seq
    • time stamp
    • string frame_id
  • uint32 height
  • uint32 width
  • string encoding
  • uint8 is_bigendian
  • uint32 step
  • uint8[] data
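
The following sketch shows how these fields typically get filled when converting an OpenCV image with cv_bridge, which is the mechanism described under Implementation below; the frame id and encoding are assumptions.

```cpp
// Sketch of filling a sensor_msgs::Image from an OpenCV image via cv_bridge
// (the frame id and encoding are assumptions).
#include <ros/ros.h>
#include <opencv2/core.hpp>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>

sensor_msgs::ImagePtr toImageMsg(const cv::Mat& img_color, const ros::Time& stamp)
{
  cv_bridge::CvImage bridge;
  bridge.header.stamp = stamp;
  bridge.header.frame_id = "sonar_link";                 // placeholder frame id
  bridge.encoding = sensor_msgs::image_encodings::BGR8;  // three 8-bit channels
  bridge.image = img_color;
  return bridge.toImageMsg();  // fills height, width, step, is_bigendian, and data
}
```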

Composition of sensor_msgs::CameraInfo:

CameraInfo contains metadata about the camera image, see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html.

Implementation

The implementation is divided into two parts, 1) point cloud and 2) camera image:

  • The point cloud is created using averaging functions on range and retro values from physics::MultiRayShape, see gazebo_ros_block_laser.cpp.
  • The point cloud is transformed to the location of the sensor, Gaussian randomization is applied to the x values, and the image is created, blurred, colored, clipped, and rotated.

Point Cloud

The point cloud is calculated from information available in Gazebo's RaySensor interface, https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1sensors_1_1RaySensor.html, see file ImagingSonar.cc at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/master/gtri_based_fls_gazebo/src/ImagingSonar.cc.

This point cloud is identical to the one produced by the gazebo_ros_block_laser ROS plugin, except that the ROS plugin has an optional parameter for adding Gaussian noise.

Here is the basic flow:

Initialization

Function Load reads the requested topic names and sets up the Sonar so that it does not run unless there are topic listeners. It also connects to Gazebo's MultiRayShape class, via the RaySensor, so that function OnNewLaserScans is called back to process scans during runtime.
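
Below is a minimal sketch of this initialization pattern, following the gazebo_ros_block_laser approach; the class, member, and topic names are illustrative placeholders, not the plugin's actual identifiers.

```cpp
// Sketch of the initialization pattern described above, following the
// gazebo_ros_block_laser approach.  Class, member, and topic names are
// illustrative placeholders, not the plugin's actual identifiers.
#include <gazebo/common/Plugin.hh>
#include <gazebo/common/Events.hh>
#include <gazebo/sensors/RaySensor.hh>
#include <gazebo/physics/MultiRayShape.hh>
#include <ros/ros.h>
#include <ros/advertise_options.h>
#include <sensor_msgs/PointCloud.h>
#include <boost/bind.hpp>

namespace gazebo
{
class SonarSketch : public SensorPlugin
{
public:
  void Load(sensors::SensorPtr _parent, sdf::ElementPtr /*_sdf*/) override
  {
    ray_sensor_ = std::dynamic_pointer_cast<sensors::RaySensor>(_parent);
    nh_.reset(new ros::NodeHandle("sonar_sketch"));  // assumes ros::init handled elsewhere

    // Advertise lazily: the sensor stays inactive until someone subscribes.
    ros::AdvertiseOptions ao =
        ros::AdvertiseOptions::create<sensor_msgs::PointCloud>(
            "sonar_cloud", 1,
            boost::bind(&SonarSketch::OnConnect, this),     // first subscriber arrives
            boost::bind(&SonarSketch::OnDisconnect, this),  // last subscriber leaves
            ros::VoidPtr(), NULL);
    cloud_pub_ = nh_->advertise(ao);
    ray_sensor_->SetActive(false);

    // Ask the RaySensor's MultiRayShape for a callback on every new set of rays.
    scan_connection_ = ray_sensor_->LaserShape()->ConnectNewLaserScans(
        boost::bind(&SonarSketch::OnNewLaserScans, this));
  }

private:
  void OnConnect() { ray_sensor_->SetActive(true); }
  void OnDisconnect()
  {
    if (cloud_pub_.getNumSubscribers() == 0)
      ray_sensor_->SetActive(false);
  }
  void OnNewLaserScans();  // builds and publishes the point cloud (see Runtime below)

  sensors::RaySensorPtr ray_sensor_;
  boost::shared_ptr<ros::NodeHandle> nh_;
  ros::Publisher cloud_pub_;
  event::ConnectionPtr scan_connection_;
};
GZ_REGISTER_SENSOR_PLUGIN(SonarSketch)
}  // namespace gazebo
```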

Runtime
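
During runtime, callback OnNewLaserScans reads the range and retro value of each ray from the MultiRayShape and publishes the result as a sensor_msgs::PointCloud on the point cloud topic. A minimal sketch, continuing the illustrative class above and simplified to a single horizontal fan of rays (the actual plugin applies the averaging noted under Implementation):

```cpp
// Continuation of the sketch above.  Each ray's range and retro value are read
// from the MultiRayShape and turned into an xyz point plus an intensity value;
// the geometry is simplified to a single horizontal fan.
#include <cmath>
#include <geometry_msgs/Point32.h>

namespace gazebo
{
void SonarSketch::OnNewLaserScans()
{
  sensor_msgs::PointCloud cloud;
  cloud.header.stamp = ros::Time::now();
  cloud.header.frame_id = "sonar_link";                 // placeholder frame

  sensor_msgs::ChannelFloat32 intensity;
  intensity.name = "intensity";                         // assumed channel name

  physics::MultiRayShapePtr rays = ray_sensor_->LaserShape();
  const int count = ray_sensor_->RangeCount();
  const double angle_min = ray_sensor_->AngleMin().Radian();
  const double angle_step = count > 1
      ? (ray_sensor_->AngleMax().Radian() - angle_min) / (count - 1)
      : 0.0;

  for (int i = 0; i < count; ++i)
  {
    const double r = rays->GetRange(i);                 // range along ray i
    const double yaw = angle_min + i * angle_step;      // horizontal ray angle

    geometry_msgs::Point32 p;
    p.x = r * std::cos(yaw);                            // forward, sensor frame
    p.y = r * std::sin(yaw);                            // lateral
    p.z = 0.0;
    cloud.points.push_back(p);
    intensity.values.push_back(rays->GetRetro(i));      // reflectivity of the hit
  }

  cloud.channels.push_back(intensity);
  cloud_pub_.publish(cloud);
}
}  // namespace gazebo
```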

Sonar Image

The Sonar image is calculated from the point cloud, see file cloud_to_image.cpp at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/master/gtri_based_fls_gazebo/src/cloud_to_image.cpp. File ColorMaps.cpp maps the Sonar image to its rendered colors.

Initialization

Function init_cloud_to_image records the angle across which the image is calculated and the Sonar's topic name for use during runtime.

Runtime

Function publish_cloud_to_image projects the point cloud onto a camera image and then publishes the camera image. Here are the transforms (a code sketch of these steps follows the list):

  • The point cloud is transformed to the location of the Sonar sensor using a tf::TransformListener transform.
  • The x value of each point is randomized by adding Gaussian noise with a mean of 0 and a standard deviation of 0.01.
  • A blank cv::Mat image img is created, sized 600x600 and holding 8-bit unsigned integers.
  • Each point in the point cloud is cast onto image img by drawing a circle of radius 1 whose color is the point's retro value (0 to 255).
  • A median blur with a kernel size of 3 is applied to img (each pixel is replaced by the median of the 3x3 square around it).
  • A new image img_color is created from img, with the same size but holding three 8-bit unsigned channels for Blue-Green-Red; each retro value from 0 to 255 is mapped to a color by the getColor_matlab coloring algorithm defined in file colorMaps.cpp.
  • The edges are filled black.
  • The image is rotated 180 degrees using cv::warpAffine.
  • The ROS sensor_msgs/Image message is created from img_color using cv_bridge::CvImage.
  • The message is published.
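
Here is a condensed OpenCV sketch of these steps. The tf transform and edge-clipping steps are omitted, the projection from sensor coordinates to pixels is an assumption, and OpenCV's built-in COLORMAP_JET stands in for the plugin's getColor_matlab color map; the image size, noise, blur, and rotation follow the list above.

```cpp
// Condensed OpenCV sketch of the steps above.  The tf transform and edge
// clipping are omitted, the pixel projection is an assumption, and COLORMAP_JET
// stands in for the plugin's getColor_matlab color map.
#include <opencv2/imgproc.hpp>
#include <random>
#include <vector>

cv::Mat renderSonarImage(std::vector<cv::Point3f> points,   // cloud in sensor frame
                         const std::vector<float>& retro,   // one value per point, 0..255
                         double range_max)                  // maximum sonar range
{
  std::default_random_engine gen;
  std::normal_distribution<double> noise(0.0, 0.01);        // mean 0, std 0.01

  cv::Mat img = cv::Mat::zeros(600, 600, CV_8UC1);          // blank 600x600, 8-bit

  for (size_t i = 0; i < points.size(); ++i)
  {
    points[i].x += noise(gen);                              // randomize x only

    // Assumed projection: sensor at the bottom-center of the image, x forward.
    int col = static_cast<int>(300.0 + 300.0 * points[i].y / range_max);
    int row = static_cast<int>(600.0 - 600.0 * points[i].x / range_max);
    int value = static_cast<int>(retro[i]);
    cv::circle(img, cv::Point(col, row), 1, cv::Scalar(value), -1);  // radius-1 dot
  }

  cv::Mat blurred;
  cv::medianBlur(img, blurred, 3);                          // 3x3 median filter

  cv::Mat img_color;
  cv::applyColorMap(blurred, img_color, cv::COLORMAP_JET);  // stand-in color map

  // Rotate 180 degrees about the image center.
  cv::Mat rot = cv::getRotationMatrix2D(cv::Point2f(300.0f, 300.0f), 180.0, 1.0);
  cv::Mat rotated;
  cv::warpAffine(img_color, rotated, rot, img_color.size());
  return rotated;
}
```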

IVision

The https://github.com/IvisionLab/sonar-simulation repository requires Ruby and Qt4. The install fails with a syntax error in bootstrap.sh.
