
Advanced Navigation and SLAM

You should be able to complete the exercises on this page within a two-hour lab session.


Introduction

This week you will have a go at implementing closed-loop control by creating a ROS node that controls the velocity of the TurtleBot3 robot whilst using odometry data as a feedback signal. In doing this, you will start to appreciate the limitations of using odometry data for real-time control and so we will then explore another data-stream that can be used to aid navigation further. Finally, you will leverage some existing ROS libraries and TurtleBot3 packages to observe some more advanced navigation methods in action.

Intended Learning Outcomes

By the end of this session you will be able to:

  1. Implement closed-loop (odometry-based) control of a robot through development of a Python publisher-subscriber node.
  2. Explain the limitations of Odometry-based motion control methods and identify additional feedback signals that can be used to enhance this.
  3. Interpret the data that is published to the /scan topic and use existing ROS tools to visualise this.
  4. Use existing ROS tools to implement SLAM and build a map of an environment.
  5. Leverage existing ROS libraries to make a robot navigate an environment autonomously, using the map that you have generated.
  6. Explain how these SLAM and Navigation tools are implemented and what information is required in order to make them work.

Getting Started

  1. Launch your WSL-ROS environment by running the WSL-ROS shortcut in the Windows Start Menu (if you haven't already done so). Once it has loaded, the Windows Terminal app should launch with an Ubuntu terminal instance ready to go (TERMINAL 1).
  2. Also launch VS Code now, following the steps here to ensure it launches correctly for the WSL-ROS environment.

Restoring your Environment

Remember that any work that you do within this WSL-ROS Environment will not be preserved between sessions or across different University computers. You should have run the wsl_ros tool at the end of last week's session to back up your home directory to your University U: Drive. Restore this now, before you start on this week's work, by running the following command in TERMINAL 1:

[TERMINAL 1] $ wsl_ros restore

Launching the Robot Simulation

You should know exactly how to do this by now but, just to reiterate, enter the following into TERMINAL 1:

[TERMINAL 1] $ roslaunch turtlebot3_gazebo turtlebot3_empty_world.launch

...which will launch a Gazebo simulation of a TurtleBot3 Waffle in an empty world:

Pro Tip: Getting bored of entering that long command to launch the robot simulation? You could use the tb3_empty_world alias instead!

Odometry-based Navigation

In last week's session you created a Python node to make your robot move using open-loop control, that is: you published velocity commands to the /cmd_vel topic to make your robot follow a circular motion path. How do you know if your robot actually achieved this though? In a real-world environment, what external factors might result in your robot not achieving its desired trajectory?

Last week you also learnt about Robot Odometry, which is published by the robot to inform us of its position and orientation in the environment. Remember that this is determined by a process called "dead-reckoning", and so it's only really an approximation, but it's a fairly good one in any case, and we can use this as a feedback signal to understand if our robot is moving in the way that we expect it to. We can therefore enhance the /cmd_vel publisher node from last week to also subscribe to the /odom topic, thus improving the navigation capabilities of the node through implementation of basic closed-loop control.
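As a refresher of the publisher-subscriber pattern that this requires, here's a minimal sketch of a node that uses odometry as a feedback signal. This is illustrative only, and not a solution to the exercise below: the node name, the class structure and the 0.5 m target are our own choices, and it assumes the robot starts out facing along the odometry x-axis (as it does in the empty world):

```python
#!/usr/bin/env python
# A minimal closed-loop sketch (illustrative only): drive forwards until
# the odometry data reports that the robot has moved 0.5 m along x.
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

class OdomFeedbackSketch:

    def __init__(self):
        rospy.init_node("odom_feedback_sketch")
        self.x0 = None  # starting x position (captured on first callback)
        self.x = 0.0
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        self.sub = rospy.Subscriber("/odom", Odometry, self.odom_cb)

    def odom_cb(self, msg):
        # The robot's estimated position lives in msg.pose.pose.position:
        self.x = msg.pose.pose.position.x
        if self.x0 is None:
            self.x0 = self.x

    def main(self):
        rate = rospy.Rate(10)  # a 10 Hz control loop
        vel = Twist()
        while not rospy.is_shutdown():
            # Closed-loop control: the velocity command depends on the
            # displacement measured via the /odom feedback signal...
            if self.x0 is not None and abs(self.x - self.x0) >= 0.5:
                vel.linear.x = 0.0  # target reached: stop
            else:
                vel.linear.x = 0.1  # keep moving (m/s)
            self.pub.publish(vel)
            rate.sleep()

if __name__ == "__main__":
    OdomFeedbackSketch().main()
```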

Exercise 1: Make your robot follow a Square motion path

  1. Launch a new terminal instance (TERMINAL 2).

  2. In TERMINAL 2, navigate to the week2_navigation package that you created last week, using roscd.

  3. Navigate to the package src directory and use the Linux touch command to create a new file called move_square.py:

     [TERMINAL 2] $ touch move_square.py
    
  4. Then make this file executable using chmod:

     [TERMINAL 2] $ chmod +x move_square.py
    
  5. Use the VS Code File Explorer to navigate to the move_square.py file and open it up, ready for editing.

  6. We have put together a template to help you with this exercise. Copy and paste the template code into your new move_square.py file to get you started.

  7. Run the code as it is to see what happens.

    FILL IN THE {BLANK}! Something not quite working as expected? We may have missed out something very crucial on the very first line of the code template, can you work out what it is?!

  8. Fix the code and then adapt it to make your robot follow a square motion path of 0.5m x 0.5m dimensions:

    • The robot's odometry will tell you how much the robot has moved and/or rotated, and so you should use this information to achieve the desired motion path.
    • Your Python node will therefore need to subscribe to the /odom topic as well as publish to /cmd_vel.

Advanced features:

  1. Adapt the node further to make the robot automatically stop once it has performed two complete loops.
  2. Make a launch file for this node, launch it using roslaunch and observe the outcome!
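For the second advanced feature, a minimal launch file might look something like the sketch below (this assumes the package and file names used above, and the launch file format that you saw last week):

```xml
<launch>
  <!-- A minimal sketch: run the move_square node from the week2_navigation
       package, printing its output to the terminal -->
  <node pkg="week2_navigation" type="move_square.py" name="move_square" output="screen" />
</launch>
```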

After following a square motion path a few times, your robot should return to the same location that it started from. Is this the case, or is there a difference? If you have observed any discrepancies what might be the reason(s) for these?

Laser Displacement Data and The LiDAR Sensor

Odometry is really important for robot navigation, but it can be subject to drift and accumulated error over time, as you may have observed in the previous exercise (although it's better in simulation than on a real robot!). Fortunately, we have another sensor on-board our robot which provides even richer information about its environment, and we can use this to supplement the odometry information and enhance the robot's navigation capabilities.

Exercise 2: Using RViz to Visualise Robot Data

We're going to place the robot in a more interesting environment now, so you'll need to make sure that you close down the Gazebo simulation that is currently running. The best way to do this is to go to TERMINAL 1 and enter Ctrl+C to close down all the Gazebo processes; it may take 30 seconds or so for the Gazebo window to close. You should also stop your move_square.py node, if that's still running too.

  1. Return to TERMINAL 1 and enter the following to launch a new simulation:

     [TERMINAL 1] $ roslaunch turtlebot3_gazebo turtlebot3_world.launch
    

    A new Gazebo simulation should now be launched with a Turtlebot3 Waffle in a new arena:

  2. In TERMINAL 2 we then need to launch a "Bringup" package for the TurtleBot3, which launches a number of key processes for the robot to make it fully functional:

     [TERMINAL 2] $ roslaunch turtlebot3_bringup turtlebot3_remote.launch
    
  3. In a new terminal instance (TERMINAL 3), enter the following:

     [TERMINAL 3] $ rosrun rviz rviz -d `rospack find turtlebot3_description`/rviz/model.rviz
    

    Pro Tip: The rosrun command above is quite a long one, but there is a tb3_rviz alias that you could use instead, to make life a bit easier!

    On running the command a new window should open:

    This is RViz, which is a ROS tool that allows us to visualise the data being measured by a robot in real-time. The red dots scattered around the robot represent laser displacement data, which is measured by the LiDAR sensor located on the top of the robot (as you discovered in The Robot Overview Section). This data allows the robot to "see" any obstacles in its immediate surroundings.

    The LiDAR sensor spins continuously, sending out laser pulses as it does so, which are reflected back to the sensor by nearby objects. The time taken for each pulse to return is used to determine how far away the reflecting object is. Because the LiDAR sensor spins and performs this process continuously, a full 360° scan of the environment can be generated. In this case (because we are working in simulation) the data represents the objects surrounding the robot in its simulated environment, so you should notice that the red dots produce an outline that resembles (at least partially) the objects in the world that is being simulated in Gazebo.

  4. Next, open up a new terminal instance (TERMINAL 4). Laser displacement data from the LiDAR sensor is published by the robot to the /scan topic. We can use the rostopic info command to find out more about the nodes that are publishing and subscribing to this topic, as well as the type of message that is being published to the /scan topic:

     [TERMINAL 4] $ rostopic info /scan
    
         Type: sensor_msgs/LaserScan
    
         Publishers:
          * /gazebo (http://localhost:#####/)
    
         Subscribers:
          * /rviz_#### (http://localhost:#####/) 
    
  5. As we can see from above, /scan messages are of the sensor_msgs/LaserScan type, and we can find out more about this message type using the rosmsg info command:

     [TERMINAL 4] $ rosmsg info sensor_msgs/LaserScan
    
         std_msgs/Header header
           uint32 seq
           time stamp
           string frame_id
         float32 angle_min
         float32 angle_max
         float32 angle_increment
         float32 time_increment
         float32 scan_time
         float32 range_min
         float32 range_max
         float32[] ranges
         float32[] intensities 
    

Interpreting LaserScan (LiDAR) Data

The LaserScan message is a standardised ROS message (from the sensor_msgs package) that any ROS robot can use to publish data that it obtains from a laser displacement sensor such as the LiDAR on the TurtleBot3. You can find the full definition of the sensor_msgs/LaserScan message here.

ranges is an array of float32 values (we know it's an array of values because of the [] after the data-type). This is the part of the message containing all the actual distance measurements that are being obtained by the LiDAR sensor (in meters).

Consider a simplified example here, taken from a TurtleBot3 robot in a much smaller, fully enclosed environment. In this case, the displacement data from the ranges array is represented by green squares:

As illustrated in the figure, we can associate each data-point within the ranges array to an angular position by using the angle_min, angle_max and angle_increment values that are also provided within the LaserScan message. We can use the rostopic echo command to drill down into these elements of the message specifically and find out what their values are:

$ rostopic echo /scan/angle_min -n1
0.0
---

$ rostopic echo /scan/angle_max -n1
6.28318977356
---

$ rostopic echo /scan/angle_increment -n1
0.0175019223243
---

Notice how we were able to access specific variables within the /scan message using rostopic echo rather than simply printing the whole thing? What does the -n1 option do and why is it appropriate to use this here? What do these values represent? (Compare them with the figure above).

The ranges array contains 360 values in total: a distance measurement at (approximately) every 1° (an angle_increment of 0.0175 radians) around the robot. The first value in the ranges array (ranges[0]) is the distance to the nearest object directly in front of the robot (i.e. at θ = 0 radians, or angle_min). The last value in the ranges array (ranges[359]) is the distance to the nearest object at 359° (i.e. θ = 6.283 radians, or angle_max) from the front of the robot. If, for example, we were to obtain the value at index 65 of the ranges array (i.e. ranges[65]), we know that this would represent the distance to the nearest object at an angle of 65° (1.138 radians) from the front of the robot, measured anti-clockwise.
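To make this mapping concrete, here's a minimal subscriber sketch (the node name and the choice of index 65 are our own, for illustration) that uses the message's own metadata to convert an array index into an angular position:

```python
#!/usr/bin/env python
# A minimal /scan subscriber sketch: relate an index in the `ranges`
# array to an angular position using the message's own metadata.
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(msg):
    idx = 65  # an arbitrary index, for illustration
    angle = msg.angle_min + (idx * msg.angle_increment)  # radians
    rospy.loginfo("ranges[0] (straight ahead) = %.2f m", msg.ranges[0])
    rospy.loginfo("ranges[%d] (at %.3f rad) = %.2f m", idx, angle, msg.ranges[idx])

rospy.init_node("scan_sketch")
rospy.Subscriber("/scan", LaserScan, scan_cb)
rospy.spin()
```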

The LaserScan message also contains the parameters range_min and range_max, which represent the minimum and maximum distances (in meters) that the LiDAR sensor can detect, respectively. You can use the rostopic echo command to report these directly too.

Review what we did with rostopic echo above to find out what these values are for yourself, and be aware of the limitations of the LiDAR sensor by considering these upper and lower limits.

Finally, use the rostopic echo command again to display the ranges portion of the LaserScan topic message. Don't use the -n1 option this time, so that you can see the data changing in the terminal in real-time, but do use the -c option to clear the screen after every message to make things a bit clearer. You might also need to maximise the terminal window so that you can see the full content of the array: with 360 values, the array is quite big, but it is bound by square brackets [] to illustrate where it starts and ends, and there should be a --- at the end of each message too, to help you confirm that you are viewing the entire thing.

The main thing you'll notice here is that there's way too much information, updating far too quickly for it to be of any real use! As you have already seen though, it is the numbers that are flying by here that are represented by red dots in RViz. Flick over to the RViz screen to have another look at this. As you'll no doubt agree, this is a much more useful way to visualise the ranges data, and illustrates how useful RViz can be for interpreting what your robot can see in real-time.

You may also notice several inf values scattered around the array. These represent sensor readings that were greater than the distance specified by range_max, so the sensor couldn't report a distance measurement in these cases.
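When you come to process the ranges array in a node of your own, you will usually want to strip these inf values out first. A small sketch of one way to do this (the function name here is our own):

```python
import numpy as np

def closest_object_distance(scan_msg):
    """Return the distance (in meters) to the nearest detected object in a
    sensor_msgs/LaserScan message, ignoring any 'inf' readings."""
    ranges = np.array(scan_msg.ranges)
    valid = ranges[np.isfinite(ranges)]  # drop the out-of-range (inf) entries
    return valid.min() if valid.size > 0 else None
```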

Stop the rostopic echo command from running in the terminal window by entering Ctrl+C.

Simultaneous Localisation and Mapping (SLAM)

In combination, the data from the LiDAR sensor and the robot's odometry (the robot pose specifically) are really powerful, and allow some very useful conclusions to be made about the robot's environment. One of the key applications of this data is "Simultaneous Localisation and Mapping", or SLAM. This is a tool that is built into ROS, allowing a robot to build up a map of its environment and locate itself within that map at the same time! You will now learn how easy it is to leverage this tool in ROS.

Exercise 3: Building a map of an environment with SLAM

  1. Close down all ROS processes that are running now by entering Ctrl+C in each terminal:

    1. The Gazebo processes in TERMINAL 1.
    2. The Bringup processes in TERMINAL 2.
    3. The RViz processes running in TERMINAL 3.
  2. We're going to launch our robot into another new simulated environment now, which we'll be creating a map of using SLAM! To launch the simulation enter the following command in TERMINAL 1:

     [TERMINAL 1] $ roslaunch com2009_simulations nav_world.launch
    

    The environment that launches should look like this:

  3. Now we will launch SLAM to start building a map of this environment. In TERMINAL 2, launch SLAM as follows:

     [TERMINAL 2] $ roslaunch turtlebot3_slam turtlebot3_slam.launch
    

    This will launch RViz again, and you should be able to see a model of your TurtleBot3 from a top-down view, this time with green dots representing the real-time LiDAR data. The SLAM tools will already have begun processing this data to start building a map of the boundaries that are currently visible to your robot based on its position in the environment.

  4. In TERMINAL 3 launch the turtlebot3_teleop node (you should know how to do this by now). Re-arrange and re-size your windows so that you can see Gazebo, RViz and the turtlebot3_teleop terminal instance all at the same time:

  5. Drive the robot around the arena slowly, using the turtlebot3_teleop node, and observe the map being updated in the RViz window as you do so. Drive the robot around until a full map of the environment has been generated.

  6. As you are doing this you also need to determine the coordinates of the four zones (A, B, C & D) that are printed on the arena floor. Drive your robot into each of these circular zones and stop the robot inside them. As you should remember from last week, we can determine the position (and orientation) of a robot in its environment from its odometry, as published to the /odom topic. In Exercise 2 last week you built an odometry subscriber node, so you could launch this now, in TERMINAL 4, and use it to inform you of your robot's x and y position in the environment when located within each of the zone markers:

     [TERMINAL 4] $ rosrun week2_navigation odom_subscriber.py
    

    Record the zone marker coordinates in a table such as the one below (you will need this information for the next exercise).

     | Zone  | x Position (m) | y Position (m) |
     | :---- | :------------- | :------------- |
     | START | 0.5            | -0.04          |
     | A     |                |                |
     | B     |                |                |
     | C     |                |                |
     | D     |                |                |
  7. Once you have obtained all this data, and you are happy that your robot has built a complete map of the environment, you then need to save this map for later use. We do this using a ROS map_server package. First, stop the robot by pressing S in TERMINAL 3 and then enter Ctrl+C to shutdown the turtlebot3_teleop node.

  8. Then, remaining in TERMINAL 3, navigate to the root of your week2_navigation package directory and create a new folder in it called maps:

     [TERMINAL 3] $ roscd week2_navigation
     [TERMINAL 3] $ mkdir maps
    
  9. Navigate into this new directory:

     [TERMINAL 3] $ cd maps/
    
  10. Then, run the map_saver node from the map_server package to save a copy of your map:

     [TERMINAL 3] $ rosrun map_server map_saver -f {map name}
    

    Replacing {map name} with a name of your choosing.

    This will create two files: a {map name}.pgm and a {map name}.yaml file, both of which contain data related to the map that you have just created (we'll take a look inside the .yaml file at the end of this exercise). The .pgm file contains an Occupancy Grid Map (OGM), which is used for autonomous navigation in ROS. Have a look at the map by launching it in an Image Viewer Application called eog:

     [TERMINAL 3] $ eog {map name}.pgm
    

    A new window should launch containing the map you have just created with SLAM and the map_saver node:

    White regions represent the area that your robot has determined is open space and that it can freely move within. Black regions, on the other hand, represent boundaries or objects that have been detected. Any grey area on the map represents regions that remain unexplored, or that were inaccessible to the robot.

  11. Compare the map generated by SLAM to the real simulated environment. How accurately has the environment been mapped? In a simulated environment this process should be pretty accurate, and the map should represent the simulated environment very well (unless you didn't allow your robot to travel around and see the whole thing!) In a real environment this is often not the case. Have a think about why this might be.

  12. Close the image using the x button on the right-hand-side of the eog window.
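Before moving on, it's also worth having a quick look inside the {map name}.yaml file (with cat, say). This contains the metadata that the map_server package needs in order to interpret the .pgm image. Its contents should look something like the following (the exact values here are illustrative; yours will differ):

```yaml
image: mymap.pgm             # the OGM image file saved alongside this one
resolution: 0.050000         # metres per pixel
origin: [-10.0, -10.0, 0.0]  # the (x, y, yaw) pose of the lower-left pixel
negate: 0                    # 0: white = free, black = occupied
occupied_thresh: 0.65        # pixels darker than this are treated as occupied
free_thresh: 0.196           # pixels lighter than this are treated as free
```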

Summary of SLAM:

See how easy it was to map an environment in the previous exercise? This works just as well on a real robot in a real environment too (see this video demonstration that we put together last year).

This illustrates the power of ROS: having access to tools such as SLAM, which are built into the ROS framework, makes it really quick and easy for a robotics engineer to start developing robotic applications on top of this. Our job was made even easier here since we used some packages that had been pre-made by the manufacturers of our TurtleBot3 Robots to help us launch SLAM with the right configurations for our exact robot. If you were developing a robot yourself, or working with a different type of robot, then you might need to do a bit more work in setting up and tuning the SLAM tools to make it work for your own application.

Advanced Navigation Methods

As mentioned above, the map that you created in the previous exercise can now be used by ROS to autonomously navigate the mapped area. We'll explore this now.

Exercise 4: Navigating an Environment Autonomously

  1. Close down all ROS processes again now so that nothing is running (but leave all the terminal windows open).

  2. In order to perform autonomous navigation we now need to launch a number of ROS libraries and our simulated environment, and also specify some custom parameters, such as the location of our map file. The easiest way to do all of this in one go is to create a launch file.

  3. You may have already created a launch directory in your week2_navigation package, but if you haven't then do this now:

     [TERMINAL 1] $ roscd week2_navigation
     [TERMINAL 1] $ mkdir launch
    
  4. Next, navigate into the launch directory and create a new file called navigation.launch:

     [TERMINAL 1] $ cd launch/
     [TERMINAL 1] $ touch navigation.launch
    
  5. Open up this file in VS Code and copy and paste the following content:

    <!-- Adapted from the Robotis "turtlebot3_navigation" package: 
    https://github.com/ROBOTIS-GIT/turtlebot3/blob/master/turtlebot3_navigation/launch/turtlebot3_navigation.launch
    -->
    <launch>
      <include file="$(find com2009_simulations)/launch/nav_world.launch" />
    
      <!-- To be modified -->
      <arg name="map_file" default=" $(find week2_navigation)/maps/{map name}.yaml"/>
      <arg name="initial_pose_x" default="0.0"/>
      <arg name="initial_pose_y" default="0.0"/>
      
      <!-- Other arguments -->
      <arg name="initial_pose_a" default="0.0"/>
      <arg name="model" default="$(env TURTLEBOT3_MODEL)" doc="model type [burger, waffle, waffle_pi]"/>
      <arg name="move_forward_only" default="false"/>
    
      <!-- Turtlebot3 Bringup -->
      <include file="$(find turtlebot3_bringup)/launch/turtlebot3_remote.launch">
        <arg name="model" value="$(arg model)" />
      </include>
    
      <!-- Map server -->
      <node pkg="map_server" name="map_server" type="map_server" args="$(arg map_file)"/>
    
      <!-- AMCL -->
      <include file="$(find turtlebot3_navigation)/launch/amcl.launch">
        <arg name="initial_pose_x" value="$(arg initial_pose_x)"/>
        <arg name="initial_pose_y" value="$(arg initial_pose_y)"/>
        <arg name="initial_pose_a" value="$(arg initial_pose_a)"/>
      </include>
    
      <!-- move_base -->
      <include file="$(find turtlebot3_navigation)/launch/move_base.launch">
        <arg name="model" value="$(arg model)" />
        <arg name="move_forward_only" value="$(arg move_forward_only)"/>
      </include>
    
      <!-- rviz -->
      <node pkg="rviz" type="rviz" name="rviz" required="true"
        args="-d $(find turtlebot3_navigation)/rviz/turtlebot3_navigation.rviz"/>
    </launch>
  6. Edit the default values in the To be modified section:

    <!-- To be modified -->
    <arg name="map_file" default=" $(find week2_navigation)/maps/{map name}.yaml"/>
    <arg name="initial_pose_x" default="0.0"/>
    <arg name="initial_pose_y" default="0.0"/>
    1. Change {map name} to the name of your map file as created in the previous exercise.
    2. Change the initial_pose_x and initial_pose_y default values. Currently these are both set to "0.0", but they need to be set to match the coordinates of the start zone of the com2009_simulations/nav_world environment (we may have given you a clue about these in the table earlier!)
  7. Once you've made these changes, save the file and then launch it:

     [TERMINAL 1] $ {BLANK} week2_navigation navigation.launch
    

    FILL IN THE {BLANK}! Which ROS command do we use to execute launch files?

  8. RViz and Gazebo should be launched, both windows looking something like this:

    How many nodes were actually launched on our ROS Network by executing this launch file?

    As shown in the figure, in RViz you should see the map that you generated with SLAM earlier. There should be a "heatmap" surrounding your robot and a lot of green arrows scattered all over the place. The green arrows represent the localisation particle cloud, and the fact that these are all scattered across quite a wide area at the moment indicates that there is currently a great deal of uncertainty about the actual robot pose within the environment. Once we start moving around, this will improve and the arrows will start to converge more closely around the robot. The "heatmap" is actually called a "costmap" and this illustrates what the robot perceives of its environment: blue regions representing safe space that it can move around in; red regions representing areas where it could collide with an obstacle.

    Finally, the green dots illustrate the real-time LaserScan data coming from the LiDAR sensor, as we saw earlier. This should be nicely overlaid on top of the boundaries in our map.

  9. To send a navigation goal to our robot we need to issue a request to the move_base action server. We will cover ROS actions later in this course, but for now, all you really need to know is that we can send a navigation goal by publishing a message to a topic on the ROS network. In TERMINAL 2 run rostopic list and filter this to show only topics related to /move_base:

     [TERMINAL 2] $ rostopic list | grep /move_base
    

    This will be quite a long list, but right at the bottom you should see the following item:

     /move_base_simple/goal
    

    We will use this to publish navigation goals to our robot to make it move autonomously using the ROS Navigation Stack.

  10. Running rostopic info on this topic will allow us to find out more about it:

     [TERMINAL 2] $ rostopic info /move_base_simple/goal
     
         Type: geometry_msgs/PoseStamped
    
         Publishers:
         * /rviz (http://localhost:#####/)
    
         Subscribers:
         * /move_base (http://localhost:#####/)
         * /rviz (http://localhost:#####/)
    

    What type of message does this topic use, and which ROS package provides this message type?

  11. Run another command now to find out what the structure of this message is (you did this earlier for the LaserScan messages published to the /scan topic).

  12. Knowing all this information now, we can use the rostopic pub command to issue a navigation goal to our robot, via the /move_base_simple/goal topic. This command works exactly the same way as it did when we published messages to the /cmd_vel topic last week (when we made the robot move at a velocity of our choosing).

    Remember that the rostopic pub command takes the following format:

     rostopic pub {topic_name} {message_type} {data}
    

    ...but to make life easier, we can use the autocomplete functionality in our terminal to help us format the message correctly:

     rostopic pub {topic_name} {message_type}[SPACE][TAB]
    

    Do this now (replacing {topic_name} and {message_type} accordingly) to generate the full message structure that we will use to send the navigation goal to the robot, from the terminal. Don't press Enter yet though, as we will need to edit the message data in order to provide a valid navigation goal.

  13. There are four things in this message that need to be changed before we can publish it:

    1. frame_id: '' should be changed to frame_id: 'map'

    2. The pose.orientation.w value needs to be changed to 1.0:

       pose:
         orientation:
           w: 1.0
      
    3. The pose.position.x and pose.position.y parameters define the location, in the environment, that we want the robot to move to, as determined in the previous exercise:

       pose:
         position:
           x: {desired location in x}
           y: {desired location in y}
      

      Scroll back through the message using the left arrow key on your keyboard (←), and modify the four parameters of the message accordingly, setting your x and y coordinates to make the robot move to any of the four marker zones in the environment.

  14. Once you're happy, hit Enter and watch the robot move on its own to the location that you specified!

    Notice how the green particle cloud arrows very quickly converge around the robot as it moves around? This is because the robot is becoming more certain of its pose (its position and orientation) within the environment, as it compares the boundaries its LiDAR sensor can actually see with the boundaries marked out in the map that you supplied to it.

  15. Have a go at requesting more goals by issuing further commands in the terminal (using rostopic pub) to make the robot move between each of the four zone markers.

Summary:

We have just made a robot move by issuing navigation goal requests to an Action Server on our ROS Network. You will learn more about ROS Actions in Week 5, where you will start to understand how this communication method actually works. You will also learn how to create Action Client Nodes in Python, so that - in theory - everything that you have been doing on the command-line in this exercise could be done programmatically instead.
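As a taster of that programmatic approach, here's a minimal sketch of a node that publishes a single navigation goal to the topic we used above (the node name and coordinates are placeholders of our own; set the position to one of your zone marker coordinates):

```python
#!/usr/bin/env python
# A minimal sketch: publish a single navigation goal to the
# /move_base_simple/goal topic (coordinates are placeholders).
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("goal_publisher_sketch")
pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
rospy.sleep(1.0)  # allow time for the publisher to register on the network

goal = PoseStamped()
goal.header.frame_id = "map"   # the goal is expressed in the map frame
goal.header.stamp = rospy.Time.now()
goal.pose.position.x = 1.0     # replace with a zone marker x coordinate
goal.pose.position.y = 1.0     # replace with a zone marker y coordinate
goal.pose.orientation.w = 1.0  # a valid quaternion (no rotation)
pub.publish(goal)

rospy.sleep(0.5)  # give the message time to go out before the node exits
```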

As you have observed in this exercise, in order to use ROS navigation tools to make a robot move autonomously around an environment there are a few important things that we need to provide to the Navigation Stack:

  1. We need a map of the environment that we want to navigate around. This means that our robot needs to have already explored the environment once beforehand to know what the environment actually looks like. We drove our robot around manually in this case but, often, some sort of basic exploratory behaviour would be required in the first instance so that the robot can safely move around and create a map (using SLAM) without crashing into things! You will learn more about robotic search/exploration strategies in your lectures.
  2. We needed to tell the robot its location within this environment to begin with, so that it could compare the map file that we supplied to it with what it actually observes in the environment. If we didn't know where the robot was to begin with, then some further exploration would be required to start with, in order for the robot to build confidence in its actual pose in the environment, prior to navigating it.
  3. We need to know the coordinates of the locations that we want to move to, before we can navigate to them! This may seem obvious, but it's an extra thing that we need to establish before we are able to navigate autonomously.

Further Reading

The ROS Robot Programming eBook that we have mentioned previously goes into more detail on how SLAM and the autonomous navigation tools that you have just implemented actually work. There is information in here on how these tools have been configured to work with the TurtleBot3 robots specifically. We therefore highly recommend that you download this book and have a read of it. You should read through Chapters 11.3 ("SLAM Application") and 11.4 ("SLAM Theory") in particular, and pay particular attention to the following:

  • What information is required for SLAM? One of these bits of information may be new to you: how does this relate to Odometry, which you do know about? (See Section 11.3.4)
  • Which nodes are active in the SLAM process and what do they do? What topics are published and what type of messages do they use? How does the information flow between the node network?
  • Which SLAM method did we use? What parameters had to be configured for our TurtleBot3 Waffle specifically, and what do all these parameters actually do?
  • What are the 5 steps in the iterative process of pose estimation?

We would also recommend you read Chapter 11.7 ("Navigation Theory") too, which should allow you to then answer the following:

  • What is the algorithm that is used to perform pose estimation?
  • What process is used for trajectory planning?
  • How many nodes do we need to launch to activate the full navigation functionality on our ROS Network? (We asked you this earlier, and the best way to determine it might be to do it experimentally, i.e. using the rosnode command-line tool, perhaps!)

Wrapping Up

This week you have learnt how to develop an odometry-based controller to make your robot follow a square motion path. You will likely have observed some degree of error in this which, as you already know, could be due to the fact that Odometry data is determined by dead-reckoning and is therefore subject to drift and error. Consider how other factors may impact the accuracy of control too: How might the rate at which the odometry data is sampled play a role? How quickly can your robot receive new velocity commands, and how quickly can it respond?

Be aware, too, that we did all of this in simulation here. In fact, in a real-world environment, this type of navigation might be less effective, since things such as measurement noise and calibration errors can also have a considerable impact. You will have the opportunity to experience this first-hand later in this course.

Ultimately then, we have seen a requirement here for additional information to provide more confidence in a robot's location within its environment, so as to enhance its ability to navigate effectively and avoid crashing into things!

You therefore also learnt about the LiDAR sensor today, and the information that can be obtained from it. We explored where this data is published, how we access it, and what it tells us about a robot's immediate environment. We then looked at some of the ways odometry and laser displacement data can be combined to perform advanced robotic functions, such as the mapping of an environment and the subsequent navigation around it. This is all complicated stuff but, using ROS, we can leverage these tools with relative ease, which illustrates just how powerful this framework can be for developing robotic applications quickly and effectively, without having to re-invent the wheel!

Saving your work

Remember, the work you have done in this WSL-ROS environment today will not be preserved for future sessions or across different University machines automatically! To save the work you have done here today you should now run the following script in any idle WSL-ROS Terminal Instance:

$ wsl_ros backup

Navigating This Wiki:
← Week 2: Odometry and Basic Navigation [Previous] | [Next] Week 4: ROS Services →
