Tom Howard edited this page Apr 6, 2022 · 22 revisions

ROS Actions

You should be able to complete most of the exercises on this page within a two-hour lab session, but you might wish to spend a bit more time on Exercise 4.


Introduction

This week you will learn about a third (and final) communication method available within ROS: Actions. Actions are essentially an advanced version of ROS Services, and you will learn exactly how these two differ and why you might choose to employ an action over a service in a robot application.

Intended Learning Outcomes

By the end of this session you will be able to:

  1. Recognise how ROS Actions differ from ROS Services and explain where this method might be useful in robot applications.
  2. Explain the structure of Action messages and identify the relevant information within them, enabling you to build Action Servers and Clients.
  3. Implement Python Action Client nodes that utilise concurrency and preemption.
  4. Develop Action Server & Client nodes that could be used as the basis for a robotic search strategy.

Getting Started

  1. Launch your WSL-ROS environment by running the WSL-ROS shortcut in the Windows Start Menu.

  2. Restore your work from last week (if you haven't done so already) by running the restore script in the Ubuntu terminal instance that should now be active (TERMINAL 1):

     [TERMINAL 1] $ wsl_ros restore
    
  3. Launch VS Code.

Calling an Action Server

Before we talk about what actions actually are, we're going to dive straight in and see one in action (excuse the pun). As you may remember from the Week 3 session, you actually used a ROS Action to make your robot navigate autonomously in Exercise 4, by calling an action server from the command-line. We will do a similar thing now, in a different application, but this time we'll look at what is going on in a bit more detail.

Exercise 1: Launching an Action Server and calling it from the command-line

We'll play a little game here. We're going to launch our TurtleBot3 Waffle in a mystery environment now, and we're going to do this by launching gazebo headless i.e. Gazebo will be running behind the scenes, but there'll be no Graphical User Interface (GUI) to show us what the environment actually looks like. Then, we'll use an action server to make our robot scan the environment and take pictures for us, to reveal its surroundings!

  1. To launch our TurtleBot3 Waffle in the mystery environment, use the following roslaunch command:

     [TERMINAL 1] $ roslaunch com2009_simulations mystery_world.launch
    

    Messages in the terminal should indicate that something has happened, but that's about all you will see!

  2. Next, open up a new instance of the Windows Terminal application by clicking the "New Tab" button while holding the Shift key (we'll call this WT(B)):

  3. In WT(B) have a look at all the topics that are currently active on the ROS network (you should know exactly how to do this by now!)

  4. Return to the original Windows Terminal instance (the one with the Gazebo processes running, and which we'll now refer to as WT(A)), open up a new tab (WT(A):TERMINAL 2) and launch an action server that we have already prepared for you for this exercise:

     [WT(A):TERMINAL 2] $ roslaunch com2009_examples camera_sweep.launch
    
  5. Now, return to WT(B) and take a look again at all the topics that are active on the ROS network. What do you notice? Anything new there now compared to when you ran the same command before?

    You should in fact notice 5 new items in that list:

     /camera_sweep_action_server/cancel
     /camera_sweep_action_server/feedback
     /camera_sweep_action_server/goal
     /camera_sweep_action_server/result
     /camera_sweep_action_server/status
    

    A ROS action therefore has five messages associated with it. We'll talk about these in a bit more detail later on, but for now, all we need to know is that in order to call an action, we need to send the action server a Goal (which you may remember doing in Week 3).

  6. ROS Actions use topic messages, so we can tap into the ROS network and observe the messages being published to these in exactly the same way as we have done in previous weeks using rostopic echo. In order to monitor some of these messages now, we'll launch a couple more instances of the Windows Terminal, so that we can view a few things simultaneously:

    1. Once again, launch an additional Windows Terminal instance by clicking the "New Tab" button while holding the Shift key (this one will be called WT(C)):

    2. Do this again to launch another Windows Terminal instance, which we'll call WT(D)

    3. You should now have four Windows Terminal applications open! Arrange these so that they are all visible:

  7. In WT(C) run a rostopic echo command to echo the messages being published to the /camera_sweep_action_server/feedback topic:

     [WT(C)] $ rostopic echo /camera_sweep_action_server/feedback
    

    To begin with, you will see the message:

     WARNING: no messages received and simulated time is active.
     Is /clock being published?
    

    Don't worry about this.

  8. Do the same in WT(D), but this time to echo the messages being published to the /result part of the action server message.

  9. Now, going back to WT(B), run the rostopic pub command on the /camera_sweep_action_server/goal topic, using the autocomplete functionality in the terminal to help you format the message correctly:

     [WT(B)] $ rostopic pub /camera_sweep_action_server/goal[SPACE][TAB][TAB]
    

    This should provide you with the following:

     $ rostopic pub /camera_sweep_action_server/goal com2009_msgs/CameraSweepActionGoal "header:
       seq: 0
       stamp:
         secs: 0
         nsecs: 0
       frame_id: ''
     goal_id:
       stamp:
         secs: 0
         nsecs: 0
       id: ''
     goal:
       sweep_angle: 0.0
       image_count: 0"
    
  10. Edit the goal portion of the message by using the left arrow button on your keyboard to scroll back through the message. Modify the sweep_angle and image_count parameters:

    • sweep_angle is the angle (in degrees) that the robot will rotate on the spot
    • image_count is the number of images it will capture from its front-facing camera while it is rotating
  11. Once you have decided on some values, hit Enter to actually publish the message and call the action server. Keep an eye on all four terminal instances. What do you notice happening in each of them?

  12. Now, in WT(B):

    1. Cancel the rostopic pub command by entering Ctrl+C

    2. Once the action completed, a message should have been published in WT(D) (a "result"), informing you of the filesystem location where the action server stored the images just captured by the robot:

       result:
         image_path: "/home/student/myrosdata/action_examples/YYYYMMDD_hhmmss"
       ---
      
    3. Navigate to this directory in WT(B) (using cd) and have a look at the content using ll (a handy alias for the ls command):

       [WT(B)] $ ll
      

      You should see the same number of image files in there as you requested with the image_count parameter.

    4. Launch eog in this directory and click through all the images to reveal your robot's mystery environment:

       [WT(B)] $ eog .
      
  13. To finish off, open another tab in the WT(A) terminal instance (WT(A):TERMINAL 3) and launch the Gazebo client to view the simulation that has, until now, been running headless:

     [WT(A):TERMINAL 3] $ gzclient
    
  14. The actual simulated environment should now be revealed! Finally, close down all the active processes (of which there are now many!):

    1. Close down the eog window and the WT(B) Windows Terminal instance.
    2. Stop the rostopic echo commands that are running in WT(C) and WT(D) by entering Ctrl+C in each of them and then close each of these Windows Terminal instances too.
    3. Then, enter Ctrl+C in WT(A):TERMINAL 3, 2 and 1 to stop the Gazebo GUI, the Action Server and the Gazebo simulation processes respectively, but leave all the tabs in WT(A) open.

Summary:

Phew, that was a long one! Essentially what we have done here is launched an action server and then called it from the command-line using rostopic pub. Hopefully, while the action server was performing the task that we had requested, you also noticed that it was providing us with some real-time feedback on how it was getting on (in WT(C)). In the same way as a ROS Service, it should also have provided us with a result (in WT(D)), once the action had been completed. Feedback is one of the key features that differentiates a ROS Action from a ROS Service, but there are other interesting features too, and we'll explore these in more detail now.

What is a ROS Action?

As you will have observed from the above exercise, a ROS Action actually seems to work a lot like a ROS Service. We've seen that we have a feedback message associated with an Action though, which is indeed different, but this isn't the main differentiating feature. The key difference is that the node that calls a ROS Action (an action "Caller" or "Client") doesn't need to wait until the action is complete before it can move on to something else: it can continue to do other tasks at the same time. Unlike ROS Services then, ROS Actions are Asynchronous, which makes them useful when implementing robotic behaviours that take a longer time to execute, and which an Action Client might need to be updated on throughout the process.

Recall the five messages associated with the action server in the exercise above; they had the following names:

/cancel
/feedback
/goal
/result
/status

The top item there hints at the most important feature of ROS Actions: they can be cancelled (or "preempted"), which we'll learn more about later.

The other thing to note is that - where we used the rosservice command to interrogate the ROS Services that were active on our ROS network last week - Actions use ROS Topics, so we use rostopic commands to interrogate action servers:

  1. rostopic list: to identify the action servers that are available on the network.
  2. rostopic echo: to view the messages being published by a given action server.
  3. rostopic pub: to call an action from the command-line.

The Format of Action Messages

Like Services, action messages have multiple parts to them, and we need to know what format they take in order to call an action server. We don't have a tool like rossrv to do this for Actions though; instead, we have to use rosmsg, or look for the message definition inside the action message package.

We ran rostopic list to identify our action server in the previous exercise, which told us that there was an action server running called /camera_sweep_action_server:

$ rostopic list
[some topics...]
/camera_sweep_action_server/cancel
/camera_sweep_action_server/feedback
/camera_sweep_action_server/goal
/camera_sweep_action_server/result
/camera_sweep_action_server/status
[some more topics...]

If we were to then run the rostopic info command on any of these (goal for instance), this would provide us with the following:

[TERMINAL 1] $ rostopic info /camera_sweep_action_server/goal
Type: com2009_msgs/CameraSweepActionGoal

Publishers: None

Subscribers:
* /camera_sweep_action_server (http://localhost:#####/)

The Type field tells us that the action message belongs to the com2009_msgs package (the same package that contained the service messages we used last week), and we can find out more about the goal message by using rosmsg info:

[TERMINAL 1] $ rosmsg info com2009_msgs/CameraSweepActionGoal
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
actionlib_msgs/GoalID goal_id
  time stamp
  string id
com2009_msgs/CameraSweepGoal goal
  float32 sweep_angle
  int32 image_count

In order to call this action server, we need to send a goal, and rosmsg info has just told us that there are two goal parameters that we must provide (see the bottom section of the output):

com2009_msgs/CameraSweepGoal goal
  float32 sweep_angle
  int32 image_count

For an action, the cancel and status messages are standardised, so the format will always be the same. The feedback, goal and result messages will be designed specifically for any given action server though, and so we need to know about the format of all of these before we attempt to make a call to the action server. A more convenient way to look at all these message portions at once is to navigate to the package directory and look at the actual message definition. We know that the messages belong to the com2009_msgs package, so we can navigate to this using roscd:

[TERMINAL 1] $ roscd com2009_msgs

Actions are always contained within an action folder inside the package directory, so we can then navigate into this folder using cd:

[TERMINAL 1] $ cd action/

Use the ll command again here to view all the action messages within the package. Here you should see the CameraSweep.action file listed. Run cat on this file to view the full message definition:

[TERMINAL 1] $ cat CameraSweep.action
#goal
float32 sweep_angle    # the angular sweep over which to capture images (degrees)
int32 image_count      # the number of images to capture during the sweep
---
#result
string image_path      # The filesystem location of the captured images
---
#feedback
int32 current_image    # the number of images taken
float32 current_angle  # the current angular position of the robot (degrees)

And there we have it. You will learn how we use this information to develop Python Action Server & Client nodes in the following exercises.
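To make the mapping concrete: catkin generates Python message classes from the three sections of CameraSweep.action (the real ones live in com2009_msgs.msg). The sketch below uses plain-Python dataclasses as stand-ins purely to illustrate that structure; it is not the generated code.

```python
from dataclasses import dataclass

# Plain-Python stand-ins for the three message classes that catkin generates
# from CameraSweep.action (illustration only; the real classes live in
# com2009_msgs.msg and are built by the ROS message generator).

@dataclass
class CameraSweepGoal:
    sweep_angle: float = 0.0    # angular sweep over which to capture images (degrees)
    image_count: int = 0        # number of images to capture during the sweep

@dataclass
class CameraSweepResult:
    image_path: str = ""        # filesystem location of the captured images

@dataclass
class CameraSweepFeedback:
    current_image: int = 0      # number of images taken so far
    current_angle: float = 0.0  # current angular position of the robot (degrees)

# A goal is populated in exactly the same way with the real class:
goal = CameraSweepGoal(sweep_angle=90.0, image_count=10)
```

Notice how each section of the .action file becomes its own class, which is why a client only ever constructs and sends the Goal part.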

Concurrent Activity

An Action Server provides feedback messages at regular intervals whilst performing an action and working towards its goal. This is one way that an Action Client can monitor the progress of the action that it has requested. Another way it can do this is by monitoring the status of an action. Both of these features enable concurrency, allowing an action client to work on other things whilst waiting for the requested behaviour to be completed by the action server.

Exercise 2: Building a Python Action Client Node with concurrency

  1. You should only have one Windows Terminal application instance open now, and this should have three WSL-ROS terminal instances sitting idle, waiting for something to do! In TERMINAL 1 create a new package called week5_actions using the catkin_create_pkg tool as you have done previously. This time, define rospy, actionlib and com2009_msgs as dependencies. (Remember to do this in your ~/catkin_ws/src/ folder).

  2. Once again, run catkin build on this and then re-source your environment:

     [TERMINAL 1] $ catkin build week5_actions
     [TERMINAL 1] $ src
    
  3. Navigate to the src folder of this package, create a file called action_client.py (using touch) and set this to be executable (using chmod).

  4. Review the code provided here, and the explainer, then copy and paste the code into your newly created action_client.py file.

  5. Then, in TERMINAL 2, execute the same launch file as before but this time with a couple of additional arguments:

     [TERMINAL 2] $ roslaunch com2009_simulations mystery_world.launch gui:=true camera_search:=true
    

    ... which will launch the Gazebo simulation in GUI mode this time, as well as the camera_sweep_action_server too.

  6. In TERMINAL 1, use rosrun to call the action server with the action_client.py node that you have just created.

    The node we have just created, in its current form, uses a feedback callback function to perform some operations while the action server is working. In this case, it simply prints the feedback from the server to the terminal. That's it though, and the client.wait_for_result() line still essentially just makes the node wait until the action server has finished doing what we asked it to (by using the client.send_goal() function to send a goal). This still therefore looks a lot like a service, so let's modify this now to really build concurrency into the client node.

  7. First, create a copy of your action_client.py node and call it concurrent_action_client.py (you will need to make sure you are still in the src directory of your week5_actions package before you run this command):

     [TERMINAL 1] $ cp action_client.py concurrent_action_client.py
    
  8. We want to use the status message from the action server now, and we can find out a bit more about this as follows:

    1. Use rostopic info camera_sweep_action_server/status to find the message type.
    2. Then, use rosmsg info using the message type you have just identified to tell you all the status codes that might be returned by the action server.

    You should have identified the following states, listed in the status_list portion of the message:

     PENDING=0
     ACTIVE=1
     PREEMPTED=2
     SUCCEEDED=3
     ABORTED=4
     REJECTED=5
     ...
    

    We can set up the action client to monitor these status codes in a while loop, and then perform other operations inside this loop until the action has completed (or has been stopped for another reason).

  9. To do this, replace the client.wait_for_result() line in the concurrent_action_client.py file with the following code:

    rate = rospy.Rate(1)
    i = 1
    print("While we're waiting, let's do our seven-times tables...")
    while client.get_state() < 2:
        print(f"STATE: Current state code is {client.get_state()}")
        print(f"TIMES TABLES: {i} times 7 is {i*7}")
        i += 1
        rate.sleep()
  10. Run the concurrent_action_client.py node and see what happens this time. Essentially, we know that we can carry on doing other things as long as the status code is less than 2 (either PENDING or ACTIVE), otherwise either our goal has been achieved, or something else has happened...
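The `client.get_state() < 2` test works because of how the status codes are numbered: only PENDING (0) and ACTIVE (1) mean the goal is still in flight. Here is a minimal, ROS-free sketch of that decision (the codes are re-declared locally so the snippet runs standalone; in a real node they come from actionlib_msgs.msg.GoalStatus):

```python
# Status codes as numbered in actionlib_msgs/GoalStatus (re-declared here
# so this sketch runs without ROS installed).
PENDING, ACTIVE, PREEMPTED, SUCCEEDED, ABORTED, REJECTED = range(6)

def goal_in_progress(state):
    """True while the action server is still working towards the goal."""
    return state in (PENDING, ACTIVE)  # equivalent to: state < 2
```

The concurrent client's while loop keeps doing other work only for as long as this condition holds for `client.get_state()`.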

Cancelling (or Preempting) an Action

Actions are extremely useful for invoking robotic behaviours or processes that might take a while to complete, but what if something goes wrong, or if we just change our mind and want to stop an action before the goal has been satisfied? The ability to preempt an action is one of the things that makes them so useful.

Exercise 3: Building a Preemptive Python Action Client Node

  1. In TERMINAL 1 you should still be located within the src folder of your week5_actions package. If not, then go back there now! Create a new file called preemptive_action_client.py and make this executable.
  2. Have a look at the code here, then copy and paste it into the preemptive_action_client.py node that you have just created. Here, we have built an action client that will cancel the call to the action server if we enter Ctrl+C into the terminal. This is useful, because otherwise the action server would continue to run, even if we had terminated the client. A lot of the code is similar to the previous Action Client code, but we have built a class structure around this now for more flexibility. Have a look at the explainer and make sure that you understand how it all works.
  3. Run this using rosrun, let the server take a couple of images and then enter Ctrl+C to observe the goal cancelling in action.
  4. We can also cancel a goal conditionally, which may also be useful if, say, too much time has elapsed since the call was made, or the caller has been made aware of something else that has happened in the meantime (perhaps we're running out of storage space on the robot and can't save any more images!). This is all achieved using the cancel_goal() method.
    • Have a go now at introducing a conditional call to the cancel_goal() method once a total of 5 images have been captured.
    • You could do this inside the feedback_callback() class method.
    • You could use the current_image attribute from the CameraSweepFeedback message to trigger this.
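One way to structure the conditional cancellation is to keep the threshold decision separate from the actionlib call, so the logic is easy to reason about on its own. A minimal sketch (only cancel_goal() and the current_image feedback field come from the source; everything else here, including the callback shape, is illustrative):

```python
IMAGE_LIMIT = 5  # cancel the goal once this many images have been captured

def should_cancel(current_image, limit=IMAGE_LIMIT):
    """Decide whether to preempt, based on the current_image feedback value."""
    return current_image >= limit

# Inside the client class, the feedback callback might then look like this
# (self.client being the actionlib.SimpleActionClient instance):
#
#     def feedback_callback(self, feedback):
#         print(f"images captured so far: {feedback.current_image}")
#         if should_cancel(feedback.current_image):
#             self.client.cancel_goal()
```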

A Summary of ROS Actions

ROS Actions work a lot like ROS Services, but they have the following key differences:

  1. They are asynchronous: a client can do other things while it waits for an action to complete.
  2. They can be cancelled (or preempted): If something is taking too long, or if something else has happened, then an Action Client can cancel an Action whenever it needs to.
  3. They provide feedback to the client, so that it can monitor what is happening and act accordingly (i.e. preempt an action, if necessary).

This mechanism is therefore useful for robotic operations that may take a long time to execute, or where intervention might be necessary.

Creating Action Servers in Python

Note: Cancel any active processes that you may have running in any of your terminal instances before moving on.

So far we have looked at how to call an action server, but what about if we actually want to set up our own? We have been working with an action server in the previous exercises, but so far we haven't really considered how it actually works. First, let's do some detective work... We launched the Action Server using roslaunch in Exercise 1:

roslaunch com2009_examples camera_sweep.launch

What does this tell us about the package that the action server node is located in? Where, in the package directory, is this node likely to be located? How might we find out the name of the Python node from the camera_sweep.launch file?

There are plenty of clues there, so once you have identified the name and the location of the source code, open it up in VS Code and have a look through it to see how it all works.

Don't worry too much about all the content associated with obtaining and manipulating camera images in there, we'll learn more about this next week. Instead, focus on the general overall structure of the code and the way that the action server is implemented.

As a starting point, consider the way in which the action server is initialised and the way a callback function is defined to encapsulate all the code that will be executed when the action is called:

self.actionserver = actionlib.SimpleActionServer("/camera_sweep_action_server", 
    CameraSweepAction, self.action_server_launcher, auto_start=False)
self.actionserver.start()

Look at how a /cmd_vel publisher and an /odom subscriber are defined in external classes:

self.robot_controller = Tb3Move()
self.robot_odom = Tb3Odometry()

These classes are imported (at the start of the code) from an external tb3.py module that lives in the same directory as the action server itself:

from tb3 import Tb3Move, Tb3Odometry

We do this to simplify the process of obtaining odometry data and controlling the robot, whilst keeping the actual action server code itself more concise. Have a look at the tb3.py module to discover exactly how these Python classes work.

Look inside the action server callback function to see how the camera sweep operation is performed once the action has been called:

def action_server_launcher(self, goal):
    ...

Consider the error checking that is performed on the goal input variables, and how the call to the action server is aborted should any of these goal requests be invalid:

success = True
if goal.sweep_angle <= 0 or goal.sweep_angle > 180:
    print("Invalid sweep_angle.  Select a value between 1 and 180 degrees.")
    success = False
    ...

if not success:
    self.actionserver.set_aborted()
    return

Consider how preemption is implemented in the server, and how the Action is stopped on receipt of a preempt request:

if self.actionserver.is_preempt_requested():
    ...

Also have a look at the way a feedback message is constructed and published by the server:

self.feedback.current_image = i
self.feedback.current_angle = abs(self.robot_odom.yaw)
self.actionserver.publish_feedback(self.feedback)

Finally, review the way in which a result message is constructed, how the server registers that the action has been completed successfully, and how the result message is then published to the caller.

self.result.image_path = self.base_image_path
self.actionserver.set_succeeded(self.result)
self.robot_controller.stop()
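When you come to write your own server, the goal-validation step is worth lifting out into its own function. The sketch below mirrors the sweep_angle bounds shown in the snippet above; the image_count check is our assumption (the real server's remaining checks are elided in the excerpt):

```python
def validate_goal(sweep_angle, image_count):
    """Mirror the server's input checks; return (ok, reason)."""
    if sweep_angle <= 0 or sweep_angle > 180:
        return False, "Invalid sweep_angle. Select a value between 1 and 180 degrees."
    if image_count <= 0:  # assumed check, not shown in the excerpt above
        return False, "image_count must be greater than zero."
    return True, ""

# In the action server callback, abort before doing any work if the goal
# fails validation:
#
#     ok, reason = validate_goal(goal.sweep_angle, goal.image_count)
#     if not ok:
#         print(reason)
#         self.actionserver.set_aborted()
#         return
```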

Exercise 4: Developing an "Obstacle Avoidance" behaviour using an Action Server

Knowing what you now do about ROS Actions, do you think the Service Server/Client systems that we developed last week were actually appropriate use cases for ROS Services? Probably not! In fact, Action Server/Client methods would have probably been more appropriate!

You are now going to construct your own Action Server and Client nodes to implement a more effective obstacle avoidance behaviour that could form the basis of an effective search strategy.

Step 1

In TERMINAL 1 navigate to the src folder of your week5_actions package, create two Python scripts:

  1. search_server.py
  2. search_client.py

... and make sure these files are both executable.

Step 2

Start by programming your search_server.py action server node:

  1. The action server should make the robot move forwards until an obstacle is detected ahead of it.

  2. Similarly to the Service Server that you created last week, your Action Server here should be configured to accept two goal parameters:

    1. The speed (in m/s) at which the robot should move forwards when the action server is called. Consider doing some error checking on this to make sure a velocity request is less than the maximum speed that the robot's wheel motors can actually achieve (0.26 m/s)!
    2. The distance (in meters) at which the robot should stop ahead of any objects or boundary walls that it has detected in front of it. To do this you will need to subscribe to the /scan topic. Be aware that an object won't always be directly in front of your robot, so you may need to monitor a range of LaserScan data points within the ranges array to make the collision avoidance effective (recall the LaserScan callback example from last week and also have a look at a further class within the com2009_examples tb3.py module that might help you with this).
  3. Whilst your server performs its task it should provide the following feedback to the Action Caller:

    1. The distance travelled (in meters) since the current action was initiated.

      To do this you will need to subscribe to the /odom topic. Remember from earlier that there is a Tb3Odometry() class within the com2009_examples/tb3.py module that might help you with obtaining this data.

      Remember also that your robot's orientation shouldn't change over the course of a single action call, only its linear.x and linear.y positions should vary. Bear in mind however that the robot won't necessarily be moving along the X or Y axis, so you will need to consider the total distance travelled in the X-Y plane. You should have done this in the Week 3 move_square exercise, so refer to this if you need a reminder.

  4. Finally, on completion of the action, your server should provide the following result:

    1. The total distance travelled (in meters) over the course of the action.
    2. The distance to the nearest obstacle up ahead, as detected by the LiDAR sensor (this should be the same, or very close to, the distance that was provided by the Action Client in the goal).
    3. The angle at which the nearest obstacle is situated in front of the robot.
  5. An action message has been created for you for this exercise, available in the com2009_msgs package, called Search.action. Navigate to the action folder of the com2009_msgs package directory to find out everything you need to know about this action message in order to develop your Action Server (and Client) nodes appropriately (or use rosmsg info ... in the terminal).

  6. Use the Camera Sweep Action Server that you reviewed earlier as a template for building your server node. To begin with, copy the contents of the camera_sweep.py node into your search_server.py file then modify it to your own needs by taking a similar approach:

    1. Use the same class structure and the same actionlib calls, modified to your own requirements.

    2. Use the tb3.py module from the com2009_examples package as discussed above. To use this, you will need to copy the module across to your own week5_actions/src folder though, so that your search_server.py node can import it. In TERMINAL 1 you should already be located in the src folder of your week5_actions package so, from there, copy the .py file as follows:

       [TERMINAL 1] $ cp ~/catkin_ws/src/COM2009/com2009_examples/src/tb3.py ./
      
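Two pure helpers you might build the server around are sketched below. This is a sketch under stated assumptions: the function names are ours, and the front-arc logic assumes the common LaserScan convention where ranges[0] points directly ahead and the array wraps anti-clockwise in one-degree increments (check your robot's /scan message before relying on this):

```python
import math

def distance_travelled(x0, y0, x, y):
    """Total displacement in the X-Y plane since the action started."""
    return math.sqrt((x - x0) ** 2 + (y - y0) ** 2)

def front_arc_min(ranges, arc_deg=20):
    """Closest reading (and its angle) within +/- arc_deg of straight ahead.

    Assumes ranges[0] points straight ahead with one reading per degree,
    wrapping anti-clockwise. Returns (range, angle), with the angle in
    degrees in the interval [-arc_deg, arc_deg].
    """
    n = len(ranges)
    best_range, best_angle = float("inf"), 0
    for angle in range(-arc_deg, arc_deg + 1):
        r = ranges[angle % n]  # negative angles wrap to the end of the array
        if r < best_range:
            best_range, best_angle = r, angle
    return best_range, best_angle
```

The first helper gives you the feedback and result distance values; the second gives you both the stopping condition and the "angle of the nearest obstacle" required in the result.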

Step 3

Next, program your search_client.py action client:

  1. Use the Client node that you developed in Exercise 3 as a guide to help you with this: copy the code across from your preemptive_action_client.py node into your new search_client.py file and start editing from there.
  2. The client needs to issue a correctly formatted goal to the server.
  3. The client should be programmed to monitor the feedback data from the Server. If it detects from this that the robot has travelled a distance greater than 2 meters without coming into contact with an obstacle, then it should cancel the current action call using the cancel_goal() actionlib method.
  4. Be sure to return to your search_server.py server node to also program the correct behaviours on receipt of a preempt request too.
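The server's main loop needs to check for preemption on every iteration. Here is a minimal, ROS-free sketch of that control flow; the step and preempt checks are injected as callables so the loop itself stays testable (in the real node they would be your /cmd_vel publishing plus obstacle check, and self.actionserver.is_preempt_requested()):

```python
def run_until_done(step, preempt_requested, max_steps=1000):
    """Drive the action forward until it finishes or is preempted.

    step() performs one iteration of the behaviour and returns True once
    the goal condition (e.g. obstacle detected) has been met.
    preempt_requested() mirrors actionserver.is_preempt_requested().
    Returns "preempted", "succeeded" or "timed_out".
    """
    for _ in range(max_steps):
        if preempt_requested():
            # Real server: stop the robot, then actionserver.set_preempted()
            return "preempted"
        if step():
            # Real server: publish the result, then actionserver.set_succeeded()
            return "succeeded"
    return "timed_out"
```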

Testing

As you develop your action server/client nodes for this exercise there are two simulated environments that you might wish to use. The first is similar to the mystery world that we were working in earlier:

$ roslaunch turtlebot3_gazebo turtlebot3_stage_2.launch

Or, for something a bit more challenging, you might want to try this one instead:

$ roslaunch turtlebot3_gazebo turtlebot3_stage_4.launch

Once you have everything in place, have a go at launching the action server and making successive calls to it using your client node. Reorient your robot between successive calls (using the turtlebot3_teleop node), to make sure that it is robustly stopping in front of obstacles when approaching them from a range of different angles.

Make sure that your preemption functionality works correctly too, so that the robot never moves any further than 2 meters during a given action call.

Some advanced exercises (if you're feeling adventurous!)

Want to do more with the ROS skills that you have now developed?! Consider the following advanced exercises that you could try out now that you know how to use ROS Actions!

Note: We know that you have done a lot this week already, and these are really just suggestions for more advanced things that you may want to explore in your own time, or to help with the further work that you will do on the COM2009 course.

Advanced Exercise 1: Implementing a Search strategy

What you developed in the previous exercise could be used as the basis for an effective robot search strategy. Up to now, your Action Client node should have the capability to call your Search.action server to make the robot move forwards by 2 meters, or until it reaches an obstacle (whichever occurs first), but you could enhance this further:

  • Between action calls, the client could make the robot turn on the spot to face a different direction and then issue a further action call to make the robot move forwards in this new direction until an object is detected again.
  • The turning process could be done at random, or it could be informed by the result of the last action call, i.e.: if (on completion) the server has informed the client that it detected an object at an angle of, say, 10° anti-clockwise from the front of the robot, then the client might then decide to turn the robot clockwise in an attempt to turn away from the object before issuing its next action call to make the robot move forwards again.
  • Repeating this process over and over again, the robot would (somewhat randomly) travel around its environment safely, stopping before it crashes into any obstacles and reorienting itself every time it stops moving forwards, which is effectively an implementation of a basic robot search strategy! (Imagine SLAM was running during this process - your robot could be building up a map of its environment in the background as it slowly explored every part of it!)

Advanced Exercise 2: Autonomous Navigation using waypoint markers

In the Week 3 session you used SLAM to construct a map of an environment and then issued navigation requests to the move_base action server (via the command-line) to make your robot move to a zone marker, based on coordinates that you had established beforehand. Now that you know how to build Action Client Nodes in Python you could return to your week2_navigation package and build a new node that makes the robot move sequentially between each zone marker programmatically.

  • Your node could cycle through the coordinates of all four of the zone markers (or "waypoints") that you established whilst using SLAM to build a map of the environment.
  • Your node could monitor the status of the move_base action call to know when the robot has reached a zone marker, so that it knows when to issue a further action call to move to the next.
  • You could refer to the launch file that you created in the Week 3 session to launch all the Navigation processes that need to be running in order to enable and configure the ROS Navigation Stack appropriately for the TurtleBot3 robot.
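Two ROS-free pieces of such a waypoint node are sketched below. The waypoint coordinates are placeholders (use the ones you recorded in Week 3), and sending each goal would be done with an actionlib client for move_base, which is not shown here:

```python
import math

# Placeholder zone-marker coordinates (x, y, yaw in radians) -- substitute
# the waypoints you established while building your SLAM map.
WAYPOINTS = [(1.0, 0.5, 0.0), (2.0, -1.0, math.pi / 2),
             (-1.5, -1.5, math.pi), (-0.5, 1.0, -math.pi / 2)]

def yaw_to_quaternion(yaw):
    """Convert a heading about the Z axis into (x, y, z, w) quaternion form,
    as required by the orientation field of a geometry_msgs/Pose."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def next_waypoint(current_index, waypoints=WAYPOINTS):
    """Cycle through the waypoint list indefinitely."""
    return (current_index + 1) % len(waypoints)
```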

Wrapping Up

In this week's session you have learnt how ROS Actions work and why they are useful. You have learnt how to develop Action Client Nodes in Python which can perform other tasks concurrently to the action they have requested, and can also cancel the requested action as required. You have learnt how to use standard ROS tools to interrogate the topic messages used by an action server, allowing you to build clients to call them, and to also allow you to build standalone action servers yourself using bespoke Action messages. Finally, you have harnessed this communication method to implement a behaviour that could be used as the basis for a genuine robotic search strategy.

Topics, Services or Actions: Which to Choose?

You should now have developed a good understanding of the three communication methods that are available within ROS to facilitate communication between ROS Nodes. Through this course you have gained practical experience in using all three of these, but you may still be wondering how to select the appropriate one for a certain robot task...

This page on the ROS.org website summarises all of this very nicely (and briefly), so you should have a read through this to make sure you know what's what. In summary though:

  • Topics: Are most appropriate for broadcasting continuous data-streams such as sensor data and robot state information, and for publishing data that is likely to be required by a range of Nodes across a ROS network.
  • Services: Are most appropriate for very short procedures like quick calculations (inverse kinematics etc.) and performing short discrete actions that are unlikely to go wrong or will not need intervention (e.g. turning on a warning LED when a battery is low).
  • Actions: Are most appropriate for longer running tasks (like moving a robot), for longer processing calculations (processing the data from a camera stream) or for operations where we might need to change our mind and do something different or cancel an invoked behaviour part way through.

Saving your work

Remember to save the work you have done here today by running the wsl_ros backup script in any idle WSL-ROS Terminal Instance now:

$ wsl_ros backup

Navigating This Wiki:
← Week 4: ROS Services [Previous] | [Next] Week 6: Cameras, Machine Vision & OpenCV →
