EXPROBLAB_Assignment #2

Code Documentation

Introduction

The assignment builds on the architecture of the previous assignment (linked here) to make the robot move in an environment and exhibit a surveillance behavior. The environment is composed of locations that are connected in a certain way, depending on the information given by the environment itself: seven ArUco markers are deployed in the starting room the robot spawns into. The robot has to detect them and retrieve the information needed to build the ontology. The goal is to adopt the Finite State Machine (FSM), implemented with SMACH in the first assignment, that allows the robot to choose the behavior to adopt depending on the situation. The robot is also provided with a battery, periodically checked, that needs to be recharged after some time of use.
The locations are divided into:

  • room: location with one door;
  • corridor: location with at least two doors.

The entities that connect two locations are called doors and the entity that moves in the environment is the robot, called rosbot.
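As a reference for how SMACH structures such a behavior, below is a minimal, self-contained skeleton of a state machine; the state and outcome names are illustrative, not the ones used in this package.

#!/usr/bin/env python
# Minimal SMACH skeleton (illustrative only: state and outcome names
# are hypothetical, not those of this package).
import rospy
import smach

class Reason(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['battery_low', 'target_chosen'])

    def execute(self, userdata):
        rospy.sleep(1)          # stand-in for querying the ontology
        return 'target_chosen'  # choose the next location to visit

if __name__ == '__main__':
    rospy.init_node('fsm_sketch')
    sm = smach.StateMachine(outcomes=['done'])
    with sm:
        smach.StateMachine.add('REASON', Reason(),
                               transitions={'battery_low': 'REASON',
                                            'target_chosen': 'done'})
    sm.execute()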

Folder organization

This repository contains a ROS package named EXPROBLAB_Assignment2 that includes the following resources.

  • action/: It contains the definition of each action server used by this software.
    • Plan.action: It defines the goal, feedback and results concerning motion planning.
    • Control.action: It defines the goal, feedback and results concerning motion control.
  • docs/: It contains the HTML documentation of the package.
  • images/: It contains the diagrams and images shown in this README file.
  • launch/: Contains the configuration to launch this package.
  • meshes/: It contains the meshes for the robot model.
  • msg/: It contains the messages exchanged through ROS topics.
    • Point.msg: It is the message representing a 2D point.
    • RoomConnection.msg: It is the message that describes how the information coming from the ArUco markers needs to be processed (plausible contents of both messages are sketched after this list).
  • param/: It contains the parameters for the robot and for the mapping and moving nodes.
  • scripts/: It contains the implementation of each software component.
  • src/: It contains the C++ files.
    • detect_marker.cpp: It is the node responsible for making the robot arm move to detect the ArUco markers.
    • detect_marker.h: It is the definition of the class for the marker detection.
    • marker_server.cpp: It is the server, already provided, that retrieves the information associated with each ArUco marker.
  • srv/: It contains the .srv service definitions.
  • topology/: It contains the starting ontology used in the package, which is modified in the initial state to build the new environment for the assignment.
  • urdf/: It contains the URDF of the robot and of its arm, in a dedicated folder.
  • utilities/EXPROBLAB_Assignment2/: It contains auxiliary Python files, which are exploited by the files in the scripts folder.
    • name_mapper.py: It contains the names of the nodes, topics, servers, actions and parameters used in this architecture.
  • worlds/: It contains the world definition used.
  • CMakeLists.txt: File to configure this package.
  • README.md: The README of this repository.
  • package.xml: File to configure this package.
  • setup.py: File to import python modules from the utilities folder into the files in the script folder.
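For reference, plausible contents of the two messages are sketched below; the field names are assumptions based on the descriptions above, not copied from the repository.

# Point.msg — a 2D point (assumed fields)
float32 x
float32 y

# RoomConnection.msg — one connection decoded from a marker (assumed fields)
string connected_to   # the room reachable from the current one
string through_door   # the door connecting the two rooms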

Software Architecture

For the Software Architecture used and its components, please refer to the previous assignment's repository:

The architecture was only re-adapted. In particular, the only component that has been modified is the controller:

  • from the plan returned by the planner, only the last point (the target) is taken and sent to move_base, which drives the robot there, as the sketch after this list shows;
  • the controller finishes its execution as soon as the robot reaches the goal.
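Below is a minimal sketch of this re-adapted behavior, assuming a standard actionlib client for move_base; the function name and the coordinate handling are illustrative, not the package's actual code.

#!/usr/bin/env python
# Sketch of the re-adapted controller: forward only the last planned
# point to move_base and wait for the result (names are illustrative).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def drive_to(target_x, target_y):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = target_x
    goal.target_pose.pose.position.y = target_y
    goal.target_pose.pose.orientation.w = 1.0  # keep a valid quaternion
    client.send_goal(goal)
    client.wait_for_result()  # return only once the goal is reached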

Temporal Diagram

The temporal diagram has been slightly modified to account for the newly introduced nodes. The resultant one is:

The changes are related to:

  • detect_marker with ArUco: this is the initial node of the architecture. It runs first and, as soon as all the markers are detected, it publishes the list on the /topic_list topic.
  • FSM: the FSM subscribes to that topic, waits for the incoming information and processes it when available. The init_state sends a request to the server and builds the ontology according to the information retrieved:
    • id of the room;
    • connection to which room;
    • the door which connects the two rooms.

Thanks to a for loop, all this information is stored in the aRMOR server, thus building the ontology (this loop is sketched at the end of this section).

  • Controller server: it sends a request, through an Action Client, to the move_base Action Server to move the robot. It waits until the robot has reached the location chosen by the planner.
  • SLAM: it works in collaboration with move_base; each time the robot moves, the map is updated and saved on the server. In this way, after many iterations, the paths computed by move_base become more and more accurate.

The surveillance policy adopted and the battery management are the same as in the previous assignment.
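As a reference, the ontology-building loop mentioned above could look like the sketch below, assuming the armor_py_api client used in the first assignment; the property name and the shape of the received list are assumptions.

# Sketch of the init_state loop (assumptions: armor_py_api client, 'hasDoor'
# property, and (room, connected_room, door) triples decoded from /topic_list).
from armor_api.armor_client import ArmorClient

client = ArmorClient('assignment', 'ontoRef')

# Placeholder for the list received on /topic_list.
connections = [('R1', 'C1', 'D1'), ('R2', 'C1', 'D2')]

for room, connected_room, door in connections:
    client.manipulation.add_objectprop_to_ind('hasDoor', room, door)
    client.manipulation.add_objectprop_to_ind('hasDoor', connected_room, door)

client.utils.apply_buffered_changes()  # commit the additions
client.utils.sync_buffered_reasoner()  # let the reasoner update the ontology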

Node for marker detection

The marker detection is performed using the OpenCV library for the ArUco markers.
The node, detect_marker.cpp, is composed of a class responsible for detecting the markers. It uses the RGB-D camera the robot is provided with. The node prints the detected IDs on the terminal and keeps running until the end of the entire program.
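The detection itself boils down to a single OpenCV call; a Python equivalent of what the C++ node does is sketched below. The dictionary choice and the pre-4.7 cv2.aruco API are assumptions.

# Python sketch of the ArUco detection performed by detect_marker.cpp.
import cv2

image = cv2.imread('frame.png')  # stand-in for one RGB frame from the camera
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL)
params = cv2.aruco.DetectorParameters_create()
corners, ids, _ = cv2.aruco.detectMarkers(image, aruco_dict, parameters=params)

if ids is not None:
    print('Detected marker IDs:', ids.flatten())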

ArUco marker

In computer vision, an ArUco marker is a square fiducial marker, similar to a barcode, that is used to uniquely identify objects. Each marker has a different pattern made up of a series of black and white squares placed in a square grid. Robotics, augmented reality, and other applications use ArUco markers to identify objects swiftly and precisely.

Mapping

For the mapping solution I used the Gmapping algorithm, a SLAM (Simultaneous Localization And Mapping) algorithm that allows a robot to create a map of its environment and simultaneously localize itself within that map. It does this by using sensor data from laser rangefinders or cameras to generate a map, and then using that map to determine its own location within the environment.
The Gmapping algorithm uses a filtering approach to map the environment and localize the robot. Specifically, it uses a particle filter to track the robot's pose (i.e., its position and orientation) over time. As the robot moves through the environment, it takes measurements with its sensors and updates the filter using these measurements. The filter maintains a distribution over the possible poses of the robot, and this distribution is updated at each time step based on the sensor measurements and the motion of the robot. By using a filtering approach, the Gmapping algorithm is able to handle uncertainty and noise in the sensor measurements, which is important for reliable operation in real-world environments.
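Schematically, one step of such a particle filter looks like the sketch below; this is a didactic illustration of the general idea, not Gmapping's actual implementation.

import random

def particle_filter_step(particles, weights, control, scan,
                         motion_model, scan_likelihood):
    # 1) Predict: propagate each pose hypothesis through the motion model.
    particles = [motion_model(p, control) for p in particles]
    # 2) Update: re-weight each hypothesis by how well it explains the scan.
    weights = [w * scan_likelihood(scan, p)
               for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3) Resample: concentrate particles on the most likely poses.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights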

Installation and Running Procedure

Run by roslaunch file

In order to run the program via the roslaunch file, we first need to install the xterm tool:

sudo apt-get update
sudo apt-get -y install xterm

Now, to run the code, just type this in a terminal:

roslaunch EXPROBLAB_Assignment2 assignment.launch

Environment

The resultant ontology of the environment in this assignment is the following:

The environment is built according to the information taken by the robot.
Rooms' names and their coordinates are stored in two different arrays with a one-to-one relationship, thus allowing the program to find them easily during the execution, as the sketch below shows.
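A minimal sketch of this bookkeeping (names and coordinates are placeholders, not the real map):

# Two parallel arrays with a one-to-one relationship (placeholder values).
rooms = ['E', 'R1', 'R2']
coordinates = [(1.5, 8.0), (0.0, 0.5), (3.0, 0.5)]

def coords_of(room):
    # The shared index pairs each room with its coordinates.
    return coordinates[rooms.index(room)]

print(coords_of('R1'))  # -> (0.0, 0.5)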

At the beginning of the program execution, the robot does not know anything about the environment and it has to detect some markers to build the ontology. The markers are placed as shown in the figure:

The red X indicates the robot's spawning position.
The robot has to detect them with its camera and then store the value to build the ontology later.

To perform a scan of the whole room, the program starts by making the arm turn around the base while looking at the ground. In this way, all the ground markers (12, 14, 16 and 17) are correctly detected. Later, the same operation is performed with the camera facing the top of the walls, thus detecting markers 11, 13 and 15. Once the detection is finished, the arm is put back into its "home" position and the FSM starts; a sketch of this motion is shown below.
Since, in the beginning, the camera had problems detecting the markers, I modified the world settings of the ArUco models; in particular, the marker's box is set to white. In this way, the ArUco library can easily detect the corners and thus the IDs correctly.
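A hedged sketch of the scanning motion follows, assuming the arm joints are exposed through ros_control position controllers; the topic name, joint limits and step sizes are assumptions.

#!/usr/bin/env python
# Sketch of the two-pass scan: sweep the base joint with the camera
# pointing down, then again pointing up (topic name is an assumption).
import rospy
from std_msgs.msg import Float64

rospy.init_node('arm_scan_sketch')
joint1 = rospy.Publisher('/robot/joint1_position_controller/command',
                         Float64, queue_size=10)
rospy.sleep(1.0)  # give the publisher time to connect

def sweep():
    angle = -3.14
    while angle <= 3.14 and not rospy.is_shutdown():
        joint1.publish(Float64(angle))  # turn the arm around the base
        angle += 0.1
        rospy.sleep(0.1)

sweep()  # first pass: camera looking at the ground (markers 12, 14, 16, 17)
# ... tilt the camera joint up, then call sweep() again for markers 11, 13, 15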

Robot

The robot is an assembly of different existing URDF models re-adapted to this task. In particular, two robots are combined:

  • ROSbot, from which the entire base model was taken;
  • KUKA, from which only the arm was taken. The resultant robot is the following:

Since the original URDFs of the two robots differed in dimensions, I decided to keep the ROSbot at its original size and to scale the KUKA arm so that it is proportional to the entire assembly.

For the robot specifications used, check the corresponding section of the last assignment here. It is important to remember that the battery duration has been increased up to 20 minutes because of the low simulation speed during the tests.

The original urdf was modified by changing the position of the camera and the laser. In particular:

  • the camera, an RGB-D one, is placed on the KUKA arm, so that it can be moved from code;
  • the laser has been placed at the front of the robot and its range has been changed to span from -90° to 90°.

Laser of the robot

The initial version of the laser has a field of view from -180° to 180°. Since the resultant assembly also includes an arm, I had to move the laser and reduce its field of view. Otherwise, the robot would produce wrong readings, since it would see obstacles that are not there: the arm, for example.

Robot behaviour when in a room

Since the robot is supposed to perform a complete scan of each newly reached room, the execution of the FSM is paused for a while and the robot arm turns 360°.
In the code there is a variable to monitor the actual arm position; in particular:

  • as soon as the detection phase is finished, the robot is set to its home configuration: all the joint angles are set to 0 rad, except joint 1 which is set to -3.14 rad;
  • a variable, called joint1_angle, is initialized in helper.py with the value -1;
  • each time the robot is in a new room, the variable's sign is flipped (it is multiplied by -1) and the target angle is set accordingly.

In this way, the robot's rotation is monitored from code and it is alternately clockwise or anti-clockwise, as the sketch below shows.
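In code, the mechanism reduces to a sign flip; in the sketch below only joint1_angle and its initial value come from the description above, the rest is hypothetical.

joint1_angle = -1  # initialized in helper.py

def scan_new_room(publish_joint1_target):
    # publish_joint1_target is a hypothetical helper that commands joint 1.
    global joint1_angle
    joint1_angle *= -1  # flip the sign on every new room
    publish_joint1_target(3.14 * joint1_angle)  # full turn, alternating direction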

Robot view while moving

Here is a snapshot of the robot while it is reaching the first goal set by the reasoner and sent by the controller.

In the picture the following elements can be seen:

  • the laser used to detect obstacles;
  • the global path of the robot;
  • the local path of the robot;
  • the map built by the robot up to that moment.

System Features

The modularity of the architecture already implemented in Assignment #1 allowed me to make very few changes to the existing nodes. I was able to integrate the new marker-detection node by just modifying the coordinates passed to controller.py.

System Limitation

The marker detection is tailored to this environment, so if the markers were placed in different positions the camera might not detect them.
Moreover, the robot is an assembly of two pre-existing ones, so when the base is moving the parameters are not optimal for the entire assembly, which makes the simulation slow while the robot moves.

Possible technical improvements

  • Detection phase:
    A possible improvement would be a detection phase as general as possible, so as to avoid problems in detecting markers placed in different locations. Another one would be tuning the parameters to make the robot faster, thus avoiding overly long simulations.
  • Exploration phase:
    There is an algorithm that can be used to make the robot explore a map autonomously. In particular, the explore_lite package provides greedy frontier-based exploration: while the node is running, the robot greedily explores its environment, sending goals to the move_base server until no frontiers can be found.
  • Planner and Controller:
    Implement a planner that takes the presence of walls into account, and a controller that takes each planned point and drives the robot to the goal through those points. Of course, a planning algorithm would be needed to make the planner compute all the points correctly.

Authors and Contacts

Matteo Maragliano (S4636216)
S4636216@studenti.unige.it
