
Robot navigation algorithms

A collection of algorithms for control and navigation of mobile robots. These programs were developed during the EMOR course tutorials of my master's programme (2016-17 session) at Warsaw University of Technology, Poland. Detailed project descriptions can be found on the official course page. Setup instructions, along with the scenes and helper functions needed to set up the simulation environment for these projects, are provided.

Requirements

  • MATLAB
  • Peter Corke's Robotics, Vision and Control toolbox
  • CoppeliaSim simulator (formerly known as V-REP)
  • MATLAB bindings for CoppeliaSim

Overview

The mobile robot used in these projects is a KUKA youBot. It is controlled in a simulated environment (CoppeliaSim) through bindings that let the simulator communicate with the control program written in MATLAB.

Inputs to the MATLAB callback function consist of the robot's position and orientation and data from the LIDAR sensors. Outputs from the program control the linear and angular velocities of the simulated robot's base wheels to perform a specified task. A finite state machine (FSM) manages the sequence of robot behaviours, e.g. initial, moving forward, stopping, moving backward.
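
A minimal sketch of how such a callback and its FSM can be structured is shown below. The function name, signature, state labels and thresholds are illustrative, not the actual course API:

```matlab
function [vx, vy, omega, state] = controlStep(robotPose, lidarScan, goal, state)
% One control step: map pose + LIDAR readings to base velocities.
% robotPose = [x y theta], lidarScan = vector of range readings [m],
% goal = [x y], state = current FSM state (string).

    obstacleNear = min(lidarScan) < 0.5;          % simple proximity test (assumed threshold)
    distToGoal   = norm(goal - robotPose(1:2));

    switch state
        case 'initial'
            vx = 0; vy = 0; omega = 0;
            state = 'moving_forward';
        case 'moving_forward'
            vx = 0.3; vy = 0; omega = 0;
            if obstacleNear || distToGoal < 0.05
                state = 'stopping';
            end
        case 'stopping'
            vx = 0; vy = 0; omega = 0;
            if obstacleNear
                state = 'moving_backward';
            end
        case 'moving_backward'
            vx = -0.2; vy = 0; omega = 0;
            if ~obstacleNear
                state = 'stopping';
            end
        otherwise
            vx = 0; vy = 0; omega = 0;
    end
end
```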

Topics covered in this repository:

  • Trajectory generation
  • Wall-following algorithm
  • Bug2 algorithm
  • Localisation using a particle filter
  • Wavefront planner

Project 1

Introduces trajectory generation and basic control of the simulated youBot using translational and rotational velocities.

  • Straight-line trajectory generation
  • Moving along a circular trajectory with constant orientation


The robot motion is generated by proportional regulators with limited (saturated) output.
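
As an illustration, a saturated proportional regulator for driving towards a target point could look roughly like this; the gain, the velocity limit and the omnidirectional velocity interface are assumptions, not the actual course code:

```matlab
function [vx, vy] = pRegulatorToPoint(pose, target, kp, vMax)
% Proportional regulator with limited (saturated) output:
% commands a velocity proportional to the position error, capped at vMax.
% pose = [x y theta], target = [x y].

    errWorld = target - pose(1:2);              % position error in world frame
    R = [cos(pose(3)) sin(pose(3));             % world -> robot frame rotation
        -sin(pose(3)) cos(pose(3))];
    errRobot = R * errWorld(:);

    v = kp * errRobot;                          % proportional control
    speed = norm(v);
    if speed > vMax                             % limit the output magnitude
        v = v * (vMax / speed);
    end
    vx = v(1);  vy = v(2);
end
```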

Demos can be viewed here and here.

Project 2

Introduces reactive control and navigation for obstacle avoidance using data gathered by a LIDAR sensor. The simulated LIDAR has a range of 5 m and emits 684 rays; the scan is analysed to find the nearest obstacle.
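
As a sketch, the nearest obstacle (distance and bearing) can be extracted from a scan along these lines; the field-of-view handling and validity checks are assumptions, not the actual course code:

```matlab
function [dMin, angMin] = nearestObstacle(ranges, fov, maxRange)
% Find the nearest return in a LIDAR scan.
% ranges   : vector of range readings [m] (e.g. 684 rays, max 5 m)
% fov      : total field of view [rad], rays assumed evenly spaced
% maxRange : sensor range [m], readings at or beyond it are ignored

    n = numel(ranges);
    angles = linspace(-fov/2, fov/2, n);             % bearing of each ray
    valid = isfinite(ranges) & ranges > 0 & ranges < maxRange;
    ranges(~valid) = inf;                            % discard no-return rays
    [dMin, idx] = min(ranges);
    angMin = angles(idx);                            % bearing of closest obstacle
end
```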


Wall-following algorithm

The youBot moves along the wall, using LIDAR data to detect the wall and avoid collisions. Three proportional regulators are used to keep a fixed distance to the wall, move along it, and maintain a perpendicular orientation.
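
A rough sketch of combining the three regulators; the extraction of the wall distance and orientation from the scan, as well as all gains and limits, are assumptions:

```matlab
function [vForward, vLateral, omega] = wallFollowStep(dWall, angWall, gains)
% Sketch of wall following with three proportional regulators:
%   - lateral velocity keeps the robot at a reference distance from the wall
%   - angular velocity keeps a reference orientation relative to the wall
%   - forward velocity drives along the wall, reduced when misaligned
% dWall   : measured distance to the nearest wall point [m]
% angWall : wall orientation relative to the robot heading [rad]

    distErr = gains.dRef   - dWall;
    angErr  = gains.angRef - angWall;

    vLateral = sat(gains.kDist * distErr, gains.vMax);                 % regulator 1
    omega    = sat(gains.kAng  * angErr,  gains.wMax);                 % regulator 2
    vForward = sat(gains.kFwd  * max(0, 1 - abs(angErr)), gains.vMax); % regulator 3
end

function y = sat(u, lim)
% Limit a regulator output to [-lim, lim].
    y = max(min(u, lim), -lim);
end
```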

Demo


Bug2 algorithm

Bug algorithms assume only local knowledge of the environment, and the robot has simple behaviours: move along a wall, or move along a straight line toward the goal.

In the Bug2 algorithm, a line joining the initial and goal positions, called the m-line, is first created. The robot starts moving along this line towards the goal. If an obstacle is encountered, the robot circumnavigates it until the m-line is met again at a point closer to the goal. The robot then leaves the obstacle and continues along the m-line to reach the goal.
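
A sketch of the top-level Bug2 mode switching; the thresholds and the helper inputs (e.g. how obstacleAhead is detected and how hitPoint is recorded) are illustrative:

```matlab
function mode = bug2Mode(pose, start, goal, obstacleAhead, hitPoint, mode)
% Decide the current Bug2 behaviour:
%   'follow_mline' - drive along the start-goal line towards the goal
%   'follow_wall'  - circumnavigate the obstacle until the m-line is met again
% pose = [x y theta]; start, goal, hitPoint = [x y] (hitPoint is where the
% obstacle was first encountered).

    mDir = (goal - start) / norm(goal - start);      % unit vector along m-line
    d    = pose(1:2) - start;
    distToMline = abs(mDir(1)*d(2) - mDir(2)*d(1));  % perpendicular distance to m-line
    distToGoal  = norm(goal - pose(1:2));

    switch mode
        case 'follow_mline'
            if obstacleAhead
                mode = 'follow_wall';                % hit point: start circumnavigating
            end
        case 'follow_wall'
            % leave the wall when the m-line is reached again, closer to the goal
            if distToMline < 0.05 && distToGoal < norm(goal - hitPoint)
                mode = 'follow_mline';
            end
    end
end
```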

Demo


More on Bug algorithms can be read here.

Project 3

Introduces localization and path planning algorithms.

Localisation using Particle Filter

In this task, the robot does not know its pose. Instead, the odometry for every control step is known. The robot's pose is estimated from the odometry data and a provided map. Distances to the walls are computed using the laser range sensors.

Here, we use a probabilistic state estimation technique based on the sampling-based distribution representation of a particle filter. Such localisation is also known as Monte Carlo localisation, and the Wikipedia page describes it as:

The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is.[4] The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space.[4] Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot.
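
A condensed sketch of one predict/update/resample cycle is given below. The noise parameters and the Gaussian measurement model are placeholders; in practice the particles are scored by simulating laser ranges against the provided map (represented here by the assumed helper expectedRangesFcn):

```matlab
function [particles, weights] = particleFilterStep(particles, weights, odomDelta, measuredRanges, expectedRangesFcn)
% One Monte Carlo localisation step.
% particles       : N x 3 matrix of pose hypotheses [x y theta]
% weights         : N x 1 particle weights
% odomDelta       : odometry increment [dx dy dtheta] in the robot frame
% measuredRanges  : LIDAR ranges observed at the current (unknown) pose
% expectedRangesFcn(pose) : ranges predicted from the map for a given pose

    N = size(particles, 1);

    % 1) Predict: move every particle by the odometry increment plus noise
    for i = 1:N
        th = particles(i,3);
        R  = [cos(th) -sin(th); sin(th) cos(th)];
        particles(i,1:2) = particles(i,1:2) + (R * odomDelta(1:2)')' + 0.01*randn(1,2);
        particles(i,3)   = particles(i,3) + odomDelta(3) + 0.01*randn;
    end

    % 2) Update: weight each particle by how well its predicted scan matches
    for i = 1:N
        err = measuredRanges - expectedRangesFcn(particles(i,:));
        weights(i) = exp(-0.5 * sum(err.^2) / 0.2^2);
    end
    weights = weights / sum(weights);

    % 3) Resample: draw particles in proportion to their weights
    cdf = cumsum(weights);
    idx = arrayfun(@(r) find(cdf >= r, 1, 'first'), rand(N,1));
    particles = particles(idx, :);
    weights = ones(N,1) / N;
end
```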

Demo


Some resources on particle filter localisation:

  • A video tutorial from Penn Engineering
  • Original publication on Monte Carlo localization for mobile robots
  • A blog post on particle filter localisation

Path planning with Wavefront

A motion planning algorithm that prepares the robot's trajectory off-line, i.e. using only the initial and goal positions, over a discretised workspace. To create a "wave" of values, the destination cell is given the minimum cost, and neighbouring cells are assigned progressively higher costs. An example wave of cost values is shown below (taken from here), where the obstacles have value 1 and the goal and start points are labelled 2 and 18.

(figure: example grid of wavefront cost values)

A path is computed from the initial cell to the destination by repeatedly selecting the neighbouring cell with the smallest cost. The environment map is known in advance, and the obstacles (given a fixed cost value) are avoided, yielding a collision-free trajectory. Multiple paths to the goal may exist, and any path consisting of descending cost values is acceptable.
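
A compact sketch of the two phases on a 4-connected grid; the map encoding (0 for free, 1 for obstacle) and the cell indexing are assumptions:

```matlab
function path = wavefrontPlan(map, start, goal)
% Wavefront planner on a 4-connected grid.
% map   : matrix with 0 = free cell, 1 = obstacle
% start : [row col] of the initial cell
% goal  : [row col] of the destination cell

    [rows, cols] = size(map);
    cost = inf(rows, cols);
    cost(map == 1) = -1;                    % mark obstacles as unreachable
    cost(goal(1), goal(2)) = 2;             % destination gets the minimum cost

    % Phase 1: propagate the wave of costs outwards from the goal (BFS)
    queue = goal;
    moves = [1 0; -1 0; 0 1; 0 -1];         % 4-connected neighbourhood
    while ~isempty(queue)
        cell = queue(1, :);  queue(1, :) = [];
        for k = 1:4
            nb = cell + moves(k, :);
            if all(nb >= 1) && nb(1) <= rows && nb(2) <= cols ...
                    && map(nb(1), nb(2)) == 0 && isinf(cost(nb(1), nb(2)))
                cost(nb(1), nb(2)) = cost(cell(1), cell(2)) + 1;
                queue(end+1, :) = nb;       %#ok<AGROW>
            end
        end
    end

    % Phase 2: descend the cost values from the start to the goal
    path = start;
    current = start;
    while ~isequal(current, goal)
        best = current;  bestCost = cost(current(1), current(2));
        for k = 1:4
            nb = current + moves(k, :);
            if all(nb >= 1) && nb(1) <= rows && nb(2) <= cols ...
                    && cost(nb(1), nb(2)) >= 2 && cost(nb(1), nb(2)) < bestCost
                best = nb;  bestCost = cost(nb(1), nb(2));
            end
        end
        current = best;
        path(end+1, :) = current;           %#ok<AGROW>
    end
end
```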

The example wavefront planner and the demo below use movement in only 4 directions (no diagonal movement).

(figure: example wavefront planner with 4-directional movement)

Demo


Some resources on the wavefront planner:

Additional information

Sometimes the official webpage is not available (or reads "yet to be announced", depending on the point in the academic year, since EMOR is taught in the winter semester). In such cases, please right-click and copy the link address of the project HTML files (within the folder course_page) and paste it into http://htmlpreview.github.io/.

Since these tutorials are still part of the ongoing robotics curriculum evaluations at WUT, Poland, the MATLAB code for these projects is not publicly released in this repository. It can be made available upon request.

Other robotics projects from my master's can be found at https://github.com/d-misra/EMARO-Robotics-Projects

Acknowledgments

Many thanks to Prof. Dawid Seredyński for preparing and conducting this excellent course, and to my classmates and Shreyas for all the discussions and help.