
Official repo of the Third Assignment developed for the experimental robotics course


fedehub/ExperimentalRoboticsAssignment3



Logo

Experimental Robotics Laboratory

Third assignment for the Experimental Robotics Laboratory course
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. ROS node description: An overview
  5. Working hypothesis and environment
  6. Roadmap
  7. Contributing
  8. License
  9. Contact
  10. Acknowledgments

About The Project

This project takes inspiration from the earlier assignments (ExperimentalRoboticsAssignment1 and ExperimentalRoboticsAssignment2, respectively) but, unlike them, the environment in which detectiBot moves is much more complex: it presents several rooms and 30 ArUco markers (5 markers in each room)

This time, each marker corresponds to a hint, which is always given with the following structure:

int32 ID
string key
string value
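
For instance, a hint read from a marker could look like this (illustrative values, matching the ID2 example reported further below):

ID: 2
key: "who"
value: "Col.Mustard"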

Also, markers may be found in three different positions: placed on the walls (at a height of roughly 1 m), or on the floor (oriented either vertically or horizontally).

The idea is the same as in the previous two assignments: the robot should keep receiving hints until it has a complete and consistent hypothesis. However, as in the previous assignments, only one ID source is the trustable one.

As soon as the robot gets a complete hypothesis, it should go to the centre of the arena (x=0.0, y=-1.0, which should also be the starting position of the robot) and «tell» its solution.

If the solution is the correct one, the game ends.

REMARK The x and y coordinates of each room's point were known a priori, as shown in the table below (and in the small Python dictionary that follows it)

| room  | x,y coordinates  | 
|--|--|
| Room1 | ( -4 , -3 ) | 
| Room2 | ( -4 , +2 ) | 
| Room3 | ( -4 , +7 ) | 
| Room4 | ( 5 , -7 )  | 
| Room5 | ( 5 , -3 )  | 
| Room6 | ( 5 , +1 )  | 
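
Since these waypoints are known a priori, they can be hard-coded on the navigation side. A minimal sketch in Python (the dictionary name is illustrative, not taken from the actual code):

# Room waypoints known a priori (x, y)
ROOM_WAYPOINTS = {
    "Room1": (-4.0, -3.0),
    "Room2": (-4.0,  2.0),
    "Room3": (-4.0,  7.0),
    "Room4": ( 5.0, -7.0),
    "Room5": ( 5.0, -3.0),
    "Room6": ( 5.0,  1.0),
}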

Since the markers appear at different values of z, detectibot needs to reach both heights with its cluedo_link

Concerning the simulation environment, there are small walls around the robot aimed at impeding the movement of its mobile base

Hence the robot moves from one «hint» coordinate to another, receiving hints along the way. This holds until it has a complete and consistent hypothesis

Please consider that a consistent hypothesis has been defined as COMPLETE but NOT INCONSISTENT

REMARK A hypothesis is defined as complete when there is exactly one role for each class (i.e., one occurrence of what, one occurrence of who, one occurrence of where). A straightforward example of such a hypothesis is ID2, whose definition is reported below

ID2_1: ['where', 'Study']
ID2_2: ['who', 'Col.Mustard']
ID2_3: ['what', 'Rope']

REMARK A hypothesis is defined as inconsistent when there is more than one role for some class (i.e. 2 or more occurrences of who, where or what)

A clear example of such a hypothesis is ID4, whose definition is reported below

ID4_1: ['where', 'Library']
ID4_2: ['who', 'Mrs.White']
ID4_3: ['what', 'LeadPipe']
ID4_4: ['where', 'Diningroom']
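
These two definitions can be captured with a simple check on the collected roles. Here is a minimal sketch (function name and data layout are illustrative, not taken from the actual implementation):

def classify_hypothesis(hints):
    # hints: list of (key, value) pairs for one ID, e.g. [('where', 'Study'), ('who', 'Col.Mustard')]
    counts = {}
    for key, _ in hints:
        counts[key] = counts.get(key, 0) + 1
    if any(c > 1 for c in counts.values()):
        return "INCONSISTENT"   # more than one role for some class (e.g. ID4, two 'where')
    if all(counts.get(k, 0) == 1 for k in ("who", "what", "where")):
        return "COMPLETE"       # exactly one who, what and where (e.g. ID2)
    return "INCOMPLETE"         # some role is still missing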

Assignment's prerequisites

In this assignment:

  • using ROSPlan is not mandatory
  • the robot model has no limitations, meaning that it can be modelled in whatever fashion
  • the usage of the ArUco libraries is mandatory for the detection of the markers.
  • using functionalities such as mapping, path planning and following may greatly help in performing the assignment
  • use, as a starting point, the following package provided by our Professors!

What does the starting package contain:

  • a node which implements the oracle. Concerning the implementation of the oracle, consider that:
    • there are 6 IDs in total [0...5];
    • some of these IDs (randomly chosen) may generate inconsistent hypotheses (e.g. multiple persons, rooms, objects);
    • the «trustable» ID is also randomly chosen (among the IDs which do not generate inconsistent hypotheses);
    • the oracle node implements two services: the first one (/oracle_hint) receives an Int32 as request (the ID of the marker) and returns the hint as an erl2/ErlOracle message;
    • the oracle node also implements a service (/oracle_solution) which returns the trustable ID (erl2/Oracle.h, with an empty message for the request and an int32 for the reply). A minimal client-side sketch of these two services is given after this list.
  • similarly to the second assignment, some markers correspond to malformed hints (e.g., all fields are empty, or just one field is empty, ...)
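
Here is the aforementioned minimal client-side sketch of how the two oracle services could be queried from a Python node. The service definitions are imported from erl3 to match the rest of this README, and the reply field name of /oracle_solution is an assumption:

#!/usr/bin/env python
import rospy
from erl3.srv import Marker, Oracle   # assumption: erl3 exposes the Marker and Oracle service types

rospy.init_node("oracle_client_sketch")

# ask the oracle for the hint associated with marker ID 3
rospy.wait_for_service("/oracle_hint")
get_hint = rospy.ServiceProxy("/oracle_hint", Marker)
hint = get_hint(3).oracle_hint        # ErlOracle message: ID, key, value
rospy.loginfo("hint: %d %s %s", hint.ID, hint.key, hint.value)

# ask the oracle for the trustable ID
rospy.wait_for_service("/oracle_solution")
get_solution = rospy.ServiceProxy("/oracle_solution", Oracle)
rospy.loginfo("trustable ID: %d", get_solution().ID)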

(back to top)

Built With 🏗️

(back to top)

Getting Started

In the following sections, the software architecture is briefly introduced, along with the prerequisites and installation procedure. Then, a quick video demonstration showing the overall functioning is provided and the system's limitations are discussed

Installation procedure

⚠️ To avoid further issues, please use this docker image provided by our professors

docker run -it -p 6080:80 -p 5900:5900 --name MyDockerContainer carms84/exproblab

Also remember to update and upgrade the container

sudo apt-get update
sudo apt-get upgrade

Then run catkin_make on your workspace; in my case:

  • Navigate to your ROS workspace
     cd /home/ros_ws/
  • Run catkin
     catkin_make

You can now download the repository inside the src folder

cd /home/ros_ws/src
git clone https://github.com/fedehub/ExperimentalRoboticsAssignment3

Also download MoveIt 1.1.5 (If you haven't already done so)

git clone https://github.com/ros-planning/moveit.git
cd moveit
git checkout 2b881e5e3c4fd900d4d4310f4b12f9c4a63eb5dd
cd ..
git clone https://github.com/ros-planning/moveit_resources.git
cd moveit_resources
git checkout f6a7d161e224b9909afaaf621822daddf61b6f52
cd ..
git clone https://github.com/ros-planning/srdfdom.git
cd srdfdom
git checkout b1d67a14e45133928f9793e9ee143998219760fd
cd ..
apt-get install -y ros-noetic-rosparam-shortcuts
cd ..
catkin_make
catkin_make
catkin_make

Then navigate through the directory, in order to find the marker models

cd /ros_ws/src/ExperimentalRoboticsAssignment3/erl3/models 

Copy all the files inside the erl3 models folder and navigate to the /root/.gazebo/models directory

cd /root/.gazebo/models

Paste the previously copied files, containing all the marker models, as shown in the animated gif below

optimized_install_models

Workspace building and launch

Navigate to your workspace

cd /home/ros_ws/
  • clone the repository
git clone https://github.com/fedehub/ExperimentalRoboticsAssignment3
  • source your workspace by typing
source devel/setup.bash

(back to top)

Running procedure

Running the entire project

To test the project, first of all:

  • Open a shell and run:
roslaunch erl_assignment_3_robot detectibot_environment_2.launch 2>/dev/null
  • Open a second shell and run
roslaunch erl_assignment_3 launch_nodes.launch
  • Open a third shell and type:
rosrun erl_assignment_3 state_machine.py

Running the Navigation module

To test the navigation module, first of all:

  • Open a shell and run:
roslaunch erl_assignment_3_robot detectibot_environment_2.launch
  • Open a second shell and run the navigation node
rosrun erl_assignment_3 navigation.py

Running the Vision module

To test the vision module, first of all:

  • Open a shell and run:
roslaunch erl_assignment_3_robot detectibot_environment_2.launch
  • Open a second shell and run
rosrun erl_assignment_3 img_echo
  • Open a third shell and type:
rosrun erl_assignment_3 detectibot_magnifier

Running the State machine module

To test the state machine module, first of all:

  • Open a shell and run:
roslaunch erl_assignment_3_robot detectibot_environment.launch 2>/dev/null
  • Open a second shell and run
rosrun erl_assignment_3 img_echo &
rosrun erl_assignment_3 detectibot_magnifier &
rosrun erl_assignment_3 navigation.py 
  • Open a third shell and type:
rosrun erl_assignment_3 cluedo_kb.py
  • Open a fourth shell and run:
rosrun erl_assignment_3 state_machine.py

(back to top)

Usage

The most relevant aspects of the project and a brief video tutorial on how to launch the simulation can be found here below

erl3.test.mp4

(back to top)

ROS node description: An overview

Here is the UML component diagram of the project

The aforementioned architecture can be seen as a deliberative one, since its pipeline is structured as "sense-plan-act". More specifically, there are three types of sensing in this architecture

  • Vision: It is implemented by means of Aruco and OpenCV frameworks
  • Localisation: It is implemented by means of the Odom topic, in Gazebo
  • Mapping: made possible by laser sensors and GMAPPING algorithm

Moreover:

  • Concerning the "plan" module, it is implemented through a Smach state machine
  • Finally, the move_base pkg is responsible for the detectibot's movement around the environment

As shown in the above component diagram, this software architecture relies on the synergy of various modules, listed below:

(back to top)

the state_machine.py node 🪢

Let's start with the state_machine.py node

It implements a state machine that controls the operations of the robot; it is the core node of the architecture that interacts with and directs all remaining parts

In particular, the machine organises the investigation into four states (a minimal smach skeleton is sketched after the list below).

  • move → moves the robot between rooms inside the simulated indoor environment
  • collect → the robot rotates on itself to read the largest number of hints within the room
  • check → takes hints from the sensing system via a service and uses the ontology to work out whether there are possible solutions. If there are no possible solutions, the outcome is mistery_not_solvable and the robot transitions back to the "move" state. Otherwise, if there are possible solutions, the state machine transitions to the "show" state, responsible for querying the oracle about the solution's truthfulness
  • show → questions the oracle about the solution
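
As anticipated, here is a minimal smach skeleton with these four states and their main transitions (state and outcome names follow the description above; this is a simplified sketch, not the actual state_machine.py):

#!/usr/bin/env python
import rospy
import smach
import smach_ros

class Move(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["reached"])
    def execute(self, userdata):
        return "reached"            # call /go_to_point here

class Collect(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["collected"])
    def execute(self, userdata):
        return "collected"          # call /turn_robot here to gather hints

class Check(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["mistery_not_solvable", "solvable"])
    def execute(self, userdata):
        return "solvable"           # query the sensing system and the ontology here

class Show(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["mistery_solved", "wrong"])
    def execute(self, userdata):
        return "mistery_solved"     # query the oracle about the proposed solution here

if __name__ == "__main__":
    rospy.init_node("state_machine_sketch")
    sm = smach.StateMachine(outcomes=["mistery_solved"])
    with sm:
        smach.StateMachine.add("MOVE", Move(), transitions={"reached": "COLLECT"})
        smach.StateMachine.add("COLLECT", Collect(), transitions={"collected": "CHECK"})
        smach.StateMachine.add("CHECK", Check(),
                               transitions={"mistery_not_solvable": "MOVE", "solvable": "SHOW"})
        smach.StateMachine.add("SHOW", Show(),
                               transitions={"mistery_solved": "mistery_solved", "wrong": "MOVE"})
    sis = smach_ros.IntrospectionServer("state_machine_viewer", sm, "/SM_ROOT")
    sis.start()                     # introspection server used by smach_viewer (see below)
    sm.execute()
    sis.stop()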

Here below we can find a hand-made state diagram, representing how the system works

Moreover, an introspection server has been implemented in order to visualise the possible transitions between states, as well as the currently active state and the values of the user data that is passed around. To visualise it, just type:

rosrun smach_viewer smach_viewer.py

REMARK: Please remember to import the correct libraries (i.e. import smach, smach_ros), otherwise some errors may occur!

state_machine_functioning

Node interfaces:

Node [/state_machine]
Publications: 
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]

Services: 
 * /state_machine/get_loggers
 * /state_machine/set_logger_level

(back to top)

the navigation.py node 🪢

This node implements two different services aimed at letting the robot reach different rooms in order to fulfil its investigation-related tasks. The /go_to_point service calls move_base and waits until the robot has reached the given target, whereas the /turn_robot service listens for a request containing the angular velocity to keep and the time during which the robot has to turn at that specific angular velocity. Moreover:

  • Localisation takes place through the subscription to the odom (nav_msgs/Odometry) topic
  • The node uses move_base (from the move_base pkg) to perform the navigation. The main purpose of this package is to move a robot from its current position to a goal position with the help of the other navigation nodes. The move_base node inside this package links the global planner and the local planner for the path planning, connects to the rotate-recovery package if the robot gets stuck near an obstacle, and connects the global costmap and the local costmap for getting the map. The move_base node is basically an implementation of a SimpleActionServer, which takes a goal pose with message type geometry_msgs/PoseStamped. We can send a goal position to this node using a SimpleActionClient (a minimal sketch is given right after this list).
  • In addition, the node provides a service to rotate the robot (erl_assignment_3_msgs/TurnRobot) at a certain angular speed for a certain time; this functionality is aimed at the collection of clues!
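
Here is the aforementioned minimal sketch of sending a goal through the move_base action interface. The frame name and the waypoint are illustrative; the actual navigation.py node talks to the move_base topics directly, as the interface listing below shows:

#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to(x, y):
    # send a goal pose to move_base and wait until it is reached
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"   # assumption: goals expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == "__main__":
    rospy.init_node("go_to_point_sketch")
    go_to(-4.0, -3.0)   # e.g. Room1, from the waypoint table above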

Node Interfaces:

Node [/navigation]
Publications: 
 * /cmd_vel [geometry_msgs/Twist]
 * /move_base/cancel [actionlib_msgs/GoalID]
 * /move_base/goal [move_base_msgs/MoveBaseActionGoal]
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]
 * /odom [nav_msgs/Odometry]

Services: 
 * /go_to_point
 * /navigation/get_loggers
 * /navigation/set_logger_level
 * /turn_robot

To explore a bit how move_base actually works, an entire repo, used for testing the ROS navigation stack, has been devoted to it

(back to top)

the cluedo_kb.py node 🪢

Concerning the cluedo_kb.py node:

cluedo_kb is a node that serves as a specialised ontology for the problem at hand; it supplies a processing/reasoning system that provides the functionality of:

  • Registering clues
  • Building and processing hypotheses based on the added information
  • Finding possible solutions to the case
  • Rejecting hypotheses (when needed)

More specifically, when the robot starts roaming around looking for ArUco markers (where the hints' IDs are stored), it makes a service request through /add_hint to solicit the oracle to announce the found hint. The latter consists of a request of type erl3/Marker, reported below

# erl3/Marker service implementation

int32 markerId
---
# erl3/ErlOracle oracle_hint
ErlOracle oracle_hint

Since it could happen that the Oracle sends a malformed hint (i.e. some fields may be empty strings and/or some fields may have the value -1), a function responsible for checking its quality has been implemented.
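
A minimal sketch of such a quality check (the function name is illustrative and the field names follow the hint structure shown at the beginning of this README; this is not the actual implementation):

def hint_is_valid(hint):
    # a hint is discarded if any field is missing, empty or set to -1
    if hint.ID is None or hint.ID < 0:
        return False
    if not hint.key or not hint.value:
        return False
    if hint.key == "-1" or hint.value == "-1":
        return False
    return True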

Remark In the previous section we mentioned the difference between consistent and inconsistent hypotheses; it is worth mentioning that this node also implements a function to remove inconsistent hypotheses from the list of possible ones

REMARK the KB listens to the oracle's topic and as soon as the oracle transmits the clue, the KB adds the message to the ontology without the need for an explicit request

Node interfaces:

Node [/cluedo_kb]
Publications: 
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]

Services: 
 * /add_hint
 * /cluedo_kb/get_loggers
 * /cluedo_kb/set_logger_level
 * /get_id
 * /mark_wrong_id

(back to top)

the simulation.cpp node (final_oracle) 🪢

Concerning the simulation.cpp node:

The architecture is based on the simulation.cpp node, which is the same node provided by our Professors. The latter supplies two services:

  • Concerning the first one (/oracle_hint [erl3/Marker]), once it has been provided with a certain ID, it returns the clue corresponding to that ID (the identifier of an index in an array of messages yielded by the oracle)
  • Concerning the second one (/oracle_solution [erl3/Oracle]), it is needed to check the correctness of a proposed hypothesis at the end of the case

Node interfaces:

Node [/final_oracle]
Publications: 
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]

Services: 
 * /final_oracle/get_loggers
 * /final_oracle/set_logger_level
 * /oracle_hint
 * /oracle_solution

(back to top)

the img_echo.cpp node 🪢

Concerning the img_echo.cpp node :

Briefly, this node reads the input image from the robot's camera. Secondly, it shows it in a floating window, namely DetectiCam, by means of a cv_ptr (the cv_bridge::CvImagePtr cv_ptr converts a ROS image into an appropriate format compatible with OpenCV). Thirdly, it publishes the video stream!

detecticam_optimised

Remark: Since we have to deal with the image, multiple copies of it will be needed; for this purpose the BGR8 image encoding has been chosen, as it is less susceptible to typos. Further Remark: ImageTransport's methods have been employed for creating image publishers and subscribers, since image_transport is a package that provides transparent support for transporting images in low-bandwidth compressed formats. Further Remark: Please remember to add cv_bridge to your package.xml! Also, do not forget to add the following headers to your cpp file

#include <cv_bridge/cv_bridge.h>
#include <opencv2/imgproc/imgproc.hpp> 
#include <opencv2/highgui/highgui.hpp>
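
For reference, the same conversion can be sketched in Python (the node itself is written in C++; topic and window names follow the description above):

#!/usr/bin/env python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    # convert the ROS image into an OpenCV image using the BGR8 encoding
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("DetectiCam", frame)
    cv2.waitKey(1)

rospy.init_node("img_echo_sketch")
rospy.Subscriber("/robot/camera1/image_raw", Image, image_callback)
rospy.spin()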

Node interfaces:

Node [/img_echo]
Publications: 
 * /img_echo [sensor_msgs/Image]
 * /img_echo/compressed [sensor_msgs/CompressedImage]
 * /img_echo/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /img_echo/compressed/parameter_updates [dynamic_reconfigure/Config]
 * /img_echo/compressedDepth [sensor_msgs/CompressedImage]
 * /img_echo/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /img_echo/compressedDepth/parameter_updates [dynamic_reconfigure/Config]
 * /img_echo/theora [theora_image_transport/Packet]
 * /img_echo/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /img_echo/theora/parameter_updates [dynamic_reconfigure/Config]
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]
 * /robot/camera1/image_raw [sensor_msgs/Image]

Services: 
 * /img_echo/compressed/set_parameters
 * /img_echo/compressedDepth/set_parameters
 * /img_echo/get_loggers
 * /img_echo/set_logger_level
 * /img_echo/theora/set_parameters
 
 --------------------------------------------------------------------------------
Node [/gazebo]
Publications: 
 * /clock [rosgraph_msgs/Clock]
 * /gazebo/link_states [gazebo_msgs/LinkStates]
 * /gazebo/model_states [gazebo_msgs/ModelStates]
 * /gazebo/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /gazebo/parameter_updates [dynamic_reconfigure/Config]
 * /odom [nav_msgs/Odometry]
 * /robot/camera1/camera_info [sensor_msgs/CameraInfo]
 * /robot/camera1/image_raw [sensor_msgs/Image]
 * /robot/camera1/image_raw/compressed [sensor_msgs/CompressedImage]
 * /robot/camera1/image_raw/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /robot/camera1/image_raw/compressed/parameter_updates [dynamic_reconfigure/Config]
 * /robot/camera1/image_raw/compressedDepth [sensor_msgs/CompressedImage]
 * /robot/camera1/image_raw/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /robot/camera1/image_raw/compressedDepth/parameter_updates [dynamic_reconfigure/Config]
 * /robot/camera1/image_raw/theora [theora_image_transport/Packet]
 * /robot/camera1/image_raw/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /robot/camera1/image_raw/theora/parameter_updates [dynamic_reconfigure/Config]
 * /robot/camera1/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /robot/camera1/parameter_updates [dynamic_reconfigure/Config]
 * /rosout [rosgraph_msgs/Log]
 * /scan [sensor_msgs/LaserScan]
 * /tf [tf2_msgs/TFMessage]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]
 * /cmd_vel [geometry_msgs/Twist]
 * /gazebo/set_link_state [unknown type]
 * /gazebo/set_model_state [unknown type]

Services: 
 * /controller_manager/list_controller_types
 * /controller_manager/list_controllers
 * /controller_manager/load_controller
 * /controller_manager/reload_controller_libraries
 * /controller_manager/switch_controller
 * /controller_manager/unload_controller
 * /gazebo/apply_body_wrench
 * /gazebo/apply_joint_effort
 * /gazebo/clear_body_wrenches
 * /gazebo/clear_joint_forces
 * /gazebo/delete_light
 * /gazebo/delete_model
 * /gazebo/get_joint_properties
 * /gazebo/get_light_properties
 * /gazebo/get_link_properties
 * /gazebo/get_link_state
 * /gazebo/get_loggers
 * /gazebo/get_model_properties
 * /gazebo/get_model_state
 * /gazebo/get_physics_properties
 * /gazebo/get_world_properties
 * /gazebo/pause_physics
 * /gazebo/reset_simulation
 * /gazebo/reset_world
 * /gazebo/set_joint_properties
 * /gazebo/set_light_properties
 * /gazebo/set_link_properties
 * /gazebo/set_link_state
 * /gazebo/set_logger_level
 * /gazebo/set_model_configuration
 * /gazebo/set_model_state
 * /gazebo/set_parameters
 * /gazebo/set_physics_properties
 * /gazebo/spawn_sdf_model
 * /gazebo/spawn_urdf_model
 * /gazebo/unpause_physics
 * /robot/camera1/image_raw/compressed/set_parameters
 * /robot/camera1/image_raw/compressedDepth/set_parameters
 * /robot/camera1/image_raw/theora/set_parameters
 * /robot/camera1/set_camera_info
 * /robot/camera1/set_parameters

(back to top)

The detectibot_magnifier.cpp node 🪢

This node is devoted to the detection of ArUco markers through a single camera mounted on the front side of the robot. It also implements a service that allows for retrieving the IDs identified through ArUco.

To realise such a node, the vision_opencv packages, aimed at interfacing ROS with OpenCV, have been employed. OpenCV is basically a library of programming functions for real-time computer vision. Hence this node employs a bridge between OpenCV and ROS: since ROS sends images in sensor_msgs/Image format, our goal is to convert them, via cv_bridge, into an OpenCV-compatible format.

REMARK Please note that by using image_transport::Publisher image_pub_ and subscribing to the topic /robot/camera1/image_raw we are able to decrease the bandwidth!
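
A simplified Python sketch of the detection step (the actual node is written in C++ and uses the ArUco library; the marker dictionary chosen below is an assumption):

#!/usr/bin/env python
import rospy
import cv2
import cv2.aruco as aruco
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
aruco_dict = aruco.Dictionary_get(aruco.DICT_ARUCO_ORIGINAL)   # assumption: dictionary used for the markers
parameters = aruco.DetectorParameters_create()
detected_ids = set()

def image_callback(msg):
    # convert the ROS image and look for ArUco markers in it
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, ids, _ = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)
    if ids is not None:
        detected_ids.update(int(i) for i in ids.flatten())

rospy.init_node("magnifier_sketch")
rospy.Subscriber("/robot/camera1/image_raw", Image, image_callback)
rospy.spin()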

Node interfaces:

Node [/detectibot_magnifier]
Publications: 
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /clock [rosgraph_msgs/Clock]
 * /robot/camera1/image_raw [sensor_msgs/Image]

Services: 
 * /aruco_markers
 * /detectibot_magnifier/get_loggers
 * /detectibot_magnifier/set_logger_level

(back to top)

rqt_graph

In the figure below, circles represent nodes and squares represent topic messages. The arrows, instead, indicate the transmission of the messages!

UML temporal diagram

By means of this diagram it is possible to show how the system works. As the state_machine gets launched, the robot enters the MOVE state, responsible for the activation of the /go_to_point service. Hence, it reaches the centre of the room and starts to collect as many markers as possible.

This is made possible through the implementation of a /turn_robot service that, as the name explicitly suggests, makes detectibot turn around its own position. Only afterwards does the system transition to the CHECK state, where a request is made to the /aruco_markers service to retrieve the detected markers' IDs (by means of a topic subscription). Whenever a new hint is detected, the knowledge base represented by the cluedo_kb node is updated (with an /oracle_hint service request).

By means of a further request, made to the final_oracle node through the /oracle_solution service, the True ID is compared against the current hypothesis and it is decided whether to terminate the investigation (ending up in a MISTERY_SOLVED state) or to pursue it, transitioning back to the MOVE state

(back to top)

Working hypothesis and environment

This architecture is designed for providing a reinterpretation of the Cluedo Game. Markers are set a-priori both on the ground and on wall-fixed boxes. The hypothesis IDs are contained within Aruco markers. The True ID instead, is randomly chosen before starting the game.

Detectibot (the robot involved in the investigation) moves in an obstacle-free, indoor environment characterised by a perfectly flat floor (without irregularities) and rooms without furniture. Path planning has been implemented by means of the move_base package. The robot has been designed to mount a single camera, pointing toward the front side.

It is also equipped with laser range finders which make it possible, together with odometry data, to employ the SLAM gmapping algorithm. This approach uses a particle filter in which each particle carries an individual map of the environment. To ensure its employability, the following requirements were met:

  • laser outputs are published onto /scan topic
  • the robot model is endowed with two frames required for mapping algorithms, which are: link_chassis and odom

Concerning the ArUco detection, it has been implemented by the detectibot_magnifier. Indeed, ROS image messages are sent over /robot/camera1/image_raw to be converted into an OpenCV-handleable format. This is done by means of the cv_bridge package, without forgetting to optimise the overall process (please take a look at the previous paragraph where we mentioned the image_transport package and its advantages)

All choices were made with the aim of making the system as modular and flexible as possible. Despite this, certain limitations make the system quite unrealistic but functional.

(back to top)

System's features

Most of them have been already discussed in the Software architecture’s section.

The project implements the robot behaviour so that it can keep roaming around, looking for ArUco markers inside the environment. This serves to solve the case.

Indeed, while it navigates through the environment it tries to combine the collected hints in order to find a solution. This is where the reasoning & AI module, represented by cluedo_kb.py, comes into play

Concerning the architecture, it is centralised and designed in such a way that individual components can be replaced as long as they meet the same required interface

(back to top)

System's limitations

Here below, some of the major system limitations are listed:

  • If the robot had more than one camera, the detection system (detectibot_magnifier) would have to be re-implemented to ensure a certain performance
  • Sometimes the robot seems to face issues in evaluating the target goal; indeed, it remains stuck at a certain point and it takes a while before it starts moving again
  • Many Aruco Markers are not perceived by the single camera. Modifying the orientation of the camera could be a solution or even better, endowing the manipulator arm with multiple cameras (with different pan and tilt) could work as well.
  • The high computational demand of the simulation inevitably leads to the choice of high-performance laptops for avoiding futile delays

(back to top)

Possible technical Improvements

As for the system limitations, some of the most relevant potential technical improvements are listed below:

  • The current KB can be modified to implement the same functionalities on a different ontology system (i.e. ARMOR); the component can be extended for more accurate hypothesis processing or for providing, for instance, an ontology backup feature

  • The current navigation system is rather poor; it should be replaced with a more elaborate one. In particular, the new navigation system should make it possible to achieve a certain orientation as well as a final position (a minimal sketch of how a target orientation could be encoded in a move_base goal is given after this list).

  • The current robot model is quite unstable. It should be adjusted so that it does not oscillate when it starts moving to reach a certain goal

  • the robot needs a lot of manoeuvring space to move; an appropriate navigation algorithm should be sought to reduce the necessary manoeuvring space

  • The architecture could also be executed in a distributed manner by splitting the components over several devices. However, this possibility was not considered during the design of the system. It is therefore necessary to identify possible criticalities in the communication protocol (i.e. to better manage service calls that fail based on the quality of the connection) and deal with them appropriately
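
As anticipated, here is a minimal sketch of how a target orientation could be added to a move_base goal (building on the navigation sketch above; names are illustrative and this is not part of the current code):

import math
from tf.transformations import quaternion_from_euler
from move_base_msgs.msg import MoveBaseGoal

def make_goal(x, y, yaw):
    # build a move_base goal carrying both a target position and a target yaw
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
    goal.target_pose.pose.orientation.x = qx
    goal.target_pose.pose.orientation.y = qy
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw
    return goal

# e.g. make_goal(-4.0, -3.0, math.pi / 2) to enter Room1 facing a chosen direction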

(back to top)

Roadmap

  • Complete the introduction of the template
  • Describe the software architechture
    • Component diagram (not mandatory)
    • Temporal diagram + comments
    • States diagrams, whether applicable + comments
    • Create a list describing ROS messages and parameters
  • Describe the installation steps and the running procedures
    • Create a dedicated paragraph
    • Include all the steps to display the robot's behaviour
  • Show in the "usage" section the running code
    • Create a small video tutorial of the launch
    • Create a small animated gif of the terminal while running code
  • Describe the Working hypothesis and environment
    • Dedicated section for System's features
    • Dedicated section for System's limitations
    • Dedicated section for Possible technical improvements

See the open issues for a full list of proposed features (and known issues).

For consulting the Sphinx documentation, please refer to the index.html file inside the _build/html folder; Just type:

firefox html/index.html

inside the html folder


(back to top)

Contributing

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement"

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under no license.

(back to top)

Contact

Federico Civetta - s4194543@studenti.unige.it

Project Link: https://github.com/fedehub/ExperimentalRoboticsAssignment3

(back to top)

List of resources

(back to top)
