saviRoombaDebugger

This repository contains scripts that demonstrate the functionality of the savi_ros_bdi package for ROS, as well as the saviRoomba software (https://github.com/NMAI-lab/saviRoomba), without the need for the Roomba hardware.

The savi_ros_bdi package listens for perceptions on the perceptions topic and publishes actions that are to be executed to the actions topic. Similarly, it listens to messages on the inbox topic and publishes messages to the outbox topic.
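
For orientation, here is a minimal sketch of how one of the Python nodes described below interacts with that interface. The topic names come from the description above; the message type (std_msgs/String) and the perception wording are assumptions for illustration only, not taken from the actual scripts.

#!/usr/bin/env python
# Minimal sketch of a node talking to the agent over these topics.
# Topic names come from this README; std_msgs/String and the perception
# wording are illustrative assumptions.
import rospy
from std_msgs.msg import String

def on_action(msg):
    # savi_ros_bdi publishes the actions it wants executed on this topic.
    rospy.loginfo("Agent requested action: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("interface_sketch")
    rospy.Subscriber("actions", String, on_action)
    perceptions = rospy.Publisher("perceptions", String, queue_size=10)
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        perceptions.publish("battery(full)")  # hypothetical perception literal
        rate.sleep()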

This simulator started as me grabbing all of our Roomba software and hacking together simulated data sources for each of the hardware sensors. This means that the sensor drivers (which won't work without the hardware) have been replaced with simulated versions. Below is a list of the modules you will need to run in order to use the simulator and simulate the agent, with a brief explanation of what each one does.

Here are the nodes you will need (YOU NEED TO RUN THESE NODES):

  • navigator.py: The navigation module. Uses RouteSearcher.py, nodeGraph.json, nodeLocations.json, and nodeNames.json. You need to run this node.
  • testerMain.py: The simulated test environment. Uses VirtualBot.py and VirtualMap.py to simulate the location of the robot on the map (these use the same JSON files as the navigator to set themselves up). It simulates the sensor data that would normally be generated by the camera driver and the line sensor driver. WARNING: My simulation of the line sensor is imperfect, and so is my simulation of the robot's movement, meaning that the robot sometimes leaves the map even when it is behaving properly. Sorry about this; we will likely need to refactor things to resolve that issue. Frankly, we may want to consider getting rid of the line sensor altogether for these experiments; I'm not sure it adds value. Worth some thought at least.
  • actionTranslator.py: This node translates the actions generated by the agent into something that the robot can use. It has been configured to generate signals that are read by VirtualBot.
  • logger.py: A script that monitors all of the agent's ROS topics and records everything published to CSV files. I used this for performance analysis on the Roomba. Not sure if we will need it here, but it doesn't hurt to run it. Bear in mind that it only writes the CSV files as part of the program shutting down, because I found that the file system on the RPi could not keep up with the ROS topics and the files came out garbled. This means the files are only generated when you Ctrl-C out of the application (see the sketch after this list for the general pattern).
  • perceptionTranslatorBattery.py: Monitors the battery topics and generates perceptions for the agent with respect to the battery state of charge. In the case of the simulation, this data is generated by the virtual bot.
  • perceptionTranslatorLine.py: Similar to the battery translator but for the line sensor. This monitors data generated by the virtual bot.
  • perceptionTranslatorQr.py: Translator for the QR codes generated by the camera driver (for navigation). Again, in this case these are generated by the simulator; on the real robot they come from a hardware driver that you don't need.
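
As an aside on the logger: the buffer-in-memory, write-on-shutdown pattern it relies on looks roughly like the sketch below. The topic, file name, and CSV layout here are placeholders, not the ones logger.py actually uses.

#!/usr/bin/env python
# Rough sketch of the buffer-then-write-on-shutdown pattern used by logger.py.
# The topic, file name, and CSV columns below are placeholders.
import csv
import rospy
from std_msgs.msg import String

rows = []  # everything is buffered in memory while the node runs

def on_message(msg):
    rows.append((rospy.get_time(), msg.data))

def write_csv():
    # Runs when the node shuts down (e.g. on Ctrl-C), so the RPi file
    # system is not hit on every message.
    with open("actions_log.csv", "w") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "data"])
        writer.writerows(rows)

if __name__ == "__main__":
    rospy.init_node("logger_sketch")
    rospy.Subscriber("actions", String, on_message)
    rospy.on_shutdown(write_csv)
    rospy.spin()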

MAKE SURE YOU RUN THIS NEXT ONE LAST:

  • userInterface.py: A command line user interface, used for telling the agent the location (post point) where it should pick up the mail and where to deliver it. It also tells the agent where to find the charging station. The agent will not do anything until these parameters have been sent. Note that there is no bulletproofing in this program, so be careful that your inputs make sense. To be clear: you will need to provide inputs to this node for it to work.
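
If it helps to see the shape of this, the sketch below shows the kind of thing userInterface.py does: publish the mission parameters to the agent's inbox topic. The message wording, the post point names, and the use of std_msgs/String are all illustrative assumptions; the real format is whatever userInterface.py and the ASL program agree on.

#!/usr/bin/env python
# Illustrative sketch only: publish mission parameters to the agent's inbox.
# The message wording and post point names are hypothetical.
import rospy
from std_msgs.msg import String

if __name__ == "__main__":
    rospy.init_node("user_interface_sketch")
    inbox = rospy.Publisher("inbox", String, queue_size=10)
    rospy.sleep(1.0)  # give the publisher a moment to connect

    # Hypothetical post point names; the real ones come from the JSON map files.
    inbox.publish("pickup(post1)")
    inbox.publish("dropoff(post2)")
    inbox.publish("dock(charger)")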

Nodes that are not used (THESE WILL NOT WORK AND SHOULD NOT BE RUN):

  • driverCamera.py: Hardware driver for the camera (DO NOT USE THIS AS WE ARE NOT USING A REAL CAMERA)
  • driverLineSensor.py: Hardware driver for the line sensor (DO NOT USE THIS AS WE ARE NOT USING THE REAL LINE SENSOR)

Configuration and Setup

These instructions assume that you already have a ROS workspace with the savi_ros_bdi package set up, as per the instructions in that repository. This means that you have a ROS workspace at ~/SAVI_ROS/rosjavaWorkspace which contains the savi_ros_bdi project, as described in the savi_ros_bdi README.

First, clone this repository to the src directory of your workspace.

$ git clone https://github.com/NMAI-lab/saviRoombaDebugger.git

Note that if this project were being built from scratch, the package could have been created using the following:

$ cd ~/SAVI_ROS/rosjavaWorkspace/src
$ catkin_create_pkg savi_ros_py std_msgs rospy roscpp
$ cd savi_ros_py 
$ mkdir scripts
$ mkdir asl
$ mkdir resources

The scripts folder holds the Python scripts used for publishing and subscribing to ROS topics. The asl folder is the location of the AgentSpeak programs. Lastly, the resources folder contains settings.cfg, which needs to be copied into the savi_ros_bdi package for it to correctly configure the agent. There is also a bash script called configProject, which moves this settings file to the correct location in the savi_ros_bdi package. To use it, you must first update line 6 of the script with the correct directory location of the savi_ros_bdi package. You should also check settings.cfg to confirm that the parameters are correct, most notably the location of the ASL file, the agent type, and the agent name. The script can be run at the command line without parameters.

$ ./configProject

To use the scripts, return to the workspace root directory, run catkin_make, and source the setup.bash file.

$ cd ~/SAVI_ROS/rosjavaWorkspace
$ catkin_make
$ source devel/setup.bash

Running

Before running the demo scripts, roscore and savi_ros_bdi.Main need to be running; please see the savi_ros_bdi README for instructions. It is then recommended that you run the listener first, followed by the talker. Each will need to be executed in its own terminal. See the list above for a description of the nodes.

$ rosrun saviRoombaDebugger testerMain.py

With these scripts running you will see details of the execution printed to the terminals. The talker prints the messages being sent to ROS; savi_ros_bdi receives these messages and then publishes the actions to be executed to the actions topic. The listener prints these messages to the terminal.
