To simulate the project, follow these instructions:
In a new terminal, go to your catkin workspace and clone the repository:
$ git clone https://github.com/PanosMallioris/Autonomous-Marine-Exploration-Water-Simulation-ROS-GAZEBO.git
If `catkin_make` is successful, proceed.
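The clone-and-build step might look like the sketch below. The workspace path `~/catkin_ws` is an assumption; adjust it to match your setup.

```shell
# Assumed workspace location -- adjust to your setup.
cd ~/catkin_ws/src
git clone https://github.com/PanosMallioris/Autonomous-Marine-Exploration-Water-Simulation-ROS-GAZEBO.git
cd ~/catkin_ws
catkin_make                 # build the newly cloned packages
source devel/setup.bash     # make the packages visible to ROS in this shell
```

Remember to `source devel/setup.bash` in every new terminal you open, or the packages will not be found.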
In a new terminal
This should open the simulation world.
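The launch command typically has the shape below. The package and launch-file names here are placeholders, not the repository's actual names; substitute the ones from this repository.

```shell
# Placeholder names -- substitute the actual package and launch file from the repo.
roslaunch <simulation_package> <simulation_world>.launch
```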
Try
and then launch again.
You will have to spawn the vessel at the desired position through the launch file.
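Spawning into Gazebo is commonly done with the `spawn_model` node from the `gazebo_ros` package. A sketch, assuming the vessel's URDF is loaded on the `robot_description` parameter; the model name and pose are illustrative, not taken from the repository:

```shell
# Illustrative only: model name and pose are placeholders.
rosrun gazebo_ros spawn_model -urdf -param robot_description \
    -model vessel -x 0 -y 0 -z 0.5
```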
In a new terminal
If you want to run the Python script for the autonomous exploration, make sure you have pressed the play button in Gazebo. (Note: the Python script is tailored to this particular map's proportions and size; it is not perfect, but it is good enough for a proof of concept.)
To stop the script, press Ctrl+C in the terminal.
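Running the script typically takes the form below; the package and script names are placeholders for the ones in this repository.

```shell
# Placeholder names -- substitute the actual package and exploration script.
rosrun <package_name> <exploration_script>.py
```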
If you want to teleoperate the robot through the world,
follow the same instructions, but change
to
and run the python script
instead.
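If the repository's own teleop script is unavailable, the standard `teleop_twist_keyboard` package is a common stand-in. This is a substitute for the repository's script, not part of it, and the `cmd_vel` topic remapping is an assumption about the vessel's velocity topic:

```shell
# Substitute for the repo's teleop script; the topic name is an assumption.
rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=/vessel/cmd_vel
```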
To implement object recognition, you will have to download some extra packages. In a new terminal:
$ git clone https://github.com/ros-autom/find-object.git
If the `catkin_make` is successful, we will run the object recognition nodes.
In a new terminal:
This is the object recognition application; it subscribes to your laptop's camera. You can add and choose objects for recognition.
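With the `find-object` package built, the node is commonly started as below. The `find_object_2d` node name comes from the package itself; the camera topic is an assumption that depends on your camera driver (e.g. `usb_cam`):

```shell
# The camera topic is an assumption -- check `rostopic list` for the real one.
rosrun find_object_2d find_object_2d image:=/usb_cam/image_raw
```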
If you want to recognize the human, you will have to spawn the human in the map
and change the camera subscription to the vessel's camera this time.
So you will have to run this instead:
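Switching to the vessel's camera is just a different topic remapping on the same node. The vessel camera topic name below is an assumption; list the running topics with `rostopic list` to find the real one:

```shell
# The vessel camera topic is an assumption -- verify it with `rostopic list`.
rosrun find_object_2d find_object_2d image:=/vessel/camera/image_raw
```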
Note that in order to recognize the human, you have to add their picture first: add `human.png` from `auv/map` to your find-object application.