Run the Project
This setup relies on a few pieces. Due to Docker networking constraints on Windows, we need to set up mavlink-router to route MAVLink messages between the drone, the ground station, and the agent code. Additionally, we will set up for SITL mode, which means launching a PX4 SITL. Lastly, we need the base station (or the agent) running standalone, depending on which one is to be debugged in CLion. These steps are shown below; order does matter.
Back in PowerShell, we need to open a dedicated window for the mavlink-router executable. This will not create a new container; rather, it gives you a new bash prompt into the already-created container.
- In PowerShell, run:
.\run_dev.sh <path to repo-root>
- In the new git-bash window, execute the rest of the commands.
- Find the docker host IP address:
getent hosts host.docker.internal | awk '{ print $1 }'
- Execute
mavlink-routerd -e <docker host ip address>:14550 127.0.0.1:14550
At this point the mavlink router is running. QGroundControl and the agent (once the SITL is started below) will be able to connect to the autopilot. Note that the router can sometimes crash when disconnects occur; if that happens, simply re-run the mavlink-routerd command above.
Note: It is possible to configure mavlink-router with a configuration file. We are not doing that here, so you may see an error; ignore it.
Like the container, the first time this command is run it can take a while. However, subsequent executions should be quick.
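The getent/awk pipeline above simply extracts the first whitespace-separated field of the lookup output and splices it into the router command. For illustration only, the same extraction can be sketched in Python (the helper names here are hypothetical, not part of the project):

```python
def parse_getent_ip(getent_line: str) -> str:
    # getent prints "ADDRESS  hostname [aliases...]"; keep field 1,
    # exactly like awk '{ print $1 }'
    return getent_line.split()[0]


def router_command(host_ip: str) -> str:
    # -e adds an endpoint for QGroundControl on the Windows host;
    # 127.0.0.1:14550 is the local endpoint used by the agent/SITL side
    return f"mavlink-routerd -e {host_ip}:14550 127.0.0.1:14550"
```

For example, `parse_getent_ip("192.168.65.2 host.docker.internal")` yields `"192.168.65.2"`, which then becomes the `-e` endpoint address.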
- Open a new bash window. See the mavlink-router setup for instructions on opening a new bash prompt.
- Navigate to the PX4 repo, which is downloaded by default in the docker build process, via:
cd ~/px4
- Next, execute the SITL; this can be run in headless mode as well:
make px4_sitl gazebo_iris_opt_flow
or, for headless mode:
HEADLESS=1 make px4_sitl gazebo_iris_opt_flow
Now we have the mavlink-router and the SITL running. Next we can set up our code base and get it connected to this system as well!
Our system is made up of three component types: Agent(s), a Base Station, and Controller(s). We must first start the base station before any other components. Below shows one option: run the base station, debug the agent. Of course, these roles can be flipped, or you can choose to run both (outside CLion, after building each component).
This section will describe building a base station node.
- Open a new bash window. See the mavlink-router setup for instructions on opening a new bash prompt.
- Update submodules
git submodule update --init --recursive
- Open CLion:
clion &
Now we can modify the project source to prepare for a Base Station build.
At the top of main.cpp we have a #define called BASE_MODE. When set to 1, it configures the project to build a base station executable. See below:
As you can see, this is configured with 0, so it will actually build an Agent node. Change it to 1 and build; you may also need to reset/reload the CMake project first. These steps are shown below:
Click the green hammer to build.
Once the build completes, we can copy out the executable for manual execution.
- In the bash prompt, execute the following:
cd ~/ && cp ~/hovergames2/code/bin/HoverGames2 ./
- Now we can execute the base station with the following command:
cd ~/ && ./HoverGames2
Now we can set up CLion to build the agent.
Note: This cannot be run until the base station is running.
- Simply change the #define from:
#define BASE_MODE 1
to:
#define BASE_MODE 0
- Build the project, like we did above.
Now that we have a built project, either copy out the executable (with a name change so it doesn't overwrite the base station executable) or simply press the debug/run icon in the upper right-hand corner.
Lastly we want to run a version of the controller. We have several versions, as listed below:
- manual_controller.py - a CLI interface that lets the user manually type position commands to send to the drone.
- User-emulated controller - an arrow-key interface that runs the full scene simulation and allows a user to directly control the position of the Agent with arrow key commands
- Rule based controller - The full scene simulation with a rule based controller (fully autonomous)
- Reinforcement Learning Controller - The full scene simulation running a trained RL model to decide where the Agent should move to (fully autonomous).
The below sections will describe how to setup and use each controller listed above.
As mentioned above, this controller directly exercises the Python controller interface by allowing the user to set a position command and then turn on the repetitive sending of that command to the Agent via the base station.
The easiest way to execute this version of the Controller is to use Clion and execute the python code within Clion.
Note: This cannot be run until the base station and Agent are running.
- In CLion, open the folder structure to:
/home/user/hovergames2/code/src/system/controller/python
- Open the file manual_controller.py
- Right-click on the code and select Run.
Once the code is running you should be met with an input prompt: Enter a command:
At this point you are ready to get started; try typing help and hitting enter to see a list of commands. Below I will show how to get the base case going:
- In the program CLI prompt, type connect. After a second or so it should connect to the base station and display some information, like the node ID.
- Now type pos <agent id> <x> <y> <z> and hit enter. This sets the position command that will be sent in the next step.
- Now we want to turn on a position thread that sends the position to the drone: type pos start and hit enter.
- Navigate to your running Agent and observe the drone take off and fly to the set point (minus the base-station-to-drone offset found in the BaseStation.cpp code).
- Now try sending a new position: in the CLI prompt, type pos <agent id> <different x> <different y> <different z> and press enter. The drone should immediately change its set point and fly to the new location.
- When we are done, type shutdown to stop sending position commands and allow the drone to return to launch. About 2 seconds after shutdown is executed, the Agent should begin its return-to-launch landing sequence.
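The CLI session above follows a simple pattern: read a line, split it into a command name and arguments, then act on it. A minimal sketch of that parsing step (hypothetical; the actual manual_controller.py may implement this differently):

```python
def parse_command(line: str):
    """Split a CLI line like 'pos 1 5 5 -2' into (name, args).

    Numeric arguments are converted to floats; non-numeric tokens
    (e.g. the 'start' in 'pos start') stay as strings.
    """
    parts = line.strip().split()
    if not parts:
        return None, []
    name, raw_args = parts[0], parts[1:]
    args = []
    for token in raw_args:
        try:
            args.append(float(token))
        except ValueError:
            args.append(token)
    return name, args
```

For example, `parse_command("pos 1 5 5 -2")` yields `("pos", [1.0, 5.0, 5.0, -2.0])`, while `parse_command("connect")` yields `("connect", [])`.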
At this point the entire system (minus the AI) has been exercised. See the below sections to exercise the AI components.
- Install the gym_scarecrow environment package in editable mode:
pip install -e /home/user/hovergames2/code/src/system/controller/gym_scarecrow/
- Blue Circle: UAV Agent
- Red Circle: Wild Pig Subject
- Black Square: Protected Area Boundary
- Objective: Control the blue UAV in a way that spooks the red pigs in a direction opposite the protected area, preventing any breaches
- In CLion, open the folder structure to:
/home/user/hovergames2/code/src/system/controller/gym_scarecrow/gym_scarecrow/
- Open the file params.py. This is where all parameters are set.
- Suggested parameters to experiment with:
- Algorithm selection:
ALGORITHM = 'Rules' # 'Qlearn', 'Human', 'Rules' | Coming Soon: 'DQN', 'PPO'
- Hardware integration: HARDWARE = True. CAUTION: make sure all hardware instructions are followed all the way through completing the Agent setup first.
- Number of threats in the scene:
SUBJECT_COUNT = 100 # pigs
- All of the boids algorithm parameters:
SUBJECT_FORCE = 0.5
SUBJECT_SPEED = 5
SUBJECT_PERCEPTION = 60
SPOOK_DISTANCE = 60
SPOOK_FORCE = 100
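The boids parameters above govern a classic flocking model: each pig steers based on neighbors within SUBJECT_PERCEPTION, flees when the UAV comes within SPOOK_DISTANCE, and has its steering capped by SUBJECT_FORCE and its velocity by SUBJECT_SPEED. A minimal 2-D sketch of the spook reaction (illustrative only; the real simulation's math may differ):

```python
import math

SUBJECT_FORCE = 0.5   # max steering force applied per step
SPOOK_DISTANCE = 60   # UAV proximity that triggers the flee reaction
SPOOK_FORCE = 100     # raw weight of the flee force before clamping


def spook_force(pig, uav):
    """Force pushing a pig directly away from the UAV when it is too close."""
    dx, dy = pig[0] - uav[0], pig[1] - uav[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist >= SPOOK_DISTANCE:
        return (0.0, 0.0)  # UAV too far away (or exactly on top): no reaction
    # Flee straight away from the UAV, weighted by SPOOK_FORCE...
    fx, fy = (dx / dist) * SPOOK_FORCE, (dy / dist) * SPOOK_FORCE
    # ...then clamp to SUBJECT_FORCE like any other boids steering force
    mag = math.hypot(fx, fy)
    if mag > SUBJECT_FORCE:
        fx, fy = fx / mag * SUBJECT_FORCE, fy / mag * SUBJECT_FORCE
    return (fx, fy)
```

With these values, a pig 10 units to the UAV's right gets the maximum clamped push away from it, while a pig beyond SPOOK_DISTANCE is unaffected.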
- In CLion, open the folder structure to:
/home/user/hovergames2/code/src/system/controller/gym_scarecrow/gym_scarecrow/
- Open the file params.py
- Modify:
ALGORITHM = 'Human'
- Right-click on the code and select Run.
- Use the arrow keys to control the UAV
- Up Arrow Key: forward
- Down Arrow Key: backward
- Right Arrow Key: right
- Left Arrow Key: left
- In CLion, open the folder structure to:
/home/user/hovergames2/code/src/system/controller/gym_scarecrow/gym_scarecrow/
- Open the file params.py
- Modify:
ALGORITHM = 'Rules'
- Right-click on the code and select Run.
- Sit back and watch the rules in action.
- In CLion, open the folder structure to:
/home/user/hovergames2/code/src/system/controller/gym_scarecrow/gym_scarecrow/
- Open the file params.py
- Modify:
ALGORITHM = 'Qlearn'
- Modify:
Train = 'True'
  - True: trains a new policy
  - False: runs an existing policy from the path defined by PLAY_QTABLE = '20210214-011253/qtable.npy'
- Right-click on the code and select Run.
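PLAY_QTABLE points at a NumPy array saved during training under a timestamped directory. The save/load round trip can be sketched as follows (the 10x10x4 table shape and the learning constants are illustrative assumptions, not the project's actual discretization):

```python
import os
import tempfile

import numpy as np

# Hypothetical discretization: a 10x10 grid of states and 4 actions
qtable = np.zeros((10, 10, 4))

# One Q-learning update:
#   Q[s, a] += lr * (reward + gamma * max_a' Q[s', a'] - Q[s, a])
lr, gamma = 0.1, 0.95
s, a, s_next, reward = (2, 3), 1, (2, 4), 1.0
qtable[s][a] += lr * (reward + gamma * qtable[s_next].max() - qtable[s][a])

# Training runs save the table to a path like '20210214-011253/qtable.npy'
path = os.path.join(tempfile.mkdtemp(), "qtable.npy")
np.save(path, qtable)

# Replaying a stored policy loads the table back and acts greedily
loaded = np.load(path)
best_action = int(loaded[s].argmax())
```

Running with training disabled corresponds to the load-and-argmax step at the end: the agent simply picks the highest-valued action for its current discretized state.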
- A better interface and explanation of how to construct and/or tune a new reinforcement learning agent
Issue: The PX4 SITL reports no local position, and/or the drone repeatedly tries to take off but doesn't reach full altitude (2 m) before the failsafe goes into effect and the drone lands.
Solution: Restart the SITL, Agent, Controller, and base station.
Issue: Python scripts error out saying a package cannot be found, and that package is part of our system (its import path starts with src...).
Solution: Make sure to export the Python path such that it points at the root of the repository:
export PYTHONPATH=$PYTHONPATH:/home/user/hovergames2/code
OR:
export PYTHONPATH=$PYTHONPATH:/home/user/hovergames2/code/src/system/controller/gym_scarecrow/