This starter kit is part of the Airlift Challenge, a competition in which participants must design agents that can plan and execute an airlift operation. Quick decision-making is needed to rapidly adjust plans in the face of disruptions along the delivery routes. The decision-maker will also need to incorporate new cargo delivery requests that appear during the episode. The primary objective is to meet the specified deadlines, with a secondary goal of minimizing cost. Solutions can incorporate machine learning, optimization, path planning heuristics, or any other technique.
This repository provides a template for you to create your own solution, test it, and prepare a submission.
- For more information about the competition: see the documentation.
- The simulator can be found here.
- For submissions and to participate in the discussion board: see the competition platform on CodaLab.
A) Install Anaconda. We use Anaconda to create a virtual environment.
B) Clone the starter kit repo.
$ git clone https://github.com/airlift-challenge/airlift-starter-kit
$ cd airlift-starter-kit
C) Create the airlift-solution Anaconda environment:
$ conda env create -f environment.yml
$ conda activate airlift-solution
Optionally, you may want to install the core airlift simulator code from source to allow for easier debugging. This can be done by commenting out the simulator pip requirement in environment.yml and installing from source as described in the Airlift Simulator README.
We provide a Solution class that you can extend to build your own solution. Using this solution class is a requirement for your submission. You need to fill in two methods:
- reset. Resets the solution code in preparation for a new episode.
- policies. Takes in a set of observations for all agents and returns an action for each. It is important that your solution rely only on the information found in the observations and not attempt to access internal environment attributes.
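In outline, a solution might look like the sketch below. The import path, the method signatures, and the use of None as a no-op action are assumptions based on the random agent example; check the Essential API documentation for the exact interface.

```python
# Minimal skeleton of a Solution subclass. The import path, method
# signatures, and None-as-no-op are assumptions -- confirm them against
# the Essential API docs and the provided random agent example.
from airlift.solutions import Solution  # assumed import path


class MySolution(Solution):
    def reset(self, obs, observation_spaces=None, action_spaces=None, seed=None):
        super().reset(obs, observation_spaces, action_spaces, seed)
        # Re-initialize any per-episode state here (plans, caches, RNGs, ...).

    def policies(self, obs, dones, infos):
        # Build one action per agent, using only the contents of `obs`;
        # never reach into internal environment attributes.
        return {agent: None for agent in obs}  # None assumed to act as a no-op
```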
The starter kit provides an example random agent solution which you can modify to produce your own. You may also want to reference our baseline, which implements a simple agent that follows shortest paths (see the sketch below).
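The core idea of such a shortest-path baseline can be sketched as follows. The "route_map" observation key and the graph type are assumptions made for illustration (the Interface documentation describes the actual observation contents); networkx is used here only as a convenient graph library.

```python
# Illustration of the shortest-path idea behind the baseline. The
# "route_map" key and the networkx graph type are assumptions, not the
# actual observation schema.
import networkx as nx

def shortest_route(global_obs, source, destination):
    graph = global_obs["route_map"]  # assumed: a networkx route graph
    # Returns the node sequence of a shortest path; pass a weight attribute
    # (e.g. weight="cost") if edges carry costs you want to minimize.
    return nx.shortest_path(graph, source, destination)
```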
For more information, you can view the following sections in the documentation.
- Model. Formulates the model implemented by the simulator.
- Interface. Provides information regarding the observation and action spaces, as well as rewards and metrics.
- Essential API. Documents key classes and methods you may need to interact with.
- Solutions. Provides some background on previous solutions to the airlift and other related problems.
- Simulator Code Documentation. Although you do not need to understand the internals of the simulator to write a solution, it could help with debugging.
Download the test scenarios and unzip the contents into the scenarios folder.
Then, perform the evaluation by running:
$ python eval_solution.py
Optionally, you may specify a different scenario folder by passing the folder name as a parameter:
$ python eval_solution.py --scenarios scenariofolder
The test set is similar to the hidden scenarios that will be used for the final evaluation.
The evaluator will output two csv files:
- breakdown_results.csv. Provides details regarding each episode.
- results_summary.csv. Provides a summary of the overall evaluation and score.
You can also run a single scenario for debugging purposes as follows:
$ python eval_solution.py --scenarios scenarios/Test_0/Level_0.pkl
The script will output two csv files:
- env_info_TIMESTAMP.csv. Provides details regarding the episode.
- metrics_TIMESTAMP.csv. Provides a summary of the metrics at each step.
Rather than running the environment against a set of pre-generated scenarios, you may also instantiate an environment in Python with custom scenario parameters. This can be useful for debugging (to avoid the overhead of generating scenario files), as well as for generating training scenarios for machine learning solutions. An example is provided in run_custom_scenario.py, which can be run with the following command:
$ python run_custom_scenario.py
The script will output the same two csv files as when evaluating a single scenario (see above). Example renderings can be seen on the Generating Scenarios page.
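In outline, the flow looks roughly like the sketch below. The constructor arguments and the PettingZoo-style reset/step signatures are assumptions; refer to run_custom_scenario.py for the actual API.

```python
# Rough sketch of driving a custom environment with a solution, assuming a
# PettingZoo-style parallel interface (dicts keyed by agent). Class names
# and signatures are assumptions; run_custom_scenario.py shows the real ones.
from airlift.envs import AirliftEnv  # assumed import path
from solution.mysolution import MySolution  # your solution class

env = AirliftEnv(...)  # fill in your custom scenario parameters here
solution = MySolution()

obs = env.reset(seed=123)
solution.reset(obs, seed=123)

dones = {agent: False for agent in obs}
while not all(dones.values()):
    actions = solution.policies(obs, dones, infos=None)
    obs, rewards, dones, infos = env.step(actions)
```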
When you submit your code to CodaLab, it will run in a Docker evaluator on our competition server. To perform a more thorough test of your code in a server-like environment, you can use the Docker Evaluator, which mimics the official competition platform on CodaLab. See the Docker Evaluator instructions for more information.
Once you are finished with your solution code, you can produce a zip file archive and make a submission at the competition platform on CodaLab. Your submission should contain the following files:
| File/Directory | Description |
|---|---|
| postBuild | Specify any additional commands that need to be run when building the Docker image. The default postBuild only installs the core "airlift" simulator package. This is also where you would specify any apt packages. This file replaces apt.txt. |
| environment.yml | File containing the list of Python packages to install for the submission to run. This should instantiate a conda environment named airlift-solution. |
| solution/mysolution.py | Contains your agent policy code. |
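For example, from the repository root you could package a submission with a command along these lines (the exact archive layout expected by CodaLab is an assumption; confirm against the submission instructions there):
$ zip -r submission.zip postBuild environment.yml solution/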
Distribution Statement A: Approved for Public Release; Distribution Unlimited: Case Number: AFRL-2022-5074, CLEARED on 19 Oct 2022