ASSET2 - Airport Surface Simulator and Evaluation Tool 2
ASSET2 is a generic airport simulation tool for research purposes. It is designed to support multiple airports and to test and evaluate customized schedulers. Please check out our paper for more information.
This tool is built for the Carnegie Mellon University MSIT Practicum Project and a Master's Independent Study sponsored by the NASA Ames Research Center.
Prepare airport data
Place airport-related data under the data folder, like data/sfo-terminal-2/build/ (use the IATA airport code).
If you're on Ubuntu:
$ sudo apt-get update
$ sudo apt-get install -y python3-pip
$ mkdir -p ~/.config/matplotlib/
$ echo "backend : Agg" >> ~/.config/matplotlib/matplotlibrc
$ pip3 install -r requirements.txt
$ python3 simulator.py -f plans/base.yaml
$ python3 simulator.py -f batch_plans/simple-uc.yaml
Execution under Virtual Environment
If you don't want to install the dependencies system-wide, you may want to use a virtual environment, where the dependencies are installed under this project folder.
$ python3 -m venv env                           # create a new virtual environment
$ source env/bin/activate                       # activate the virtual environment
$ python3 -m pip install -r requirements.txt    # install dependencies locally
$ python3 simulator.py -f plans/base.yaml       # execute the simulation
$ python3 -m unittest discover tests            # all tests
$ python3 -m unittest tests/test_scheduler.py   # single test
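Tests are picked up automatically by `discover` when they follow the standard `unittest` layout; a minimal sketch (the class name and assertion here are illustrative, not taken from the repo):

```python
import unittest

class TestSomething(unittest.TestCase):
    """Illustrative test case; real tests live under tests/."""

    def test_sorting(self):
        # Any plain unittest-style assertion works
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])
```

A file like this placed under tests/ runs with both of the commands above.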
$ pycodestyle --show-pep8 --show-source .
$ ls -1 *py scheduler/*py | xargs pylint   # optional but recommended
$ python3 visualization/server.py
$ pydoc <python-file-name-without-.py>
The following steps are suggested for launching a successful experiment systematically.
Compose and launch a single plan to find out (a) the upper bound of the value of the experimental variable and (b) the execution time for a single run.
$ time ./simulator.py -f plans/<upper-bound-to-try>.yaml
Use the visualization tool on the single plan you launched in step one to see if things are working as expected. For example, you should check whether the aircraft are busy enough to produce a meaningful plot.
Using the execution time and upper bound information collected in the previous steps, we can then launch a batch run with try_until_success: False. The execution time of this batch run can then be estimated.
Using the execution time and failure rate information from the previous steps, we can then launch a batch run with try_until_success: True to obtain meaningful final results.
Please ALWAYS follow PEP 8 -- Style Guide for Python Code for readability and consistency.
The default logging level is set in simulation.py. Please initialize logging for each class in __init__ like this:
self.logger = logging.getLogger(__name__)
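Putting the pattern together, a class sets up its own logger in `__init__` (the `Scheduler` class below is just an illustration; the basicConfig call stands in for the level configured in simulation.py):

```python
import logging

# In the real project the level is set in simulation.py
logging.basicConfig(level=logging.DEBUG)

class Scheduler:
    """Illustrative class; any simulator class follows the same pattern."""

    def __init__(self):
        # Named after the module, so log lines show where they came from
        self.logger = logging.getLogger(__name__)

    def schedule(self):
        self.logger.debug("schedule() called")
        return []
```

Because the logger is named after the module, log output can be filtered per component.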
Set a breakpoint in this way:
import pdb; pdb.set_trace()
Also, please refer to our Google Map for debugging the details.
For consistency, the following units are used everywhere in the code:
Time: second
Length: ft
The routing table calculated by the routing expert will be cached, so please make sure every object in the routing table can be dumped into a binary pickle (e.g., a logger can't be dumped). Note that the cache may cause errors or bugs when stale data is used.
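One standard pickle idiom for keeping unpicklable attributes (like a logger) out of the dump is the `__getstate__`/`__setstate__` pair; a sketch (the `Node` class is illustrative, not code from the repo):

```python
import logging
import pickle

class Node:
    """Illustrative routing-table entry that carries a logger."""

    def __init__(self, name):
        self.name = name
        self.logger = logging.getLogger(__name__)

    def __getstate__(self):
        # Drop the logger before pickling
        state = self.__dict__.copy()
        del state["logger"]
        return state

    def __setstate__(self, state):
        # Restore state and recreate the logger on load
        self.__dict__.update(state)
        self.logger = logging.getLogger(__name__)

restored = pickle.loads(pickle.dumps(Node("A1")))
```

The round trip above works because only picklable state reaches the dump; the logger is rebuilt on load.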
Simulation time (sim_time) indicates the time that passes in each tick(), and it can be accessed globally from any place as follows:
from clock import Clock
self.logger.debug("sim time is %s", Clock.sim_time)
To speed up the simulation, we can apply some profiling techniques to locate the slow code. Add the @profile decorator at the beginning of the function you want to profile, then run the following commands to obtain a report of the execution time of each line within the function.
$ kernprof -l ./simulator.py -f <your_plan>.yaml
$ python3 -m line_profiler simulator.py.lprof
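Since `profile` is only injected as a builtin while the script runs under kernprof, a no-op fallback keeps the code runnable in a normal run as well; a sketch (the `tick` function below is illustrative, not from the repo):

```python
try:
    profile  # injected as a builtin by kernprof
except NameError:
    def profile(func):
        # No-op fallback so the code still runs without kernprof
        return func

@profile
def tick(delays):
    """Illustrative hot loop to be profiled line by line."""
    total = 0
    for delay in delays:
        total += delay
    return total
```

With the fallback in place, the same file works both under `kernprof -l` and under plain `python3`.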