Project Website - John Asaro - Claire Carroll - Derin Gezgin
This repository includes our code for the final project of COM407: Computational Intelligence course. In this final project, we evolved a simple neural network agent via CMA-ES for the Planet Wars game.
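To illustrate the core idea — treating the network's weight vector as a genome and evolving it against a fitness function — here is a minimal, self-contained sketch. It uses a stripped-down evolution strategy rather than full CMA-ES (which additionally adapts a covariance matrix and step size), and the `evaluate` function is a made-up stand-in for playing Planet Wars games:

```python
import random

def evaluate(weights):
    # Hypothetical fitness: the real project scores a weight vector by
    # playing Planet Wars games with the corresponding neural network.
    # Here the optimum is simply the all-0.5 vector.
    return -sum((w - 0.5) ** 2 for w in weights)

def simple_es(dim=4, popsize=16, generations=30, sigma=0.3, seed=0):
    """Stripped-down evolution strategy: sample around a mean, keep the
    best quarter, recenter, and decay the step size. Only an
    illustration of the weights-as-genome loop, not CMA-ES itself."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(generations):
        population = [[m + rng.gauss(0, sigma) for m in mean]
                      for _ in range(popsize)]
        population.sort(key=evaluate, reverse=True)  # best first
        elite = population[: popsize // 4]
        mean = [sum(ind[i] for ind in elite) / len(elite)
                for i in range(dim)]
        sigma *= 0.95  # simple step-size decay

    return mean

best = simple_es()
```

The actual training loop in `train_nn.py` follows the same ask-evaluate-tell pattern, with CMA-ES proposing the candidate weight vectors.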
Requirements: Python 3.10+, Java JDK 21, Git, Bash
```bash
git clone https://github.com/deringezgin/COM407-FinalProject
cd ci_final
./setup.sh
```

The setup script will create a Python virtual environment, clone the Planet Wars source code, apply our patch for GUI support, install the Python dependencies, and build the Planet Wars app.
If you already have your own virtual environment, or prefer not to use one, you can run the setup script with the noenv flag:
```bash
./setup.sh noenv
```

You can also run our project in a Docker container:
```bash
git clone https://github.com/deringezgin/COM407-FinalProject
cd ci_final
docker build -t planet-wars-ci .
```

To run it with a GUI, start the container with display forwarding:
```bash
docker run -it -e DISPLAY=host.docker.internal:0 planet-wars-ci /bin/bash
```

If display forwarding is not supported on your device, you can run our agent in headless mode, as described in the Running the Trained Agent section.
To train a neural network agent, run the train_nn.py script. You can specify a .yaml config file with the --config flag; the default is config1.yaml:
```bash
python3 train_nn.py --config config1.yaml
```

The training script will read the config file, evolve the network weights via CMA-ES, and save the training progress (the solution and fitness of each individual) along with the used config into a timestamped SQLite database in the data/ folder.
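The exact schema that train_nn.py writes is not documented here, so the table and column names below are assumptions; this sketch only illustrates the "one row per individual, plus the config" storage pattern using Python's built-in sqlite3 module:

```python
import json
import sqlite3

# Hypothetical schema -- the real database written by train_nn.py may
# use different table and column names.
conn = sqlite3.connect(":memory:")  # a real run would use a timestamped file
conn.executescript("""
CREATE TABLE config (yaml TEXT);
CREATE TABLE individuals (
    generation INTEGER,
    individual INTEGER,
    solution   TEXT,   -- JSON-encoded weight vector
    fitness    REAL
);
""")
conn.execute("INSERT INTO config VALUES (?)", ("popsize: 16\n",))
rows = [
    (0, 0, json.dumps([0.1, -0.2]), 0.40),
    (0, 1, json.dumps([0.3,  0.2]), 0.75),
    (1, 0, json.dumps([0.2,  0.1]), 0.60),
]
conn.executemany("INSERT INTO individuals VALUES (?, ?, ?, ?)", rows)
conn.commit()

# Recover the best individual across all generations.
best = conn.execute(
    "SELECT solution, fitness FROM individuals ORDER BY fitness DESC LIMIT 1"
).fetchone()
```

Storing every individual (not just the champion) is what later lets extract_agent.py pull out any generation or individual on demand.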
To run the trained agent, first extract a solution from the training databases into a .npy file using the extract_agent.py script:
```bash
python3 extract_agent.py
```

By default, this script scans all .sqlite3 databases in the data/ folder, picks the best individual, and writes it to sharp_agent_weights.npy. You can select a specific database, generation, individual, and output file via the --db, --generation, --individual, and --outfile flags.
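Conceptually, the extraction step boils down to an argmax over fitness followed by a NumPy dump. The candidate list below is made up (the real script reads these pairs from the SQLite databases in data/):

```python
import numpy as np

# Made-up (weights, fitness) pairs standing in for database rows.
candidates = [
    (np.array([0.1, -0.2, 0.3]), 0.42),
    (np.array([0.5,  0.1, -0.4]), 0.91),
    (np.array([-0.3, 0.2, 0.0]), 0.67),
]

# Pick the individual with the highest fitness and save its weights.
best_weights, best_fitness = max(candidates, key=lambda c: c[1])
np.save("sharp_agent_weights.npy", best_weights)

# The agent can later reload the vector with np.load.
loaded = np.load("sharp_agent_weights.npy")
```

The .npy format keeps the dtype and shape intact, so the agent can reload the weight vector without any extra parsing.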
After extracting the agent, simply run the ./run_sharp_agent.sh script:
```bash
./run_sharp_agent.sh
```

This script will add the Planet Wars Python bindings to PYTHONPATH, activate the .venv if it exists, restart the Python game server, and run our trained agent against the greedy heuristic agent.
To evaluate the trained agent in headless mode, run the same script with the headless flag:

```bash
./run_sharp_agent.sh headless
```

This runs the run_agents.py script to play a set of games between our trained agent and the greedy heuristic agent.
We also include a simple benchmarking script to compare the baseline agents. To run a benchmark and save the game results into a CSV file in the benchmarks/ folder, use the benchmarks/run_benchmark.py script:
```bash
python3 benchmarks/run_benchmark.py --agent1 pure --agent2 greedy --n-games 100000
```

The script will run the requested number of games and write per-game information (winner, planet counts, ship counts) to a .csv file. You can get a simple analysis of these outputs via the benchmarks/analyze_benchmark.py script.
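The analysis step amounts to tallying the winner column of the CSV. The column names below are assumptions (the headers written by run_benchmark.py may differ), but the pattern with Python's built-in csv module is the same:

```python
import csv
import io
from collections import Counter

# Hypothetical per-game output; a real run would open the .csv file
# from the benchmarks/ folder instead of this inline sample.
sample_csv = """winner,planets_agent1,planets_agent2,ships_agent1,ships_agent2
agent1,5,2,120,40
agent2,1,6,30,150
agent1,4,3,90,70
"""

wins = Counter(row["winner"] for row in csv.DictReader(io.StringIO(sample_csv)))
total = sum(wins.values())
win_rate = {agent: count / total for agent, count in wins.items()}
```

Over a large number of games (such as the 100000 above), these win rates give a stable head-to-head comparison between two agents.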
After you complete the training and have a .sqlite3 database in the data/ folder, you can generate the fitness plot by running:
```bash
python3 plot_runs.py
```

The plot will be saved into the plots/ folder.
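A minimal sketch of what such a fitness plot looks like with matplotlib, using a made-up fitness trajectory (plot_runs.py would read the real per-generation values from the .sqlite3 databases in data/):

```python
import os

import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt

# Made-up best-fitness-per-generation values for illustration only.
best_fitness = [0.20, 0.35, 0.50, 0.62, 0.70, 0.74]

os.makedirs("plots", exist_ok=True)
plt.plot(range(len(best_fitness)), best_fitness, marker="o")
plt.xlabel("generation")
plt.ylabel("best fitness")
plt.title("Training progress (sample data)")
plt.savefig("plots/fitness.png")
```

Using the Agg backend makes the plotting work in headless setups (such as the Docker container without display forwarding).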