Visual Intelligence (CS-503) @ EPFL. Hunting for Insights: Investigating Predator-Prey Dynamics through Simulated Vision and Reinforcement Learning
Course project for CS-503 (Visual Intelligence) at EPFL. This project investigates how different vision fields affect predator-prey interactions. By simulating simplified environments and training agents with reinforcement learning and self-play, we identify trends in the strategies and effectiveness of trained predator and prey agents that use varying vision fields.
Authors:
The natural world is full of fascinating and complex interactions between predators and prey, with each constantly adapting and evolving to survive. As researchers seek to better understand these dynamics, visual intelligence has emerged as a critical field of study, allowing us to gain new insights into how animals perceive and react to their environments. In this research project, we investigate the predator-prey setting by training agents in a simplified environment with obstacles, using different vision fields to simulate different types of prey and predators observed in the real world. The goal of this project is to examine emerging behaviours that can shed light on the strategies animals use to survive an attack or hunt prey, and to evaluate how different types of vision help or hinder predator and prey agents. Furthermore, we examine the psychology of chasing and how prey agents might learn to use occlusions in their environment to their advantage, avoiding the predator's line of sight and increasing their chances of survival.
- Install Unity (our project uses version 2021.3.24f1)
- Set up Python (preferably version 3.7)
- Clone the repository onto your local machine and open the project in Unity.
In the repository root, create a Python environment and run:
pip install -r requirements.txt
This will install all necessary packages to train your own agent on the environments using Unity's ML-Agents package.
If you are new to Unity, please follow any introductory tutorial to learn how to open a project and work with the Unity editor.
Since the project is mostly developed in Unity, the following screenshot shows the project structure:
To train an agent, you first build a scene and then train the agents in that scene with a chosen configuration. The configurations for training with self-play can be found in the "Predator-Prey/config" subfolder; for more information about the specific configuration options, you can read about it here. For an example of how to set up a project with ML-Agents from scratch, this link provides a great tutorial. Afterwards, you can use Python to train the agents as follows:
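For orientation, an ML-Agents self-play config generally has the shape below. This is an illustrative sketch, not the project's actual settings: the behavior name "Predator" and all values are placeholders, and the real files in "Predator-Prey/config" should be treated as the reference.

```yaml
behaviors:
  Predator:                  # placeholder behavior name; must match the Behavior Name set in Unity
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 256
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 5000000
    self_play:               # enables self-play training
      save_steps: 50000      # steps between snapshots of the policy
      swap_steps: 10000      # steps between swapping the opponent's snapshot
      team_change: 200000    # steps between switching the learning team
      window: 10             # number of past snapshots to sample opponents from
      play_against_latest_model_ratio: 0.5
```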
- Navigate to the "Predator-Prey" folder in your command line.
- Activate your Python environment
- Run
mlagents-learn CONFIG_PATH --env=BUILT_SCENE_PATH --run-id=ANY_RUN_IDENTIFIER --no-graphics
(you can add "--resume" if you wish to continue training from a previous checkpoint)
- Enjoy :)
To track your training progress, you can run TensorBoard in the same directory as follows:
- Navigate to the "Predator-Prey" folder in your command line.
- Activate your Python environment
- Run
tensorboard --logdir results
- Open the webpage printed to the console
- Enjoy even more :)
For inference, you can use the "inference" scene in Unity, which writes logs to the "Predator-Prey/inference_logs" folder. This folder also contains a file called "experiments.ipynb", which can be used to generate quantitative results from the generated logs. Qualitative results can be obtained by running the scenes in the Unity editor after dragging the trained model files onto the respective agent objects.
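The notebook aggregates the generated logs; as a minimal standalone sketch of the same idea, the snippet below computes a capture rate and mean episode length from a per-episode CSV. The column names (`episode`, `steps`, `caught`) are hypothetical and should be adapted to whatever the inference scene actually writes.

```python
import csv
import io
import statistics

# Hypothetical log format -- the real files in Predator-Prey/inference_logs
# may differ; adapt the column names to what the Unity scene actually writes.
sample_log = """episode,steps,caught
0,312,1
1,500,0
2,287,1
"""

def summarize(log_text):
    """Compute the predator's capture rate and the mean episode length."""
    rows = list(csv.DictReader(io.StringIO(log_text)))
    capture_rate = sum(int(r["caught"]) for r in rows) / len(rows)
    mean_steps = statistics.mean(int(r["steps"]) for r in rows)
    return capture_rate, mean_steps

rate, steps = summarize(sample_log)
print(f"capture rate: {rate:.2f}, mean episode length: {steps:.1f} steps")
```

To use it on real logs, read each file with `open(...)` and pass its contents to `summarize`.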