Note: This repository is based on a branch of Unity's ml-agents repository.
This is an AI and Machine Learning project built with Unity ML-Agents! The project aims to explore, test, and optimize Deep Reinforcement Learning (DRL) algorithms for controlling agents in the Unity game engine. We will develop custom sensors, experiment with environments, and document our results.
- Introduction
- Objectives
- Project Description
- Getting Started
- Installation Guide
- Project Phases
- Risk Analysis
- Usage Instructions
- Experiments
- References and Resources
- Project Contributors
This project is about putting Deep Reinforcement Learning (DRL) to work in Unity. We will use Unity's ML-Agents Toolkit (Unity-Technologies, n.d.) to build agents that can fulfill different tasks in 3D games.
The main objectives of this project are as follows:
- Implement Deep Reinforcement Learning (DRL) using the ML-Agents Toolkit, train agents in pre-made 3D games, and fine-tune the models to achieve better performance
- Design and implement new sensor inputs for agents in Unity to enhance their ability to perceive and interact with the environment, and make the pre-made models better mimic the real world
- Evaluate the performance of the resulting algorithms using metrics such as training time, resource usage, and the frame rate of the simulation, and optimize agent behaviour by exploring different environment complexities, sensor configurations, and hyperparameters
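As a toy sketch of the kind of evaluation meant in the last objective (all function names and numbers below are invented for illustration, not taken from the project), metrics such as mean episode reward and simulation throughput can be aggregated like this:

```python
def summarize_run(episode_rewards: list[float], total_steps: int, elapsed_s: float) -> dict:
    """Aggregate simple evaluation metrics for one training run."""
    return {
        "mean_reward": sum(episode_rewards) / len(episode_rewards),
        "steps_per_second": total_steps / elapsed_s,  # rough proxy for frame rate / compute cost
    }

metrics = summarize_run([0.5, 1.0, 1.5], total_steps=30_000, elapsed_s=60.0)
```

Comparing such summaries across runs is what makes the hyperparameter experiments later in this README comparable.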
For this project, we will use the Unity game engine with the ML-Agents Toolkit to create a 3D simulation environment. Unity will handle the visual aspects and the in-game logic, while ML-Agents, written in Python, will manage the decision-making process. By combining the two, we will explore and analyse DRL techniques in real-time simulations and tune the agents' hyperparameters through thorough testing.
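This division of labour can be pictured with a deliberately tiny stand-in loop (pure Python, no ML-Agents dependency; every name here is illustrative): the "environment" plays Unity's role of producing states and rewards, while the "policy" plays the Python side that chooses actions.

```python
def env_step(state: int, action: int) -> tuple[int, float]:
    """Stand-in for the Unity simulation: move the agent, reward reaching 0."""
    state += 1 if action == 1 else -1
    reward = 1.0 if state == 0 else -0.01  # small step cost, big goal reward
    return state, reward

def policy(state: int) -> int:
    """Stand-in for the learned policy: always step toward the origin."""
    return 0 if state > 0 else 1

state, total = 5, 0.0
for _ in range(20):
    state, reward = env_step(state, policy(state))
    total += reward
print(f"final state={state}, cumulative reward={total:.2f}")
```

In the real project this loop runs across the Unity/Python boundary, with Unity computing observations and rewards in C# and the ML-Agents trainer supplying actions.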
- Unity: Used for creating, simulating and publishing 3D (and 2D) games
- ML-Agents Toolkit: Used to integrate Unity with machine learning agent behaviours
- C#: Handles the game mechanics via Unity scripts
- Python: Runs the machine learning algorithms
- Git: Version control software used to manage the repository
- Unity Game Engine: Required for the 3D simulation
- Python (v3.10.x): Used for ML-Agents
- ML-Agents Toolkit: Used for training and managing agents
- Clone the repository:

```shell
git clone https://github.com/AlexNicSor/ml-agents-unity.git
cd ml-agents-unity
```

- Set up a virtual environment:

```shell
python -m venv venv
# macOS/Linux:
source venv/bin/activate
# Windows:
# venv\Scripts\activate
```

- Install ML-Agents:

```shell
pip install --upgrade pip
pip install -e ./ml-agents-envs
pip install -e ./ml-agents
```

- Test the installation:

```shell
mlagents-learn --help
```

- Install additional dependencies:

```shell
pip install torch torchvision torchaudio
```

- Install Unity: Download and install the Unity editor.
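Once the steps above are done, the install can also be sanity-checked from Python. A small sketch (note: `mlagents` and `mlagents_envs` are the import names of the two packages installed above):

```python
import importlib.util

def check_packages(names):
    """Map each package name to whether it can be imported in this environment."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# After a successful install, all three should report OK:
for name, ok in check_packages(["mlagents", "mlagents_envs", "torch"]).items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```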
Our future plans for this project are split into two phases and include the following tasks:
- Implement Baseline Algorithm for Scenario Adaptation
- Parameter Adjustment for Algorithm Optimization
- Code Addition for New Input Type in Soccer Twos
- Peripheral Vision and Advanced Input Simulation
To enhance our agents' perception and create a more realistic environment, we added custom sensors, including proximity detectors and a decoupled vision system. Proximity sensors enable the agent to sense nearby objects, improving spatial awareness and collision avoidance. The decoupled vision system allows a more realistic movement pattern: the agent can move while looking in a different direction, enhancing its ability to analyze and respond to complex scenarios. Adding a reward system was a crucial step in designing and training intelligent agents to achieve the desired result. In reinforcement learning, the reward system serves as the agent's feedback mechanism, guiding its behavior by providing positive or negative rewards based on its actions. By clearly defining reward signals aligned with desired outcomes, we encourage the agent to explore and exploit strategies that maximize cumulative reward. In our implementation, we introduced several rewards designed to promote certain behaviors and strategies for our agents.
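Conceptually (the project's actual reward code lives in the C# agent scripts; the numbers and names below are made-up illustrations), such a shaped per-step reward combines a small time penalty, a goal bonus, and a proximity penalty:

```python
def step_reward(scored: bool, wall_distance: float) -> float:
    """Toy shaped reward: time penalty + goal bonus + proximity penalty."""
    reward = -0.001                 # small cost per step: encourages fast play
    if scored:
        reward += 1.0               # main objective: scoring a goal
    if wall_distance < 0.5:         # proximity sensor fires: discourage collisions
        reward -= 0.01
    return reward

# Cumulative reward over a 3-step episode of (scored, distance_to_wall) pairs:
episode = [(False, 2.0), (False, 0.3), (True, 1.0)]
total = sum(step_reward(s, d) for s, d in episode)
```

The balance between these terms matters: if the proximity penalty dwarfed the goal bonus, the agent would learn to hide from walls rather than play.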
- Establish Baseline Performance of the Algorithm in a selected environment (chosen game)
- Experiment with Parameter Tuning
- Experiment with different sensor/input types
In our approach to optimizing and training our agents, we experimented with changing four parameters in the .yaml file used for training: Beta, the Learning Rate, the number of Epochs, and the Batch Size. Each parameter was changed twice, and the model was trained for 10 million steps with each change. This gave us enough training data to evaluate each version's performance. We therefore had a total of eight independently modified and trained versions, each with only one parameter changed. All of them can be found in the project's yaml files.
| Configuration | Batch Size | Epochs | Learning Rate | Beta |
|---|---|---|---|---|
| Original | 2048 | 3 | 0.0003 | 0.005 |
| First change | 1024 | 1 | 0.03 | 0.05 |
| Second change | 4096 | 6 | 0.003 | 0.0005 |
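For reference, these four values live under a behavior's `hyperparameters` section in the ML-Agents trainer configuration. A minimal sketch with the Original values (the behavior name and `buffer_size` are illustrative assumptions, not copied from the project's actual file):

```yaml
behaviors:
  SoccerTwos:                 # behavior name is an assumption for illustration
    trainer_type: poca
    hyperparameters:
      batch_size: 2048        # Original value; changed to 1024 and 4096
      buffer_size: 20480      # assumed; not one of the tuned parameters
      learning_rate: 3.0e-4   # 0.0003
      beta: 5.0e-3            # 0.005
      num_epoch: 3
```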
- Lack of Experience: Some team members are new to Unity and ML-Agents, which will pose a challenge
- Computational Constraints: Training deep RL models can be resource-intensive and may require significant computational power
- Time Management: Balancing the project tasks effectively will be important, since we are also occupied with other courses
- Debugging Issues: Debugging complex agents, environments and Unity itself might be challenging and time-consuming
To use the cloned repository, after all other installation steps are completed:
- Open Unity Hub.
- Press the Add button and select the Add project from disk option.
- Navigate to the location of the cloned repository.
- Select the folder named Project and add it.
- Open the project.
- Select the examples folder located at the bottom left of your screen.
- Select the example game that you want to run.
- Select the Scenes folder and then the scene you want to run.
- Press play, located at the top middle part of your screen.
To view the results of our experiments or recreate them, follow the steps below:
- Navigate to the final_data_and_results folder in the main branch to find all final results.
- Set Up the Environment: Follow the steps outlined in the Installation Guide to set up the environment.
- Select the Experiment Branch: For better organization, switch to the branch corresponding to the parameter you want to experiment with (e.g., batch_size, epoch, learning_rate, or beta).
- Build the Unity Project: Build the Unity project and save the executable in a desired location.
- Activate the Python Virtual Environment: Open a terminal and activate the Python virtual environment you set up earlier.
- Navigate to the Configuration Files: Change the directory to ml-agents-unity\config\poca.
- Run the Training Command: Execute the following command:

```shell
mlagents-learn <test-file>.yaml --env="<path-to-executable>/UnityEnvironment.exe" --run-id="<run-name>" --no-graphics
```
- Unity-Technologies. (n.d.). ml-agents: The Unity Machine Learning Agents Toolkit. GitHub. https://github.com/Unity-Technologies/ml-agents
- Unity Technologies. (n.d.). Unity Documentation. https://docs.unity.com/
- Van Rossum, G., & Drake, F. L. (2009). Python 3 Reference Manual. CreateSpace. https://dl.acm.org/citation.cfm?id=1593511
This project was developed by the following group of Maastricht University computer science students:
- Alexandru Lazarina
- Karol Plandowski
- Marios Petrides
- Carl Balagtas
- Marcel Pendyk
- Ethan de Beer
- Hadi Ayoub