Project that aims to learn AI programming in games, specifically Movement, Decision-making, Machine Learning and Planners.
Exercises done during the 3rd year at ISART.
- Quick Start
- 1. Movement & Decision
- 1.1 Project Description
- 1.2 AI
- 2. Machine Learning
- 2.1 Project Description
- 2.2 Goal
- 2.3 Technical Choices
- 2.4 Training Sets
- 2.5 Analysis of result
- 3. Planner
- 3.1 Project Description
- 3.2 GOAP Data
- Technology
- Credit
- Clone the project (via SSH):
git clone git@github.com:Vincent-Devine/AI_Learning.git
- Open the Unity project
- Choose the project you want to play by opening the corresponding scene
- Start the simulation with the Play button in the Unity interface
For the Machine Learning scene only, the AI is not trained by default.
In the manager's inspector, you can restart the training or set up an already-trained AI.
Create an NPC AI that can move in squads and interact with the player, as part of a real-time action game.
You'll code an AI that allows an ally NPC to help and support the player in various circumstances.
The NPC must be able to:
- follow the player: staying close to the player, slightly behind him or her (adjustable distance)
- support fire: if the player shoots at a point on the map, the NPC must shoot at the same point (left click)
- protect the player: if an enemy shoots at the player, the NPC will go between the enemy and the player, pointing its shield toward the enemy
- heal the player: if the player is seriously wounded, the NPC will heal the player by moving near him and then by triggering a healing action
- cover fire: the player can right-click on an area of the map to request the action, it can be stopped at any time by right-clicking again
We've decided to have 4 allies (AI NPCs), each with a defined role.
A shieldman to protect the player with a shield to reduce bullet damage.
A medic to heal allies when their health points fall below a threshold (current health <= 5).
Two gunmen that will provide cover shots, if requested.
When the allies aren't performing their specific jobs, they follow the player in predefined formations (shieldman in front, gunmen on both sides of the player, medic behind). Allies also fire at the same position the player shoots at.
We use the NavMesh Agents of Unity.
For the decision-making part of AI, we use the Finite State Machine (FSM). It's a simple system used to manage simple behaviors.
It's a well-known system, used not only for AI but also for animations.
Unity already has an FSM system implemented for its Animator. We've decided to adapt it and use it for our AI. This gives us a clean, easy-to-read and well-designed graphical interface.
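The core idea of an FSM can be sketched in a few lines. The snippet below is an illustrative Python sketch only (the actual project uses Unity's Animator-based FSM in C#, and the state names and `player_hp` field here are hypothetical):

```python
class StateMachine:
    """Minimal finite state machine: a current state plus condition-guarded transitions."""

    def __init__(self, initial_state):
        self.state = initial_state
        self.transitions = []  # list of (from_state, to_state, condition) triples

    def add_transition(self, src, dst, condition):
        self.transitions.append((src, dst, condition))

    def update(self, context):
        # Take the first transition leaving the current state whose condition
        # holds for the given context (e.g. blackboard data about the player).
        for src, dst, condition in self.transitions:
            if src == self.state and condition(context):
                self.state = dst
                break

# Hypothetical ally-NPC states mirroring the behaviors described above.
fsm = StateMachine("FollowPlayer")
fsm.add_transition("FollowPlayer", "HealPlayer", lambda ctx: ctx["player_hp"] <= 5)
fsm.add_transition("HealPlayer", "FollowPlayer", lambda ctx: ctx["player_hp"] > 5)

fsm.update({"player_hp": 3})
print(fsm.state)  # HealPlayer
```

Each NPC behavior (follow, support fire, protect, heal, cover fire) maps naturally to one such state, with transitions driven by the game context.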
Using the MLP you've coded, you will produce a simple learning system of your choice, but in a "video game" context.
The aim of my project is to have a functional AI for a car racing game. The AI will have to complete a lap of the circuit without touching a wall.
To create my AI, I chose to use a neural network combined with genetic algorithms.
Thanks to this technique, I simply need to determine a score for each attempt, then take the best and the worst, mix them, and start again with a new generation.
As for the neural network, I used a matrix representation, mainly to learn a new way of representing a neural network.
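The generation loop described above can be sketched as follows. This is an illustrative Python sketch, not the project's code (which is C# using MathNet); the crossover and mutation details are assumptions:

```python
import random

def crossover(parent_a, parent_b):
    """Mix two genomes (flat weight lists) gene by gene."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genome, rate=0.05, strength=0.5):
    """Randomly perturb a fraction of the weights to explore new behaviors."""
    return [g + random.uniform(-strength, strength) if random.random() < rate else g
            for g in genome]

def next_generation(scored_population):
    """scored_population: list of (score, genome) pairs.
    Take the best and the worst genomes, mix them, and breed a full new population."""
    ranked = sorted(scored_population, key=lambda sg: sg[0], reverse=True)
    best, worst = ranked[0][1], ranked[-1][1]
    return [mutate(crossover(best, worst)) for _ in range(len(scored_population))]

population = [(score, [random.uniform(-1, 1) for _ in range(8)])
              for score in (3.0, 1.0, 2.0, 0.5)]
new_pop = next_generation(population)
print(len(new_pop), len(new_pop[0]))  # 4 8
```

Each genome would be flattened from the network's weight matrices, and each car's race score feeds back in as the genome's fitness.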
As mentioned above, I need to determine a score for each race.
As values that will help me to determine my score, I will have:
- the distance covered
- the average speed
- the distance from walls
Then I multiply each of these values by a weight that reflects its importance.
In my case, the primary objective is to finish the race, so I put a high weight on distance covered, as opposed to speed, where I put a low weight.
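The resulting score is a weighted sum of the three metrics. A minimal sketch (illustrative Python; the weight values below are hypothetical, not the project's actual tuning):

```python
def fitness(distance_covered, average_speed, distance_from_walls,
            w_distance=10.0, w_speed=1.0, w_walls=2.0):
    """Weighted sum of the race metrics described above.
    Distance covered dominates because finishing the lap is the primary goal."""
    return (w_distance * distance_covered
            + w_speed * average_speed
            + w_walls * distance_from_walls)

print(fitness(distance_covered=120.0, average_speed=8.0, distance_from_walls=1.5))
# 1211.0
```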
I managed to get an AI to do a lap without touching a wall.
But with the same genome, when it does several laps, it doesn't succeed every time. I think it needs more training.
After more training, it could be interesting to decrease the importance of distance covered and increase the importance of average speed.
- Use the given template to create your own planner (with effects, actions, world states and a goal system) for real-time plan execution.
- To build the plan, use forward search.
- Each action has a cost that depends on the world state.
- Add agents.
- Use a BitArray to optimize WorldState storage.
- Use backward search.
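The forward-search step above can be sketched as a uniform-cost search over world states. This is an illustrative Python sketch only (the real project is in C#, and the `(name, cost, preconditions, effects)` action format is an assumption):

```python
from heapq import heappush, heappop

def plan(start, goal, actions):
    """Forward (uniform-cost) search from the start world state toward the goal.
    World states are frozensets of true conditions; each action is a
    (name, cost, preconditions, effects) tuple. Returns the list of action names."""
    frontier = [(0, 0, start, [])]  # (cost, tiebreak, state, plan so far)
    seen = set()
    tie = 0
    while frontier:
        cost, _, state, steps = heappop(frontier)
        if goal <= state:           # all goal conditions satisfied
            return steps
        if state in seen:
            continue
        seen.add(state)
        for name, action_cost, pre, effects in actions:
            if pre <= state:        # preconditions hold in this state
                removed = {c for c, v in effects if not v}
                added = {c for c, v in effects if v}
                new_state = (state - removed) | added
                tie += 1
                heappush(frontier, (cost + action_cost, tie, new_state, steps + [name]))
    return None

# Hypothetical mini version of the mining scenario below.
actions = [
    ("MoveNearOre", 1, frozenset(), (("NearOre", True),)),
    ("MineOre", 1, frozenset({"NearOre"}), (("HasEnoughOre", True),)),
    ("CraftingBar", 2, frozenset({"HasEnoughOre"}), (("HasBar", True), ("HasEnoughOre", False))),
    ("StoreBar", 1, frozenset({"HasBar"}), (("BarStored", True), ("HasBar", False))),
]
print(plan(frozenset(), frozenset({"BarStored"}), actions))
# ['MoveNearOre', 'MineOre', 'CraftingBar', 'StoreBar']
```

Backward search would run the same idea in reverse, regressing the goal through action effects toward the current state.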
Store 5 bars in the chest.
To do this:
- The agent needs to mine ore
- When the agent has 2 ores, it crafts a bar in the furnace
- It stores the bar in the chest
To help the agent, there is one pickaxe in the world.
Each condition is calculated in relation to an agent.
- NearOre
- NearFurnaceAvailable
- NearFurnaceWithIron
- HasEnoughOre
- HasBar
- BarIsReadyToPickUp
- NearChest
- NearPickaxe
- HasPickaxe
- HasPickaxeAvailableInWorld
Each action modifies the world state through its effects:
- MoveNearOre
- MineOre
- MoveToAvailableFurnace
- MoveToFurnaceWithIron
- CraftingBar
- TakeBar
- MoveToChest
- StoreBar
- Wait
- MoveNearPickaxe
- TakePickaxe
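The "BitArray to optimize WorldState storage" point amounts to packing these boolean conditions into bits, one bit index per condition. A sketch of the idea (illustrative Python using a plain integer as the bit set; the C# version would use System.Collections.BitArray):

```python
# Each condition gets a fixed bit index; a world state is then a single integer.
CONDITIONS = ["NearOre", "NearFurnaceAvailable", "NearFurnaceWithIron",
              "HasEnoughOre", "HasBar", "BarIsReadyToPickUp",
              "NearChest", "NearPickaxe", "HasPickaxe", "HasPickaxeAvailableInWorld"]
BIT = {name: i for i, name in enumerate(CONDITIONS)}

def set_condition(state, name, value=True):
    """Return a new state with the named condition set or cleared."""
    mask = 1 << BIT[name]
    return state | mask if value else state & ~mask

def has_condition(state, name):
    """Test whether the named condition is true in this state."""
    return bool(state & (1 << BIT[name]))

state = 0
state = set_condition(state, "NearOre")
state = set_condition(state, "HasPickaxe")
print(has_condition(state, "NearOre"), has_condition(state, "HasBar"))  # True False
```

Compact states like this make copying, hashing and comparing world states during the search much cheaper than storing a dictionary of booleans per node.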
- Engine : Unity 2022.3.4f1
- Text Editor : Visual Studio 2022
- Version control : GitHub
- Math library (Machine Learning) : MathNet
Project done at ISART DIGITAL
Authors : Vincent DEVINE, Kristian GOUPIL (Movement & Decision)
Special thanks : Florian Wolf
- Car model by Mena (Machine Learning)
Movement & Decision project start : 03-10-2023
Movement & Decision project end : 23-10-2023
Machine Learning project start : 13-12-2023
Machine Learning project end : 02-01-2024
Planner project start : 02-01-2024
Planner project end : 05-01-2024