AI Learning

Project that aims to learn AI programming in games, specifically Movement, Decisions, Machine Learning and Planner.
Exercises done during the 3rd year at ISART.

Table of contents

  • Quick Start
  • 1. Movement & Decision
  • 2. Machine Learning
  • 3. Planner
  • Technology
  • Credit
  • Assets
  • Date

Quick Start

  1. Clone the project: git clone git@github.com:Vincent-Devine/AI_Learning.git (via SSH)
  2. Open the Unity project
  3. Choose the project you want to run by opening the corresponding scene
  4. Start the simulation with the Play button in the Unity editor

For the Machine Learning scene only: by default, the AI is not trained. In the manager's Inspector, you can restart the training or set up an already trained AI.

1. Movement & Decision

(Image: Decision demo)

1.1 Project description

1.1.1 Instruction

Create an NPC AI that can move in squads and interact with the player, as part of a real-time action game.
You'll code an AI that allows an ally NPC to help and support the player in various circumstances.
The NPC must be able to:

  • follow the player: staying close to the player, slightly behind him or her (adjustable distance)
  • support fire: if the player shoots at a point on the map, the NPC must shoot at the same point (left click)
  • protect the player: if an enemy shoots at the player, the NPC will go between the enemy and the player, pointing its shield toward the enemy
  • heal the player: if the player is seriously wounded, the NPC will heal the player by moving near him and then by triggering a healing action
  • cover fire: the player can right-click on an area of the map to request the action; it can be stopped at any time by right-clicking again

1.1.2 Our choices

We've decided to have 4 allies (AI / NPC), each with a defined position:
A shieldman to protect the player with a shield that reduces bullet damage.
A medic to heal allies when their health points fall below a threshold (current health <= 5).
Two gunmen that provide cover shots, if requested.
When the allies aren't doing their specific jobs, they follow the player in a predefined formation (shieldman in front, gunmen on both sides of the player, and medic behind). Allies also fire at the same position the player shoots at.

1.2 AI

1.2.1 Movement

We use Unity's NavMesh Agents.
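As an illustration of how a NavMesh-driven follower can work (the class, field names and offset below are a sketch, not the project's actual code), an ally can simply be given a destination slightly behind the player each frame and let the agent handle pathfinding:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative sketch: an ally that follows the player, staying slightly behind.
public class FollowPlayer : MonoBehaviour
{
    [SerializeField] private Transform player;
    [SerializeField] private float followDistance = 3f; // adjustable distance behind the player

    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    private void Update()
    {
        // Target a point behind the player; the NavMeshAgent computes the path on the NavMesh.
        Vector3 target = player.position - player.forward * followDistance;
        agent.SetDestination(target);
    }
}
```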

1.2.2 Decision making

For the decision-making part of the AI, we use a Finite State Machine (FSM). It's a simple system suited to managing simple behaviors.
It's a well-known system, used not only for AI but also for animations.
Unity already has an FSM implemented for its Animator. We've decided to adapt it and use it for our AI. This gives us a clean, easy-to-read, well-designed graphical interface.
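A minimal sketch of how AI logic can be attached to an Animator state through Unity's StateMachineBehaviour (the state name and the Animator parameter are hypothetical, not the project's actual ones):

```csharp
using UnityEngine;

// Illustrative sketch: behaviour attached to an Animator state used as an AI state.
// Unity invokes these callbacks when the state machine enters or leaves the state.
public class HealState : StateMachineBehaviour
{
    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Medic: entering Heal state");
    }

    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        // Signal other components through an Animator parameter (hypothetical name).
        animator.SetBool("IsHealing", false);
    }
}
```

Transitions between states are then authored visually in the Animator window, which is what gives the readable graph mentioned above.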


2. Machine Learning

2.1 Project description

Using the MLP you've coded, you will produce a simple learning system of your choice, but in a "video game" context.

2.2 Goal

The aim of my project is to have a functional AI for a car racing game. The AI will have to complete a lap of the circuit without touching a wall.

2.3 Technical Choices

To create my AI, I chose to use a neural network combined with a genetic algorithm.
Thanks to this technique, I simply need to determine a score for each attempt, then take the best and the worst, mix them, and start again with a new generation.
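What one generation step could look like is sketched below; the flat-array genome layout, the per-weight mixing, and the mutation rate are assumptions for illustration, not the project's code:

```csharp
using System;

// Illustrative sketch of one genetic-algorithm step: a genome is a flat array of
// neural-network weights; two selected parents are mixed and lightly mutated.
public static class Genetics
{
    private static readonly Random rng = new Random();

    public static float[] Crossover(float[] best, float[] worst, float mutationRate = 0.05f)
    {
        var child = new float[best.Length];
        for (int i = 0; i < child.Length; i++)
        {
            // Take each weight from one of the two parents at random.
            child[i] = rng.NextDouble() < 0.5 ? best[i] : worst[i];

            // Occasionally perturb a weight to keep exploring new behaviors.
            if (rng.NextDouble() < mutationRate)
                child[i] += (float)(rng.NextDouble() * 2.0 - 1.0) * 0.5f;
        }
        return child;
    }
}
```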

As for the neural network, I used a matrix representation, simply to learn a new way of representing a neural network.
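A feedforward pass with a matrix representation can look like the following sketch, written with MathNet (the library listed in the Technology section); the layer sizes and the tanh activation are assumptions for the example:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

// Illustrative sketch: a one-hidden-layer MLP stored as weight matrices and bias vectors.
// Layer sizes (5 sensor inputs, 6 hidden neurons, 2 outputs) are placeholder values.
public class MatrixNetwork
{
    private readonly Matrix<double> w1 = Matrix<double>.Build.Random(6, 5);
    private readonly Vector<double> b1 = Vector<double>.Build.Random(6);
    private readonly Matrix<double> w2 = Matrix<double>.Build.Random(2, 6);
    private readonly Vector<double> b2 = Vector<double>.Build.Random(2);

    public Vector<double> FeedForward(Vector<double> inputs)
    {
        // Each layer is a matrix-vector product plus a bias, passed through tanh.
        Vector<double> hidden = (w1 * inputs + b1).Map(Math.Tanh);
        return (w2 * hidden + b2).Map(Math.Tanh);
    }
}
```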

2.4 Training Sets

As mentioned above, I need to determine a score for each of my races.
The values that help me determine my score are:

  • the distance covered
  • the average speed
  • the distance from walls

Then I multiply each of these values by a weight that sets its importance.

In my case, the primary objective is to finish the race, so I put a high weight on distance covered and a low weight on speed.
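As a worked sketch of that weighted sum (the weight values below are placeholders, not the tuned ones):

```csharp
// Illustrative sketch: fitness as a weighted sum of the three measurements.
public static class Fitness
{
    public static float Score(float distanceCovered, float averageSpeed, float distanceFromWalls)
    {
        const float distanceWeight = 1.0f;  // primary objective: get around the track
        const float speedWeight    = 0.1f;  // speed matters less for now
        const float wallWeight     = 0.3f;  // reward staying away from walls

        return distanceCovered   * distanceWeight
             + averageSpeed      * speedWeight
             + distanceFromWalls * wallWeight;
    }
}
```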

2.5 Analysis of result

I managed to get an AI to do a lap without touching a wall.
But with the same genome, when it does several laps, it doesn't succeed every time. I think it needs more training.

After more training, it could be interesting to decrease the importance of distance covered and increase the importance of average speed.

3. Planner

(Image: GOAP)

3.1 Project description

3.1.1 Instruction

  • Use the given template to create your own planner (with an effect, action, world state and goal system) for real-time plan execution.
  • To build the plan, use forward search.

3.1.2 Bonus

  • Each action has a cost depending on the world state.
  • Add agents.
  • Use a BitArray to optimize WorldState storage.
  • Use backward search.

(Image: Planner demo)

3.2 GOAP Data

3.2.1 Goal

Add 5 bars to the chest.
To do this:

  1. The agent needs to mine ore
  2. When the agent has 2 ores, it crafts a bar in the furnace
  3. It stores the bar in the chest

To help the agent, there is 1 pickaxe in the world.

3.2.2 Conditions list

Each condition is calculated in relation to an agent.

  • NearOre
  • NearFurnaceAvailable
  • NearFurnaceWithIron
  • HasEnoughOre
  • HasBar
  • BarIsReadyToPickUp
  • NearChest
  • NearPickaxe
  • hasPickaxe
  • HasPickaxeAvailableInWorld

3.2.3 Actions list

  • MoveNearOre
  • MineOre
  • MoveToAvailableFurnace
  • MoveToFurnaceWithIron
  • CraftingBar
  • TakeBar
  • MoveToChest
  • StoreBar
  • Wait
  • MoveNearPickaxe
  • TakePickaxe
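As an illustration of how such conditions and actions can drive a forward search, here is a simplified sketch; the data structures and the breadth-first strategy are assumptions for the example, not the project's implementation:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified sketch: the world state is the set of conditions currently true,
// each action has boolean preconditions and effects, and the planner performs a
// breadth-first forward search from the start state until the goal conditions hold.
public class PlannerAction
{
    public string Name;
    public HashSet<string> Preconditions = new HashSet<string>();
    public HashSet<string> Effects = new HashSet<string>();
}

public static class ForwardPlanner
{
    public static List<PlannerAction> Plan(HashSet<string> start, HashSet<string> goal, List<PlannerAction> actions)
    {
        var queue = new Queue<(HashSet<string> State, List<PlannerAction> Steps)>();
        queue.Enqueue((start, new List<PlannerAction>()));
        var visited = new HashSet<string>();

        while (queue.Count > 0)
        {
            var (state, steps) = queue.Dequeue();
            if (goal.IsSubsetOf(state))
                return steps; // every goal condition is satisfied

            // Skip world states that have already been expanded.
            if (!visited.Add(string.Join(",", state.OrderBy(c => c))))
                continue;

            // Expand with every action whose preconditions hold in this state.
            foreach (var action in actions.Where(a => a.Preconditions.IsSubsetOf(state)))
            {
                var next = new HashSet<string>(state);
                next.UnionWith(action.Effects);
                queue.Enqueue((next, new List<PlannerAction>(steps) { action }));
            }
        }
        return null; // no plan reaches the goal
    }
}
```

A real planner would also handle effects that remove conditions, per-action costs, and quantities such as the ore count; the sketch keeps only the search structure.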

Technology

  • Engine : Unity 2022.3.4f1
  • Text Editor : Visual Studio 2022
  • Versioning : GitHub
  • Math library (Machine Learning) : MathNet

Credit

Project done at ISART DIGITAL
Authors : Vincent DEVINE, Kristian GOUPIL (Movement & Decision)
Special thanks : Florian Wolf

Assets

  • Car model by Mena (Machine Learning)

Date

Movement & Decision project start : 03-10-2023
Movement & Decision project end : 23-10-2023

Machine Learning project start : 13-12-2023
Machine Learning project end : 02-01-2024

Planner project start : 02-01-2024
Planner project end : 05-01-2024
