The Pong game is solved here with PPO as the example algorithm; A2C and DQN can also be used, and the algorithm can be switched in train.py. Read the detailed explanation below and train your PONG Agent 🎮
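A minimal sketch of how the algorithm switch might look, assuming train.py is built on stable-baselines3; the `PongEnv` import and the `TOTAL_STEPS` constant name are assumptions about the project's actual identifiers:

```python
from stable_baselines3 import PPO, A2C, DQN

from Env import PongEnv            # assumed environment class name; check Env.py
from Constants import TOTAL_STEPS  # assumed constant name; check Constants.py

ALGO = "PPO"  # change to "A2C" or "DQN" to switch algorithms

algo_cls = {"PPO": PPO, "A2C": A2C, "DQN": DQN}[ALGO]
model = algo_cls("MlpPolicy", env=PongEnv(), verbose=1)
model.learn(total_timesteps=TOTAL_STEPS)
model.save(f"models/{ALGO.lower()}_pong")
```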
The reward is shaped using the distance between the ball and the Agent. The reward mechanism is as follows (a minimal sketch of this logic is shown after the list):
- If the ball goes out -> negative reward,
- If the Agent hits the ball -> positive reward,
- If the Agent moves closer to the ball (along the y coordinate) -> positive reward
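An illustrative reward function following the rules above; the exact magnitudes and argument names are assumptions, the real logic lives in Env.py:

```python
def compute_reward(ball_out, paddle_hit, agent_y, prev_agent_y, ball_y):
    """Illustrative reward shaping; the exact values are assumptions."""
    reward = 0.0
    if ball_out:        # the ball went past the paddle
        reward -= 1.0
    if paddle_hit:      # the paddle hit the ball
        reward += 1.0
    # small bonus for reducing the vertical gap to the ball
    if abs(agent_y - ball_y) < abs(prev_agent_y - ball_y):
        reward += 0.1
    return reward
```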
Observation array (a sketch of how this vector could be built follows the list):
- Euclidean distance between the Agent and the ball: sqrt((ball_x - agent_x)**2 + (ball_y - agent_y)**2),
- Agent_Y_Coord
- Agent_X_Coord
- Ball_Y_Coord
- Ball_X_Coord
- Ball_Velocity
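A small sketch of assembling the observation vector in the order listed above; the parameter names are illustrative and may differ from the attributes used in Env.py:

```python
import numpy as np

def build_observation(agent_x, agent_y, ball_x, ball_y, ball_velocity):
    """Observation vector in the order listed above (names are illustrative)."""
    distance = np.hypot(ball_x - agent_x, ball_y - agent_y)  # Euclidean distance
    return np.array([distance, agent_y, agent_x, ball_y, ball_x, ball_velocity],
                    dtype=np.float32)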
The action space is Discrete(3), meaning the Agent can perform exactly three moves: move up, hold, and move down.
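A sketch of how the three discrete actions could be handled; whether the project uses gym or gymnasium, the index-to-move mapping, and `PADDLE_SPEED` are assumptions:

```python
from gymnasium import spaces

PADDLE_SPEED = 5  # assumed value; the real one lives in Constants.py

class PaddleControl:
    """Illustrative handling of the Discrete(3) action space."""

    def __init__(self, start_y=0):
        self.action_space = spaces.Discrete(3)  # 0 = up, 1 = hold, 2 = down
        self.agent_y = start_y

    def apply_action(self, action):
        if action == 0:
            self.agent_y -= PADDLE_SPEED  # move up (screen y grows downward)
        elif action == 2:
            self.agent_y += PADDLE_SPEED  # move down
        # action == 1: hold, the paddle stays in place
```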
- 🎲 test.py: Tests the environment. You can display how the game screen looks.
- ⌛ train.py: Trains the Agent. You can change total_steps in Constants.py. Check it out.
- 🤖 Agent.py: Paddle & Agent class.
- 🦾 evaluate.py: If you have a trained model, you can evaluate it with this file. Detailed usage is shown below.
- 🏠 Env.py: Environment class. You can alter the game rules, reward mechanism, and whatever else you want.
- 🔧 Constants.py: Fixed variables of the game: screen width, hyperparameters, etc. (an illustrative layout is sketched after this list).
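An illustrative layout for Constants.py; every name and value here is an assumption, shown only to indicate the kind of settings the file holds:

```python
# Illustrative Constants.py layout; actual names and values may differ.
SCREEN_WIDTH = 800
SCREEN_HEIGHT = 600
PADDLE_SPEED = 5
TOTAL_STEPS = 100_000   # default training budget (100k steps, see below)
LEARNING_RATE = 3e-4
```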
$ pip install -r requirements.txt
$ python test.py
The default number of training steps is 100k. You can change it in the Constants.py file (loading the libraries may take about 10 seconds).
$ python train.py
! After training, your model will be saved in the 'models' folder. Evaluate your trained model by passing the --model parameter on the command line, or use one of the pretrained models in the models folder.
$ python evaluate.py --model models/yourmodel
Using the pretrained 200k-step model:
$ python evaluate.py --model models/200k
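A rough sketch of how evaluate.py might consume the --model flag, assuming stable-baselines3 and the newer gymnasium reset/step signatures; the `PongEnv` name and the env API details are assumptions:

```python
import argparse

from stable_baselines3 import PPO
from Env import PongEnv  # assumed class name

parser = argparse.ArgumentParser()
parser.add_argument("--model", required=True,
                    help="path to a saved model, e.g. models/200k")
args = parser.parse_args()

env = PongEnv()
model = PPO.load(args.model)

obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    env.render()
```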