SARSA (on-policy TD control) learning algorithm

A simple implementation of the SARSA learning algorithm in Python to solve a maze [1].

Status: Active

Dataset

Data is fed into the algorithm as a text file containing a specification of the maze, e.g.:


sssssssssssssss
000000000000000
000000000000000
000000000000000
000000000000000
0000xxxxxxx0000
000000000000000
000000000000000
000000000000000
xxxxx00000xxxxx
000000000000000
000000000000000
000000000000000
0000xxxxxxx0000
000000000000000
000000000000000
fffffffffffffff

Where

  • x - barrier
  • s - start
  • f - final
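A maze file like the one above can be read into a grid in a few lines. This is an illustrative sketch, not the repository's actual loader; the function name `parse_maze` and its return shape are assumptions.

```python
def parse_maze(path):
    """Read a maze text file into a 2D grid of characters and
    collect the start ('s') and final ('f') cell coordinates."""
    with open(path) as fh:
        grid = [list(line.rstrip("\n")) for line in fh if line.strip()]
    starts = [(r, c) for r, row in enumerate(grid)
              for c, cell in enumerate(row) if cell == "s"]
    finals = [(r, c) for r, row in enumerate(grid)
              for c, cell in enumerate(row) if cell == "f"]
    return grid, starts, finals
```

Barrier cells (`x`) stay in the grid and can be checked during a move, e.g. `grid[r][c] == "x"`.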

Getting Started

  • Clone the repository to your local machine

git clone https://github.com/AshishSinha5/maze_runner/
  • Run main.py with the appropriate arguments, e.g. -

python main.py -f "data/maze2.txt" -a 0.4 -g 0.9 -e 0.1
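The flags plausibly map to the usual SARSA hyperparameters: `-f` the maze file, `-a` the learning rate alpha, `-g` the discount factor gamma, and `-e` the exploration rate epsilon. A minimal `argparse` setup matching that reading (the long option names and defaults here are assumptions, not the repository's actual interface):

```python
import argparse

# Assumed flag meanings: -f maze file, -a alpha, -g gamma, -e epsilon.
parser = argparse.ArgumentParser(description="SARSA maze runner")
parser.add_argument("-f", "--file", default="data/maze2.txt")
parser.add_argument("-a", "--alpha", type=float, default=0.4)
parser.add_argument("-g", "--gamma", type=float, default=0.9)
parser.add_argument("-e", "--epsilon", type=float, default=0.1)

# Parsing the example command line from above:
args = parser.parse_args(["-f", "data/maze2.txt", "-a", "0.4", "-g", "0.9", "-e", "0.1"])
```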

Inferences

The state space consists of the coordinates of the maze, with (-1, -1) representing the state in which the agent goes out of bounds. The action space consists of the two components of the agent's velocity, constrained so that neither component's absolute value can exceed 5.
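The on-policy TD update at the heart of SARSA can be sketched generically as below. This is a minimal illustration of the update rule, not the repository's code; the environment callback `step(s, a)` returning `(reward, next_state, done)` is an assumed interface.

```python
import random
from collections import defaultdict

def sarsa_episode(step, start_state, actions, Q, alpha=0.4, gamma=0.9, epsilon=0.1):
    """Run one SARSA episode, updating the action-value table Q in place.
    `step(s, a)` is assumed to return (reward, next_state, done)."""
    def choose(s):
        # Epsilon-greedy action selection from the current Q estimates.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    s, a = start_state, choose(start_state)
    done = False
    while not done:
        r, s2, done = step(s, a)
        a2 = choose(s2)
        # On-policy TD update: the target uses the action a2 actually
        # taken next, which is what makes SARSA on-policy.
        Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] * (not done) - Q[(s, a)])
        s, a = s2, a2
    return Q
```

Repeating `sarsa_episode` over many episodes drives Q toward the optimal action-value function, which matches the rising success proportion described below.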

The algorithm was tested on three mazes; the outputs and plots are shown below. The agent starts to find the correct path after a few iterations, and the proportion of successful episodes keeps increasing with the number of episodes as the agent converges on the optimal action-value function.

[For each of Maze 1, Maze 2, and Maze 3: an animated GIF of the agent's trajectory, a success-proportion plot, and a rewards plot.]

References

[1] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed., The MIT Press, 2018.