Implementation of a Monte Carlo Tree Search algorithm
This repo contains:
- an implementation of the MCTS algorithm
- an agent that uses MCTS to play an OpenAI Gym game (CartPole-v0)
The code has been adapted from the Udacity Deep Reinforcement Learning Nanodegree.
The MCTS implementation follows the standard description of Monte Carlo Tree Search.
This is an implementation of an agent that uses a vanilla version of the MCTS algorithm to play the OpenAI Gym game CartPole.
Execute the code in the notebook to see the agent in action!
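For orientation, the core loop of a vanilla MCTS agent on CartPole can be sketched as below. This is a minimal illustrative sketch, not the notebook's exact code: it assumes the classic Gym step API that returns `(observation, reward, done, info)` and that the CartPole environment can be deep-copied to simulate look-ahead moves; names such as `Node` and `mcts_action` are placeholders.

```python
import copy
import math
import random

import gym


class Node:
    """A tree node holding visit counts and accumulated reward."""
    def __init__(self, parent=None, action=None):
        self.parent = parent
        self.action = action          # action taken from the parent to reach this node
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def ucb_score(self, c=1.41):
        """Upper Confidence Bound used during tree selection."""
        if self.visits == 0:
            return float("inf")
        exploit = self.total_reward / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore


def mcts_action(env, n_iterations=50, rollout_depth=50):
    """Run vanilla MCTS from the current state of `env` and return the chosen action."""
    root = Node()
    for _ in range(n_iterations):
        sim_env = copy.deepcopy(env)   # classic-control envs can be deep-copied (assumption)
        node, done, total = root, False, 0.0

        # 1. Selection: descend the tree with UCB until a leaf is reached.
        while node.children and not done:
            node = max(node.children, key=lambda n: n.ucb_score())
            _, reward, done, _ = sim_env.step(node.action)
            total += reward

        # 2. Expansion: add one child per legal action.
        if not done and node.visits > 0:
            node.children = [Node(parent=node, action=a)
                             for a in range(sim_env.action_space.n)]
            node = random.choice(node.children)
            _, reward, done, _ = sim_env.step(node.action)
            total += reward

        # 3. Simulation: random rollout from the leaf.
        depth = 0
        while not done and depth < rollout_depth:
            _, reward, done, _ = sim_env.step(sim_env.action_space.sample())
            total += reward
            depth += 1

        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.total_reward += total
            node = node.parent

    # Act according to the most visited child of the root.
    return max(root.children, key=lambda n: n.visits).action


if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    env.reset()
    done, score = False, 0.0
    while not done:                        # slow: every step re-plans from a copy of the env
        _, reward, done, _ = env.step(mcts_action(env))
        score += reward
    print("Episode return:", score)
```

Each call to `mcts_action` plans from a copy of the live environment, so the demo loop at the bottom needs no trained model but is correspondingly slow.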
To set up your Python environment to run the code in this repository, follow the instructions below.
- Create (and activate) a new environment with Python 3.6.
  - Linux or Mac:
    conda create --name MCTS python=3.6
    source activate MCTS
  - Windows:
    conda create --name MCTS python=3.6
    activate MCTS
- Clone the repository and install the required packages (see the requirements below).
  git clone https://github.com/ciamic/MCTS.git
- Create an IPython kernel for the MCTS environment.
  python -m ipykernel install --user --name MCTS --display-name "MCTS"
- Before running code in a notebook, change the kernel to match the MCTS environment by using the drop-down Kernel menu.
Requirements:
- Python 3
- numpy
- matplotlib
- gym
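As a quick sanity check after setup (run inside the MCTS kernel), a short snippet like the following should confirm that the listed packages import and that CartPole-v0 loads; it again assumes the classic Gym API.

```python
import numpy as np
import matplotlib.pyplot as plt  # imported only to verify the installation
import gym

env = gym.make("CartPole-v0")
obs = env.reset()
print("Observation shape:", np.asarray(obs).shape)  # expected: (4,)
print("Action space:", env.action_space)            # expected: Discrete(2)
```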