Deep-Q-Learning-Cartpole

Stabilizing an Inverted Pendulum on a cart using Deep Reinforcement Learning. This project uses OpenAI Gym's CartPole-v1 environment.

Environment Description

A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.
More information can be found here
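
As a quick sanity check, the environment can be created and stepped through with a random policy, as in the minimal sketch below (illustrative only, not part of the notebook):

```python
import gym

# Create the CartPole-v1 environment used in this project
env = gym.make('CartPole-v1')

state = env.reset()
total_reward = 0
done = False

while not done:
    # Sample a random action: 0 pushes the cart left, 1 pushes it right
    action = env.action_space.sample()
    # Reward is +1 for every timestep the pole stays upright
    state, reward, done, info = env.step(action)
    total_reward += reward

print('Episode return with a random policy:', total_reward)
env.close()
```

A random policy usually keeps the pole up for only a few dozen timesteps, which is the baseline the Deep Q-Learning agent in the notebook improves on.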

Requirements

Jupyter Notebook

This project was developed in a Jupyter Notebook installed using the Anaconda Distribution, which includes Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science.
Installation instructions can be found here

Python

This project uses Python 3.5, which is included in the Anaconda installation.

Keras with TensorFlow Backend

This project uses Keras (version 2.0.8) with a TensorFlow (version 1.7.0) backend.
Instructions for installing packages inside an Anaconda environment can be found here
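
For reference, a small Q-network for CartPole in Keras might look like the sketch below. The layer sizes and learning rate are illustrative assumptions; the actual model is defined in the notebook:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Illustrative Q-network: maps the 4-dimensional CartPole state
# (cart position, cart velocity, pole angle, pole angular velocity)
# to Q-values for the 2 discrete actions (push left / push right).
model = Sequential()
model.add(Dense(24, input_dim=4, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(2, activation='linear'))
model.compile(loss='mse', optimizer=Adam(lr=0.001))
```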

OpenAI Gym

This project uses OpenAI Gym's CartPole-v1 environment.
Installation instructions for OpenAI Gym can be found here
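
Inside the Anaconda environment, the basic installation is typically a single pip command (see the official instructions linked above for the exact version used here):

```
pip install gym
```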
