
Homework2-MDPs

The lab materials are partially borrowed from UC Berkeley CS294.

Introduction

In this homework, we solve MDPs with finite state and action spaces via value iteration, policy iteration, and tabular Q-learning.

What's Markov decision process (MDP)?

A Markov decision process is a discrete-time stochastic control process. At each time step, the process is in some state s, and the decision maker may choose any action a that is available in state s. The process responds at the next time step by randomly moving into a new state s' and giving the decision maker a corresponding reward R(s,a,s').

Image borrowed from UCB CS188.
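
Concretely, a tabular MDP like the ones in this homework can be written down as an explicit transition table. Below is a minimal sketch with a made-up 2-state, 2-action MDP; each entry P[s][a] is a list of (probability, next_state, reward) tuples (gym's toy-text environments expose a similar table, with an extra "done" flag in each tuple).

# Hypothetical 2-state, 2-action MDP as a transition table:
# P[s][a] = list of (probability, next_state, reward) tuples.
P = {
    0: {0: [(1.0, 0, 0.0)],                  # stay in state 0, no reward
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},  # 80%: move to state 1, reward 1
    1: {0: [(1.0, 0, 0.0)],                  # move back to state 0
        1: [(1.0, 1, 0.5)]},                 # stay in state 1, reward 0.5
}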

Setup

  • Python 3.5.3
  • OpenAI gym
  • numpy
  • matplotlib
  • ipython

All the code you need to modify is in Lab2-MDPs.ipynb.

We encourage you to install Anaconda or Miniconda on your laptop to avoid tedious dependency problems.

For lazy people:

conda env create -f environment.yml
source activate cedl
# deactivate when you want to leave the environment
source deactivate cedl

Prerequisites

If you are unfamiliar with NumPy or IPython, you should read the Python/NumPy tutorial from CS231n: https://cs231n.github.io/python-numpy-tutorial/

How to Start

Start IPython: After you clone this repository and install all the dependencies, start the IPython notebook server from the home directory (e.g., run jupyter notebook).

Open the assignment: Open Lab2-MDPs.ipynb, and it will walk you through completing the assignment.

TODO

  • [30%] value iteration (see the sketch after this list)
  • [30%] policy iteration
  • [30%] tabular Q-learning
  • [10%] report
  • [5%] Bonus: share your code and what you learned on GitHub or your personal blog, such as this
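
For reference, here is a minimal value-iteration sketch (a hypothetical standalone version, not the notebook's scaffold). It assumes the 3-tuple transition table format shown in the Introduction and repeatedly applies the Bellman optimality backup V(s) ← max_a Σ_s' P(s'|s,a) [R(s,a,s') + γ V(s')].

import numpy as np

def value_iteration(P, nS, nA, gamma=0.95, n_iters=100):
    """Tabular value iteration.
    P[s][a] is a list of (probability, next_state, reward) tuples."""
    V = np.zeros(nS)
    for _ in range(n_iters):
        Q = np.zeros((nS, nA))
        for s in range(nS):
            for a in range(nA):
                # Expected return of taking action a in state s, then acting greedily.
                Q[s, a] = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
        V = Q.max(axis=1)          # greedy Bellman backup
    return V, Q.argmax(axis=1)     # state values and a greedy policy

Policy iteration instead alternates full policy evaluation with greedy policy improvement, and tabular Q-learning drops the known table P entirely, updating Q(s,a) ← Q(s,a) + α [r + γ max_a' Q(s',a') − Q(s,a)] from sampled transitions.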
