From b38da5a611539a16a93d5d6dd687249e49cb029f Mon Sep 17 00:00:00 2001
From: AngryCracker
Date: Tue, 27 Feb 2018 17:17:46 +0530
Subject: [PATCH 1/3] Added mdp_apps notebook

---
 mdp_apps.ipynb | 1310 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1310 insertions(+)
 create mode 100644 mdp_apps.ipynb

diff --git a/mdp_apps.ipynb b/mdp_apps.ipynb
new file mode 100644
index 000000000..8ce33a562
--- /dev/null
+++ b/mdp_apps.ipynb
@@ -0,0 +1,1310 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# APPLICATIONS OF MARKOV DECISION PROCESSES\n",
+    "---\n",
+    "In this notebook we will take a look at some indicative applications of Markov decision processes. \n",
+    "We will cover content from [`mdp.py`](https://github.com/aimacode/aima-python/blob/master/mdp.py), for chapter 17 of Stuart Russell's and Peter Norvig's book [*Artificial Intelligence: A Modern Approach*](http://aima.cs.berkeley.edu/)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "from mdp import *\n",
+    "from notebook import psource, pseudocode"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## CONTENTS\n",
+    "- Simple MDP\n",
+    "    - State dependent reward function\n",
+    "    - State and action dependent reward function\n",
+    "    - State, action and next-state dependent reward function\n",
+    "\n",
+    "\n",
+    "## SIMPLE MDP\n",
+    "---\n",
+    "### State dependent reward function\n",
+    "\n",
+    "Markov Decision Processes are formally described as processes that follow the Markov property, which states that \"The future is independent of the past given the present\". \n",
+    "MDPs formally describe environments for reinforcement learning and we assume that the environment is *fully observable*. \n",
+    "Let us take a toy example MDP and solve it using the functions in `mdp.py`.\n",
+    "This is a simple example adapted from a [similar problem](http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching_files/MDP.pdf) by Dr. David Silver, tweaked to fit the limitations of the current functions.\n",
+    "![title](images/mdp-b.png)\n",
+    "\n",
+    "Let's say you're a student attending lectures in a university.\n",
+    "There are three lectures you need to attend on a given day.\n",
+    "<br>
\n", + "Attending the first lecture gives you 4 points of reward.\n", + "After the first lecture, you have a 0.6 probability to continue into the second one, yielding 6 more points of reward.\n", + "
\n", + "But, with a probability of 0.4, you get distracted and start using Facebook instead and get a reward of -1.\n", + "From then onwards, you really can't let go of Facebook and there's just a 0.1 probability that you will concentrate back on the lecture.\n", + "
\n", + "After the second lecture, you have an equal chance of attending the next lecture or just falling asleep.\n", + "Falling asleep is the terminal state and yields you no reward, but continuing on to the final lecture gives you a big reward of 10 points.\n", + "
\n", + "From there on, you have a 40% chance of going to study and reach the terminal state, \n", + "but a 60% chance of going to the pub with your friends instead. \n", + "You end up drunk and don't know which lecture to attend, so you go to one of the lectures according to the probabilities given above.\n", + "
\n", + "We now have an outline of our stochastic environment and we need to maximize our reward by solving this MDP.\n", + "
\n", + "
\n", + "We first have to define our Transition Matrix as a nested dictionary to fit the requirements of the MDP class." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "t = {\n", + " 'leisure': {\n", + " 'facebook': {'leisure':0.9, 'class1':0.1},\n", + " 'quit': {'leisure':0.1, 'class1':0.9},\n", + " 'study': {},\n", + " 'sleep': {},\n", + " 'pub': {}\n", + " },\n", + " 'class1': {\n", + " 'study': {'class2':0.6, 'leisure':0.4},\n", + " 'facebook': {'class2':0.4, 'leisure':0.6},\n", + " 'quit': {},\n", + " 'sleep': {},\n", + " 'pub': {}\n", + " },\n", + " 'class2': {\n", + " 'study': {'class3':0.5, 'end':0.5},\n", + " 'sleep': {'end':0.5, 'class3':0.5},\n", + " 'facebook': {},\n", + " 'quit': {},\n", + " 'pub': {},\n", + " },\n", + " 'class3': {\n", + " 'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16},\n", + " 'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24},\n", + " 'facebook': {},\n", + " 'quit': {},\n", + " 'sleep': {}\n", + " },\n", + " 'end': {}\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now need to define the reward for each state." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "rewards = {\n", + " 'class1': 4,\n", + " 'class2': 6,\n", + " 'class3': 10,\n", + " 'leisure': -1,\n", + " 'end': 0\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This MDP has only one terminal state." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "terminals = ['end']" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's now set the initial state to Class 1." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "init = 'class1'" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We will write a CustomMDP class to extend the MDP class for the problem at hand. \n", + "This class will implement the `T` method to implement the transition model. This is the exact same class as given in [`mdp.ipynb`](https://github.com/aimacode/aima-python/blob/master/mdp.ipynb#MDP)." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "class CustomMDP(MDP):\n", + "\n", + " def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):\n", + " # All possible actions.\n", + " actlist = []\n", + " for state in transition_matrix.keys():\n", + " actlist.extend(transition_matrix[state])\n", + " actlist = list(set(actlist))\n", + " print(actlist)\n", + "\n", + " MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)\n", + " self.t = transition_matrix\n", + " self.reward = rewards\n", + " for state in self.t:\n", + " self.states.add(state)\n", + "\n", + " def T(self, state, action):\n", + " if action is None:\n", + " return [(0.0, state)]\n", + " else: \n", + " return [(prob, new_state) for new_state, prob in self.t[state][action].items()]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now need an instance of this class." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['study', 'pub', 'sleep', 'facebook', 'quit']\n" + ] + } + ], + "source": [ + "mdp = CustomMDP(t, rewards, terminals, init, gamma=.9)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The utility of each state can be found by `value_iteration`." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'class1': 16.90340650279542,\n", + " 'class2': 14.597383430869879,\n", + " 'class3': 19.10533144728953,\n", + " 'end': 0.0,\n", + " 'leisure': 13.946891353066082}" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "value_iteration(mdp)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that we can compute the utility values, we can find the best policy." + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "pi = best_policy(mdp, value_iteration(mdp, .01))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`pi` stores the best action for each state." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'class3': 'pub', 'leisure': 'quit', 'class2': 'study', 'class1': 'study', 'end': None}\n" + ] + } + ], + "source": [ + "print(pi)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can confirm that this is the best policy by verifying this result against `policy_iteration`." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'class1': 'study',\n", + " 'class2': 'study',\n", + " 'class3': 'pub',\n", + " 'end': None,\n", + " 'leisure': 'quit'}" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "policy_iteration(mdp)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": true + }, + "source": [ + "Everything looks perfect, but let us look at another possibility for an MDP.\n", + "
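\n",
+    "Before we move on, a quick qualitative check: the short rollout sampler below is our own addition (it is not part of `mdp.py`). It follows the policy `pi` from the initial state, sampling successor states from the transition model, so you can watch a typical day unfold."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "import random\n",
+    "\n",
+    "def sample_episode(mdp, pi, max_steps=20):\n",
+    "    \"\"\"Follow policy pi from the initial state until a terminal state\n",
+    "    (or the step cap) is reached, sampling successors from T.\"\"\"\n",
+    "    s, path = mdp.init, [mdp.init]\n",
+    "    for _ in range(max_steps):\n",
+    "        if s in mdp.terminals:\n",
+    "            break\n",
+    "        probs, states = zip(*mdp.T(s, pi[s]))\n",
+    "        s = random.choices(states, weights=probs)[0]\n",
+    "        path.append(s)\n",
+    "    return path\n",
+    "\n",
+    "sample_episode(mdp, pi)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Each run samples a different path; most of them should reach `end` within a few steps.\n",
+    "<br>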
\n", + "Till now we have only dealt with rewards that the agent gets while it is **on** a particular state.\n", + "What if we want to have different rewards for a state depending on the action that the agent takes next. \n", + "The agent gets the reward _during its transition_ to the next state.\n", + "
\n", + "For the sake of clarity, we will call this the _transition reward_ and we will call this kind of MDP a _dynamic_ MDP. \n", + "This is not a conventional term, we just use it to minimize confusion between the two.\n", + "
\n", + "This next section deals with how to create and solve a dynamic MDP." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### State and action dependent reward function\n", + "Let us consider a very similar problem, but this time, we do not have rewards _on_ states, \n", + "instead, we have rewards on the transitions between states. \n", + "This state diagram will make it clearer.\n", + "![title](images/mdp-c.png)\n", + "\n", + "A very similar scenario as the previous problem, but we have different rewards for the same state depending on the action taken.\n", + "
\n", + "To deal with this, we just need to change the `R` method of the `MDP` class, but to prevent confusion, we will write a new similar class `DMDP`." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "class DMDP:\n", + "\n", + " \"\"\"A Markov Decision Process, defined by an initial state, transition model,\n", + " and reward model. We also keep track of a gamma value, for use by\n", + " algorithms. The transition model is represented somewhat differently from\n", + " the text. Instead of P(s' | s, a) being a probability number for each\n", + " state/state/action triplet, we instead have T(s, a) return a\n", + " list of (p, s') pairs. The reward function is very similar.\n", + " We also keep track of the possible states,\n", + " terminal states, and actions for each state.\"\"\"\n", + "\n", + " def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9):\n", + " if not (0 < gamma <= 1):\n", + " raise ValueError(\"An MDP must have 0 < gamma <= 1\")\n", + "\n", + " if states:\n", + " self.states = states\n", + " else:\n", + " self.states = set()\n", + " self.init = init\n", + " self.actlist = actlist\n", + " self.terminals = terminals\n", + " self.transitions = transitions\n", + " self.rewards = rewards\n", + " self.gamma = gamma\n", + "\n", + " def R(self, state, action):\n", + " \"\"\"Return a numeric reward for this state and this action.\"\"\"\n", + " if (self.rewards == {}):\n", + " raise ValueError('Reward model is missing')\n", + " else:\n", + " return self.rewards[state][action]\n", + "\n", + " def T(self, state, action):\n", + " \"\"\"Transition model. From a state and an action, return a list\n", + " of (probability, result-state) pairs.\"\"\"\n", + " if(self.transitions == {}):\n", + " raise ValueError(\"Transition model is missing\")\n", + " else:\n", + " return self.transitions[state][action]\n", + "\n", + " def actions(self, state):\n", + " \"\"\"Set of actions that can be performed in this state. By default, a\n", + " fixed list of actions, except for terminal states. 
Override this\n",
+    "        method if you need to specialize by state.\"\"\"\n",
+    "        if state in self.terminals:\n",
+    "            return [None]\n",
+    "        else:\n",
+    "            return self.actlist"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The transition model will be the same."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "t = {\n",
+    "    'leisure': {\n",
+    "        'facebook': {'leisure':0.9, 'class1':0.1},\n",
+    "        'quit': {'leisure':0.1, 'class1':0.9},\n",
+    "        'study': {},\n",
+    "        'sleep': {},\n",
+    "        'pub': {}\n",
+    "    },\n",
+    "    'class1': {\n",
+    "        'study': {'class2':0.6, 'leisure':0.4},\n",
+    "        'facebook': {'class2':0.4, 'leisure':0.6},\n",
+    "        'quit': {},\n",
+    "        'sleep': {},\n",
+    "        'pub': {}\n",
+    "    },\n",
+    "    'class2': {\n",
+    "        'study': {'class3':0.5, 'end':0.5},\n",
+    "        'sleep': {'end':0.5, 'class3':0.5},\n",
+    "        'facebook': {},\n",
+    "        'quit': {},\n",
+    "        'pub': {},\n",
+    "    },\n",
+    "    'class3': {\n",
+    "        'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16},\n",
+    "        'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24},\n",
+    "        'facebook': {},\n",
+    "        'quit': {},\n",
+    "        'sleep': {}\n",
+    "    },\n",
+    "    'end': {}\n",
+    "}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The reward model will be a dictionary very similar to the transition dictionary, with a reward for every action in every state."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "r = {\n",
+    "    'leisure': {\n",
+    "        'facebook':-1,\n",
+    "        'quit':0,\n",
+    "        'study':0,\n",
+    "        'sleep':0,\n",
+    "        'pub':0\n",
+    "    },\n",
+    "    'class1': {\n",
+    "        'study':-2,\n",
+    "        'facebook':-1,\n",
+    "        'quit':0,\n",
+    "        'sleep':0,\n",
+    "        'pub':0\n",
+    "    },\n",
+    "    'class2': {\n",
+    "        'study':-2,\n",
+    "        'sleep':0,\n",
+    "        'facebook':0,\n",
+    "        'quit':0,\n",
+    "        'pub':0\n",
+    "    },\n",
+    "    'class3': {\n",
+    "        'study':10,\n",
+    "        'pub':1,\n",
+    "        'facebook':0,\n",
+    "        'quit':0,\n",
+    "        'sleep':0\n",
+    "    },\n",
+    "    'end': {\n",
+    "        'study':0,\n",
+    "        'pub':0,\n",
+    "        'facebook':0,\n",
+    "        'quit':0,\n",
+    "        'sleep':0\n",
+    "    }\n",
+    "}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The MDP has only one terminal state."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "terminals = ['end']"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's now set the initial state to Class 1."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "init = 'class1'"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will write a CustomDMDP class to extend the DMDP class for the problem at hand.\n",
+    "This class will implement everything that the previous CustomMDP class implements, along with a new reward model."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "class CustomDMDP(DMDP):\n",
+    "\n",
+    "    def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):\n",
+    "        # All possible actions.\n",
+    "        actlist = []\n",
+    "        for state in transition_matrix.keys():\n",
+    "            actlist.extend(transition_matrix[state])\n",
+    "        actlist = list(set(actlist))\n",
+    "        print(actlist)\n",
+    "\n",
+    "        DMDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)\n",
+    "        self.t = transition_matrix\n",
+    "        self.rewards = rewards\n",
+    "        for state in self.t:\n",
+    "            self.states.add(state)\n",
+    "\n",
+    "    def T(self, state, action):\n",
+    "        if action is None:\n",
+    "            return [(0.0, state)]\n",
+    "        else:\n",
+    "            return [(prob, new_state) for new_state, prob in self.t[state][action].items()]\n",
+    "\n",
+    "    def R(self, state, action):\n",
+    "        if action is None:\n",
+    "            return 0\n",
+    "        else:\n",
+    "            return self.rewards[state][action]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "One thing we haven't thought about yet is that the `value_iteration` algorithm won't work now that the reward model has changed.\n",
+    "The modified version will nonetheless be quite similar to the one we currently have."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The Bellman update equation is now defined as follows:\n",
+    "\n",
+    "$$U(s)=\\max_{a\\in A(s)}\\bigg[R(s, a) + \\gamma\\sum_{s'}P(s'\\ |\\ s,a)U(s')\\bigg]$$\n",
+    "\n",
+    "It is not difficult to see that the update equation we have been using till now is just a special case of this more generalized equation. \n",
+    "We also need to maximize over the reward function now, as the reward function is action-dependent as well.\n",
+    "<br>
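\n",
+    "To see this, note that when the reward does not depend on the action, $R(s, a) = R(s)$ is the same for every $a$ and can be pulled out of the max:\n",
+    "\n",
+    "$$\\max_{a\\in A(s)}\\bigg[R(s) + \\gamma\\sum_{s'}P(s'\\ |\\ s,a)U(s')\\bigg] = R(s) + \\gamma\\max_{a\\in A(s)}\\sum_{s'}P(s'\\ |\\ s,a)U(s')$$\n",
+    "\n",
+    "which is the familiar update equation from `mdp.ipynb`.\n",
+    "<br>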
\n", + "We will use this to write a function to carry out value iteration, very similar to the one we are familiar with." + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "def value_iteration_dmdp(dmdp, epsilon=0.001):\n", + " U1 = {s: 0 for s in dmdp.states}\n", + " R, T, gamma = dmdp.R, dmdp.T, dmdp.gamma\n", + " while True:\n", + " U = U1.copy()\n", + " delta = 0\n", + " for s in dmdp.states:\n", + " U1[s] = max([(R(s, a) + gamma*sum([(p*U[s1]) for (p, s1) in T(s, a)])) for a in dmdp.actions(s)])\n", + " delta = max(delta, abs(U1[s] - U[s]))\n", + " if delta < epsilon * (1 - gamma) / gamma:\n", + " return U" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We're all set.\n", + "Let's instantiate our class." + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['study', 'pub', 'sleep', 'facebook', 'quit']\n" + ] + } + ], + "source": [ + "dmdp = CustomDMDP(t, r, terminals, init, gamma=.9)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Calculate utility values by calling `value_iteration_dmdp`." + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'class1': 2.0756895004431364,\n", + " 'class2': 5.772550326127298,\n", + " 'class3': 12.827904448229472,\n", + " 'end': 0.0,\n", + " 'leisure': 1.8474896554396596}" + ] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "value_iteration_dmdp(dmdp)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These are the expected utility values for our new MDP.\n", + "
\n", + "As you might have guessed, we cannot use the old `best_policy` function to find the best policy.\n", + "So we will write our own.\n", + "But, before that we need a helper function to calculate the expected utility value given a state and an action." + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "def expected_utility_dmdp(a, s, U, dmdp):\n", + " return dmdp.R(s, a) + dmdp.gamma*sum([(p*U[s1]) for (p, s1) in dmdp.T(s, a)])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we write our modified `best_policy` function." + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "from utils import argmax\n", + "def best_policy_dmdp(dmdp, U):\n", + " pi = {}\n", + " for s in dmdp.states:\n", + " pi[s] = argmax(dmdp.actions(s), key=lambda a: expected_utility_dmdp(a, s, U, dmdp))\n", + " return pi" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Find the best policy." + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'class3': 'study', 'leisure': 'quit', 'class2': 'sleep', 'class1': 'facebook', 'end': None}\n" + ] + } + ], + "source": [ + "pi = best_policy_dmdp(dmdp, value_iteration_dmdp(dmdp, .01))\n", + "print(pi)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "From this, we can infer that `value_iteration_dmdp` tries to minimize the negative reward. \n", + "Since we don't have rewards for states now, the algorithm takes the action that would try to avoid getting negative rewards and take the lesser of two evils if all rewards are negative.\n", + "You might also want to have state rewards alongside transition rewards. \n", + "Perhaps you can do that yourself now that the difficult part has been done.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### State, action and next-state dependent reward function\n", + "\n", + "For truly stochastic environments, \n", + "we have noticed that taking an action from a particular state doesn't always do what we want it to. \n", + "Instead, for every action taken from a particular state, \n", + "it might be possible to reach a different state each time depending on the transition probabilities. \n", + "What if we want different rewards for each state, action and next-state triplet? \n", + "Mathematically, we now want a reward function of the form R(s, a, s') for our MDP. \n", + "This section shows how we can tweak the MDP class to achieve this.\n", + "
\n", + "\n", + "Let's now take a different problem statement. \n", + "The one we are working with is a bit too simple.\n", + "Consider a taxi that serves three adjacent towns A, B, and C.\n", + "Each time the taxi discharges a passenger, the driver must choose from three possible actions:\n", + "1. Cruise the streets looking for a passenger.\n", + "2. Go to the nearest taxi stand.\n", + "3. Wait for a radio call from the dispatcher with instructions.\n", + "
\n", + "Subject to the constraint that the taxi driver cannot do the third action in town B because of distance and poor reception.\n", + "\n", + "Let's model our MDP.\n", + "
\n", + "The MDP has three states, namely A, B and C.\n", + "
\n", + "It has three actions, namely 1, 2 and 3.\n", + "
\n", + "Action sets:\n", + "
\n", + "$K_{a}$ = {1, 2, 3}\n", + "
\n", + "$K_{b}$ = {1, 2}\n", + "
\n", + "$K_{c}$ = {1, 2, 3}\n", + "
\n", + "\n", + "We have the following transition probability matrices:\n", + "
\n", + "
\n", + "Action 1: Cruising streets \n", + "
\n", + "$\\\\\n", + " P^{1} = \n", + " \\left[ {\\begin{array}{ccc}\n", + " \\frac{1}{2} & \\frac{1}{4} & \\frac{1}{4} \\\\\n", + " \\frac{1}{2} & 0 & \\frac{1}{2} \\\\\n", + " \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{2} \\\\\n", + " \\end{array}}\\right] \\\\\n", + " \\\\\n", + " $\n", + "
\n", + "
\n", + "Action 2: Waiting at the taxi stand \n", + "
\n", + "$\\\\\n", + " P^{2} = \n", + " \\left[ {\\begin{array}{ccc}\n", + " \\frac{1}{16} & \\frac{3}{4} & \\frac{3}{16} \\\\\n", + " \\frac{1}{16} & \\frac{7}{8} & \\frac{1}{16} \\\\\n", + " \\frac{1}{8} & \\frac{3}{4} & \\frac{1}{8} \\\\\n", + " \\end{array}}\\right] \\\\\n", + " \\\\\n", + " $\n", + "
\n", + "
\n", + "Action 3: Waiting for dispatch \n", + "
\n", + "$\\\\\n", + " P^{3} =\n", + " \\left[ {\\begin{array}{ccc}\n", + " \\frac{1}{4} & \\frac{1}{8} & \\frac{5}{8} \\\\\n", + " 0 & 1 & 0 \\\\\n", + " \\frac{3}{4} & \\frac{1}{16} & \\frac{3}{16} \\\\\n", + " \\end{array}}\\right] \\\\\n", + " \\\\\n", + " $\n", + "
\n", + "
\n", + "For the sake of readability, we will call the states A, B and C and the actions 'cruise', 'stand' and 'dispatch'.\n", + "We will now build the transition model as a dictionary using these matrices." + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "t = {\n", + " 'A': {\n", + " 'cruise': {'A':0.5, 'B':0.25, 'C':0.25},\n", + " 'stand': {'A':0.0625, 'B':0.75, 'C':0.1875},\n", + " 'dispatch': {'A':0.25, 'B':0.125, 'C':0.625}\n", + " },\n", + " 'B': {\n", + " 'cruise': {'A':0.5, 'B':0, 'C':0.5},\n", + " 'stand': {'A':0.0625, 'B':0.875, 'C':0.0625},\n", + " 'dispatch': {'A':0, 'B':1, 'C':0}\n", + " },\n", + " 'C': {\n", + " 'cruise': {'A':0.25, 'B':0.25, 'C':0.5},\n", + " 'stand': {'A':0.125, 'B':0.75, 'C':0.125},\n", + " 'dispatch': {'A':0.75, 'B':0.0625, 'C':0.1875}\n", + " }\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The reward matrices for the problem are as follows:\n", + "
\n", + "
\n", + "Action 1: Cruising streets \n", + "
\n", + "$\\\\\n", + " R^{1} = \n", + " \\left[ {\\begin{array}{ccc}\n", + " 10 & 4 & 8 \\\\\n", + " 14 & 0 & 18 \\\\\n", + " 10 & 2 & 8 \\\\\n", + " \\end{array}}\\right] \\\\\n", + " \\\\\n", + " $\n", + "
\n", + "
\n", + "Action 2: Waiting at the taxi stand \n", + "
\n", + "$\\\\\n", + " R^{2} = \n", + " \\left[ {\\begin{array}{ccc}\n", + " 8 & 2 & 4 \\\\\n", + " 8 & 16 & 8 \\\\\n", + " 6 & 4 & 2\\\\\n", + " \\end{array}}\\right] \\\\\n", + " \\\\\n", + " $\n", + "
\n", + "
\n", + "Action 3: Waiting for dispatch \n", + "
\n", + "$\\\\\n", + " R^{3} = \n", + " \\left[ {\\begin{array}{ccc}\n", + " 4 & 6 & 4 \\\\\n", + " 0 & 0 & 0 \\\\\n", + " 4 & 0 & 8\\\\\n", + " \\end{array}}\\right] \\\\\n", + " \\\\\n", + " $\n", + "
\n", + "
\n", + "We now build the reward model as a dictionary using these matrices." + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "r = {\n", + " 'A': {\n", + " 'cruise': {'A':10, 'B':4, 'C':8},\n", + " 'stand': {'A':8, 'B':2, 'C':4},\n", + " 'dispatch': {'A':4, 'B':6, 'C':4}\n", + " },\n", + " 'B': {\n", + " 'cruise': {'A':14, 'B':0, 'C':18},\n", + " 'stand': {'A':8, 'B':16, 'C':8},\n", + " 'dispatch': {'A':0, 'B':0, 'C':0}\n", + " },\n", + " 'C': {\n", + " 'cruise': {'A':10, 'B':2, 'C':18},\n", + " 'stand': {'A':6, 'B':4, 'C':2},\n", + " 'dispatch': {'A':4, 'B':0, 'C':8}\n", + " }\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": true + }, + "source": [ + "The Bellman update equation now is defined as follows\n", + "\n", + "$$U(s)=\\max_{a\\epsilon A(s)}\\sum_{s'}P(s'\\ |\\ s,a)(R(s'\\ |\\ s,a) + \\gamma U(s'))$$\n", + "\n", + "It is not difficult to see that all the update equations we have used till now is just a special case of this more generalized equation. \n", + "If we did not have next-state-dependent rewards, the first term inside the summation exactly sums up to R(s, a) or the state-reward for a particular action and we would get the update equation used in the previous problem.\n", + "If we did not have action dependent rewards, the first term inside the summation sums up to R(s) or the state-reward and we would get the first update equation used in `mdp.ipynb`.\n", + "
\n", + "For example, as we have the same reward regardless of the action, let's consider a reward of **r** units for a particular state and let's assume the transition probabilities to be 0.1, 0.2, 0.3 and 0.4 for 4 possible actions for that state.\n", + "We will further assume that a particular action in a state leads to the same state every time we take that action.\n", + "The first term inside the summation for this case will evaluate to (0.1 + 0.2 + 0.3 + 0.4)r = r which is equal to R(s) in the first update equation.\n", + "
\n", + "There are many ways to write value iteration for this situation, but we will go with the most intuitive method.\n", + "One that can be implemented with minor alterations to the existing `value_iteration` algorithm.\n", + "
\n", + "Our `DMDP` class will be slightly different.\n", + "More specifically, the `R` method will have one more index to go through now that we have three levels of nesting in the reward model.\n", + "We will call the new class `DMDP2` as I have run out of creative names." + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "class DMDP2:\n", + "\n", + " \"\"\"A Markov Decision Process, defined by an initial state, transition model,\n", + " and reward model. We also keep track of a gamma value, for use by\n", + " algorithms. The transition model is represented somewhat differently from\n", + " the text. Instead of P(s' | s, a) being a probability number for each\n", + " state/state/action triplet, we instead have T(s, a) return a\n", + " list of (p, s') pairs. The reward function is very similar.\n", + " We also keep track of the possible states,\n", + " terminal states, and actions for each state.\"\"\"\n", + "\n", + " def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9):\n", + " if not (0 < gamma <= 1):\n", + " raise ValueError(\"An MDP must have 0 < gamma <= 1\")\n", + "\n", + " if states:\n", + " self.states = states\n", + " else:\n", + " self.states = set()\n", + " self.init = init\n", + " self.actlist = actlist\n", + " self.terminals = terminals\n", + " self.transitions = transitions\n", + " self.rewards = rewards\n", + " self.gamma = gamma\n", + "\n", + " def R(self, state, action, state_):\n", + " \"\"\"Return a numeric reward for this state, this action and the next state_\"\"\"\n", + " if (self.rewards == {}):\n", + " raise ValueError('Reward model is missing')\n", + " else:\n", + " return self.rewards[state][action][state_]\n", + "\n", + " def T(self, state, action):\n", + " \"\"\"Transition model. From a state and an action, return a list\n", + " of (probability, result-state) pairs.\"\"\"\n", + " if(self.transitions == {}):\n", + " raise ValueError(\"Transition model is missing\")\n", + " else:\n", + " return self.transitions[state][action]\n", + "\n", + " def actions(self, state):\n", + " \"\"\"Set of actions that can be performed in this state. By default, a\n", + " fixed list of actions, except for terminal states. Override this\n", + " method if you need to specialize by state.\"\"\"\n", + " if state in self.terminals:\n", + " return [None]\n", + " else:\n", + " return self.actlist\n", + " \n", + " def actions(self, state):\n", + " \"\"\"Set of actions that can be performed in this state. By default, a\n", + " fixed list of actions, except for terminal states. Override this\n", + " method if you need to specialize by state.\"\"\"\n", + " if state in self.terminals:\n", + " return [None]\n", + " else:\n", + " return self.actlist" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Only the `R` method is different from the previous `DMDP` class.\n", + "
\n", + "Our traditional custom class will be required to implement the transition model and the reward model.\n", + "
\n", + "We call this class `CustomDMDP2`." + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "class CustomDMDP2(DMDP2):\n", + " \n", + " def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):\n", + " actlist = []\n", + " for state in transition_matrix.keys():\n", + " actlist.extend(transition_matrix[state])\n", + " actlist = list(set(actlist))\n", + " print(actlist)\n", + " \n", + " DMDP2.__init__(self, init, actlist, terminals=terminals, gamma=gamma)\n", + " self.t = transition_matrix\n", + " self.rewards = rewards\n", + " for state in self.t:\n", + " self.states.add(state)\n", + " \n", + " def T(self, state, action):\n", + " if action is None:\n", + " return [(0.0, state)]\n", + " else:\n", + " return [(prob, new_state) for new_state, prob in self.t[state][action].items()]\n", + " \n", + " def R(self, state, action, state_):\n", + " if action is None:\n", + " return 0\n", + " else:\n", + " return self.rewards[state][action][state_]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can finally write value iteration for this problem.\n", + "The latest update equation will be used." + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "def value_iteration_taxi_mdp(dmdp2, epsilon=0.001):\n", + " U1 = {s: 0 for s in dmdp2.states}\n", + " R, T, gamma = dmdp2.R, dmdp2.T, dmdp2.gamma\n", + " while True:\n", + " U = U1.copy()\n", + " delta = 0\n", + " for s in dmdp2.states:\n", + " U1[s] = max([sum([(p*(R(s, a, s1) + gamma*U[s1])) for (p, s1) in T(s, a)]) for a in dmdp2.actions(s)])\n", + " delta = max(delta, abs(U1[s] - U[s]))\n", + " if delta < epsilon * (1 - gamma) / gamma:\n", + " return U" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These algorithms can be made more pythonic by using cleverer list comprehensions.\n", + "We can also write the variants of value iteration in such a way that all problems are solved using the same base class, regardless of the reward function and the number of arguments it takes.\n", + "Quite a few things can be done to refactor the code and reduce repetition, but we have done it this way for the sake of clarity.\n", + "Perhaps you can try this as an exercise.\n", + "
\n", + "We now need to define terminals and initial state." + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "terminals = ['end']\n", + "init = 'A'" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's instantiate our class." + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['cruise', 'dispatch', 'stand']\n" + ] + } + ], + "source": [ + "dmdp2 = CustomDMDP2(t, r, terminals, init, gamma=.9)" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'A': 124.4881543573768, 'B': 137.70885410461636, 'C': 129.08041190693115}" + ] + }, + "execution_count": 31, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "value_iteration_taxi_mdp(dmdp2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These are the expected utility values for the states of our MDP.\n", + "Let's proceed to write a helper function to find the expected utility and another to find the best policy." + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "def expected_utility_dmdp2(a, s, U, dmdp2):\n", + " return sum([(p*(dmdp2.R(s, a, s1) + dmdp2.gamma*U[s1])) for (p, s1) in dmdp2.T(s, a)])" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "from utils import argmax\n", + "def best_policy_dmdp2(dmdp2, U):\n", + " pi = {}\n", + " for s in dmdp2.states:\n", + " pi[s] = argmax(dmdp2.actions(s), key=lambda a: expected_utility_dmdp2(a, s, U, dmdp2))\n", + " return pi" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Find the best policy." + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'C': 'cruise', 'A': 'stand', 'B': 'stand'}\n" + ] + } + ], + "source": [ + "pi = best_policy_dmdp2(dmdp2, value_iteration_taxi_mdp(dmdp2, .01))\n", + "print(pi)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We have successfully adapted the existing code to a different scenario yet again.\n", + "The takeaway from this section is that you can convert the vast majority of reinforcement learning problems into MDPs and solve for the best policy using simple yet efficient tools." 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.1" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 6f4d45521df3593a0874686fa5eafdeb91d49e56 Mon Sep 17 00:00:00 2001 From: AngryCracker Date: Tue, 27 Feb 2018 17:21:36 +0530 Subject: [PATCH 2/3] Added images --- images/mdp-b.png | Bin 0 -> 17560 bytes images/mdp-c.png | Bin 0 -> 18293 bytes 2 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 images/mdp-b.png create mode 100644 images/mdp-c.png diff --git a/images/mdp-b.png b/images/mdp-b.png new file mode 100644 index 0000000000000000000000000000000000000000..f21a3760c7644f91a79ac72c81ef8f8648682a80 GIT binary patch literal 17560 zcmaI7WmFu^+66i|FOZO+0fJj_0>Rx~g3aI-Ja~ZM7Th7Y1qiMaAUFgvxXa)c+y);U z=1$&o*80x9Kkg3})7{n8UDYM~*?T`xn(7MAaL9200KhXPMOiHX096D4K)J_8MV={c z7-U5LLvhzqkOtI@z1~M2pxa2PNdW-h1l$L64CL`YE{gi@0KoI#fB#VWoy$G|0J0oP zvQj#qO<*hjiELi}yZ8A&uM)BY%hay=bpF`rkvos(sB}3J|(fPtflxERL@>$OSGHvd6cw4i!zIzJOVEt;TkJZdX>nAV7CXFb)f-X zd}JU2(7aY;2S^Ggf&f7-G++SWkE8_(peGoD3iyH+g#p0EBESW_c>aF{U5vZU#gd!* z)Zw0ihkPO!fDC>DhdT2gLvhQfLHab{J2Jk?fe<2LOT-#dp#g!GjCvBfN%W*#&;1@DBxW z)7rB;Jg4-!e1`gq!6FxgTjlYl9$=<30~%VcpdMleyfXA7boitD+=>Jc`-R+L)dT5dfWM$!`*W?bg#Q&Zx2(*CWL~&+zl=y$L810Dl>60EEOBfa%p9lnfW2N|8 zo7Nh{-b4tj&B>;sB*F*ovon6mHc`)90B7!!J>nahRm)^z0E!N>K}`S1b}qWH*vQrY z1WrJcl3AMVvv(HI#M0VGn$XV7WF*pDp#}#6L_AvNwL3LAe(Bk};BEnn4&;Y;mMs%N zUff_nZ)`u05w8C+ZBbXXM2Po1$$quX2oaRo@X~g6(*1b%lb*eV+CBbWO17{KqU7wB=fX+(m8YZmC!2iv_&pABGS1hZj8OEEr4m zG`>gMA8#qwU$CbgdfnzOYaL!2tUh62iR?H}4g&y5)6C$H1kDa4WRms|W~{)fs;ZNG zs4kEE$DL11(I3*i^u+tK2d~g5HyO8L*ERx%m&Ubg=PtZ;KSDJq?8S{)WthrF8P` z^Pe~qL+0a?=jz<(AIhlcF;UDucMuLUN!h%gE@@l+=(C_X>{NcC1P`T$D>^oTNN*kW zm7|Wb;$F5Gan;n8eFF!V<(E^2+PC=U5fY17x2X1u55q1#2c#RI10uiXMQxi}L~ZQ^ zu;i1#Em)Hlh!0z}CzPG21S*PCL*rDfH(gp5C;TI|elC~oOs5*LCJp3nsX)VxYYyld z1C(=@m3MQu@4RPwrmxY4u7=LER4HpkMi#vflBw6znLMV*w!M(Y(Oy;F{O8>Lx+Mm@Y<%?nX`vZ>ppcM{biy2oDLC;so%n*msD~-V) z{ST2GDWAU0!>{cic|Lx!d6l()xgcx}_VvF$zq%XCw`xShe(KbLi_YA(`AWF&+8^ul+RdWUg^}YX zncZV9HLAKh%jnYF!qkNFjf~SHPKOHaza$7ZP&+d@jvLsiqpB`Q(EG(KOi7MR*Q^*Q>@F(RYmCdcnd#Qur&EHCZgj%QLoIxcD(B zj*%Bxq4Pnjz1eFDb%pf>WPd$?8DBtBw!@IzX5>hfWJwPnQnt;+4u z4_Y(Nu~I?jC+~^1t;aplnR=6-YCOIeo!A_Ms{f}dj<%Aam--C@s%0>e!@|zOp^C9h zKLu_N(Jf`BHlx}tlP>y>ckUZQA7YUoOxVE9!`QzvTN_kCs&RdlS#LKRbjoggU>!>u{<2cqjmCza7YbQj3J3($V8}gbYIA zpSJ6)cfT&07hinUHFT@@W9J14b2>*Ch5GmiSy7Xx?JuWDS#=&wfdw@z@}@a40I`fj zmKZ77OSj53n!J{8vKpmplGlzaYIQi$>}tPDbmf-!X0>@eU_Z6r(J1 zV~Ebxkye83X;^1seNbj3AmjZa*3#0oVxm;NEffmvKl(8=nrkR?1)pTZ2E>0hE*^Br zm$_@gM`S*Dl}K$5=WN{dpdp4v?&xYtLM}j(AHIC=%VPB+7-_9J=zTCrN$-qg1Dr-$ zT3{&DPSDi=z8R_#R%*EN+TfO-0%&44GCm$hZrcyP<9_a?7XifW17+WUU`_zbN`)Yfb%S z>g-4OR-T?ahH;VAmHiHrH_w)6V_F_%IY_3g*I`Qzq-jk{et^~HTOq*R;=OE7Nu6G= zBviaP9Ih|qa58US!F=7oJ{6tn)46C1!+MS z07o_vPstY!@S;YQv%spTa2YP*{yA-78Tg^&YDAlPyxrw$p^O6eTEf)-KF-rgdX*WA zP64U~)iUW%(*f>+RZntvzv_5J6=x2Pxf}Zw|3y3EEmKvaYtwQyqJZm_?m;uSeO@|K zq3HIZA+8`}0_WhH^Uj&u)HCn=f$ftlx6}ZFcvO$|9AdQDDF0N&t z{!!1XKCjOGeA9>WCIV&SqBgABmLeC#MPLKZx5`uIbV%22katf(UOSoEBcJhCVgT?*$E}Xstq7T3WcM%Sg2|ds3}(HDMK1D_n{RX0PRV!F}v? 
zv4KygzsD=pC3X+dyyg{%_|P{SBM_HsRYSSOSDjd^LK}ddCyn^TRv)M)%q+Pvk^_-?bdJD5cC<5kf++s+S=k`>;3T( z!t8pfJF*%JQGORqC03S8)%}?cEWDD`Vca~ZNl2^rbZVI1d7fHp)OuMhVf2!D?S8Y( zE5Ff!U6rAJ$mo>LqfX2IGb<-VcA4;SDk7&|*&1-SByn0wS`(nyH>J>ZR-Pd4c70zH zu<>@Tzs||zcEsJ80C9{`|0IIQy@9xw7&>qBBAz_Yf(&X%Olo*x5D+)!x*R zO4wT4EPKn^R>t#f5fu7gSg?zI@(<+qg33wiyHU zH+<7~OswP@8&DS&+0EF2rjw~ZKrQ<|k1MW^YFC@m7mf4HJjO{WfhhiR)j$^<bH2Zk{`tIp=XyI)rzySb9lqHd4U+1I;E|#g6KqZH5X$GIB$uOAl$f; z$o-S*xKI3g{KyBRv2ty7<#zL={HuY!zL3y+OZ32d{H7*ram9)AfDq96YmEq@*F{UG zUkW$`jy1pG=j`W^>jb671NFZGhWk-X9T#BWl#Nj zsv&y@$bbgWw=D~-y`*a@iH4vC+3gV;e>KvUws_`w&N3&~a5*P1c044=jMoN`wlw=> zZuDM--5ZpSP2{ovO>cZ7+K8X0dgDL2T+i9~-HZ`fA}C{P zHv^KAH9|S1YuHExnM-720Q8Pz*hu*Fnx&C9N@|=vuWr7?31@6Tq7gKRV1vacDz=*f zIU|%;%0@D5CiGk(uMHr?K`$XZ4M?=1#Vvyb9^5}}!t{;^RB#MzUjhMrMD-KVmKI9g z|8Q*J{E}TRAoh5x#?FHMw~~yKvUL43HTJxu1j7P3exsb=fr>C1 z@C(x81J=R<;spq~c%kSjoUMxs!BLAS$;`$1VIQo>4pE#Lt}0`kRg{8{jRy>?dng@= z+A%RjaT4kf=r>Bbi+uCiEsP};;ZtEC5tbEhC`<$)A!okJ!FUwsZ%Q-7W*VYb_2J+G z0aaC0S5`vTtupHLTTpMzl?Yr_M{y_I@9cJA)irRc&y=W%RNkRDWID37CW0UxUH#!USa z&lWln%}apWO)$yB_AZJzr;10BUC^f&iQmjo=nSmG=)tX<_V1;vKK+Gsy@XL=G|?8B z#Fr2p13jgVjUk$bjgmkkvM(egL6EFcfOYDm1oAD|^c30Afm_mS-+(=ENa{#M9fa7z zDx*0W8ShI52CJ}V>aMt2m6A9PG}A!==rO)D@);OxU{pf`lok1koHzA#fJyR2PK-By zPYE46cUU7M_4Hsi0HBZLD+DLN2!)D1uGH)lgM^)cGBo%V+g^O7<_FzcUR@oaiw`35 z4Z7SHc|6X}J{)5s6aa}ucNB6|0KyvvAV|n-wv#jV>okAJ0Azt2M?p#KOkm_F{8dJh z2znXGHXczk!j6W4Vpc-pO%-#11K45RD*dsO3pz$3VQze*Hh;XFPEm7xv*po*yhfB)F>o>+wC5oKd0zKnm!bMM<- zl?$DvLhB(}hccLxRfu;b;054?Kayv*VlEvOljTmfBm)-!kpH{r{Fi*@Rbp>NM*(On{NJf* z4fboS2oyj@8rdu;2T4||zC_;tzYMndP*iUnF5p+}NJXE;JO;q5b&nW5=oL0%6wL7d zp|t-EuQS{&_#Ax0RC|Vq?A6-Zn|AU{x-_+U8+#Vs0ccEiGC@3kS}hjEPBwB#8q4Ym z^NXI_MKfb+y<4mIw(MWFcV8i72x4Cvj2QjnA$x;9a-n*Lj}}ChZEC@H%^%HLHC1dc zW2IS?mF^DTs1d(xO~_xFb}M0G*1G%WpjRjHke#rS>c8}Hc>pnUyyJza_QP+0S>H|? zXtW3|1$dvW@1Lk@-K1=H+u>H$)9?Rql$XZ4FuECTB4Ml*Tn!!6mS`)5#g2xU9p1!@58&_utFUu_@^~MnS`59;+%jj? zt(v*0IIY_07>47&Q|{3}lD)YCtDpK<^>4)*NC1f(?)&?f3O~=TEIQmSETMX%>-IWy z`m`%`@VTG&j_Dq#%XnVLW~*&D3jKN~NKO5`#hX#@Uh?qxiyN%KreDKGcl6PcA6Gb! z?(>mpRU79se@ULwO;sj_biEl?X5iNOV$H$6YWh!|Kbgkbl*JW$4aQ}+j&QU=sE58$ zmcdLX_AQ^jW!7#v^=rOd_}t9&_q6SUlD5VjE+*&gqp-DE3uz@cpe`v!rVeLNnj?@yyR;eQ2ml^eb zt%WRh&5u%re5(Dn|2+And3u`(8l$0J|JzJT{%cmD##XVwQo^xEy5@lNHHV_vlg9Q( z+~=MnYlZ3NCdjz7HB_%FK}ekQFyK61ZKSG;D&YTLLWRp77T*_S$1N-$3pWUxZ+x zm-9rFv;^l}7pc_PO((y9T2p?hN$u`7in@9Z|BR^h$k*X>cny(z1@?GOZoivjr3vv> z(9jb5c#RP9(L(MN-jh@|=Dx!^z^_A<{*5X62XUP+{L0Cp0cO$#nt(!NOB} z2%S!YRaMU!UH<;1EfHI2(ex!dFa}znroJfoZY1`A=Ex}U^!)1LyGop!ciRea)Iy4` zZjr&%3BlJ^;RLG#hI*w>!(hYGN@VkwMJM2$Sdn=xjA;wNTw~p1vmYV0`G*3odR$IK z@oAUYn)TNUVl!W{2S*({k{3sTS;b*oZli^(Jf|fjN zZaP<}r)5ACBV)Za5Nea-aoJGcAE18I+@9(2>^*m>N6uT69op;UZ(s%7leU$uolYOa zYI(~Kp1rN&s`@)(*F)X%O!cnKMve^#hlL-}UbmQ{|KPaTUcde^2-A&rkVS?;m2vm>zm>5R-P8@yx-rNB9(?C-!ZG39NTV~C{eJYGg} z*A1lWovCMMsb?efHvFwoCr!YL--7oyXzPsiuJH&KPRHdhsr$qg%8^X<`>2TE;U$gA zG%s=dZ~k{A_rwA2)nKDVXqvV#;Jpowz1MhiMY2brw2jqFWzUtVQvkoIfygpc;&pv`@aBDM=>zdK*0YDz_XJ_> zf_+>=hN3tD^&gbk@V1H1n5{L&d=1#h^(JOS8c zZ}K3?$WQnl+ccp(z0#t1S_(;-! z0RXBNs~sVj1sDKr5;&)nT}_nS8V$hgV@x@i0Vx#zQv8c2cw#0{gZ3kjq0TELnwFCS z*noP1l@!pFg%1W9^57M;XUtLv8!4eG;>rI*j;m6Ioj>tEc!O(HtkKQddBO+q!Yif< zTu5es6j+#lQg9T5Io!&nN5wZvTbf+!k~NluWEZ)et?V({!b1gkgDkr{LXvQ+cUILI zO*w2*d^j^XfETH`c@eL3flG$No+R)2FAX#ObZe(Q+wI5+CuBahq8In(_ebm4@+CY! 
zxNWS;)&Ajdd)J-h{9&-2wdP~`iUWN`cpzH}5>l`r;eiG_i*Nx{uzHC_!CuH%!f~Fg zY0=45j;iPAHJ#&p6VE6c^Tww+sJCJ|&0S1?f8A>&a%Sm4d;0*zX-SzP@g^puZ%-}H zk8Jdxrk$TB!77(sBP<9a4&{_$c@xDbN87OVen4Gl3tZ8budoXhXMVGugw&ZRo9>_M z?*0j2F&1L@0ic(RVhj^+NUl80LsNy-C7YPSVwpVDWoBYnmf$I@v!kzzdgvUGFJp{k zpf|Q~V5prIvy5+$7?eT7HjJYw`CsN%>{APiL%DLmlA%huz!Ybkmce(GXYyfk^DfR- zTfUj9CG~`wHB5SGwxoMpRK_7%>jWxp5ots}CXj-HF4*#*^e;ME3}w4i2xBR-urF)k zVV#F87+5#HD-{X;`Nc26nHjA_q)b%lr%M7u&0E6jCKF!5U)#g!qmbK+-O(>Hm=p3f zg%M`oFQAWC`>eNmOi0=|oC9p^;8BJ7DS8^&cin32zf%l@aX1KYSMC@{xYdy1EXL1y zI`BjsB)T0kmf~5_G<#+2Y4536MT2YRlmyz7w~$K^D155rT8I)4IHdQK&JGCb<*E5T z!H~8kf^HeyB4<%R5k*X9BgLmG3!i&lS1v~yMNBe1%5m{=^YOE$FjgH+=d<-GLj*Ig z>pea~f>(+Ho3hP{8Nb3wxRuF~K;1}*LGR7YPTD)ID?tsO2aOhdEo#5(b?a-WgoI%} zJrEu5g1;DlQS#R)cjYqENOdy_NDD+byq9V?_3Op?!$$->cXNiD1saoL^7^N=O%IZd zr@+n)zPQaIsb{BRk;tPZR>AQZv9X39m7| z^}ju-szJAn-AfDW zk4hDdNt=I(PcIZzdk7YK(#fV7F_wY7=%7xKK?z?Rs=~20Qo3`dm;i`k=w0l6%QJNc z>MS(+J+2R2i@zN0h}#(QJh@|Aw@3#aqXfGd1auq%1J)p^4|9H*3Z!8ZS@mX}iw(j0 zb*{vIXfngquJ)YplVk(+ISmTWzUrTbM&;rBwH6tmyzQuZqzPzIa=E47=#$F7(F&hP z>CAJseK>T?7HI>}k?pW`O2||E9__(PT6166?^tDA zsI1chXgRn;QG+`CZ6qsAllsmMtq8~8NGQB3R{MKzWgm{6ky6e?R&-rgcmoQ=D3*Y& z-PP)a0bz7sr2@sPJfG^q_hw4;{?cr%8_6Iy%Lw5bJZW6XoVVww;1|W;ca4X+XDZ)) z2!^(@7cgT#s>_|v;_Gxrk ziqNLM_j7t&g~-%epDbi4X_8fED~nuLcYaE2fr&-AJ4E4PDG021wcqRAzX^Zw2uZ&8 z{D8wA%HzOL)UGE%8b=+|Hx4tZ8!dbG&LK=yKCnsw87$dQZ03 zUR7k9-0$-o98B+x7?bK$Di8rD+YZ1r*ZN%O)Ix&5b5*5J^TIcuJB*f0w&&f>-&`(B@}P!!;x_xBWi!SlE0U>=xL;2bquvA9o0OM1Wyax zlJ5~$Ovoedq|DS{foSb9{zi`5Ld%$=K+CHJ-K2I)3kI8SccVkVIzh(NI|`5?EpDz# zm*HnDXUpc$F&6&!CJpsdQ)2h|BnnHd46W78*Bg#j#smu+M9;LkLwm$)Z5PR;n!{(Z{a#^T2(D?2QvA68w{>Ya+P-4Wnj# z9qLS7O|o+V6F&o^i-5D$`(8e-7zKevGNBN=uDMxYUW#Gc43Z?!G8l`Tme)2GlroM` z|M|>=QTX;r&Q!U(lg(A5vLz(oBxRv8v(0RvZ_^vU$VS13@EsvytF@csa%fI_SKPF{ zkWg-C0YA?@Rkk)yw6+uKV4oYU-ydks?2)WGXHI%z=zljE$JJq$ZizhuavC)?sp;@( zxm6LNAbcyLetQEh%ZqLzhUI)UAf3yP-W8$${+=M!D-!DxSp0!hxk8W^S0Kz?fo?6# zX3Lpqqf_iGO(4e9H}td0dm|7r6zzV!5Q^`4t&^II+>9IJ6NVSQ=S+?E@F%;?t|kw* zBik@6AknKVX6o2*mHymo(b;&Uw-+Xc>EWv~evn7X+FN>0R#ogdZTQuon2! 
z`}YZz#PxPxcYo#eYUXU?*Ar>#s8aVI7TUXDDzVm!&P`|2SKhG}u_BmC?1#|Y5uo!* z^X2c~>8un9Ygs~1B|;;!@y??;orTOco9SyKejdy(6wWlMOPS7NfMFnFO5H6A*{Ayr z4(B@eDXva>PMovVlbDF@xz5jzNjJv{P&e(ImD2Psn7S+YJ1g&680Bi!L5pcU?~p`i zw>o=rD`xYZNiC840f7e@5K*_V?07u2#OJv7ur>EXiB_OXQ=*WnPNVANZJARBTPr7f zCpH2Gy$Y;pD)uEO4FvjCm!ZlBh+XCw*VxxKj>urOY2eao?{1_+KR;Uw^EdKtz41+$ zbrNg}h%pWdF3bU)I}Riohw$*{HUBst%crV!O6Mg_khm#M-SrZ%CfDiX;v%J5^Pt9- z=kAE_9-t}>_xFFwPlw4;cU_5W{IL+RT%}0^_O1=)@H*8EYae^~kzRT3vu&vPu*KT8 zo-{&zzuL2`J;@wY1kUU9=GCkoH*scVzI*mvt@f*t!TV`uU{<5Ce&cRas7vqZE*E8o z-;%-D7z=Xoz>0Xr`^#9RYt!g_8_}%YNa?jNU}D1*>jT>#vdkZ^t|iVBqNx4THlYK>@X=g|rxk|<3sMBMb-)+^~yfxu#3OH%r{%0(v>Fqt~jmoBjT3n1>m*N=SdTw{0 zmbD)BQD9f5cRSsfW1M?VR?TP1(mtkP_EO;KMw0R8;oObX+OkQq)a0AV1V@hTwmN*> zlosO{Ht`{0%{tzG`(fWtz8&o=J+H%Xoy9dDjoW1}{i zcL7WfB%XOi6|)oq564f(hA`pk?DMv3E7O20$^1KVHKV_Slh&l=91;Bxw+qWjGwPf- z@b5Z2y&GCBPKeBwSh`!+*amJcYi<9tl4=uEF9950%Qg-CLmvh-Ju}_&@V>4!nGZOT z1FCZBlqG1Vke4W^_d$zHUpg<%b%d+|OLOsbeFqn=V*;5VYkpdy6<8#9oR`<`9({+- z$3)(9`VGCd$NQ6oWGj5^jQ_f7oGjatMvZJJgr;xlVLokCiBwXRSGPU9bE*0FZksXG zwN#?Bcb6WG5uPwXT$$QB@3XdQ8~p3FtAlVDv0uRtT~UI={_*nd>X^fg!(JnpTvu2w zJr{w-u9JK9#qw<0=XV#rq-1P@?E_3{erP;LZ@Saf5o1WF=;O|T#_;f5cIMW(;l;*K z>J=bvdv1TbR-|^eWvphiZDQNc+e74DScJF4?h~H;l^|aa+l9j}1g?;~Ngq%%#B#mf zv71jahX__|9RK*ZmV1(`?W|YCe7;8N>=rk7YP@$9Lme{|{^Er6b`7l3$5VX0=w?{% z>@m&9UhHNq8Gjhpr_>+kWCdG3e#uX`3KVvQ%M%(U zvcvvpT0VoNc32`uCcO_=YVfE2yF*0l%}sS?5pLL~^K3x51It>N6!G+?PN_g-Q+37J z%lN+F&sO@&8|2<+5^L-x-V15UamnO{Wdk_-l%Ck(41{~kZfbv;Cg3`K3CbI1t*nb7 zlVoqag(J50RCg<{s2Y*<2Onms$>9E_!{Fp` z>dLuOC9-Qk?8!;2ZodsjgDDb3LZ93t4*x7SL*kncN0XlkKD(tDe z7WN#j!}T%H#dieFXZ=KNd3kss)PGQZov5Y$pu4M)kkb;L`e^jdgm|O1ezhwRyL-jX zLk3jtL;vx$^|Y7cr$*#1g><*Xti?qY>08DT0JYqN@&ljfZGIyl+bcE*VgCzCoBsK^ zm1er9yBWN1H{07ij3NvjEYo|orPdm@9a_Hou4f5OIxnG(E_%-Fr2>8a{FmRZGOw{c zi48Y9nVdD5^Lt2uTHls}ZfU#^%!+3+42)WsUocTz@-wlXG>SX2O9oNjz%O{nCM5;Q~7Rj8>~w zBtOT6XiXHO)6&I)o-*~5&HHZ-@e7e=6t4o>L@07iy330C+*C}w#C?s+HgHVZ_u~s% zoIm-GZ>$2J8?`ElS9GBUnr_w=^x=rG`&{sAZl^W>ro<`ujK{`V$j=50(EYX7#s``-{0|^ z<=Z^r3^Xrx?ee?Jk7a|@jXEBvVX5qwVV|~##25I9FL6*`Ww!dr-CxliI9wbNLNbbG zyH0O&e$TM4=nD&qc51)&e-uK{Y90Lea=rxZfL(>9*ZBEglfgSS55h74e0qCt&wCEL zyK)Z)aNP%`_SN!rN$V2#;Wx}-cQm&LqO~N7aaJQWBSUNVXN?rSe-LLa(ar0ILfc)R zL7l&K&BzuKvfJ!OzmF6Gxyco)6v_4_MNhyX62LjkueTpIhU&&x4hzShfStzsOk8DL1# z)W(T#TeI?XYVt}V_O;%{+9g%fvKePD)Nh~hapi0(mG-zRoHhwWs7*Rhmx3}`l_@f> zQj^W=>9DqMGDx&2(1cIr&3*86u+!!;s}7I*QPbsqDn{{E1b5-GsC6GxzEvY0lBEf2 z_NTo5R`yras&;Z;v*-IMcfS*Bav z*0^zZst*~`TJaS-7dv-6c?@fI*hIv)wH=$@DBq7YwxneDQvwg3i|?*)`x0f>13L?A zdh_n9I+Hf32ZmH0wo(euDew0}@U-o-9qd}HLJS?e_EXJqQ`^tvMXXku-$5v4-ZI!% z{bR%_de;1qCVst8ZOW%YqHgTX-?h^jZ|ccmDj|4I6=olidU;nNYPMi$kSK!Td*J`e z;Fs_Ca_8q5@zm5iO4ocm4ob^oU+J;rafH`^mSN;y=8PX6bEJ`3-y52y8&!%Am-KW# zF};7#gJDU`ti63n;PCOM#3|e`cW%U(-Q8jR`Cr{`X4<^qn5f;YwavBpXDz<=d7_?b z*#3&gL{)TCGp2r7Hu+)H+}JS|x9RV+#q&qKImhp4wbvr4y%i6VYubo#_rs<=Z;r=! 
zcOQCDFA(%1122lN#52%r|B6bW6Q_M>@W(V;8NMBSgHwi5;R5qdAn(N0d6%K4#WO?u zN-xgRwwf`S)dtnVtJ@IP`kIyH7PDa_9KSkg(GxqL*kxT_|4fdwWz~zNTHT}da(JJp zQSJ`zrhsqH<4|W!OQti2RW8aOW_V$t1>I};#unfuv_#zE9rO;2X#dSJz|Yasu+yx!tHqsB#W>StjuCyvQOhsQ(D z*JjAT^DhF>V;8;LJDz)vgHKGjdz=fkafVVNGIy7s}n14R!{|7T1>S{Yqp}b9Bb>|=R8)2&>x81Z>SC1=k%hTOaMujH8E!4 zl^dP6#)X3pU$xp~-0X@ka`F0-K7+M<4FZ#NwMj-R-wEw_w3O`ZV^A8LR0p{D_D~!@ zuN?2q{Gjf*#oB86Cbw3YxpIcNF1m2WMLu+}On7lcNbseFmA|h8ud3xz1kYS0?$A`l zq__h-VNG-#pXiyZ9U)X1(x@ zp_5Lpx^e=FDs1nF2Y=wJn>cgMvZ%P>a23f_vvh-8dtdU7uJtG}#%$t-p=Gb(*Ws#H zq#y!5l0x>ydw$oj=BNOR!498r>JbBU9QJP#jiKzP70va8C#WigebNv)5ByzYT#Gj( zQHMcyolx9GHLKZEg`@_x^$(w-bHIYt~I1c4n3P%8# zxQLwX=&v!PNUKd1eTW^W>gQ)TPPjlrWaXgTCdPNNSmh3jg6A9?LyVrg;H{td#z=?B zG{IGRk~KFKyfy3{vY>iwyJY(pFU0Nu2IMVxE04MmX@PM#=(+Kk!eshx@@pa$`Bugj z29|@KFQSpo+911s^pUj$6b3Rbi;q}og?i^25thAMSomjB&dm-}n+g%cU;@2^w)f$LLt6MIG8X+X#RH~A&#KYwmgKb@=V zshn-@)FO{|wb=RbT3oTp+(V$q+-+EiCF&baZMgyvVVM>hI>-Lm1Zu){z9643+PXNm z=t7Nt_tEI5FBikCrVk$t^lXKch4?L@JkZN`FYx2W0G~%*`lk%weA{HiKXqfnQs9-+ zFbH{-#$kbZ??U1}^e^=Wi{R&%%OiDfpYzOsts<&fm;666%$oUUJ+H>Gr06`G7-8D= zenxte{9}c=`w@(}w`OarC3)(7rR}+Nid3z76Z_SEk9mRd-9P8X?qY1g@%Vf&A~u$W z?Cvt6O`b(z^3#sv&LiL-XK^T6l#SI&kYv6ae>tg_-7ztLw6$1^g;{ITV? zKGYXd^u4M7?y)P0&g+KNeLn|^aro{PBX&%fJSI$Pp&$vN`8P9JS5hM=Q5ZD#@-b{= zvLdN|jmCP6_1pHHKkDalWgAxt6Lodl9+0_5)YRAPecZSVBtd(Lf40V$$E&*jn(@2R z8r_xuv&YHn{VR~9!a+<0RygA2;hA(`fb19M988R7zl$dOjwCijvxbe{-TTSuUkmlbXxs zr8_kMwMcY4hiHduF5QO4nycAcGz@Pa7_6fq&lw@#REM_4Fo+l>XGMhwYDN!DV2~Z*R??Y&{;mP13O2 zl|P=*BGpIil)YZ!a0LtaZwhL5^k=oCFMSO+9>MVN zg6_OckWivXmai$Y$lohrXMf8)A=AQQq>RjU81%4g^38WqtcN;c2JNl{@j@Y}+`v3P zx=LiaCho6?&*xFiJ&MTE?$So)(hRd?HVAVC)Do3r?cz;+vwS7=dILiYqB#N0yT0T8FKa-_zqs(;dq6iISmU37zpS>$BIVwJrbP zZ4G_l`nUSH|5We$<7~Q3p2rf?i`9P7*rJ7+rtaUyqiP5~OVu^VCs6s&G$VNJ<67j> z&`=}ao?o5HsjI?<#lojdkIYRPLf5NeG^kF?jeToVi$qjt*0k$tlNUg6m+Jdp0{)Vk zmME`OspMA_w8q)FsgWZCMf8uXo?7lj^7-PGf(44y%aPw{y9s|1qcM<6!lkScttN69 zh<^}lxftZdWk*Mj+mIA2DQwK~HAy{M8ss&~NH%nmsA9o;1`*~B)t`aaoa>prv4UJY zwO5i1Bz2#q1g*UT_!KnIUqoR9HA^G&jU}rFUEFq4Y)G()2yj8%%(Oey#CdCqdZTNo zO#!cxKqd_o5OmkD30g3_Cy*UdNKWM%#*JKdC;w^>Jp+GTS`Fztc?LrD0=6BuR^bnF z$7|b}y`n<3qS|>LVfY7*eK`s|bE#ei@sck!oO#BHL z^h#b<^ziNzOw@6-=Hhm!tBLra2YT0F=Gs#|8;Rnpgl*ohlT_$XH8*(#kTW2EM~4g6s1$oawR zC|r9bFuG79m-K9YC0wjt@^AkfLHAf%TJvgNw_;(<{#O#y`pqlbk-7IY>HMe6Mc*7g8L-n_+CB3?N%@f!=omWQgiQGBl=Fwvtw1|Q4sWy(!_=1EvvQWW;@A8ZzG|e%c)B_j;%;;!WIIwuQKIe7T8F&NRq}B!cRpm6X$+ zyCwQ;AGcUN_%g?U*GQ!XIf$xYFe<8?9BlB9&W^?`wYyq=RvVe?tkVYcarTfBWUmeG zL_xArOh&VI#0`;Ipv)D|a<%`JwsprAu_%$|*NQjijAXjyeUr!PU^?Zv%~2(p;f3bo z>Sdc4jrmvMBi1U2{KI{_2Z_xB-$T{r4sJ{WhZ<|Ih7f8G9q|=UxvzM2x!X0CpXRhu z+{GvP$!)SdH9loGpx~vCI-aL42pYdkP4OUbpjlJn-M{+#VSD%1mKVrMirDIW`DOz_%#g>@e?x1TVV8Z-=p1+*!C&2H}-Wa!#|I-AFRrcC5vY} zPjQG0A#`edn@iFT5iC(2C3AWipAOBC6ZRLG+qD@N`N;0Km9}%u%lR;{I_z@o_4#D* zlEXUABVInJhBOGmbJu=O=twjIh$6P7hjAYn8@I+U`lAkihYtOE>nY9@xXtVzZeHJF zX`Vw^iNGDJAwrT~8L_SO;LU{%%hSE9p1xqv+UnwKq3XWCq|#Q8@FD9)t@jB5CqyZ< z$7rT5SLsM@x5qxYgH6xZIzfYFzo_YUCKVAZDzP;Y^WJnszvJe&RlVVB`zmu27zp;< zT){%Fe@ld_?YK2Y<|3>?HDo#WMuX^y2SGNc0afSjy2s{96+olMWVgc<0CdCr%G zbAB9rGa=%@4>wz@ViR_?kNj`S85o*?b^kH*ptCNGtiU58nAD6?_yofbJOLKULClvs zb}%h#nkatYN{PB~B&&}g_~42vRz`xSS%8=RB9@)uKK6p<3LQHB%e&~Bq+B_*CkA+$ O2!p4qpUXO@geCxoIew=A literal 0 HcmV?d00001 diff --git a/images/mdp-c.png b/images/mdp-c.png new file mode 100644 index 0000000000000000000000000000000000000000..1034079a2e355aa54c96d8abe8dbe7679a7f1c90 GIT binary patch literal 18293 zcmbSyby!qU_wLXoEh56uA+4k`q=1xC(kUekQbRY0gmefBfBQ^3@hxI?{R)?B|jXZs}9u;gv1(*Gy9~C!CY#espTtlC4!owOa_^0 
[PATCH 2/3: git binary patch data omitted -- base85-encoded file contents, not human-readable]

From: AngryCracker
Date: Tue, 27 Feb 2018 17:27:06 +0530
Subject: [PATCH 3/3] LaTeX formatting errors fixed

---
 mdp_apps.ipynb | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/mdp_apps.ipynb b/mdp_apps.ipynb
index 8ce33a562..78542e075 100644
--- a/mdp_apps.ipynb
+++ b/mdp_apps.ipynb
@@ -832,9 +832,10 @@
     "We have the following transition probability matrices:\n",
\n", "
\n", - "Action 1: Cruising streets \n", + "Action 1: Cruising streets\n", + "
\n", "
\n", - "$\\\\\n", + "$$\\\\\n", " P^{1} = \n", " \\left[ {\\begin{array}{ccc}\n", " \\frac{1}{2} & \\frac{1}{4} & \\frac{1}{4} \\\\\n", @@ -842,12 +843,13 @@ " \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{2} \\\\\n", " \\end{array}}\\right] \\\\\n", " \\\\\n", - " $\n", + "$$\n", "
\n", "
\n", - "Action 2: Waiting at the taxi stand \n", + "Action 2: Waiting at the taxi stand \n", + "
\n", "
\n", - "$\\\\\n", + "$$\\\\\n", " P^{2} = \n", " \\left[ {\\begin{array}{ccc}\n", " \\frac{1}{16} & \\frac{3}{4} & \\frac{3}{16} \\\\\n", @@ -855,12 +857,13 @@ " \\frac{1}{8} & \\frac{3}{4} & \\frac{1}{8} \\\\\n", " \\end{array}}\\right] \\\\\n", " \\\\\n", - " $\n", + "$$\n", "
\n", "
\n", "Action 3: Waiting for dispatch \n", "
\n", - "$\\\\\n", + "
\n", + "$$\\\\\n", " P^{3} =\n", " \\left[ {\\begin{array}{ccc}\n", " \\frac{1}{4} & \\frac{1}{8} & \\frac{5}{8} \\\\\n", @@ -868,7 +871,7 @@ " \\frac{3}{4} & \\frac{1}{16} & \\frac{3}{16} \\\\\n", " \\end{array}}\\right] \\\\\n", " \\\\\n", - " $\n", + "$$\n", "
\n", "
\n", "For the sake of readability, we will call the states A, B and C and the actions 'cruise', 'stand' and 'dispatch'.\n", @@ -911,7 +914,8 @@ "
\n", "Action 1: Cruising streets \n", "
\n", - "$\\\\\n", + "
\n", + "$$\\\\\n", " R^{1} = \n", " \\left[ {\\begin{array}{ccc}\n", " 10 & 4 & 8 \\\\\n", @@ -919,12 +923,13 @@ " 10 & 2 & 8 \\\\\n", " \\end{array}}\\right] \\\\\n", " \\\\\n", - " $\n", + "$$\n", "
\n", "
\n", "Action 2: Waiting at the taxi stand \n", "
\n", - "$\\\\\n", + "
\n", + "$$\\\\\n", " R^{2} = \n", " \\left[ {\\begin{array}{ccc}\n", " 8 & 2 & 4 \\\\\n", @@ -932,12 +937,13 @@ " 6 & 4 & 2\\\\\n", " \\end{array}}\\right] \\\\\n", " \\\\\n", - " $\n", + "$$\n", "
\n", "
\n", "Action 3: Waiting for dispatch \n", "
\n", - "$\\\\\n", + "
\n", + "$$\\\\\n", " R^{3} = \n", " \\left[ {\\begin{array}{ccc}\n", " 4 & 6 & 4 \\\\\n", @@ -945,7 +951,7 @@ " 4 & 0 & 8\\\\\n", " \\end{array}}\\right] \\\\\n", " \\\\\n", - " $\n", + "$$\n", "
\n", "
\n", "We now build the reward model as a dictionary using these matrices."