🔢 Python module that calculates probabilities for a random walk in 1-dimensional discrete state space

This module solves a particular mathematical problem related to probability theory.


Let's say a completely drunk passenger on a train is trying to find his car. The cars are connected, so he wanders between them. Some transitions between cars attract him more than others.

At the very beginning and at the very end of the train there are ticket collectors: if they meet the drunkard, they will kick him off the train.*

We know which car the drunkard is in now. The questions are:

  • What is the likelihood that he will be thrown out by the ticket collector standing at the beginning of the train rather than the one at the end?

  • What is the likelihood that he will visit his own car at least once before being kicked out?

With a sober head

We are dealing with a discrete 1D random walk. At each state, the probabilities of making a step to the left or to the right are different.

States          L     0     1     2     3     4     5     R
P(move right)   ...   0.3   0.5   0.7   0.4   0.8   0.9   ...
P(move left)    ...   0.7   0.5   0.3   0.6   0.2   0.1   ...

For example, from state 1 the walk steps right to state 2 with probability 0.5 and left to state 0 with probability 0.5. From state 2 it steps right to state 3 with probability 0.7 and left to state 1 with probability 0.3.

Suppose the motion begins at state 1. What is the exact probability that we will reach state R before state L? And what is the probability that we will ever reach state 3 before hitting L or R?
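The first question has a classic pencil-and-paper answer: this is a gambler's-ruin chain, and the probability of hitting the right edge before the left one is given by the standard product formula. Below is a minimal sketch of that formula in plain Python, independent of this module (right_edge_prob is just an ad-hoc helper defined here for a cross-check):

def right_edge_prob(step_right_probs, start):
    # P(reach R before L) for a walk starting at `start`,
    # computed with the gambler's-ruin product formula.
    # rho_i = P(step left) / P(step right) at state i
    rhos = [(1.0 - p) / p for p in step_right_probs]

    # prefix[i] = rho_0 * rho_1 * ... * rho_{i-1}; prefix[0] = 1 (empty product)
    prefix = [1.0]
    for r in rhos:
        prefix.append(prefix[-1] * r)

    # hitting probability: first (start + 1) prefix terms over the sum of all of them
    return sum(prefix[:start + 1]) / sum(prefix)

print(right_edge_prob([0.3, 0.5, 0.7, 0.4, 0.8, 0.9], start=1))

The number printed here should match what the module itself reports as walk.right_edge_probs[1] in the usage example below.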

What the module does

It solves the problem using absorbing Markov chains. All matrix calculations are done with numpy, and the answers are returned as floats.
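For intuition, this is roughly what the textbook absorbing-chain calculation looks like in numpy: collect the transitions between inner states into a matrix Q, collect the transitions into the absorbing edges L and R into a matrix R, compute the fundamental matrix N = (I - Q)^-1, and then N @ R gives the absorption probabilities. The sketch below follows that recipe; the module's internal details may differ.

import numpy as np

step_right_probs = [0.3, 0.5, 0.7, 0.4, 0.8, 0.9]
p = np.array(step_right_probs)
n = len(p)

# Q: transitions between the n inner (transient) states
Q = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        Q[i, i + 1] = p[i]        # step right, staying inside the table
    if i > 0:
        Q[i, i - 1] = 1 - p[i]    # step left, staying inside the table

# R: transitions into the absorbing edges (column 0 = L, column 1 = R)
R = np.zeros((n, 2))
R[0, 0] = 1 - p[0]                # stepping left from state 0 falls off at L
R[n - 1, 1] = p[n - 1]            # stepping right from state n-1 falls off at R

N = np.linalg.inv(np.eye(n) - Q)  # fundamental matrix
B = N @ R                         # absorption probabilities

print(B[1, 1])                    # P(absorbed at R | start at state 1)
print(N[1, 3] / N[3, 3])          # P(ever visit state 3 | start at state 1)

The last line uses the standard identity that, for two different transient states, the probability of ever visiting state j from state i equals N[i, j] / N[j, j].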

How to install

cd /abc/your_project

# clone the module to /abc/your_project/markov_walk
svn export https://github.com/rtmigo/markov_walk/trunk/markov_walk markov_walk

# install dependencies
pip3 install numpy

Now you can import markov_walk from /abc/your_project/your_module.py

How to use

from markov_walk import MarkovWalk

step_right_probs = [0.3, 0.5, 0.7, 0.4, 0.8, 0.9]
walk = MarkovWalk(step_right_probs)

# The motion begins at state 1.
# What is the probability that we will reach state R before state L?
print(walk.right_edge_probs[1])

# The motion begins at state 1.
# What is the probability that we will ever reach state 3 before L or R?
print(walk.ever_reach_probs[1][3])

  • walk.ever_reach_probs[start_state][end_state] is the probability that a walk starting at start_state will ever reach end_state

  • walk.right_edge_probs[state] is the probability that a walk starting at state leaves the table through the right edge (R) rather than the left edge (L)

By states we simply mean integer indexes into step_right_probs. The sketch below ties these two attributes back to the original questions about the drunkard.
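To connect this back to the train story, suppose the drunkard is currently in car 1 and his own car is state 3 (this particular mapping of cars to states is only an illustration), and let the ticket collector at the beginning of the train correspond to the left edge L. Since the walk is eventually absorbed at one of the two edges, the probability of being thrown out on the left is one minus the right-edge probability:

from markov_walk import MarkovWalk

walk = MarkovWalk([0.3, 0.5, 0.7, 0.4, 0.8, 0.9])

# thrown out by the collector at the beginning of the train (left edge, L)
p_kicked_out_at_start = 1 - walk.right_edge_probs[1]

# visits his own car (state 3) at least once before being kicked out
p_visits_own_car = walk.ever_reach_probs[1][3]

print(p_kicked_out_at_start)
print(p_visits_own_car)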


* In case you are wondering why the ticket collectors would kick the passenger out: he is traveling on a lottery ticket.