Merged
5 changes: 0 additions & 5 deletions lectures/_static/quant-econ.bib
@@ -2518,8 +2518,6 @@ @article{benhabib_wealth_2019
volume = {109},
issn = {0002-8282},
shorttitle = {Wealth {Distribution} and {Social} {Mobility} in the {US}},
url = {https://www.aeaweb.org/articles?id=10.1257/aer.20151684},
doi = {10.1257/aer.20151684},
@Smit-create do we need to get rid of the doi?

abstract = {We quantitatively identify the factors that drive wealth dynamics in the United States and are consistent with its skewed cross-sectional distribution and with social mobility. We concentrate on three critical factors: (i) skewed earnings, (ii) differential saving rates across wealth levels, and (iii) stochastic idiosyncratic returns to wealth. All of these are fundamental for matching both distribution and mobility. The stochastic process for returns which best fits the cross-sectional distribution of wealth and social mobility in the United States shares several statistical properties with those of the returns to wealth uncovered by Fagereng et al. (2017) from tax records in Norway.},
language = {en},
number = {5},
@@ -2535,7 +2533,6 @@ @article{benhabib_wealth_2019

@article{cobweb_model,
ISSN = {10711031},
URL = {http://www.jstor.org/stable/1236509},
abstract = {In recent years, economists have become much interested in recursive models. This interest stems from a growing need for long-term economic projections and for forecasting the probable effects of economic programs and policies. In a dynamic world, past and present conditions help shape future conditions. Perhaps the simplest recursive model is the two-dimensional "cobweb diagram," discussed by Ezekiel in 1938. The present paper attempts to generalize the simple cobweb model somewhat. It considers some effects of price supports. It discusses multidimensional cobwebs to describe simultaneous adjustments in prices and outputs of a number of commodities. And it allows for time trends in the variables.},
author = {Frederick V. Waugh},
journal = {Journal of Farm Economics},
@@ -2556,8 +2553,6 @@ @article{hog_cycle
number = {4},
pages = {842-853},
doi = {https://doi.org/10.2307/1235116},
url = {https://onlinelibrary.wiley.com/doi/abs/10.2307/1235116},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.2307/1235116},
abstract = {Abstract A surprisingly regular four year cycle in hogs has become apparent in the past ten years. This regularity presents an unusual opportunity to study the mechanism of the cycle because it suggests the cycle may be inherent within the industry rather than the result of lagged responses to outside influences. The cobweb theorem is often mentioned as a theoretical tool for explaining the hog cycle, although a two year cycle is usually predicted. When the nature of the hog industry is examined, certain factors become apparent which enable the cobweb theorem to serve as a theoretical basis for the present four year cycle.},
year = {1960}
}
60 changes: 30 additions & 30 deletions lectures/markov_chains.md
@@ -9,7 +9,7 @@ kernelspec:
name: python3
---

# Markov Chains

In addition to what's in Anaconda, this lecture will need the following libraries:

@@ -24,7 +24,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie
Markov chains are a standard way to model time series with some
dependence between observations.

For example,

* inflation next year depends on inflation this year
* unemployment next month depends on unemployment this month
@@ -34,7 +34,7 @@ Markov chains are one of the workhorse models of economics and finance.
The theory of Markov chains is beautiful and insightful, which is another
excellent reason to study them.

In this introductory lecture, we will

* review some of the key ideas from the theory of Markov chains and
* show how Markov chains appear in some economic applications.
@@ -53,7 +53,7 @@ import numpy as np
In this section we provide the basic definitions and some elementary examples.

(finite_dp_stoch_mat)=
### Stochastic Matrices

Recall that a **probability mass function** over $n$ possible outcomes is a
nonnegative $n$-vector $p$ that sums to one.
@@ -93,7 +93,7 @@ Therefore $P^{k+1} \mathbf 1 = P P^k \mathbf 1 = P \mathbf 1 = \mathbf 1$
The proof is done.


### Markov Chains

Now we can introduce Markov chains.

@@ -126,7 +126,7 @@ dot.edge("mr", "ng", label="0.145")
dot.edge("mr", "mr", label="0.778")
dot.edge("mr", "sr", label="0.077")
dot.edge("sr", "mr", label="0.508")

dot.edge("sr", "sr", label="0.492")
dot
```
@@ -199,7 +199,7 @@ More generally, for any $i,j$ between 0 and 2, we have
$$
\begin{aligned}
P(i,j)
& = \mathbb P\{X_{t+1} = j \,|\, X_t = i\}
\\
& = \text{ probability of transitioning from state $i$ to state $j$ in one month}
\end{aligned}
@@ -234,11 +234,11 @@ For example,

$$
\begin{aligned}
P(0,1)
& =
\text{ probability of transitioning from state $0$ to state $1$ in one month}
\\
& =
\text{ probability finding a job next month}
\\
& = \alpha
@@ -303,7 +303,7 @@
Going the other way, if we take a stochastic matrix $P$, we can generate a Markov
chain $\{X_t\}$ as follows:

* draw $X_0$ from a marginal distribution $\psi$
* for each $t = 0, 1, \ldots$, draw $X_{t+1}$ from $P(X_t,\cdot)$

By construction, the resulting process satisfies {eq}`mpp`.
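The two bullet points above translate almost line for line into NumPy. A sketch (names such as `simulate_chain` are illustrative, not from the lecture or `quantecon`):

```python
import numpy as np

def simulate_chain(P, ψ_0, ts_length, seed=0):
    """Simulate a Markov chain: X_0 ~ ψ_0, then X_{t+1} ~ P(X_t, ·)."""
    rng = np.random.default_rng(seed)
    n = len(P)
    X = np.empty(ts_length, dtype=int)
    X[0] = rng.choice(n, p=ψ_0)            # draw X_0 from the marginal ψ_0
    for t in range(ts_length - 1):
        X[t+1] = rng.choice(n, p=P[X[t]])  # draw X_{t+1} from row X_t of P
    return X

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
X = simulate_chain(P, ψ_0=np.array([0.5, 0.5]), ts_length=10)
print(X)
```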
@@ -458,7 +458,7 @@ mc.simulate_indices(ts_length=4)
```

(mc_md)=
## Marginal Distributions

Suppose that

@@ -827,7 +827,7 @@ mc.stationary_distributions # Show all stationary distributions
```

(ergodicity)=
## Ergodicity

Under irreducibility, yet another important result obtains:

@@ -900,7 +900,7 @@ Another example is Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed above

The diagram of the Markov chain shows that it is **irreducible**.

Therefore, we can see that the sample path averages for each state (the fraction of time spent in each state) converge to the stationary distribution regardless of the starting state.

```{code-cell} ipython3
P = np.array([[0.971, 0.029, 0.000],
@@ -915,7 +915,7 @@ plt.subplots_adjust(wspace=0.35)
for i in range(n_state):
axes[i].grid()
axes[i].set_ylim(ψ_star[i]-0.2, ψ_star[i]+0.2)
axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
label = fr'$\psi^*(X={i})$')
axes[i].set_xlabel('t')
axes[i].set_ylabel(fr'average time spent at X={i}')
@@ -962,7 +962,7 @@ dot

As you might notice, unlike other Markov chains we have seen before, it has a periodic cycle.

This is formally called [periodicity](https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16:_Markov_Processes/16.05:_Periodicity_of_Discrete-Time_Chains#:~:text=A%20state%20in%20a%20discrete,limiting%20behavior%20of%20the%20chain.).
This is formally called [periodicity](https://www.randomservices.org/random/markov/Periodicity.html).
@Smit-create is this link comparable?


We will not go into the details of periodicity.
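For readers who want a concrete handle on the idea anyway: the period of a state is the gcd of the times at which the chain can return to it. A small sketch (not part of the lecture), using a deterministic two-state cycle:

```python
import numpy as np
from math import gcd

def period_of_state(P, i, max_t=50):
    """gcd of all t <= max_t with P^t(i, i) > 0 -- the period of state i."""
    d = 0
    Pt = np.eye(len(P))
    for t in range(1, max_t + 1):
        Pt = Pt @ P
        if Pt[i, i] > 1e-12:   # return to state i is possible at time t
            d = gcd(d, t)
    return d

# A deterministic cycle 0 -> 1 -> 0 -> ...: returns only at even times
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period_of_state(P, 0))   # → 2
```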

@@ -979,7 +979,7 @@ fig, axes = plt.subplots(nrows=1, ncols=n_state)
for i in range(n_state):
axes[i].grid()
axes[i].set_ylim(0.45, 0.55)
axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
label = fr'$\psi^*(X={i})$')
axes[i].set_xlabel('t')
axes[i].set_ylabel(fr'average time spent at X={i}')
@@ -1016,7 +1016,7 @@ strictly positive, then $P$ has only one stationary distribution $\psi^*$ and
$$
\psi_0 P^t \to \psi^*
\quad \text{as } t \to \infty
$$
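This convergence is easy to check numerically. A sketch with an illustrative strictly positive matrix (not the lecture's example):

```python
import numpy as np

# A stochastic matrix with all entries strictly positive
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1))
ψ_star = np.real(eigvecs[:, k])
ψ_star = ψ_star / ψ_star.sum()   # normalize to a probability vector

# Any initial distribution converges to ψ* under ψ ↦ ψP
ψ = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    ψ = ψ @ P
assert np.allclose(ψ, ψ_star)
print(ψ_star)
```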


(See, for example, {cite}`haggstrom2002finite`. Our assumptions imply that $P$
@@ -1090,24 +1090,24 @@ plt.subplots_adjust(wspace=0.35)
x0s = np.ones((n, n_state))
for i in range(n):
draws = np.random.randint(1, 10_000_000, size=n_state)

# Scale them so that they add up into 1
x0s[i,:] = np.array(draws/sum(draws))

# Loop through many initial values
for x0 in x0s:
x = x0
X = np.zeros((n,n_state))

# Obtain and plot distributions at each state
for t in range(0, n):
x = x @ P
X[t] = x
for i in range(n_state):
axes[i].plot(range(0, n), X[:,i], alpha=0.3)

for i in range(n_state):
axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
label = fr'$\psi^*(X={i})$')
axes[i].set_xlabel('t')
axes[i].set_ylabel(fr'$\psi(X={i})$')
@@ -1139,13 +1139,13 @@ for i in range(n):
for x0 in x0s:
x = x0
X = np.zeros((n,n_state))

for t in range(0, n):
x = x @ P
X[t] = x
for i in range(n_state):
axes[i].plot(range(20, n), X[20:,i], alpha=0.3)

for i in range(n_state):
axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^* (X={i})$')
axes[i].set_xlabel('t')
@@ -1260,7 +1260,7 @@ TODO -- connect to the Neumann series lemma (Maanasee)

## Exercises

````{exercise}
:label: mc_ex1

Benhabib et al. {cite}`benhabib_wealth_2019` estimated the following transition matrix for social mobility
@@ -1291,7 +1291,7 @@ P_B = np.array(P_B)
codes_B = ( '1','2','3','4','5','6','7','8')
```

In this exercise,

1. show that this process is asymptotically stationary and calculate the stationary distribution using simulations.

@@ -1323,7 +1323,7 @@ codes_B = ( '1','2','3','4','5','6','7','8')
np.linalg.matrix_power(P_B, 10)
```

We find that the rows of the transition matrix converge to the stationary distribution.

```{code-cell} ipython3
mc = qe.MarkovChain(P_B)
@@ -1360,7 +1360,7 @@ We can see that the time spent at each state quickly converges to the stationary
```


```{exercise}
:label: mc_ex2

According to the discussion {ref}`above <mc_eg1-2>`, if a worker's employment dynamics obey the stochastic matrix
@@ -1443,7 +1443,7 @@ plt.show()
```{solution-end}
```

```{exercise}
:label: mc_ex3

In the `quantecon` library, irreducibility is tested by checking whether the chain forms a [strongly connected component](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.components.is_strongly_connected.html).
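Strong connectivity can also be checked without a graph library: a nonnegative matrix $P$ is irreducible iff every entry of $(I + P)^{n-1}$ is strictly positive, i.e. every state can reach every other state in at most $n-1$ steps. A sketch (illustrative matrices, not the `quantecon` internals):

```python
import numpy as np

def is_irreducible(P):
    """P is irreducible iff (I + P)^(n-1) has no zero entries,
    i.e. every state can reach every other state."""
    P = np.asarray(P)
    n = P.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + P, n - 1)
    return bool(np.all(M > 0))

P_irr = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
P_red = np.array([[1.0, 0.0],    # state 0 is absorbing: not irreducible
                  [0.4, 0.6]])
print(is_irreducible(P_irr), is_irreducible(P_red))   # → True False
```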