Fix link checker #75
Merged
Changes from all commits · 2 commits
Most hunks in this PR only strip trailing whitespace from the Markov chains lecture; since the removed and added lines are otherwise identical, each such pair is shown below as a single context line. The one substantive change replaces the periodicity link flagged by the link checker (see the hunk at line 962).
````diff
@@ -9,7 +9,7 @@ kernelspec:
   name: python3
 ---

 # Markov Chains

 In addition to what's in Anaconda, this lecture will need the following libraries:
````
````diff
@@ -24,7 +24,7 @@ In addition to what's in Anaconda, this lecture will need the following libraries:
 Markov chains are a standard way to model time series with some
 dependence between observations.

 For example,

 * inflation next year depends on inflation this year
 * unemployment next month depends on unemployment this month
````
````diff
@@ -34,7 +34,7 @@ Markov chains are one of the workhorse models of economics and finance.
 The theory of Markov chains is beautiful and insightful, which is another
 excellent reason to study them.

 In this introductory lecture, we will

 * review some of the key ideas from the theory of Markov chains and
 * show how Markov chains appear in some economic applications.
````
````diff
@@ -53,7 +53,7 @@ import numpy as np
 In this section we provide the basic definitions and some elementary examples.

 (finite_dp_stoch_mat)=
 ### Stochastic Matrices

 Recall that a **probability mass function** over $n$ possible outcomes is a
 nonnegative $n$-vector $p$ that sums to one.
````
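As a quick sanity check on this definition, here is a minimal sketch (not part of the diff; the helper name `is_stochastic` is ours) that tests whether each row of a matrix is a probability mass function:

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """True if every row of P is nonnegative and sums to one."""
    P = np.asarray(P)
    return bool((P >= -tol).all() and np.allclose(P.sum(axis=1), 1.0))

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(is_stochastic(P))  # True
```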
````diff
@@ -93,7 +93,7 @@ Therefore $P^{k+1} \mathbf 1 = P P^k \mathbf 1 = P \mathbf 1 = \mathbf 1$
 The proof is done.


 ### Markov Chains

 Now we can introduce Markov chains.
````
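The fact proved just above, that every power of a stochastic matrix is itself stochastic ($P^k \mathbf 1 = \mathbf 1$ for all $k$), is easy to confirm numerically; a sketch with an arbitrary stochastic matrix:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Every power of P maps the vector of ones back to the vector of ones
for k in range(1, 5):
    print(k, np.linalg.matrix_power(P, k) @ np.ones(2))  # always [1. 1.]
```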
````diff
@@ -126,7 +126,7 @@ dot.edge("mr", "ng", label="0.145")
 dot.edge("mr", "mr", label="0.778")
 dot.edge("mr", "sr", label="0.077")
 dot.edge("sr", "mr", label="0.508")

 dot.edge("sr", "sr", label="0.492")
 dot
 ```
````
````diff
@@ -199,7 +199,7 @@ More generally, for any $i,j$ between 0 and 2, we have
 $$
 \begin{aligned}
 P(i,j)
 & = \mathbb P\{X_{t+1} = j \,|\, X_t = i\}
 \\
 & = \text{ probability of transitioning from state $i$ to state $j$ in one month}
 \end{aligned}
````
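Because $P(i,j)$ is a one-step transition probability, it can be recovered from a long simulated path by counting transitions. A sketch, using the three-state matrix assembled from the first row shown later in the diff and the edge labels in the diagram above (we assume these fully pin down $P$):

```python
import numpy as np

# Transition matrix assembled from the diagram's edge labels
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

rng = np.random.default_rng(0)
T = 200_000
X = np.empty(T, dtype=int)
X[0] = 0
for t in range(T - 1):
    X[t + 1] = rng.choice(3, p=P[X[t]])   # X_{t+1} ~ P(X_t, ·)

# Estimate P(i, j) as the share of visits to i followed by a move to j
counts = np.zeros((3, 3))
for t in range(T - 1):
    counts[X[t], X[t + 1]] += 1
print(np.round(counts / counts.sum(axis=1, keepdims=True), 3))  # close to P
```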
````diff
@@ -234,11 +234,11 @@ For example,
 $$
 \begin{aligned}
 P(0,1)
 & =
 \text{ probability of transitioning from state $0$ to state $1$ in one month}
 \\
 & =
 \text{ probability of finding a job next month}
 \\
 & = \alpha
````
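Collecting these entries gives the worker's two-state transition matrix. A sketch with illustrative parameter values, where we assume (as in the standard version of this lecture) that $\alpha$ is the job-finding rate and $\beta$ the separation rate:

```python
import numpy as np

α, β = 0.3, 0.1          # illustrative values, not from the lecture

# State 0: unemployed, state 1: employed
P = np.array([[1 - α, α],
              [β, 1 - β]])
print(P[0, 1])           # P(0, 1) = α, the probability of finding a job next month
```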
````diff
@@ -303,7 +303,7 @@
 Going the other way, if we take a stochastic matrix $P$, we can generate a Markov
 chain $\{X_t\}$ as follows:

 * draw $X_0$ from a marginal distribution $\psi$
 * for each $t = 0, 1, \ldots$, draw $X_{t+1}$ from $P(X_t,\cdot)$

 By construction, the resulting process satisfies {eq}`mpp`.
````
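The two bullet points above are a complete simulation algorithm, and they translate almost line for line into code. A minimal sketch (the lecture itself relies on `quantecon.MarkovChain`; this standalone version just makes the recipe concrete):

```python
import numpy as np

def sample_path(P, ψ, ts_length, seed=0):
    """Simulate X_0, ..., X_{T-1} with X_0 ~ ψ and X_{t+1} ~ P(X_t, ·)."""
    rng = np.random.default_rng(seed)
    P, ψ = np.asarray(P), np.asarray(ψ)
    X = np.empty(ts_length, dtype=int)
    X[0] = rng.choice(len(ψ), p=ψ)                 # draw X_0 from ψ
    for t in range(ts_length - 1):
        X[t + 1] = rng.choice(len(ψ), p=P[X[t]])   # draw X_{t+1} from row X_t of P
    return X

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(sample_path(P, ψ=[0.5, 0.5], ts_length=10))
```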
````diff
@@ -458,7 +458,7 @@ mc.simulate_indices(ts_length=4)
 ```

 (mc_md)=
 ## Marginal Distributions

 Suppose that
````
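The section introduced here builds up to the identity $\psi_{t+1} = \psi_t P$: marginal distributions update by postmultiplication by $P$, which is also what the `x = x @ P` loops later in this diff implement. A quick numerical illustration:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

ψ = np.array([1.0, 0.0])     # start with X_0 = 0 with probability one
for t in range(5):
    print(t, ψ)
    ψ = ψ @ P                # ψ_{t+1} = ψ_t P
```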
````diff
@@ -827,7 +827,7 @@ mc.stationary_distributions # Show all stationary distributions
 ```

 (ergodicity)=
 ## Ergodicity

 Under irreducibility, yet another important result obtains:
````
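The result alluded to is ergodicity: for an irreducible chain, the fraction of time spent in each state converges to that state's stationary probability, whatever the starting point. A short sketch using `quantecon`:

```python
import numpy as np
import quantecon as qe

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

X = mc.simulate(ts_length=100_000)
for i in range(len(ψ_star)):
    # time-average occupancy of state i vs its stationary probability
    print(i, (X == i).mean(), ψ_star[i])
```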
````diff
@@ -900,7 +900,7 @@ Another example is Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed above
 The diagram of the Markov chain shows that it is **irreducible**.

 Therefore, we can see that the sample path averages for each state (the fraction of time spent in each state) converge to the stationary distribution regardless of the starting state.

 ```{code-cell} ipython3
 P = np.array([[0.971, 0.029, 0.000],
````
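For reference, the full matrix (its first row appears above; the remaining rows follow from the edge labels in the earlier diagram) and the $\psi^*$ that the dashed lines in this plot converge to:

```python
import numpy as np
import quantecon as qe

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# ψ*, the stationary distribution the sample averages converge to
print(qe.MarkovChain(P).stationary_distributions[0])
```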
````diff
@@ -915,7 +915,7 @@ plt.subplots_adjust(wspace=0.35)
 for i in range(n_state):
     axes[i].grid()
     axes[i].set_ylim(ψ_star[i]-0.2, ψ_star[i]+0.2)
     axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                     label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
     axes[i].set_ylabel(fr'average time spent at X={i}')
````
````diff
@@ -962,7 +962,7 @@ dot
 As you might notice, unlike other Markov chains we have seen before, it has a periodic cycle.

-This is formally called [periodicity](https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16:_Markov_Processes/16.05:_Periodicity_of_Discrete-Time_Chains#:~:text=A%20state%20in%20a%20discrete,limiting%20behavior%20of%20the%20chain.).
+This is formally called [periodicity](https://www.randomservices.org/random/markov/Periodicity.html).
````
Contributor: @Smit-create is this link comparable?
````diff
 We will not go into the details of periodicity.
````
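To make periodicity concrete, the simplest case is a two-state chain that alternates deterministically: the marginals $\psi_0 P^t$ cycle forever and never converge, even though a stationary distribution exists. A sketch (this particular matrix is our illustration, not necessarily the lecture's example):

```python
import numpy as np

# A period-2 chain: state 0 always moves to 1, state 1 always moves to 0
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

ψ = np.array([1.0, 0.0])
for t in range(6):
    print(t, ψ)          # flips between [1, 0] and [0, 1] forever
    ψ = ψ @ P
# The unique stationary distribution is [0.5, 0.5], but ψ_0 P^t never settles on it.
```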
````diff
@@ -979,7 +979,7 @@ fig, axes = plt.subplots(nrows=1, ncols=n_state)
 for i in range(n_state):
     axes[i].grid()
     axes[i].set_ylim(0.45, 0.55)
     axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                     label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
     axes[i].set_ylabel(fr'average time spent at X={i}')
````
````diff
@@ -1016,7 +1016,7 @@ strictly positive, then $P$ has only one stationary distribution $\psi^*$ and
 $$
 \psi_0 P^t \to \psi^*
 \quad \text{as } t \to \infty
 $$


 (See, for example, {cite}`haggstrom2002finite`. Our assumptions imply that $P$
````
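The statement $\psi_0 P^t \to \psi^*$ for a strictly positive $P$ can be eyeballed by raising $P$ to a large power: every row of $P^t$ approaches $\psi^*$, so the limit no longer depends on the initial distribution. A sketch:

```python
import numpy as np

# A strictly positive stochastic matrix, as the theorem requires
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Both rows of P^50 are ≈ [0.8, 0.2], the unique stationary distribution
print(np.linalg.matrix_power(P, 50))
```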
````diff
@@ -1090,24 +1090,24 @@ plt.subplots_adjust(wspace=0.35)
 x0s = np.ones((n, n_state))
 for i in range(n):
     draws = np.random.randint(1, 10_000_000, size=n_state)

     # Scale them so that they add up to 1
     x0s[i,:] = np.array(draws/sum(draws))

 # Loop through many initial values
 for x0 in x0s:
     x = x0
     X = np.zeros((n,n_state))

     # Obtain and plot distributions at each state
     for t in range(0, n):
         x = x @ P
         X[t] = x
     for i in range(n_state):
         axes[i].plot(range(0, n), X[:,i], alpha=0.3)

 for i in range(n_state):
     axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                     label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
     axes[i].set_ylabel(fr'$\psi(X={i})$')
````
````diff
@@ -1139,13 +1139,13 @@ for i in range(n):
 for x0 in x0s:
     x = x0
     X = np.zeros((n,n_state))

     for t in range(0, n):
         x = x @ P
         X[t] = x
     for i in range(n_state):
         axes[i].plot(range(20, n), X[20:,i], alpha=0.3)

 for i in range(n_state):
     axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^* (X={i})$')
     axes[i].set_xlabel('t')
````
````diff
@@ -1260,7 +1260,7 @@ TODO -- connect to the Neumann series lemma (Maanasee)

 ## Exercises

 ````{exercise}
 :label: mc_ex1

 Benhabib et al. {cite}`benhabib_wealth_2019` estimated the transition matrix for social mobility to be the following
````
````diff
@@ -1291,7 +1291,7 @@ P_B = np.array(P_B)
 codes_B = ( '1','2','3','4','5','6','7','8')
 ```

 In this exercise,

 1. show this process is asymptotically stationary and calculate the stationary distribution using simulations.
````
````diff
@@ -1323,7 +1323,7 @@ codes_B = ( '1','2','3','4','5','6','7','8')
 np.linalg.matrix_power(P_B, 10)
 ```

 We find the rows of the transition matrix converge to the stationary distribution

 ```{code-cell} ipython3
 mc = qe.MarkovChain(P_B)
````
````diff
@@ -1360,7 +1360,7 @@ We can see that the time spent at each state quickly converges to the stationary
 ```


 ```{exercise}
 :label: mc_ex2

 According to the discussion {ref}`above <mc_eg1-2>`, if a worker's employment dynamics obey the stochastic matrix
````
````diff
@@ -1443,7 +1443,7 @@ plt.show()
 ```{solution-end}
 ```

 ```{exercise}
 :label: mc_ex3

 In the `quantecon` library, irreducibility is tested by checking whether the chain forms a [strongly connected component](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.components.is_strongly_connected.html).
````
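A sketch of the equivalence this exercise refers to: the chain is irreducible exactly when the directed graph with an edge $i \to j$ whenever $P(i,j) > 0$ is strongly connected, so `networkx`'s check and `quantecon`'s `is_irreducible` agree (the example matrix is ours):

```python
import numpy as np
import networkx as nx
import quantecon as qe

P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.0, 0.5]])

# Directed graph with an edge i -> j whenever P[i, j] > 0
G = nx.DiGraph([(i, j) for i, j in zip(*np.nonzero(P))])

print(nx.is_strongly_connected(G))       # True: every state reaches every other
print(qe.MarkovChain(P).is_irreducible)  # True, via the same graph test
```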
Review comment: @Smit-create do we need to get rid of the doi?