From f370ed36a04fc3c33a9ccafdcc0afcd584aeccd0 Mon Sep 17 00:00:00 2001 From: mmcky Date: Mon, 14 Dec 2020 11:49:27 +1100 Subject: [PATCH 01/15] use ifp as a test case' --- .gitignore | 2 ++ lectures/ifp.md | 10 ++++++++-- 2 files changed, 10 insertions(+), 2 deletions(-) create mode 100644 .gitignore diff --git a/.gitignore b/.gitignore new file mode 100644 index 000000000..9c0ea0461 --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +_build/ +lectures/_build/ \ No newline at end of file diff --git a/lectures/ifp.md b/lectures/ifp.md index 0ec789c33..6d83c28a7 100644 --- a/lectures/ifp.md +++ b/lectures/ifp.md @@ -23,6 +23,10 @@ kernelspec: :depth: 2 ``` +```{code-cell} ipython +%load _static/lecture_specific/cake_eating_numerical/analytical.py +``` + In addition to what's in Anaconda, this lecture will need the following libraries: ```{code-cell} ipython @@ -476,7 +480,8 @@ start to iterate. The following function iterates to convergence and returns the approximate optimal policy. -```{literalinclude} _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +```{code-cell} python3 +%load _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's carry this out using the default parameters of the `IFP` class: @@ -519,7 +524,8 @@ In this case, our income fluctuation problem is just a cake eating problem. 
We know that, in this case, the value function and optimal consumption policy are given by -```{literalinclude} _static/lecture_specific/cake_eating_numerical/analytical.py +```{code-cell} python3 +%load _static/lecture_specific/cake_eating_numerical/analytical.py ``` Let's see if we match up: From 34a93d3d796b59f661f1bd5df1b5ec3c453bfc14 Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 6 Jan 2021 13:52:36 +1100 Subject: [PATCH 02/15] update lecture files to new syntax for executable code from file --- lectures/cake_eating_numerical.md | 3 ++- lectures/coleman_policy_iter.md | 12 ++++++++---- lectures/egm_policy_iter.md | 9 ++++++--- lectures/ifp.md | 6 +++--- lectures/markov_perf.md | 3 ++- lectures/optgrowth.md | 6 ++++-- lectures/optgrowth_fast.md | 12 ++++++++---- 7 files changed, 33 insertions(+), 18 deletions(-) diff --git a/lectures/cake_eating_numerical.md b/lectures/cake_eating_numerical.md index 8ad096748..afbe3a480 100644 --- a/lectures/cake_eating_numerical.md +++ b/lectures/cake_eating_numerical.md @@ -72,7 +72,8 @@ where $u$ is the CRRA utility function. The analytical solutions for the value function and optimal policy were found to be as follows. -```{literalinclude} _static/lecture_specific/cake_eating_numerical/analytical.py +```{code-cell} python3 +:file: _static/lecture_specific/cake_eating_numerical/analytical.py ``` Our first aim is to obtain these analytical solutions numerically. 
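For readers who do not have the `analytical.py` file at hand, the closed-form cake-eating solution it contains can be sketched directly. Under CRRA utility the optimal policy consumes a constant fraction of the cake, and the value function follows from it; the parameter values below are illustrative choices, not taken from the lecture file:

```python
from math import isclose

# Closed-form cake-eating solution under CRRA utility (a sketch; the lecture
# loads the analogous functions from analytical.py).  β and γ are illustrative.
β, γ = 0.96, 1.5

def u(c):
    return c**(1 - γ) / (1 - γ)

def c_star(x):
    # optimal consumption out of a cake of size x
    return (1 - β**(1 / γ)) * x

def v_star(x):
    # value of a cake of size x under the optimal policy
    return (1 - β**(1 / γ))**(-γ) * u(x)

# The value function should satisfy the Bellman equation at the optimal policy:
# v(x) = u(c(x)) + β v(x - c(x))
x = 2.5
bellman_rhs = u(c_star(x)) + β * v_star(x - c_star(x))
bellman_holds = isclose(bellman_rhs, v_star(x), rel_tol=1e-10)
```

Checking the Bellman equation at a point, as above, is a cheap way to validate any candidate analytical solution before using it as a numerical benchmark.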
diff --git a/lectures/coleman_policy_iter.md b/lectures/coleman_policy_iter.md index f1920f0f2..ebf8169f9 100644 --- a/lectures/coleman_policy_iter.md +++ b/lectures/coleman_policy_iter.md @@ -267,7 +267,8 @@ As in our {doc}`previous study `, we continue to assume that This will allow us to compare our results to the analytical solutions -```{literalinclude} _static/lecture_specific/optgrowth/cd_analytical.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth/cd_analytical.py ``` As discussed above, our plan is to solve the model using time iteration, which @@ -278,7 +279,8 @@ For this we need access to the functions $u'$ and $f, f'$. These are available in a class called `OptimalGrowthModel` that we constructed in an {doc}`earlier lecture `. -```{literalinclude} _static/lecture_specific/optgrowth_fast/ogm.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth_fast/ogm.py ``` Now we implement a method called `euler_diff`, which returns @@ -374,7 +376,8 @@ Here is a function called `solve_model_time_iter` that takes an instance of `OptimalGrowthModel` and returns an approximation to the optimal policy, using time iteration. -```{literalinclude} _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +```{code-cell} python3 +:file: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's call it: @@ -439,7 +442,8 @@ Compute and plot the optimal policy. We use the class `OptimalGrowthModel_CRRA` from our {doc}`VFI lecture `. 
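Before turning to the CRRA class, note that the log-utility, Cobb-Douglas benchmark referenced above has a known closed-form policy whose Euler equation can be verified directly. A minimal deterministic sketch (the parameter values are illustrative assumptions):

```python
# For u(c) = ln(c) and f(k) = k**α (deterministic case), the optimal policy
# is c(y) = (1 - α β) y.  We verify the Euler equation
# u'(c_t) = β u'(c_{t+1}) f'(k_{t+1}) at this policy.
α, β = 0.4, 0.96

def σ_star(y):
    return (1 - α * β) * y          # closed-form consumption policy

y = 1.7
c_now = σ_star(y)
k_next = y - c_now                  # capital carried forward: k' = α β y
y_next = k_next**α                  # next period output, f(k) = k**α
c_next = σ_star(y_next)

euler_lhs = 1 / c_now                                # u'(c_t)
euler_rhs = β * (1 / c_next) * α * k_next**(α - 1)   # β u'(c_{t+1}) f'(k_{t+1})
```

The two sides agree to machine precision, which is the property time iteration exploits when it solves for the zero of the Euler difference.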
-```{literalinclude} _static/lecture_specific/optgrowth_fast/ogm_crra.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth_fast/ogm_crra.py ``` Let's create an instance: diff --git a/lectures/egm_policy_iter.md b/lectures/egm_policy_iter.md index e9690c6d4..995d72256 100644 --- a/lectures/egm_policy_iter.md +++ b/lectures/egm_policy_iter.md @@ -160,12 +160,14 @@ where This will allow us to make comparisons with the analytical solutions -```{literalinclude} _static/lecture_specific/optgrowth/cd_analytical.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth/cd_analytical.py ``` We reuse the `OptimalGrowthModel` class -```{literalinclude} _static/lecture_specific/optgrowth_fast/ogm.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth_fast/ogm.py ``` ### The Operator @@ -216,7 +218,8 @@ grid = og.grid Here's our solver routine: -```{literalinclude} _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +```{code-cell} python3 +:file: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's call it: diff --git a/lectures/ifp.md b/lectures/ifp.md index 6d83c28a7..0a7d3a6da 100644 --- a/lectures/ifp.md +++ b/lectures/ifp.md @@ -24,7 +24,7 @@ kernelspec: ``` ```{code-cell} ipython -%load _static/lecture_specific/cake_eating_numerical/analytical.py +:file: _static/lecture_specific/cake_eating_numerical/analytical.py ``` In addition to what's in Anaconda, this lecture will need the following libraries: @@ -481,7 +481,7 @@ The following function iterates to convergence and returns the approximate optimal policy. 
```{code-cell} python3 -%load _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +:file: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's carry this out using the default parameters of the `IFP` class: @@ -525,7 +525,7 @@ We know that, in this case, the value function and optimal consumption policy are given by ```{code-cell} python3 -%load _static/lecture_specific/cake_eating_numerical/analytical.py +:file: _static/lecture_specific/cake_eating_numerical/analytical.py ``` Let's see if we match up: diff --git a/lectures/markov_perf.md b/lectures/markov_perf.md index 3721e2c12..ff68eec11 100644 --- a/lectures/markov_perf.md +++ b/lectures/markov_perf.md @@ -431,7 +431,8 @@ Consider the previously presented duopoly model with parameter values of: From these, we compute the infinite horizon MPE using the preceding code -```{literalinclude} _static/lecture_specific/markov_perf/duopoly_mpe.py +```{code-cell} python3 +:file: _static/lecture_specific/markov_perf/duopoly_mpe.py ``` Running the code produces the following output. diff --git a/lectures/optgrowth.md b/lectures/optgrowth.md index 1afa7bf19..fc493edcd 100644 --- a/lectures/optgrowth.md +++ b/lectures/optgrowth.md @@ -624,7 +624,8 @@ whether our code works for this particular case. In Python, the functions above can be expressed as: -```{literalinclude} _static/lecture_specific/optgrowth/cd_analytical.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth/cd_analytical.py ``` Next let's create an instance of the model with the above primitives and assign it to the variable `og`. @@ -700,7 +701,8 @@ We are clearly getting closer. We can write a function that iterates until the difference is below a particular tolerance level. -```{literalinclude} _static/lecture_specific/optgrowth/solve_model.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth/solve_model.py ``` Let's use this function to compute an approximate solution at the defaults. 
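The solver routine referenced above is, at bottom, successive approximation on a contraction mapping: apply the operator until the sup-norm change falls below a tolerance. A generic sketch of that pattern (the names and the toy operator here are our own, not the lecture file's):

```python
def successive_approx(T, v_init, tol=1e-8, max_iter=1000):
    """Iterate v <- T(v) until the sup-norm change drops below tol."""
    v = v_init
    for i in range(max_iter):
        v_new = T(v)
        error = max(abs(a - b) for a, b in zip(v_new, v))
        v = v_new
        if error < tol:
            return v, i + 1
    raise ValueError("iteration failed to converge")

# Toy contraction with modulus 0.5 and fixed point 2, applied elementwise
T_toy = lambda v: [0.5 * x + 1 for x in v]
v_fix, n_iter = successive_approx(T_toy, [0.0, 10.0])
```

The same loop works whether `T` is a Bellman operator, a Coleman operator, or any other contraction; only the operator and the notion of distance change.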
diff --git a/lectures/optgrowth_fast.md b/lectures/optgrowth_fast.md index fb8d8d31f..6e50c0cde 100644 --- a/lectures/optgrowth_fast.md +++ b/lectures/optgrowth_fast.md @@ -103,7 +103,8 @@ In particular, the algorithm is unchanged, and the only difference is in the imp As before, we will be able to compare with the true solutions -```{literalinclude} _static/lecture_specific/optgrowth/cd_analytical.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth/cd_analytical.py ``` ## Computation @@ -125,7 +126,8 @@ class. This is where we sacrifice flexibility in order to gain more speed. -```{literalinclude} _static/lecture_specific/optgrowth_fast/ogm.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth_fast/ogm.py ``` The class includes some methods such as `u_prime` that we do not need now @@ -186,7 +188,8 @@ def T(v, og): We use the `solve_model` function to perform iteration until convergence. -```{literalinclude} _static/lecture_specific/optgrowth/solve_model.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth/solve_model.py ``` Let's compute the approximate solution at the default parameters. 
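Stripped of the JIT machinery, the operator `T(v, og)` shown above is just a maximization over consumption at each grid point. A plain-Python sketch for the deterministic log/Cobb-Douglas special case, with consumption chosen as a fraction of income and a crude nearest-gridpoint lookup standing in for interpolation (all grid sizes and parameters here are our own choices, not the lecture's):

```python
from math import log

# Minimal Bellman operator: Tv(y) = max over θ of { ln(θ y) + β v((y - θ y)**α) }
α, β = 0.4, 0.9
grid_min, step, n = 0.05, 0.01, 100
grid = [grid_min + step * i for i in range(n)]
fractions = [0.02 * i for i in range(1, 50)]     # candidate consumption shares

def lookup(v, y):
    # nearest-gridpoint approximation to v(y), clamped to the grid
    j = round((y - grid_min) / step)
    return v[max(0, min(n - 1, j))]

def T(v):
    return [max(log(θ * y) + β * lookup(v, ((1 - θ) * y)**α)
                for θ in fractions)
            for y in grid]

# value function iteration until the sup-norm change is small
v = [0.0] * n
for _ in range(500):
    v_new = T(v)
    error = max(abs(a - b) for a, b in zip(v_new, v))
    v = v_new
    if error < 1e-5:
        break
```

The JIT-compiled version in the lecture follows the same logic but with proper interpolation and compiled inner loops, which is where the order-of-magnitude speed gain comes from.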
@@ -317,7 +320,8 @@ value function iteration, the JIT-compiled code is usually an order of magnitude Here's our CRRA version of `OptimalGrowthModel`: -```{literalinclude} _static/lecture_specific/optgrowth_fast/ogm_crra.py +```{code-cell} python3 +:file: _static/lecture_specific/optgrowth_fast/ogm_crra.py ``` Let's create an instance: From b276d588967f708f684c23ffb2c8edd9c4c586f5 Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 6 Jan 2021 13:52:47 +1100 Subject: [PATCH 03/15] install myst_nb from the branch --- environment.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/environment.yml b/environment.yml index 4c36aaac4..b1be069b9 100644 --- a/environment.yml +++ b/environment.yml @@ -6,7 +6,8 @@ dependencies: - anaconda=2020.07 - pip - pip: - - git+https://github.com/executablebooks/jupyter-book + - jupyter-book + - git+https://github.com/executablebooks/MyST-NB.git@code-from-file - sphinxext-rediraffe - sphinx-multitoc-numbering - quantecon-book-theme From d6dd3b5fabfc6411a923379bd09f222753cf84ae Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 6 Jan 2021 15:59:26 +1100 Subject: [PATCH 04/15] remove debug cell at the top of lecture --- lectures/ifp.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/lectures/ifp.md b/lectures/ifp.md index 0a7d3a6da..9a6b27070 100644 --- a/lectures/ifp.md +++ b/lectures/ifp.md @@ -23,10 +23,6 @@ kernelspec: :depth: 2 ``` -```{code-cell} ipython -:file: _static/lecture_specific/cake_eating_numerical/analytical.py -``` - In addition to what's in Anaconda, this lecture will need the following libraries: ```{code-cell} ipython From 78d7c466bba10e97f8cdf25697e0deff53635e84 Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 6 Jan 2021 19:56:37 +1100 Subject: [PATCH 05/15] install mysb_nb from branch for testing --- .github/workflows/ci.yml | 3 +++ environment.yml | 1 - 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index e645b53d2..4f9ffb534 100644 
--- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -15,6 +15,9 @@ jobs: python-version: 3.8 environment-file: environment.yml activate-environment: lecture-python + - name: Install myst_nb (from branch for testing) + shell: bash -l {0} + run: python -m pip install git+https://github.com/executablebooks/MyST-NB.git@code-from-file - name: Display Conda Environment Versions shell: bash -l {0} run: conda list diff --git a/environment.yml b/environment.yml index b1be069b9..bc6fe4659 100644 --- a/environment.yml +++ b/environment.yml @@ -7,7 +7,6 @@ dependencies: - pip - pip: - jupyter-book - - git+https://github.com/executablebooks/MyST-NB.git@code-from-file - sphinxext-rediraffe - sphinx-multitoc-numbering - quantecon-book-theme From e65ab107057900e86fa328cf3f3a5a31d0be90c7 Mon Sep 17 00:00:00 2001 From: mmcky Date: Thu, 7 Jan 2021 13:08:36 +1100 Subject: [PATCH 06/15] add mathjax cdn address --- lectures/_config.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/lectures/_config.yml b/lectures/_config.yml index 7c523f551..da8ae1dc7 100644 --- a/lectures/_config.yml +++ b/lectures/_config.yml @@ -26,6 +26,7 @@ sphinx: og_logo_url: https://assets.quantecon.org/img/qe-og-logo.png description: This website presents a set of lectures on quantitative economic modeling, designed and written by Thomas J. Sargent and John Stachurski. keywords: Python, QuantEcon, Quantitative Economics, Economics, Sloan, Alfred P. Sloan Foundation, Tom J. 
Sargent, John Stachurski + mathjax_path: https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js rediraffe_redirects: index_toc.md: intro.md tojupyter_static_file_path: ["source/_static", "_static"] From f2d639980a41dfa2823d69838179df08fd60ea1e Mon Sep 17 00:00:00 2001 From: mmcky Date: Tue, 9 Feb 2021 09:59:02 +1100 Subject: [PATCH 07/15] update sources lecture-python (9ae923d) using tomyst (b06cacb) --- lectures/cass_koopmans_2.md | 6 ++--- lectures/exchangeable.md | 18 +++++++------ lectures/finite_markov.md | 29 ++++++++++---------- lectures/harrison_kreps.md | 2 +- lectures/lake_model.md | 2 +- lectures/likelihood_bayes.md | 9 +++---- lectures/lq_inventories.md | 44 +++++++++++++++++++++---------- lectures/lqcontrol.md | 20 +++++++------- lectures/markov_asset.md | 12 ++++----- lectures/navy_captain.md | 39 ++++++++++++++------------- lectures/odu.md | 4 +-- lectures/perm_income.md | 10 +++---- lectures/perm_income_cons.md | 4 +-- lectures/rational_expectations.md | 2 +- lectures/re_with_feedback.md | 14 +++++----- lectures/samuelson.md | 18 +------------ lectures/wald_friedman.md | 20 +++++++------- 17 files changed, 129 insertions(+), 124 deletions(-) diff --git a/lectures/cass_koopmans_2.md b/lectures/cass_koopmans_2.md index b5258c1cc..27c4e66d4 100644 --- a/lectures/cass_koopmans_2.md +++ b/lectures/cass_koopmans_2.md @@ -195,7 +195,7 @@ all other dates $t=1, 2, \ldots, T$. There are sequences of prices $\{w_t,\eta_t\}_{t=0}^T= \{\vec{w}, \vec{\eta} \}$ where $w_t$ is a wage or rental rate for labor at time $t$ and -$eta_t$ is a rental rate for capital at time $t$. +$\eta_t$ is a rental rate for capital at time $t$. In addition there is are intertemporal prices that work as follows. @@ -397,7 +397,7 @@ verify** approach. In this lecture {doc}`Cass-Koopmans Planning Model `, we computed an allocation $\{\vec{C}, \vec{K}, \vec{N}\}$ that solves the planning problem. 
-(This allocation will constitute the **Big** $K$ to be in the present instance of the *Big** $K$ **, little** $k$ trick +(This allocation will constitute the **Big** $K$ to be in the present instance of the **Big** $K$ **, little** $k$ trick that we'll apply to a competitive equilibrium in the spirit of [this lecture](https://lectures.quantecon.org/py/rational_expectations.html#) and [this lecture](https://lectures.quantecon.org/py/dyn_stack.html#).) @@ -893,7 +893,7 @@ Vice-versa for lower $\gamma$. We return to Hicks-Arrow prices and calculate how they are related to **yields** on loans of alternative maturities. -This will let us plot a **yield curve** that graphs yields on bonds of maturities $j=1, 2, \ldots$ against :math:j=1,2, ldots`. +This will let us plot a **yield curve** that graphs yields on bonds of maturities $j=1, 2, \ldots$ against $j=1,2, \ldots$. The formulas we want are: diff --git a/lectures/exchangeable.md b/lectures/exchangeable.md index 5c26bc229..ac46fecfd 100644 --- a/lectures/exchangeable.md +++ b/lectures/exchangeable.md @@ -53,7 +53,7 @@ that are Understanding the distinction between these concepts is essential for appreciating how Bayesian updating works in our example. -You can read about exchangeability [here](https://en.wikipedia.org/wiki/Exchangeable_random_variables) +You can read about exchangeability [here](https://en.wikipedia.org/wiki/Exchangeable_random_variables). Below, we'll often use @@ -116,8 +116,10 @@ $$ Using the laws of probability, we can always factor such a joint density into a product of conditional densities: $$ - p(W_T, W_{T-1}, \ldots, W_1, W_0) = & p(W_T | W_{t-1}, \ldots, W_0) p(W_{T-1} | W_{T-2}, \ldots, W_0) \cdots \cr - & p(W_1 | W_0) p(W_0) +\begin{align} + p(W_T, W_{T-1}, \ldots, W_1, W_0) = & p(W_T | W_{T-1}, \ldots, W_0) p(W_{T-1} | W_{T-2}, \ldots, W_0) \cdots \cr + & \quad \quad \cdots p(W_1 | W_0) p(W_0) +\end{align} $$ In general, @@ -178,7 +180,7 @@ $G$ with probability $1 - \tilde \pi$. 
Thus, we assume that the decision maker - **knows** both $F$ and $G$ -- **doesnt't know** which of these two distributions that nature has drawn +- **doesn't know** which of these two distributions that nature has drawn - summarizing his ignorance by acting as if or **thinking** that nature chose distribution $F$ with probability $\tilde \pi \in (0,1)$ and distribution $G$ with probability $1 - \tilde \pi$ - at date $t \geq 0$ has observed the partial history $w_t, w_{t-1}, \ldots, w_0$ of draws from the appropriate joint @@ -480,9 +482,9 @@ learning_example() ``` Please look at the three graphs above created for an instance in which $f$ is a uniform distribution on $[0,1]$ -(i.e., a Beta distribution with parameters $F_a=1, F_b=1$, while $g$ is a Beta distribution with the default parameter values $G_a=3, G_b=1.2$. +(i.e., a Beta distribution with parameters $F_a=1, F_b=1$), while $g$ is a Beta distribution with the default parameter values $G_a=3, G_b=1.2$. -The graph on the left plots the likehood ratio $l(w)$ on the coordinate axis against $w$ on the ordinate axis. +The graph on the left plots the likelihood ratio $l(w)$ on the coordinate axis against $w$ on the ordinate axis. The middle graph plots both $f(w)$ and $g(w)$ against $w$, with the horizontal dotted lines showing values of $w$ at which the likelihood ratio equals $1$. @@ -491,7 +493,7 @@ The graph on the right plots arrows to the right that show when Bayes' Law make to the left that show when Bayes' Law make $\pi$ decrease. Notice how the length of the arrows, which show the magnitude of the force from Bayes' Law impelling $\pi$ to change, -depend on both the prior probability $\pi$ on the ordinate axis and the evidence in the form of the current draw of +depends on both the prior probability $\pi$ on the ordinate axis and the evidence in the form of the current draw of $w$ on the coordinate axis. 
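The direction of the update described by the arrows can be checked directly: $\pi$ rises after observing $w$ exactly when the likelihood ratio $l(w) = f(w)/g(w)$ exceeds one. A sketch using the same Beta specifications discussed above ($F_a = F_b = 1$ and $G_a = 3, G_b = 1.2$):

```python
from math import gamma

def beta_pdf(w, a, b):
    """Density of a Beta(a, b) random variable on (0, 1)."""
    B = gamma(a) * gamma(b) / gamma(a + b)
    return w**(a - 1) * (1 - w)**(b - 1) / B

f = lambda w: beta_pdf(w, 1, 1)      # F: Beta(1, 1), i.e. uniform on [0, 1]
g = lambda w: beta_pdf(w, 3, 1.2)    # G: the default parameters above

def update(π, w):
    """Bayes' law: posterior probability on F after observing one draw w."""
    return π * f(w) / (π * f(w) + (1 - π) * g(w))

π0 = 0.5
checks = [(f(w) / g(w), update(π0, w)) for w in (0.1, 0.5, 0.9)]
```

For each test value of $w$, the posterior moves above $\pi_0$ precisely when $l(w) > 1$, which is what the right-pointing and left-pointing arrows in the figure encode.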
The fractions in the colored areas of the middle graphs are probabilities under $F$ and $G$, respectively, @@ -528,7 +530,7 @@ assumptions about nature's choice of distribution: - that nature permanently draws from $G$ Outcomes depend on a peculiar property of likelihood ratio processes that are discussed in -[this lecture](https://python-advanced.quantecon.org/additive_functionals.html) +[this lecture](https://python-advanced.quantecon.org/additive_functionals.html). To do this, we create some Python code. diff --git a/lectures/finite_markov.md b/lectures/finite_markov.md index 6058ca908..6cceeb3cc 100644 --- a/lectures/finite_markov.md +++ b/lectures/finite_markov.md @@ -1218,22 +1218,23 @@ Let $F$ be the cumulative distribution function of the normal distribution $N(0, The values $P(x_i, x_j)$ are computed to approximate the AR(1) process --- omitting the derivation, the rules are as follows: 1. If $j = 0$, then set - -$$ -P(x_i, x_j) = P(x_i, x_0) = F(x_0-\rho x_i + s/2) -$$ - + + $$ + P(x_i, x_j) = P(x_i, x_0) = F(x_0-\rho x_i + s/2) + $$ + 1. If $j = n-1$, then set - -$$ -P(x_i, x_j) = P(x_i, x_{n-1}) = 1 - F(x_{n-1} - \rho x_i - s/2) -$$ - + + $$ + P(x_i, x_j) = P(x_i, x_{n-1}) = 1 - F(x_{n-1} - \rho x_i - s/2) + $$ + 1. Otherwise, set - -$$ -P(x_i, x_j) = F(x_j - \rho x_i + s/2) - F(x_j - \rho x_i - s/2) -$$ + + $$ + P(x_i, x_j) = F(x_j - \rho x_i + s/2) - F(x_j - \rho x_i - s/2) + $$ + The exercise is to write a function `approx_markov(rho, sigma_u, m=3, n=7)` that returns $\{x_0, \ldots, x_{n-1}\} \subset \mathbb R$ and $n \times n$ matrix diff --git a/lectures/harrison_kreps.md b/lectures/harrison_kreps.md index 163d3ead7..3fba241f7 100644 --- a/lectures/harrison_kreps.md +++ b/lectures/harrison_kreps.md @@ -175,7 +175,7 @@ Remember that state $1$ is the high dividend state. * In state $0$, a type $a$ agent is more optimistic about next period's dividend than a type $b$ agent. 
* In state $1$, a type $b$ agent is more optimistic about next period's dividend. -However, the stationary distributions $\pi_A = \begin{bmatrix} .57 & .43 \end{bmatrix}$ and $\pi_B = \begin{bmatrix} .43 & .57 \end{bmatrix}$ tell us that a type $B$ person is more optimistic about the dividend process in the long run than is a type A person. +However, the stationary distributions $\pi_A = \begin{bmatrix} .57 & .43 \end{bmatrix}$ and $\pi_B = \begin{bmatrix} .43 & .57 \end{bmatrix}$ tell us that a type $B$ person is more optimistic about the dividend process in the long run than is a type $A$ person. Transition matrices for the temporarily optimistic and pessimistic investors are constructed as follows. diff --git a/lectures/lake_model.md b/lectures/lake_model.md index 14ea17bf4..5c403ffc9 100644 --- a/lectures/lake_model.md +++ b/lectures/lake_model.md @@ -378,7 +378,7 @@ there exists an $\bar x$ such that This equation tells us that a steady state level $\bar x$ is an eigenvector of $\hat A$ associated with a unit eigenvalue. -We also have $x_t \to \bar x$ as $t \to \infty$ provided that the remaining eigenvalue of $\hat A$ has modulus less that 1. +We also have $x_t \to \bar x$ as $t \to \infty$ provided that the remaining eigenvalue of $\hat A$ has modulus less than 1. This is the case for our default parameters: diff --git a/lectures/likelihood_bayes.md b/lectures/likelihood_bayes.md index c2185c79c..57afcbb83 100644 --- a/lectures/likelihood_bayes.md +++ b/lectures/likelihood_bayes.md @@ -50,7 +50,7 @@ We'll study how, at least in our setting, a Bayesian eventually learns the prob rests on the asymptotic behavior of likelihood ratio processes studied in {doc}`this lecture `. This lecture provides technical results that underly outcomes to be studied in {doc}`this lecture ` -and {doc}`this lecture ` and {doc}`this lecture ` +and {doc}`this lecture ` and {doc}`this lecture `. 
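Returning briefly to the finite Markov chain exercise above: the three rules for $P(x_i, x_j)$ can be sketched as follows. This is a pure-Python outline of the requested `approx_markov` (the exercise presumably expects a NumPy implementation; the grid and normal CDF construction here are standard but our own):

```python
from math import erf, sqrt

def norm_cdf(x, sigma):
    """CDF of N(0, sigma**2)."""
    return 0.5 * (1 + erf(x / (sigma * sqrt(2))))

def approx_markov(rho, sigma_u, m=3, n=7):
    """Finite-state approximation of x' = rho x + u, u ~ N(0, sigma_u**2).

    Returns the grid {x_0, ..., x_{n-1}} and an n x n transition matrix
    built from the three rules stated in the exercise.
    """
    sigma_y = sigma_u / sqrt(1 - rho**2)      # stationary std of the AR(1)
    x_max = m * sigma_y
    x = [-x_max + i * (2 * x_max) / (n - 1) for i in range(n)]
    s = x[1] - x[0]                           # gridpoint spacing
    F = lambda z: norm_cdf(z, sigma_u)
    P = []
    for xi in x:
        row = []
        for j in range(n):
            if j == 0:
                row.append(F(x[0] - rho * xi + s / 2))
            elif j == n - 1:
                row.append(1 - F(x[n - 1] - rho * xi - s / 2))
            else:
                row.append(F(x[j] - rho * xi + s / 2) - F(x[j] - rho * xi - s / 2))
        P.append(row)
    return x, P

x_grid, P = approx_markov(0.9, 0.1)
```

Because adjacent intervals share endpoints, each row telescopes to one, so the rules deliver a proper stochastic matrix by construction.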
## The Setting @@ -262,7 +262,7 @@ and the initial prior $\pi_{0}$ \pi_{t+1}=\frac{\pi_{0}L\left(w^{t+1}\right)}{\pi_{0}L\left(w^{t+1}\right)+1-\pi_{0}} . ``` -Formula {eq}`eq_Bayeslaw103` generalizes generalizes formula {eq}`eq_recur1`. +Formula {eq}`eq_Bayeslaw103` generalizes formula {eq}`eq_recur1`. Formula {eq}`eq_Bayeslaw103` can be regarded as a one step revision of prior probability $\pi_0$ after seeing the batch of data $\left\{ w_{i}\right\} _{i=1}^{t+1}$. @@ -276,8 +276,7 @@ limiting behavior of $\pi_t$. To illustrate this insight, below we will plot graphs showing **one** simulated path of the likelihood ratio process $L_t$ along with two paths of -$\pi_t$ that are associated with the *same* realization of the likelihood ratio process but *different* initial prior probabilities -probabilities $\pi_{0}$. +$\pi_t$ that are associated with the *same* realization of the likelihood ratio process but *different* initial prior probabilities $\pi_{0}$. First, we tell Python two values of $\pi_0$. @@ -374,5 +373,5 @@ $g$. This lecture has been devoted to building some useful infrastructure. We'll build on results highlighted in this lectures to understand inferences that are the foundations of -results described in {doc}`this lecture ` and {doc}`this lecture ` and {doc}`this lecture ` +results described in {doc}`this lecture ` and {doc}`this lecture ` and {doc}`this lecture `. diff --git a/lectures/lq_inventories.md b/lectures/lq_inventories.md index 9659ae25a..afea4896b 100644 --- a/lectures/lq_inventories.md +++ b/lectures/lq_inventories.md @@ -35,7 +35,8 @@ tags: [hide-output] ## Overview -This lecture can be viewed as an application of the {doc}`quantecon lecture `. +This lecture can be viewed as an application of this {doc}`quantecon lecture ` about linear quadratic control +theory. 
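The claim above, that revising $\pi_0$ in one batch via the cumulative likelihood ratio agrees with applying Bayes' law draw by draw, can be checked numerically. A sketch with stand-in Beta densities for $f$ and $g$ (the particular parameter values are illustrative assumptions):

```python
import random
from math import gamma

random.seed(1234)

def beta_pdf(w, a, b):
    B = gamma(a) * gamma(b) / gamma(a + b)
    return w**(a - 1) * (1 - w)**(b - 1) / B

f = lambda w: beta_pdf(w, 1, 1)     # uniform on [0, 1]
g = lambda w: beta_pdf(w, 3, 1.2)

π0 = 0.4
draws = [random.random() for _ in range(20)]   # i.i.d. draws from f

# recursive updating, one draw at a time
π_rec = π0
for w in draws:
    π_rec = π_rec * f(w) / (π_rec * f(w) + (1 - π_rec) * g(w))

# one-shot batch revision using the cumulative likelihood ratio L
L = 1.0
for w in draws:
    L *= f(w) / g(w)
π_batch = π0 * L / (π0 * L + 1 - π0)
```

The two routes produce the same posterior, which is why the limiting behavior of $\pi_t$ can be read off from the limiting behavior of the likelihood ratio process alone.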
It formulates a discounted dynamic program for a firm that chooses a production schedule to balance @@ -46,19 +47,19 @@ chooses a production schedule to balance In the tradition of a classic book by Holt, Modigliani, Muth, and Simon {cite}`Holt_Modigliani_Muth_Simon`, we simplify the firm’s problem by formulating it as a linear quadratic discounted -dynamic programming problem of the type studied in this {doc}`quantecon `. +dynamic programming problem of the type studied in this {doc}`quantecon lecture `. Because its costs of production are increasing and quadratic in -production, the firm wants to smooth production across time provided +production, the firm holds inventories as a buffer stock in order to smooth production across time, provided that holding inventories is not too costly. -But the firm also prefers to sell out of existing inventories, a +But the firm also wants to make its sales out of existing inventories, a preference that we represent by a cost that is quadratic in the difference between sales in a period and the firm’s beginning of period inventories. -We compute examples designed to indicate how the firm optimally chooses -to smooth production and manage inventories while keeping inventories +We compute examples designed to indicate how the firm optimally +smooths production while keeping inventories close to sales. 
To introduce components of the model, let @@ -72,7 +73,7 @@ To introduce components of the model, let - $d(I_t, S_t) = d_1 I_t + d_2 (S_t - I_t)^2$, where $d_1>0, d_2 >0$, be a cost-of-holding-inventories function, consisting of two components: - - a cost $d_1 t$ of carrying inventories, and + - a cost $d_1 I_t$ of carrying inventories, and - a cost $d_2 (S_t - I_t)^2$ of having inventories deviate from sales - $p_t = a_0 - a_1 S_t + v_t$ be an inverse demand function for a @@ -84,7 +85,7 @@ To introduce components of the model, let be the present value of the firm’s profits at time $0$ - $I_{t+1} = I_t + Q_t - S_t$ be the law of motion of inventories -- $z_{t+1} = A_{22} z_t + C_2 \epsilon_{t+1}$ be the law +- $z_{t+1} = A_{22} z_t + C_2 \epsilon_{t+1}$ be a law of motion for an exogenous state vector $z_t$ that contains time $t$ information useful for predicting the demand shock $v_t$ @@ -133,17 +134,20 @@ appears in the firm’s one-period profit function) We can express the firm’s profit as a function of states and controls as $$ -\pi_t = - (x_t' R x_t + u_t' Q u_t + 2 u_t' H x_t ) +\pi_t = - (x_t' R x_t + u_t' Q u_t + 2 u_t' N x_t ) $$ -To form the matrices $R, Q, H$, we note that the firm’s profits at +To form the matrices $R, Q, N$ in an LQ dynamic programming problem, we note that the firm’s profits at time $t$ function can be expressed $$ +\begin{equation} +\begin{split} \pi_{t} =&p_{t}S_{t}-c\left(Q_{t}\right)-d\left(I_{t},S_{t}\right) \\ =&\left(a_{0}-a_{1}S_{t}+v_{t}\right)S_{t}-c_{1}Q_{t}-c_{2}Q_{t}^{2}-d_{1}I_{t}-d_{2}\left(S_{t}-I_{t}\right)^{2} \\ =&a_{0}S_{t}-a_{1}S_{t}^{2}+Gz_{t}S_{t}-c_{1}Q_{t}-c_{2}Q_{t}^{2}-d_{1}I_{t}-d_{2}S_{t}^{2}-d_{2}I_{t}^{2}+2d_{2}S_{t}I_{t} \\ - =&-\left(\underset{x_{t}^{\prime}Rx_{t}}{\underbrace{d_{1}I_{t}+d_{2}I_{t}^{2}}}\underset{u_{t}^{\prime}Qu_{t}}{\underbrace{+a_{1}S_{t}^{2}+d_{2}S_{t}^{2}+c_{2}Q_{t}^{2}}}\underset{2u_{t}^{\prime}Hx_{t}}{\underbrace{-a_{0}S_{t}-Gz_{t}S_{t}+c_{1}Q_{t}-2d_{2}S_{t}I_{t}}}\right) \\ + 
=&-\left(\underset{x_{t}^{\prime}Rx_{t}}{\underbrace{d_{1}I_{t}+d_{2}I_{t}^{2}}}\underset{u_{t}^{\prime}Qu_{t}}{\underbrace{+a_{1}S_{t}^{2}+d_{2}S_{t}^{2}+c_{2}Q_{t}^{2}}} + \underset{2u_{t}^{\prime}N x_{t}}{\underbrace{-a_{0}S_{t}-Gz_{t}S_{t}+c_{1}Q_{t}-2d_{2}S_{t}I_{t}}}\right) \\ =&-\left(\left[\begin{array}{cc} I_{t} & z_{t}^{\prime}\end{array}\right]\underset{\equiv R}{\underbrace{\left[\begin{array}{cc} d_{2} & \frac{d_{1}}{2}S_{c}\\ @@ -166,12 +170,14 @@ Q_{t} & S_{t}\end{array}\right]\underset{\equiv N}{\underbrace{\left[\begin{arra I_{t}\\ z_{t} \end{array}\right]\right) +\end{split} +\end{equation} $$ where $S_{c}=\left[1,0\right]$. **Remark on notation:** The notation for cross product term in the -QuantEcon library is $N$ instead of $H$. +QuantEcon library is $N$. The firms’ optimum decision rule takes the form @@ -185,6 +191,16 @@ $$ x_{t+1} = (A - BF ) x_t + C \epsilon_{t+1} $$ +The firm chooses a decision rule for $u_t$ that maximizes + +$$ +E_0 \sum_{t=0}^\infty \beta^t \pi_t +$$ + +subject to a given $x_0$. + +This is a stochastic discounted LQ dynamic program. + Here is code for computing an optimal decision rule and for analyzing its consequences. @@ -330,7 +346,7 @@ class SmoothingExample: Notice that the above code sets parameters at the following default values -- discount factor β=0.96, +- discount factor $\beta=0.96$, - inverse demand function: $a0=10, a1=1$ - cost of production $c1=1, c2=1$ - costs of holding inventories $d1=1, d2=1$ @@ -465,7 +481,7 @@ We introduce this $I_t$ **is hardwired to zero** specification in order to shed light on the role that inventories play by comparing outcomes with those under our two other versions of the problem. 
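The matrix $F$ in the decision rule $u_t = -F x_t$ described earlier solves a discounted algebraic Riccati equation that carries the cross-product term $N$. A scalar sketch of that fixed-point computation (the lecture works with matrices via the QuantEcon `LQ` class; every number below is illustrative):

```python
# Scalar discounted Riccati equation with cross term N, solved by iteration:
# P = R + β A²P - (β A B P + N)² / (Q + β B²P),  F = (β A B P + N)/(Q + β B²P)
A, B, C = 1.0, 0.5, 0.2
Q, R, N = 1.0, 0.8, 0.1
β = 0.96

def riccati_rhs(P):
    return R + β * A**2 * P - (β * A * B * P + N)**2 / (Q + β * B**2 * P)

P = 0.0
for _ in range(10_000):
    P_new = riccati_rhs(P)
    if abs(P_new - P) < 1e-12:
        P = P_new
        break
    P = P_new

F = (β * A * B * P + N) / (Q + β * B**2 * P)   # optimal rule: u = -F x
residual = riccati_rhs(P) - P                  # should be ~0 at the fixed point
```

Iterating from $P = 0$ reproduces finite-horizon value iteration, which converges here because the problem is discounted with $Q > 0$.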
-The bottom right panel displays an production path for the original +The bottom right panel displays a production path for the original problem that we are interested in (the blue line) as well with an optimal production path for the model in which inventories are not useful (the green path) and also for the model in which, although diff --git a/lectures/lqcontrol.md b/lectures/lqcontrol.md index 19722f0ab..bfc2d02a0 100644 --- a/lectures/lqcontrol.md +++ b/lectures/lqcontrol.md @@ -140,7 +140,7 @@ Another alteration that's useful to introduce (we'll see why soon) is to change the control variable from consumption to the deviation of consumption from some "ideal" quantity $\bar c$. -(Most parameterizations will be such that $\bar c$ is large relative to the amount of consumption that is attainable in each period, and hence the household wants to increase consumption) +(Most parameterizations will be such that $\bar c$ is large relative to the amount of consumption that is attainable in each period, and hence the household wants to increase consumption.) For this reason, we now take our control to be $u_t := c_t - \bar c$. @@ -275,7 +275,7 @@ $$ Thus, for both the state and the control, loss is measured as squared distance from the origin. (In fact, the general case {eq}`lq_pref_flow` can also be understood in this -way, but with $R$ and $Q$ identifying other -- non-Euclidean -- notions of "distance" from the zero vector). +way, but with $R$ and $Q$ identifying other -- non-Euclidean -- notions of "distance" from the zero vector.) Intuitively, we can often think of the state $x_t$ as representing deviation from a target, such as @@ -504,7 +504,7 @@ and d_{T-1} := \beta \mathop{\mathrm{trace}}(C' P_T C) ``` -(The algebra is a good exercise --- we'll leave it up to you) +(The algebra is a good exercise --- we'll leave it up to you.) 
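The one-step-back calculation just described, producing $P_{T-1}$ and $d_{T-1}$ from the terminal conditions, can be continued mechanically. A scalar pure-Python sketch of the backward recursion, with the trace term reducing to $\beta\,(d_t + C^2 P_t)$ (all parameter values, including the terminal weight, are illustrative):

```python
# Backward recursion for J_t(x) = x' P_t x + d_t in the scalar case:
#   P_{t-1} = R + β A² P_t - β² A² B² P_t² / (Q + β B² P_t)
#   d_{t-1} = β (d_t + C² P_t),   with P_T = Rf, d_T = 0
A, B, C = 1.0, 0.5, 0.3
Q, R, Rf = 1.0, 0.8, 2.0
β, T = 0.96, 40

P_path, d_path = [Rf], [0.0]        # indexed backwards from the terminal date
for _ in range(T):
    P, d = P_path[-1], d_path[-1]
    P_prev = R + β * A**2 * P - β**2 * A**2 * B**2 * P**2 / (Q + β * B**2 * P)
    d_prev = β * (d + C**2 * P)
    P_path.append(P_prev)
    d_path.append(d_prev)
```

The first backward step reproduces $d_{T-1} = \beta\,\mathrm{trace}(C' P_T C)$, and stepping further back traces out the full sequences $\{P_t\}$ and $\{d_t\}$.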
If we continue working backwards in this manner, it soon becomes clear that $J_t (x) = x' P_t x + d_t$ as claimed, where $\{P_t\}$ and $\{d_t\}$ satisfy the recursions @@ -585,7 +585,7 @@ Data contradicted the constancy of the marginal propensity to consume. In response, Milton Friedman, Franco Modigliani and others built models based on a consumer's preference for an intertemporally smooth consumption stream. -(See, for example, {cite}`Friedman1956` or {cite}`ModiglianiBrumberg1954`) +(See, for example, {cite}`Friedman1956` or {cite}`ModiglianiBrumberg1954`.) One property of those models is that households purchase and sell financial assets to make consumption streams smoother than income streams. @@ -606,7 +606,7 @@ subject to the sequence of budget constraints $a_{t+1} = (1 + r) a_t - c_t + y_t Here $q$ is a large positive constant, the role of which is to induce the consumer to target zero debt at the end of her life. -(Without such a constraint, the optimal choice is to choose $c_t = \bar c$ in each period, letting assets adjust accordingly) +(Without such a constraint, the optimal choice is to choose $c_t = \bar c$ in each period, letting assets adjust accordingly.) As before we set $y_t = \sigma w_{t+1} + \mu$ and $u_t := c_t - \bar c$, after which the constraint can be written as in {eq}`lq_lomwc`. @@ -712,7 +712,7 @@ As anticipated by the discussion on consumption smoothing, the time path of consumption is much smoother than that for income. (But note that consumption becomes more irregular towards the end of life, -when the zero final asset requirement impinges more on consumption choices). +when the zero final asset requirement impinges more on consumption choices.) 
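As an aside on the formulation used here: the budget constraint $a_{t+1} = (1+r)a_t - c_t + y_t$, with $u_t = c_t - \bar c$ and $y_t = \sigma w_{t+1} + \mu$, can be written in state-space form $x_{t+1} = A x_t + B u_t + C w_{t+1}$ with $x_t = (a_t, 1)'$. A quick numerical check that the two formulations agree (the control path below is an arbitrary stand-in, not the optimal consumption rule):

```python
import random

random.seed(0)

r, c_bar, μ, σ = 0.05, 2.0, 1.0, 0.25   # illustrative parameter values
T = 30
u_path = [0.1 * (-1)**t for t in range(T)]       # arbitrary stand-in controls
w_path = [random.gauss(0, 1) for _ in range(T)]

# scalar budget constraint: a' = (1 + r) a - (u + c_bar) + μ + σ w'
a = 0.0
for t in range(T):
    a = (1 + r) * a - (u_path[t] + c_bar) + μ + σ * w_path[t]
a_scalar = a

# state-space form with x = (a, 1)'
A = [[1 + r, μ - c_bar], [0.0, 1.0]]
B = [-1.0, 0.0]
C = [σ, 0.0]
x = [0.0, 1.0]
for t in range(T):
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u_path[t] + C[0] * w_path[t],
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u_path[t] + C[1] * w_path[t]]
a_state = x[0]
```

Carrying the constant $1$ as a second state component is what lets the intercept $\mu - \bar c$ enter a purely linear law of motion.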
The second panel in the figure shows that the time path of assets $a_t$ is closely correlated with cumulative unanticipated income, where the latter is defined as @@ -724,7 +724,7 @@ $$ A key message is that unanticipated windfall gains are saved rather than consumed, while unanticipated negative shocks are met by reducing assets. -(Again, this relationship breaks down towards the end of life due to the zero final asset requirement) +(Again, this relationship breaks down towards the end of life due to the zero final asset requirement.) These results are relatively robust to changes in parameters. @@ -946,7 +946,7 @@ subject to $a_{t+1} = (1 + r) a_t - c_t + y_t, \ t \geq 0$. For income we now take $y_t = p(t) + \sigma w_{t+1}$ where $p(t) := m_0 + m_1 t + m_2 t^2$. -(In {ref}`the next section ` we employ some tricks to implement a more sophisticated model) +(In {ref}`the next section ` we employ some tricks to implement a more sophisticated model.) The coefficients $m_0, m_1, m_2$ are chosen such that $p(0)=0, p(T/2) = \mu,$ and $p(T)=0$. @@ -1102,7 +1102,7 @@ However, we can still use our LQ methods here by suitably linking two-component These two LQ problems describe the consumer's behavior during her working life (`lq_working`) and retirement (`lq_retired`). (This is possible because, in the two separate periods of life, the respective income processes -[polynomial trend and constant] each fit the LQ framework) +[polynomial trend and constant] each fit the LQ framework.) The basic idea is that although the whole problem is not a single time-invariant LQ problem, it is still a dynamic programming problem, and hence we can use appropriate Bellman equations at @@ -1221,7 +1221,7 @@ Let's now replace $\pi_t$ in {eq}`lq_object_mp` with $\hat \pi_t := \pi_t - a_1 This makes no difference to the solution, since $a_1 \bar q_t^2$ does not depend on the controls. 
-(In fact, we are just adding a constant term to {eq}`lq_object_mp`, and optimizers are not affected by constant terms) +(In fact, we are just adding a constant term to {eq}`lq_object_mp`, and optimizers are not affected by constant terms.) The reason for making this substitution is that, as you will be able to verify, $\hat \pi_t$ reduces to the simple quadratic diff --git a/lectures/markov_asset.md b/lectures/markov_asset.md index c0549693c..b4d97fc03 100644 --- a/lectures/markov_asset.md +++ b/lectures/markov_asset.md @@ -281,12 +281,12 @@ where 1. $\{X_t\}$ is a finite Markov chain with state space $S$ and transition probabilities - -$$ -P(x, y) := \mathbb P \{ X_{t+1} = y \,|\, X_t = x \} -\qquad (x, y \in S) -$$ - + + $$ + P(x, y) := \mathbb P \{ X_{t+1} = y \,|\, X_t = x \} + \qquad (x, y \in S) + $$ + 1. $g$ is a given function on $S$ taking positive values You can think of diff --git a/lectures/navy_captain.md b/lectures/navy_captain.md index 6a1a49349..ea2b128cb 100644 --- a/lectures/navy_captain.md +++ b/lectures/navy_captain.md @@ -615,29 +615,30 @@ The Bayesian decision rule is: - delay deciding and draw another $z$ if $\beta \leq \pi \leq \alpha$ -We can calculate two ‘’objective’’ loss functions under this situation +We can calculate two “objective” loss functions under this situation conditioning on knowing for sure that nature has selected $f_{0}$, in the first case, or $f_{1}$, in the second case. 1. under $f_{0}$, - -$$ -V^{0}\left(\pi\right)=\begin{cases} -0 & \text{if }\alpha\leq\pi,\\ -c+EV^{0}\left(\pi^{\prime}\right) & \text{if }\beta\leq\pi<\alpha,\\ -\bar L_{1} & \text{if }\pi<\beta. -\end{cases} -$$ - + + $$ + V^{0}\left(\pi\right)=\begin{cases} + 0 & \text{if }\alpha\leq\pi,\\ + c+EV^{0}\left(\pi^{\prime}\right) & \text{if }\beta\leq\pi<\alpha,\\ + \bar L_{1} & \text{if }\pi<\beta. + \end{cases} + $$ + 1. 
under $f_{1}$ - -$$ -V^{1}\left(\pi\right)=\begin{cases} -\bar L_{0} & \text{if }\alpha\leq\pi,\\ -c+EV^{1}\left(\pi^{\prime}\right) & \text{if }\beta\leq\pi<\alpha,\\ -0 & \text{if }\pi<\beta. -\end{cases} -$$ + + $$ + V^{1}\left(\pi\right)=\begin{cases} + \bar L_{0} & \text{if }\alpha\leq\pi,\\ + c+EV^{1}\left(\pi^{\prime}\right) & \text{if }\beta\leq\pi<\alpha,\\ + 0 & \text{if }\pi<\beta. + \end{cases} + $$ + where $\pi^{\prime}=\frac{\pi f_{0}\left(z^{\prime}\right)}{\pi f_{0}\left(z^{\prime}\right)+\left(1-\pi\right)f_{1}\left(z^{\prime}\right)}$. @@ -852,7 +853,7 @@ $\pi^{*}=0.5=\pi_{0}$. ``` Recall that when $\pi^*=0.5$, the frequentist decision rule sets a -sample size `t_optimal` **ex ante** +sample size `t_optimal` **ex ante**. For our parameter settings, we can compute its value: diff --git a/lectures/odu.md b/lectures/odu.md index f572c683f..ef7f1935c 100644 --- a/lectures/odu.md +++ b/lectures/odu.md @@ -83,7 +83,7 @@ want to consider. ### The Basic McCall Model -Recall that, in the baseline model , an +Recall that, {doc}`in the baseline model `, an unemployed worker is presented in each period with a permanent job offer at wage $W_t$. @@ -659,7 +659,7 @@ Use the default parameters and `Q_factory` to compute an optimal policy. Your result should coincide closely with the figure for the optimal -policy [shown above](#odu-pol-vfi). +policy [shown above](#Take-1:-Solution-by-VFI). Try experimenting with different parameters, and confirm that the change in the optimal policy coincides with your intuition. diff --git a/lectures/perm_income.md b/lectures/perm_income.md index 9cd60dc40..e69eb24e6 100644 --- a/lectures/perm_income.md +++ b/lectures/perm_income.md @@ -177,7 +177,7 @@ $$ u(c_t) = - (c_t - \gamma)^2 $$ -where $\gamma$ is a bliss level of consumption +where $\gamma$ is a bliss level of consumption. ```{note} Along with this quadratic utility specification, we allow consumption to be negative. 
However, by choosing parameters appropriately, we can make the probability that the model generates negative consumption paths over finite time horizons as low as desired. @@ -216,7 +216,7 @@ With our quadratic preference specification, {eq}`sprob4` has the striking impli \mathbb{E}_t [c_{t+1}] = c_t ``` -(In fact, quadratic preferences are *necessary* for this conclusion [^f2]) +(In fact, quadratic preferences are *necessary* for this conclusion [^f2].) One way to interpret {eq}`sprob5` is that consumption will change only when "new information" about permanent income is revealed. @@ -226,7 +226,7 @@ These ideas will be clarified below. (odr_pi)= ### The Optimal Decision Rule -Now let's deduce the optimal decision rule [^fod] +Now let's deduce the optimal decision rule [^fod]. ```{note} One way to solve the consumer's problem is to apply *dynamic programming* @@ -432,7 +432,7 @@ We can then compute the mean and covariance of $\tilde y_t$ from To gain some preliminary intuition on the implications of {eq}`pi_ssr`, let's look at a highly stylized example where income is just IID. -(Later examples will investigate more realistic income streams) +(Later examples will investigate more realistic income streams.) In particular, let $\{w_t\}_{t = 1}^{\infty}$ be IID and scalar standard normal, and let @@ -994,7 +994,7 @@ $$ Using $\beta R = 1$ gives {eq}`sprob4` in the two-period case. -The proof for the general case is similar +The proof for the general case is similar. [^f2]: A linear marginal utility is essential for deriving {eq}`sprob5` from {eq}`sprob4`. Suppose instead that we had imposed the following more standard assumptions on the utility function: $u'(c) >0, u''(c)<0, u'''(c) > 0$ and required that $c \geq 0$. The Euler equation remains {eq}`sprob4`. But the fact that $u''' <0$ implies via Jensen's inequality that $\mathbb{E}_t [u'(c_{t+1})] > u'(\mathbb{E}_t [c_{t+1}])$. 
This inequality together with {eq}`sprob4` implies that $\mathbb{E}_t [c_{t+1}] > c_t$ (consumption is said to be a 'submartingale'), so that consumption stochastically diverges to $+\infty$. The consumer's savings also diverge to $+\infty$. diff --git a/lectures/perm_income_cons.md b/lectures/perm_income_cons.md index 6a49d7a57..e0ac8da05 100644 --- a/lectures/perm_income_cons.md +++ b/lectures/perm_income_cons.md @@ -64,7 +64,7 @@ In this lecture, we'll We'll then use these characterizations to construct a simple model of cross-section wealth and consumption dynamics in the spirit of Truman Bewley {cite}`Bewley86`. -(Later we'll study other Bewley models---see {doc}`this lecture `) +(Later we'll study other Bewley models---see {doc}`this lecture `.) The model will prove useful for illustrating concepts such as @@ -750,7 +750,7 @@ Across the group of people being analyzed, risk-free loans are in zero excess su We have arranged primitives so that $R = \beta^{-1}$ clears the market for risk-free loans at zero aggregate excess supply. -So the risk-free loans are being made from one person to another within our closed set of agent. +So the risk-free loans are being made from one person to another within our closed set of agents. There is no need for foreigners to lend to our group. diff --git a/lectures/rational_expectations.md b/lectures/rational_expectations.md index 8cfe2a4b8..88a0ecf5d 100644 --- a/lectures/rational_expectations.md +++ b/lectures/rational_expectations.md @@ -99,7 +99,7 @@ We begin by applying the Big $Y$, little $y$ trick in a very simple static cont Consider a static model in which a collection of $n$ firms produce a homogeneous good that is sold in a competitive market. -Each of these $n$ firms sell output $y$. +Each of these $n$ firms sells output $y$. 
The price $p$ of the good lies on an inverse demand curve diff --git a/lectures/re_with_feedback.md b/lectures/re_with_feedback.md index 057414b90..e5ad6dbcf 100644 --- a/lectures/re_with_feedback.md +++ b/lectures/re_with_feedback.md @@ -325,7 +325,7 @@ obeys the system comprised of {eq}`equation_1`-{eq}`equation_3`. By stable or non-explosive, we mean that neither $m_t$ nor $p_t$ diverges as $t \rightarrow + \infty$. -This requirees that we shut down the term $c \lambda^{-t}$ in equation {eq}`equation_1a` above by setting $c=0$ +This requires that we shut down the term $c \lambda^{-t}$ in equation {eq}`equation_1a` above by setting $c=0$. The solution we are after is @@ -573,7 +573,7 @@ $y_0 = \begin{bmatrix} m_0 \\ p_0 \end{bmatrix}$ with $m_0 >0, p_0 >0$, we disco in general absolute values of both components of $y_t$ diverge toward $+\infty$ as $t \rightarrow + \infty$. -To substantiate this claim, we can use the eigenector matrix +To substantiate this claim, we can use the eigenvector matrix decomposition of $H$ that is available to us because the eigenvalues of $H$ are distinct @@ -640,7 +640,7 @@ $$ But note that since $y_0 = \begin{bmatrix} m_0 \cr p_0 \end{bmatrix}$ and $m_0$ -is given to us an an initial condition, $p_0$ has to do all the adjusting to satisfy this equation. +is given to us as an initial condition, $p_0$ has to do all the adjusting to satisfy this equation. Sometimes this situation is described by saying that while $m_0$ is truly a **state** variable, $p_0$ is a **jump** variable that @@ -814,7 +814,7 @@ $$ H = \begin{bmatrix} \rho & \delta \\ - (1-\lambda)/\lambda & \lambda^{-1} \end{bmatrix} . $$ -We take $m_0$ as a given intial condition and as before seek an +We take $m_0$ as a given initial condition and as before seek an initial value $p_0$ that stabilizes the system in the sense that $y_t$ converges as $t \rightarrow + \infty$.
@@ -864,7 +864,7 @@ def H_eigvals(ρ=.9, λ=.5, δ=0): H_eigvals() ``` -Notice that a negative δ will not imperil the stability of the matrix +Notice that a negative $\delta$ will not imperil the stability of the matrix $H$, even if it has a big absolute value. ```{code-cell} python3 @@ -877,14 +877,14 @@ H_eigvals(δ=-0.05) H_eigvals(δ=-1.5) ``` -A sufficiently small positive δ also causes no problem. +A sufficiently small positive $\delta$ also causes no problem. ```{code-cell} python3 # sufficiently small positive δ H_eigvals(δ=0.05) ``` -But a large enough positive δ makes both eigenvalues of $H$ +But a large enough positive $\delta$ makes both eigenvalues of $H$ strictly greater than unity in modulus. For example, diff --git a/lectures/samuelson.md b/lectures/samuelson.md index 90e2bd354..53c54c96b 100644 --- a/lectures/samuelson.md +++ b/lectures/samuelson.md @@ -153,7 +153,7 @@ Y_t = C_t + I_t + G_t - The parameter $a$ is peoples' *marginal propensity to consume* out of income - equation {eq}`consumption` asserts that people consume a fraction of - math:a in (0,1) of each additional dollar of income. + $a \in (0,1)$ of each additional dollar of income. - The parameter $b > 0$ is the investment accelerator coefficient - equation {eq}`accelerator` asserts that people invest in physical capital when income is increasing and disinvest when it is decreasing. 
@@ -772,11 +772,6 @@ z = Symbol("z") sympy.solve(z**2 - r1*z - r2, z) ``` -$$ -\left [ \frac{\rho_{1}}{2} - \frac{1}{2} \sqrt{\rho_{1}^{2} + 4 \rho_{2}}, -\quad \frac{\rho_{1}}{2} + \frac{1}{2} \sqrt{\rho_{1}^{2} + 4 \rho_{2}}\right ] -$$ - ```{code-cell} python3 a = Symbol("α") b = Symbol("β") @@ -786,13 +781,6 @@ r2 = -b sympy.solve(z**2 - r1*z - r2, z) ``` -$$ -\left [ \frac{\alpha}{2} + \frac{\beta}{2} - \frac{1}{2} \sqrt{\alpha^{2} + -2 \alpha \beta + \beta^{2} - 4 \beta}, \quad \frac{\alpha}{2} + -\frac{\beta}{2} + \frac{1}{2} \sqrt{\alpha^{2} + 2 \alpha \beta + -\beta^{2} - 4 \beta}\right ] -$$ - ## Stochastic Shocks Now we'll construct some code to simulate the stochastic version of the @@ -1261,10 +1249,6 @@ y2 = imres[:, :, 1] y1.shape ``` -$$ -\left ( 2, \quad 6, \quad 1\right ) -$$ - Now let's compute the zeros of the characteristic polynomial by simply calculating the eigenvalues of $A$ diff --git a/lectures/wald_friedman.md b/lectures/wald_friedman.md index 84880e456..989861e6f 100644 --- a/lectures/wald_friedman.md +++ b/lectures/wald_friedman.md @@ -67,7 +67,8 @@ We'll begin with some imports: ```{code-cell} ipython import numpy as np import matplotlib.pyplot as plt -from numba import jit, prange, jitclass, float64, int64 +from numba import jit, prange, float64, int64 +from numba.experimental import jitclass from interpolation import interp from math import gamma ``` @@ -134,11 +135,11 @@ random variables is also independently and identically distributed (IID). But the observer does not know which of the two distributions generated the sequence. -For reasons explained [Exchangeability and Bayesian Updating](https://python.quantecon.org/exchangeable.html), this means that the sequence is not +For reasons explained in [Exchangeability and Bayesian Updating](https://python.quantecon.org/exchangeable.html), this means that the sequence is not IID and that the observer has something to learn, even though he knows both $f_0$ and $f_1$. 
-After a number of draws, also to be determined, he makes a decision about -which of the distributions is generating the draws he observes. +The decision maker chooses a number of draws (i.e., random samples from the unknown distribution) and uses them to decide +which of the two distributions is generating outcomes. He starts with prior @@ -164,13 +165,13 @@ After observing $z_k, z_{k-1}, \ldots, z_0$, the decision-maker believes that $z_{k+1}$ has probability distribution $$ -f_{{\pi}_k} (v) = \pi_k f_0(v) + (1-\pi_k) f_1 (v) +f_{{\pi}_k} (v) = \pi_k f_0(v) + (1-\pi_k) f_1 (v) , $$ -This is a mixture of distributions $f_0$ and $f_1$, with the weight +which is a mixture of distributions $f_0$ and $f_1$, with the weight on $f_0$ being the posterior probability that $f = f_0$ [^f1]. -To help illustrate this kind of distribution, let's inspect some mixtures of beta distributions. +To illustrate such a distribution, let's inspect some mixtures of beta distributions. The density of a beta probability distribution with parameters $a$ and $b$ is @@ -291,7 +292,7 @@ J(\pi) = \right\} ``` -where $\pi'$ is the random variable defined by +where $\pi'$ is the random variable defined by Bayes' Law $$ \pi' = \kappa(z', \pi) = \frac{ \pi f_0(z')}{ \pi f_0(z') + (1-\pi) f_1 (z') } @@ -539,7 +540,7 @@ wf = WaldFriedman() fig, ax = plt.subplots(figsize=(10, 6)) ax.plot(wf.f0(wf.π_grid), label="$f_0$") ax.plot(wf.f1(wf.π_grid), label="$f_1$") -ax.set(ylabel="probability of $z_k$", xlabel="$k$", title="Distributions") +ax.set(ylabel="probability of $z_k$", xlabel="$z_k$", title="Distributions") ax.legend() plt.show() @@ -940,3 +941,4 @@ We'll dig deeper into some of the ideas used here in the following lectures: * {doc}`this lecture ` discusses the role of likelihood ratio processes in **Bayesian learning** * {doc}`this lecture ` returns to the subject of this lecture and studies whether the Captain's hunch that the (frequentist) decision rule that the Navy had ordered him to use can 
be expected to be better or worse than the sequential rule that Abraham Wald designed + From afcdca63efafb2adcebb444a52261c8735207520 Mon Sep 17 00:00:00 2001 From: mmcky Date: Tue, 9 Feb 2021 10:03:07 +1100 Subject: [PATCH 08/15] fix for newline bug in wald_friedman --- lectures/wald_friedman.md | 1 - 1 file changed, 1 deletion(-) diff --git a/lectures/wald_friedman.md b/lectures/wald_friedman.md index 989861e6f..55259bc75 100644 --- a/lectures/wald_friedman.md +++ b/lectures/wald_friedman.md @@ -941,4 +941,3 @@ We'll dig deeper into some of the ideas used here in the following lectures: * {doc}`this lecture ` discusses the role of likelihood ratio processes in **Bayesian learning** * {doc}`this lecture ` returns to the subject of this lecture and studies whether the Captain's hunch that the (frequentist) decision rule that the Navy had ordered him to use can be expected to be better or worse than the sequential rule that Abraham Wald designed - From 59dfc2a8191a00e01950b188578d0ee39473f971 Mon Sep 17 00:00:00 2001 From: mmcky Date: Tue, 9 Feb 2021 15:27:02 +1100 Subject: [PATCH 09/15] update config for latest sphinxcontrib-bibtex (#84) --- lectures/_config.yml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/lectures/_config.yml b/lectures/_config.yml index 7c523f551..afa2cb952 100644 --- a/lectures/_config.yml +++ b/lectures/_config.yml @@ -7,6 +7,9 @@ execute: execute_notebooks: "cache" timeout: 60 +bibtex_bibfiles: + - _static/quant-econ.bib + html: baseurl: https://python.quantecon.org/ From b17fc8a7b94c01a4f6ae7c3a42c971b07d9c2643 Mon Sep 17 00:00:00 2001 From: mmcky Date: Tue, 9 Feb 2021 16:31:17 +1100 Subject: [PATCH 10/15] update environment for testing from branch mystnb:code-from-file --- .github/workflows/ci.yml | 3 --- environment.yml | 1 + 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 4f9ffb534..e645b53d2 100644 --- a/.github/workflows/ci.yml +++
b/.github/workflows/ci.yml @@ -15,9 +15,6 @@ jobs: python-version: 3.8 environment-file: environment.yml activate-environment: lecture-python - - name: Install myst_nb (from branch for testing) - shell: bash -l {0} - run: python -m pip install git+https://github.com/executablebooks/MyST-NB.git@code-from-file - name: Display Conda Environment Versions shell: bash -l {0} run: conda list diff --git a/environment.yml b/environment.yml index c67a39762..82649461d 100644 --- a/environment.yml +++ b/environment.yml @@ -7,6 +7,7 @@ dependencies: - pip - pip: - jupyter-book + - git+https://github.com/executablebooks/MyST-NB.git@code-from-file - sphinxext-rediraffe - sphinx-multitoc-numbering - quantecon-book-theme From c1f924d2218e6bca104486f9669d1f7c50520a08 Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 10 Feb 2021 11:01:31 +1100 Subject: [PATCH 11/15] upload build folder as an artifact --- .github/workflows/ci.yml | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index e645b53d2..3bfb3f656 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -25,6 +25,11 @@ jobs: shell: bash -l {0} run: | jb build lectures --path-output ./ + - name: Save Build as Artifact + uses: actions/upload-artifact@v1 + with: + name: _build + path: lectures/_build - name: Preview Deploy to Netlify uses: nwtgck/actions-netlify@v1.1 with: From 3f529c82fd2831a74a11c2c5b3499542f8dcc49a Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 10 Feb 2021 11:29:26 +1100 Subject: [PATCH 12/15] fix path --- .github/workflows/ci.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 3bfb3f656..0b5ff9a16 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -29,7 +29,7 @@ jobs: uses: actions/upload-artifact@v1 with: name: _build - path: lectures/_build + path: _build - name: Preview Deploy to Netlify uses: nwtgck/actions-netlify@v1.1 with: From 
9a791caeafe7def5870fcf62ba9c2bc2dd987065 Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 10 Feb 2021 12:18:21 +1100 Subject: [PATCH 13/15] update timeout for execution to 10min --- lectures/_config.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lectures/_config.yml b/lectures/_config.yml index 607ad6727..e34bc1a94 100644 --- a/lectures/_config.yml +++ b/lectures/_config.yml @@ -5,7 +5,7 @@ description: This website presents a set of lectures on quantitative economic mo execute: execute_notebooks: "cache" - timeout: 60 + timeout: 600 bibtex_bibfiles: - _static/quant-econ.bib From c8373838b1f4a573eb90cd809f755e67284c6743 Mon Sep 17 00:00:00 2001 From: mmcky Date: Thu, 18 Feb 2021 13:36:33 +1100 Subject: [PATCH 14/15] migrate from :file: to :load: (myst-nb) --- lectures/cake_eating_numerical.md | 2 +- lectures/coleman_policy_iter.md | 8 ++++---- lectures/egm_policy_iter.md | 6 +++--- lectures/ifp.md | 4 ++-- lectures/markov_perf.md | 2 +- lectures/optgrowth.md | 4 ++-- lectures/optgrowth_fast.md | 8 ++++---- 7 files changed, 17 insertions(+), 17 deletions(-) diff --git a/lectures/cake_eating_numerical.md b/lectures/cake_eating_numerical.md index de50a1575..108a5f2c2 100644 --- a/lectures/cake_eating_numerical.md +++ b/lectures/cake_eating_numerical.md @@ -73,7 +73,7 @@ The analytical solutions for the value function and optimal policy were found to be as follows. ```{code-cell} python3 -:file: _static/lecture_specific/cake_eating_numerical/analytical.py +:load: _static/lecture_specific/cake_eating_numerical/analytical.py ``` Our first aim is to obtain these analytical solutions numerically.
diff --git a/lectures/coleman_policy_iter.md b/lectures/coleman_policy_iter.md index ebf8169f9..4ef3b82e3 100644 --- a/lectures/coleman_policy_iter.md +++ b/lectures/coleman_policy_iter.md @@ -268,7 +268,7 @@ As in our {doc}`previous study `, we continue to assume that This will allow us to compare our results to the analytical solutions ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth/cd_analytical.py +:load: _static/lecture_specific/optgrowth/cd_analytical.py ``` As discussed above, our plan is to solve the model using time iteration, which @@ -280,7 +280,7 @@ These are available in a class called `OptimalGrowthModel` that we constructed in an {doc}`earlier lecture `. ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth_fast/ogm.py +:load: _static/lecture_specific/optgrowth_fast/ogm.py ``` Now we implement a method called `euler_diff`, which returns @@ -377,7 +377,7 @@ Here is a function called `solve_model_time_iter` that takes an instance of using time iteration. ```{code-cell} python3 -:file: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +:load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's call it: @@ -443,7 +443,7 @@ Compute and plot the optimal policy. We use the class `OptimalGrowthModel_CRRA` from our {doc}`VFI lecture `. 
```{code-cell} python3 -:file: _static/lecture_specific/optgrowth_fast/ogm_crra.py +:load: _static/lecture_specific/optgrowth_fast/ogm_crra.py ``` Let's create an instance: diff --git a/lectures/egm_policy_iter.md b/lectures/egm_policy_iter.md index 995d72256..5b6a4f1b1 100644 --- a/lectures/egm_policy_iter.md +++ b/lectures/egm_policy_iter.md @@ -161,13 +161,13 @@ where This will allow us to make comparisons with the analytical solutions ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth/cd_analytical.py +:load: _static/lecture_specific/optgrowth/cd_analytical.py ``` We reuse the `OptimalGrowthModel` class ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth_fast/ogm.py +:load: _static/lecture_specific/optgrowth_fast/ogm.py ``` ### The Operator @@ -219,7 +219,7 @@ grid = og.grid Here's our solver routine: ```{code-cell} python3 -:file: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +:load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's call it: diff --git a/lectures/ifp.md b/lectures/ifp.md index 2045a55bc..0820b7beb 100644 --- a/lectures/ifp.md +++ b/lectures/ifp.md @@ -476,7 +476,7 @@ The following function iterates to convergence and returns the approximate optimal policy. 
```{code-cell} python3 -:file: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py +:load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py ``` Let's carry this out using the default parameters of the `IFP` class: @@ -520,7 +520,7 @@ We know that, in this case, the value function and optimal consumption policy are given by ```{code-cell} python3 -:file: _static/lecture_specific/cake_eating_numerical/analytical.py +:load: _static/lecture_specific/cake_eating_numerical/analytical.py ``` Let's see if we match up: diff --git a/lectures/markov_perf.md b/lectures/markov_perf.md index ff68eec11..fc4a920e5 100644 --- a/lectures/markov_perf.md +++ b/lectures/markov_perf.md @@ -432,7 +432,7 @@ Consider the previously presented duopoly model with parameter values of: From these, we compute the infinite horizon MPE using the preceding code ```{code-cell} python3 -:file: _static/lecture_specific/markov_perf/duopoly_mpe.py +:load: _static/lecture_specific/markov_perf/duopoly_mpe.py ``` Running the code produces the following output. diff --git a/lectures/optgrowth.md b/lectures/optgrowth.md index c373453ad..c1ed2e4a5 100644 --- a/lectures/optgrowth.md +++ b/lectures/optgrowth.md @@ -625,7 +625,7 @@ whether our code works for this particular case. In Python, the functions above can be expressed as: ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth/cd_analytical.py +:load: _static/lecture_specific/optgrowth/cd_analytical.py ``` Next let's create an instance of the model with the above primitives and assign it to the variable `og`. @@ -702,7 +702,7 @@ We can write a function that iterates until the difference is below a particular tolerance level. ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth/solve_model.py +:load: _static/lecture_specific/optgrowth/solve_model.py ``` Let's use this function to compute an approximate solution at the defaults. 
diff --git a/lectures/optgrowth_fast.md b/lectures/optgrowth_fast.md index 6e50c0cde..eeea025b4 100644 --- a/lectures/optgrowth_fast.md +++ b/lectures/optgrowth_fast.md @@ -104,7 +104,7 @@ In particular, the algorithm is unchanged, and the only difference is in the imp As before, we will be able to compare with the true solutions ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth/cd_analytical.py +:load: _static/lecture_specific/optgrowth/cd_analytical.py ``` ## Computation @@ -127,7 +127,7 @@ class. This is where we sacrifice flexibility in order to gain more speed. ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth_fast/ogm.py +:load: _static/lecture_specific/optgrowth_fast/ogm.py ``` The class includes some methods such as `u_prime` that we do not need now @@ -189,7 +189,7 @@ def T(v, og): We use the `solve_model` function to perform iteration until convergence. ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth/solve_model.py +:load: _static/lecture_specific/optgrowth/solve_model.py ``` Let's compute the approximate solution at the default parameters. 
@@ -321,7 +321,7 @@ value function iteration, the JIT-compiled code is usually an order of magnitude Here's our CRRA version of `OptimalGrowthModel`: ```{code-cell} python3 -:file: _static/lecture_specific/optgrowth_fast/ogm_crra.py +:load: _static/lecture_specific/optgrowth_fast/ogm_crra.py ``` Let's create an instance: From 60023d9f9a3adfcbd9efbe76279a870a33ba0b43 Mon Sep 17 00:00:00 2001 From: mmcky Date: Wed, 24 Feb 2021 19:59:17 +1100 Subject: [PATCH 15/15] remove install of myst_nb from branch and include latest jupyter-book master install --- environment.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/environment.yml b/environment.yml index 82649461d..240932850 100644 --- a/environment.yml +++ b/environment.yml @@ -6,8 +6,7 @@ dependencies: - anaconda=2020.11 - pip - pip: - - jupyter-book - - git+https://github.com/executablebooks/MyST-NB.git@code-from-file + - git+https://github.com/executablebooks/jupyter-book - sphinxext-rediraffe - sphinx-multitoc-numbering - quantecon-book-theme