4 changes: 2 additions & 2 deletions lectures/ar1_processes.md
@@ -109,7 +109,7 @@ series $\{ X_t\}$.

To see this, we first note that $X_t$ is normally distributed for each $t$.

This is immediate form {eq}`ar1_ma`, since linear combinations of independent
This is immediate from {eq}`ar1_ma`, since linear combinations of independent
normal random variables are normal.

Given that $X_t$ is normally distributed, we will know the full distribution
@@ -212,7 +212,7 @@ In fact it's easy to show that such convergence will occur, regardless of the in
To see this, we just have to look at the dynamics of the first two moments, as
given in {eq}`dyn_tm`.

When $|a| < 1$, these sequence converge to the respective limits
When $|a| < 1$, these sequences converge to the respective limits

```{math}
:label: mu_sig_star
8 changes: 4 additions & 4 deletions lectures/cake_eating_numerical.md
@@ -100,7 +100,7 @@ Let's write this a bit more mathematically.
### The Bellman Operator

We introduce the **Bellman operator** $T$ that takes a function v as an
argument and returns a new function $Tv$ defined by.
argument and returns a new function $Tv$ defined by

$$
Tv(x) = \max_{0 \leq c \leq x} \{u(c) + \beta v(x - c)\}
@@ -118,7 +118,7 @@ v$ converges to the solution to the Bellman equation.

### Fitted Value Function Iteration

Both consumption $c$ and the state variable $x$ are continous.
Both consumption $c$ and the state variable $x$ are continuous.

This causes complications when it comes to numerical work.
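As an illustration of the operator just defined, here is a minimal sketch of fitted value function iteration; the CRRA utility, the grid, and the use of `scipy.optimize.minimize_scalar` are assumptions for the sketch rather than the lecture's own implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def T(v, x_grid, β=0.96, γ=1.5):
    """One application of the Bellman operator, using linear interpolation
    to evaluate v off the grid (fitted value function iteration)."""
    u = lambda c: c**(1 - γ) / (1 - γ)              # assumed CRRA utility
    v_func = lambda x: np.interp(x, x_grid, v)      # fitted value function
    Tv = np.empty_like(v)
    for i, x in enumerate(x_grid):
        # maximize u(c) + β v(x - c) over 0 <= c <= x
        obj = lambda c: -(u(c) + β * v_func(x - c))
        res = minimize_scalar(obj, bounds=(1e-10, x), method='bounded')
        Tv[i] = -res.fun
    return Tv

x_grid = np.linspace(1e-3, 2.5, 120)   # strictly positive grid of cake sizes
v = np.zeros_like(x_grid)
for _ in range(50):                    # crude iteration toward the fixed point
    v = T(v, x_grid)
```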

@@ -419,7 +419,7 @@ ax.legend()
plt.show()
```

The fit is reasoable but not perfect.
The fit is reasonable but not perfect.

We can improve it by increasing the grid size or reducing the
error tolerance in the value function iteration routine.
@@ -509,7 +509,7 @@ modification in the exercise above).

### Exercise 1

We need to create a class to hold our primitives and return the right hand side of the bellman equation.
We need to create a class to hold our primitives and return the right hand side of the Bellman equation.

We will use [inheritance](https://en.wikipedia.org/wiki/Inheritance_%28object-oriented_programming%29) to maximize code reuse.

6 changes: 3 additions & 3 deletions lectures/cass_koopmans_1.md
@@ -26,14 +26,14 @@ kernelspec:

## Overview

This lecture and in {doc}`Cass-Koopmans Competitive Equilibrium <cass_koopmans_2>` describe a model that Tjalling Koopmans {cite}`Koopmans`
This lecture and lecture {doc}`Cass-Koopmans Competitive Equilibrium <cass_koopmans_2>` describe a model that Tjalling Koopmans {cite}`Koopmans`
and David Cass {cite}`Cass` used to analyze optimal growth.

The model can be viewed as an extension of the model of Robert Solow
described in [an earlier lecture](https://lectures.quantecon.org/py/python_oop.html)
but adapted to make the saving rate the outcome of an optimal choice.

(Solow assumed a constant saving rate determined outside the model).
(Solow assumed a constant saving rate determined outside the model.)

We describe two versions of the model, one in this lecture and the other in {doc}`Cass-Koopmans Competitive Equilibrium <cass_koopmans_2>`.

@@ -696,7 +696,7 @@ its steady state value most of the time.
plot_paths(pp, 0.3, k_ss/3, [250, 150, 50, 25], k_ss=k_ss);
```

Different colors in the above graphs are associated
Different colors in the above graphs are associated with
different horizons $T$.

Notice that as the horizon increases, the planner puts $K_t$
4 changes: 2 additions & 2 deletions lectures/cass_koopmans_2.md
@@ -397,7 +397,7 @@ verify** approach.
In this lecture {doc}`Cass-Koopmans Planning Model <cass_koopmans_1>`, we computed an allocation $\{\vec{C}, \vec{K}, \vec{N}\}$
that solves the planning problem.

(This allocation will constitute the **Big** $K$ to be in the presence instance of the *Big** $K$ **, little** $k$ trick
(This allocation will constitute the **Big** $K$ to be in the present instance of the *Big** $K$ **, little** $k$ trick
that we'll apply to a competitive equilibrium in the spirit of [this lecture](https://lectures.quantecon.org/py/rational_expectations.html#)
and [this lecture](https://lectures.quantecon.org/py/dyn_stack.html#).)

@@ -597,7 +597,7 @@ representative household living in a competitive equilibrium.
We now turn to the problem faced by a firm in a competitive
equilibrium:

If we plug in {eq}`eq-pl` into {eq}`Zero-profits` for all t, we
If we plug {eq}`eq-pl` into {eq}`Zero-profits` for all t, we
get

$$
6 changes: 3 additions & 3 deletions lectures/finite_markov.md
@@ -603,7 +603,7 @@ We'll come back to this a bit later.

### Aperiodicity

Loosely speaking, a Markov chain is called periodic if it cycles in a predictible way, and aperiodic otherwise.
Loosely speaking, a Markov chain is called periodic if it cycles in a predictable way, and aperiodic otherwise.

Here's a trivial example with three states
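A minimal sketch of such a periodic chain, assuming the `quantecon` `MarkovChain` API (illustrative only, not necessarily the lecture's own cell):

```python
import quantecon as qe

P = [[0, 1, 0],   # state 0 always moves to state 1
     [0, 0, 1],   # state 1 always moves to state 2
     [1, 0, 0]]   # state 2 always returns to state 0

mc = qe.MarkovChain(P)
print(mc.period)        # 3: the chain cycles deterministically
print(mc.is_aperiodic)  # False
```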

@@ -771,7 +771,7 @@ with the unit eigenvalue $\lambda = 1$.

A more stable and sophisticated algorithm is implemented in [QuantEcon.py](http://quantecon.org/quantecon-py).

This is the one we recommend you use:
This is the one we recommend you to use:

```{code-cell} python3
P = [[0.4, 0.6],
@@ -1023,7 +1023,7 @@ A topic of interest for economics and many other disciplines is *ranking*.
Let's now consider one of the most practical and important ranking problems
--- the rank assigned to web pages by search engines.

(Although the problem is motivated from outside of economics, there is in fact a deep connection between search ranking systems and prices in certain competitive equilibria --- see {cite}`DLP2013`)
(Although the problem is motivated from outside of economics, there is in fact a deep connection between search ranking systems and prices in certain competitive equilibria --- see {cite}`DLP2013`.)

To understand the issue, consider the set of results returned by a query to a web search engine.

4 changes: 2 additions & 2 deletions lectures/heavy_tails.md
@@ -186,7 +186,7 @@ where $\mu := \mathbb E X_i = \int x F(x)$ is the common mean of the sample.
The condition $\mathbb E | X_i | = \int |x| F(x) < \infty$ holds
in most cases but can fail if the distribution $F$ is very heavy tailed.

For example, it fails for the Cauchy distribution
For example, it fails for the Cauchy distribution.

Let's have a look at the behavior of the sample mean in this case, and see
whether or not the LLN is still valid.
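As a quick illustration (a sketch using NumPy only, not the lecture's code), the running sample mean of Cauchy draws never settles down the way the LLN would suggest:

```python
import numpy as np

np.random.seed(1234)
data = np.random.standard_cauchy(1_000_000)              # heavy-tailed draws
running_mean = np.cumsum(data) / np.arange(1, data.size + 1)

# print the sample mean at increasing sample sizes -- it keeps jumping around
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, running_mean[n - 1])
```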
@@ -590,7 +590,7 @@ $$
2^{1/\alpha} = \exp(\mu)
$$

which we solve for $\mu$ and $\sigma$ given $\alpha = 1.05$
which we solve for $\mu$ and $\sigma$ given $\alpha = 1.05$.
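(Taking logs of the displayed condition pins down the location parameter directly: $\mu = \ln(2)/\alpha \approx 0.66$ at $\alpha = 1.05$.)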

Here is code that generates the two samples, produces the violin plot and
prints the mean and standard deviation of the two samples.
9 changes: 4 additions & 5 deletions lectures/ifp.md
@@ -48,7 +48,7 @@ model <optgrowth>` and yet differs in important ways.

For example, the choice problem for the agent includes an additive income term that leads to an occasionally binding constraint.

Moreover, in this and the following lectures, we will inject more realisitic
Moreover, in this and the following lectures, we will inject more realistic
features such as correlated shocks.

To solve the model we will use Euler equation based time iteration, which proved
@@ -194,7 +194,7 @@ strict inequality $u' (c_t) > \beta R \, \mathbb{E}_t u'(c_{t+1})$
can occur because $c_t$ cannot increase sufficiently to attain equality.

(The lower boundary case $c_t = 0$ never arises at the optimum because
$u'(0) = \infty$)
$u'(0) = \infty$.)

With some thought, one can show that {eq}`ee00` and {eq}`ee01` are
equivalent to
@@ -409,8 +409,7 @@ Next we provide a function to compute the difference
```{math}
:label: euler_diff_eq

u'(c)
- \max \left\{
u'(c) - \max \left\{
\beta R \, \mathbb E_z (u' \circ \sigma) \,
[R (a - c) + \hat Y, \, \hat Z]
\, , \;
@@ -629,7 +628,7 @@ shocks.
Your task is to investigate how this measure of aggregate capital varies with
the interest rate.

Following tradition, put the price (i.e., interest rate) is on the vertical axis.
Following tradition, put the price (i.e., interest rate) on the vertical axis.

On the horizontal axis put aggregate capital, computed as the mean of the
stationary distribution given the interest rate.
6 changes: 3 additions & 3 deletions lectures/ifp_advanced.md
@@ -250,7 +250,7 @@ It can be shown that

We now have a clear path to successfully approximating the optimal policy:
choose some $\sigma \in \mathscr C$ and then iterate with $K$ until
convergence (as measured by the distance $\rho$)
convergence (as measured by the distance $\rho$).

### Using an Endogenous Grid

@@ -325,7 +325,7 @@ $$
L(z, \hat z) := P(z, \hat z) \int R(\hat z, x) \phi(x) dx
$$

This indentity is proved in {cite}`ma2020income`, where $\phi$ is the
This identity is proved in {cite}`ma2020income`, where $\phi$ is the
density of the innovation $\zeta_t$ to returns on assets.

(Remember that $\mathsf Z$ is a finite set, so this expression defines a matrix.)
@@ -618,7 +618,7 @@ For example, we will pass in the solutions `a_star, σ_star` along with
`ifp`, even though it would be more natural to just pass in `ifp` and then
solve inside the function.

The reason we do this is because `solve_model_time_iter` is not
The reason we do this is that `solve_model_time_iter` is not
JIT-compiled.

```{code-cell} python3
4 changes: 2 additions & 2 deletions lectures/inventory_dynamics.md
@@ -34,7 +34,7 @@ follow so-called s-S inventory dynamics.
Such firms

1. wait until inventory falls below some level $s$ and then
1. order sufficent quantities to bring their inventory back up to capacity $S$.
1. order sufficient quantities to bring their inventory back up to capacity $S$.

These kinds of policies are common in practice and also optimal in certain circumstances.
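A sketch of one period of such dynamics (the exponential demand law and the numbers here are assumptions for illustration, not the specification used later in the lecture):

```python
import numpy as np

def update_inventory(x, d, s=10, S=100):
    """One period of s-S dynamics: if inventory x has fallen to s or below,
    restock to capacity S before demand d arrives; sales never exceed stock."""
    if x <= s:
        return max(S - d, 0.0)
    return max(x - d, 0.0)

rng = np.random.default_rng(42)
x, path = 50.0, []
for _ in range(200):
    x = update_inventory(x, d=rng.exponential(scale=5.0))  # assumed demand law
    path.append(x)
```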

Expand Down Expand Up @@ -176,7 +176,7 @@ fixed $T$.
We will do this by generating many draws of $X_T$ given initial
condition $X_0$.

With these draws of $X_T$ we can build up a picture of its distribution $\psi_T$
With these draws of $X_T$ we can build up a picture of its distribution $\psi_T$.

Here's one visualization, with $T=50$.

2 changes: 1 addition & 1 deletion lectures/jv.md
@@ -223,7 +223,7 @@ class JVWorker:
```

The function `operator_factory` takes an instance of this class and returns a
jitted version of the Bellman operator `T`, ie.
jitted version of the Bellman operator `T`, i.e.

$$
Tv(x)
2 changes: 1 addition & 1 deletion lectures/kalman.md
@@ -499,7 +499,7 @@ Conditions under which a fixed point exists and the sequence $\{\Sigma_t\}$ conv

A sufficient (but not necessary) condition is that all the eigenvalues $\lambda_i$ of $A$ satisfy $|\lambda_i| < 1$ (cf. e.g., {cite}`AndersonMoore2005`, p. 77).

(This strong condition assures that the unconditional distribution of $x_t$ converges as $t \rightarrow + \infty$)
(This strong condition assures that the unconditional distribution of $x_t$ converges as $t \rightarrow + \infty$.)

In this case, for any initial choice of $\Sigma_0$ that is both non-negative and symmetric, the sequence $\{\Sigma_t\}$ in {eq}`kalman_sdy` converges to a non-negative symmetric matrix $\Sigma$ that solves {eq}`kalman_dare`.

2 changes: 1 addition & 1 deletion lectures/kesten_processes.md
@@ -496,7 +496,7 @@ s_{t+1} = e_{t+1} \mathbb{1}\{s_t < \bar s\} +

Here

* the state variable $s_t$ is represents productivity (which is a proxy
* the state variable $s_t$ represents productivity (which is a proxy
for output and hence firm size),
* the IID sequence $\{ e_t \}$ is thought of as a productivity draw for a new
entrant and
4 changes: 2 additions & 2 deletions lectures/likelihood_ratio_process.md
@@ -254,7 +254,7 @@ But it would be too challenging for us to that here simply by applying a standa

The reason is that the distribution of $L\left(w^{t}\right)$ is extremely skewed for large values of $t$.

Because the probabilty density in the right tail is close to $0$, it just takes too much computer time to sample enough points from the right tail.
Because the probability density in the right tail is close to $0$, it just takes too much computer time to sample enough points from the right tail.

Instead, the following code just illustrates that the unconditional means of $l(w_t)$ are $1$.
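The claim itself is a short calculation: if, as in the lecture, $\ell(w) = f(w)/g(w)$ and the draws come from $g$, then

$$
\mathbb E_g [ \ell(w) ]
= \int \frac{f(w)}{g(w)} g(w) \, dw
= \int f(w) \, dw
= 1 .
$$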

@@ -498,7 +498,7 @@ Notice that as $t$ increases, we are assured a larger probability
of detection and a smaller probability of false alarm associated with
a given discrimination threshold $c$.

As $t \rightarrow + \infty$, we approach the the perfect detection
As $t \rightarrow + \infty$, we approach the perfect detection
curve that is indicated by a right angle hinging on the green dot.

For a given sample size $t$, a value discrimination threshold $c$ determines a point on the receiver operating
2 changes: 1 addition & 1 deletion lectures/linear_algebra.md
@@ -1290,7 +1290,7 @@
(Q + B'PB)u + B'PAx = 0
$$

which is the first-order condition for maximizing L w.r.t. u.
which is the first-order condition for maximizing $L$ w.r.t. $u$.

Thus, the optimal choice of u must satisfy

8 changes: 4 additions & 4 deletions lectures/lln_clt.md
@@ -385,7 +385,7 @@ To this end, we now perform the following simulation
Here's some code that does exactly this for the exponential distribution
$F(x) = 1 - e^{- \lambda x}$.

(Please experiment with other choices of $F$, but remember that, to conform with the conditions of the CLT, the distribution must have a finite second moment)
(Please experiment with other choices of $F$, but remember that, to conform with the conditions of the CLT, the distribution must have a finite second moment.)

(sim_one)=
```{code-cell} python3
@@ -437,7 +437,7 @@ random variable, the distribution of $Y_n$ will smooth out into a bell-shaped cu
The next figure shows this process for $X_i \sim f$, where $f$ was
specified as the convex combination of three different beta densities.

(Taking a convex combination is an easy way to produce an irregular shape for $f$)
(Taking a convex combination is an easy way to produce an irregular shape for $f$.)

In the figure, the closest density is that of $Y_1$, while the furthest is that of
$Y_5$
@@ -650,7 +650,7 @@ n \to \infty

This theorem is used frequently in statistics to obtain the asymptotic distribution of estimators --- many of which can be expressed as functions of sample means.

(These kinds of results are often said to use the "delta method")
(These kinds of results are often said to use the "delta method".)

The proof is based on a Taylor expansion of $g$ around the point $\mu$.
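In a little more detail (a standard sketch of that argument): writing

$$
g(\bar X_n) \approx g(\mu) + g'(\mu)(\bar X_n - \mu)
$$

and scaling by $\sqrt{n}$, the left side inherits the Gaussian limit of $\sqrt{n}(\bar X_n - \mu)$ multiplied by $g'(\mu)$, which is where the $g'(\mu)^2$ factor in the limiting variance comes from.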

@@ -741,7 +741,7 @@ n \| \mathbf Q ( \bar{\mathbf X}_n - \boldsymbol \mu ) \|^2
where $\chi^2(k)$ is the chi-squared distribution with $k$ degrees
of freedom.

(Recall that $k$ is the dimension of $\mathbf X_i$, the underlying random vectors)
(Recall that $k$ is the dimension of $\mathbf X_i$, the underlying random vectors.)

Your second exercise is to illustrate the convergence in {eq}`lln_ctc` with a simulation.

4 changes: 2 additions & 2 deletions lectures/mccall_fitted_vfi.md
@@ -320,7 +320,7 @@ The exercises ask you to explore the solution and how it changes with parameters
Use the code above to explore what happens to the reservation wage when the wage parameter $\mu$
changes.

Use the default parameters and $\mu$ in `mu_vals = np.linspace(0.0, 2.0, 15)`
Use the default parameters and $\mu$ in `mu_vals = np.linspace(0.0, 2.0, 15)`.

Is the impact on the reservation wage as you expected?

@@ -338,7 +338,7 @@ support.

Use `s_vals = np.linspace(1.0, 2.0, 15)` and `m = 2.0`.

State how you expect the reservation wage vary with $s$.
State how you expect the reservation wage to vary with $s$.

Now compute it. Is this as you expected?

4 changes: 2 additions & 2 deletions lectures/mccall_model.md
@@ -296,7 +296,7 @@ Step 4: if the deviation is larger than some fixed tolerance, set $v = v'$ and g

Step 5: return $v$.

Let $\{ v_k \}$ denote the sequence genererated by this algorithm.
Let $\{ v_k \}$ denote the sequence generated by this algorithm.

This sequence converges to the solution
to {eq}`odu_pv2` as $k \to \infty$, which is the value function $v^*$.
@@ -321,7 +321,7 @@ itself via
```

(A new vector $Tv$ is obtained from given vector $v$ by evaluating
the r.h.s. at each $i$)
the r.h.s. at each $i$.)

The element $v_k$ in the sequence $\{v_k\}$ of successive
approximations corresponds to $T^k v$.
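The successive-approximation loop described above can be sketched generically as follows (illustrative only; `T`, `v_init`, `tol` and `max_iter` are placeholders rather than names from the lecture):

```python
import numpy as np

def compute_fixed_point(T, v_init, tol=1e-6, max_iter=1_000):
    """Iterate v <- T(v) until successive iterates differ by less than tol."""
    v = v_init
    for i in range(max_iter):
        v_new = T(v)
        if np.max(np.abs(v_new - v)) < tol:   # deviation small enough: stop
            return v_new, i
        v = v_new                             # otherwise set v = v' and repeat
    return v, max_iter
```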
2 changes: 1 addition & 1 deletion lectures/mccall_model_with_separation.md
@@ -124,7 +124,7 @@ If he rejects, then he receives unemployment compensation $c$.

The process then repeats.

(Note: we do not allow for job search while employed---this topic is taken up in a {doc}`later lecture <jv>`)
(Note: we do not allow for job search while employed---this topic is taken up in a {doc}`later lecture <jv>`.)

## Solving the Model

2 changes: 1 addition & 1 deletion lectures/multi_hyper.md
@@ -77,7 +77,7 @@
To evaluate whether the selection procedure is **color blind** the administrator wants to study whether the particular realization of $X$ drawn can plausibly
be said to be a random draw from the probability distribution that is implied by the **color blind** hypothesis.

The appropriate probability distribution is the one described [here](https://en.wikipedia.org/wiki/Hypergeometric_distribution)
The appropriate probability distribution is the one described [here](https://en.wikipedia.org/wiki/Hypergeometric_distribution).

Let's now instantiate the administrator's problem, while continuing to use the colored balls metaphor.

Expand Down