diff --git a/lectures/_config.yml b/lectures/_config.yml index 2d94df9a0..e6f506ed9 100644 --- a/lectures/_config.yml +++ b/lectures/_config.yml @@ -32,7 +32,7 @@ latex: targetname: quantecon-python.tex sphinx: - extra_extensions: [sphinx_multitoc_numbering, sphinxext.rediraffe, sphinx_tojupyter, sphinxcontrib.youtube, sphinx.ext.todo] + extra_extensions: [sphinx_multitoc_numbering, sphinxext.rediraffe, sphinx_tojupyter, sphinxcontrib.youtube, sphinx.ext.todo, sphinx_exercise, sphinx_togglebutton] config: nb_render_priority: html: diff --git a/lectures/ar1_processes.md b/lectures/ar1_processes.md index bf4688beb..0ea5d695a 100644 --- a/lectures/ar1_processes.md +++ b/lectures/ar1_processes.md @@ -322,7 +322,8 @@ important concept for statistics and simulation. ## Exercises -### Exercise 1 +```{exercise} +:label: ar1p_ex1 Let $k$ be a natural number. @@ -355,8 +356,11 @@ $$ when $m$ is large. Confirm this by simulation at a range of $k$ using the default parameters from the lecture. +``` + -### Exercise 2 +```{exercise} +:label: ar1p_ex2 Write your own version of a one dimensional [kernel density estimator](https://en.wikipedia.org/wiki/Kernel_density_estimation), @@ -398,8 +402,11 @@ Use $n=500$. Make a comment on your results. (Do you think this is a good estimator of these distributions?) +``` + -### Exercise 3 +```{exercise} +:label: ar1p_ex3 In the lecture we discussed the following fact: for the $AR(1)$ process @@ -438,10 +445,14 @@ color) as follows: Try this for $n=2000$ and confirm that the simulation based estimate of $\psi_{t+1}$ does converge to the theoretical distribution. 
+``` + ## Solutions -### Exercise 1 +```{solution-start} ar1p_ex1 +:class: dropdown +``` ```{code-cell} python3 from numba import njit @@ -479,7 +490,13 @@ ax.legend() plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} ar1p_ex2 +:class: dropdown +``` Here is one solution: @@ -532,7 +549,13 @@ for α, β in parameter_pairs: We see that the kernel density estimator is effective when the underlying distribution is smooth but less so otherwise. -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} ar1p_ex3 +:class: dropdown +``` Here is our solution @@ -579,3 +602,5 @@ plt.show() The simulated distribution approximately coincides with the theoretical distribution, as predicted. +```{solution-end} +``` diff --git a/lectures/cake_eating_numerical.md b/lectures/cake_eating_numerical.md index 5bac89672..c98227e7b 100644 --- a/lectures/cake_eating_numerical.md +++ b/lectures/cake_eating_numerical.md @@ -482,7 +482,8 @@ This is due to ## Exercises -### Exercise 1 +```{exercise} +:label: cen_ex1 Try the following modification of the problem. @@ -500,15 +501,22 @@ where $\alpha$ is a parameter satisfying $0 < \alpha < 1$. Make the required changes to value function iteration code and plot the value and policy functions. Try to reuse as much code as possible. +``` + -### Exercise 2 +```{exercise} +:label: cen_ex2 Implement time iteration, returning to the original case (i.e., dropping the modification in the exercise above). +``` + ## Solutions -### Exercise 1 +```{solution-start} cen_ex1 +:class: dropdown +``` We need to create a class to hold our primitives and return the right hand side of the Bellman equation. @@ -582,7 +590,13 @@ plt.show() Consumption is higher when $\alpha < 1$ because, at least for large $x$, the return to savings is lower. -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} cen_ex2 +:class: dropdown +``` Here's one way to implement time iteration. 
@@ -670,3 +684,5 @@ ax.legend(fontsize=12) plt.show() ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/cake_eating_problem.md b/lectures/cake_eating_problem.md index 6004c1781..2ad821eb4 100644 --- a/lectures/cake_eating_problem.md +++ b/lectures/cake_eating_problem.md @@ -506,7 +506,8 @@ Combining this fact with {eq}`bellman_envelope` recovers the Euler equation. ## Exercises -### Exercise 1 +```{exercise} +:label: cep_ex1 How does one obtain the expressions for the value function and optimal policy given in {eq}`crra_vstar` and {eq}`crra_opt_pol` respectively? @@ -523,10 +524,12 @@ Starting from this conjecture, try to obtain the solutions {eq}`crra_vstar` and In doing so, you will need to use the definition of the value function and the Bellman equation. +``` ## Solutions -### Exercise 1 +```{solution} cep_ex1 +:class: dropdown We start with the conjecture $c_t^*=\theta x_t$, which leads to a path for the state variable (cake size) given by @@ -611,4 +614,4 @@ v^*(x_t) = \left(1-\beta^\frac{1}{\gamma}\right)^{-\gamma}u(x_t) $$ Our claims are now verified. - +``` diff --git a/lectures/career.md b/lectures/career.md index 0a61521c9..3cc0ee7b3 100644 --- a/lectures/career.md +++ b/lectures/career.md @@ -362,8 +362,9 @@ the worker cannot change careers without changing jobs. ## Exercises -(career_ex1)= -### Exercise 1 +```{exercise-start} +:label: career_ex1 +``` Using the default parameterization in the class `CareerWorkerProblem`, generate and plot typical sample paths for $\theta$ and $\epsilon$ @@ -372,13 +373,16 @@ when the worker follows the optimal policy. In particular, modulo randomness, reproduce the following figure (where the horizontal axis represents time) ```{figure} /_static/lecture_specific/career/career_solutions_ex1_py.png - ``` Hint: To generate the draws from the distributions $F$ and $G$, use `quantecon.random.draw()`. 
-(career_ex2)= -### Exercise 2 +```{exercise-end} +``` + + +```{exercise} +:label: career_ex2 Let's now consider how long it takes for the worker to settle down to a permanent job, given a starting point of $(\theta, \epsilon) = (0, 0)$. @@ -402,16 +406,21 @@ $$ Collect 25,000 draws of this random variable and compute the median (which should be about 7). Repeat the exercise with $\beta=0.99$ and interpret the change. +``` + -(career_ex3)= -### Exercise 3 +```{exercise} +:label: career_ex3 Set the parameterization to `G_a = G_b = 100` and generate a new optimal policy figure -- interpret. +``` ## Solutions -### Exercise 1 +```{solution-start} career_ex1 +:class: dropdown +``` Simulate job/career paths. @@ -455,7 +464,13 @@ plt.legend() plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} career_ex2 +:class: dropdown +``` The median for the original parameterization can be computed as follows @@ -498,7 +513,13 @@ The medians are subject to randomness but should be about 7 and 14 respectively. Not surprisingly, more patient workers will wait longer to settle down to their final job. -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} career_ex3 +:class: dropdown +``` ```{code-cell} python3 cw = CareerWorkerProblem(G_a=100, G_b=100) @@ -522,3 +543,6 @@ In the new figure, you see that the region for which the worker stays put has grown because the distribution for $\epsilon$ has become more concentrated around the mean, making high-paying jobs less realistic. + +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/cass_koopmans_1.md b/lectures/cass_koopmans_1.md index fa9f38851..a00036a2d 100644 --- a/lectures/cass_koopmans_1.md +++ b/lectures/cass_koopmans_1.md @@ -866,17 +866,28 @@ state in which $f'(K)=\rho +\delta$. 
### Exercise +```{exercise} +:label: ck1_ex1 + - Plot the optimal consumption, capital, and saving paths when the initial capital level begins at 1.5 times the steady state level as we shoot towards the steady state at $T=130$. - Why does the saving rate respond as it does? +``` ### Solution +```{solution-start} ck1_ex1 +:class: dropdown +``` + ```{code-cell} python3 plot_saving_rate(pp, 0.3, k_ss*1.5, [130], k_ter=k_ss, k_ss=k_ss, s_ss=s_ss) ``` +```{solution-end} +``` + ## Concluding Remarks In {doc}`Cass-Koopmans Competitive Equilibrium `, we study a decentralized version of an economy with exactly the same diff --git a/lectures/coleman_policy_iter.md b/lectures/coleman_policy_iter.md index 152044bec..76e0b77b9 100644 --- a/lectures/coleman_policy_iter.md +++ b/lectures/coleman_policy_iter.md @@ -424,7 +424,8 @@ and accuracy, at least for this model. ## Exercises -### Exercise 1 +```{exercise} +:label: cpi_ex1 Solve the model with CRRA utility @@ -435,10 +436,13 @@ $$ Set `γ = 1.5`. Compute and plot the optimal policy. +``` ## Solutions -### Exercise 1 +```{solution-start} cpi_ex1 +:class: dropdown +``` We use the class `OptimalGrowthModel_CRRA` from our {doc}`VFI lecture `. @@ -468,3 +472,5 @@ ax.legend() plt.show() ``` +```{solution-end} +``` diff --git a/lectures/finite_markov.md b/lectures/finite_markov.md index 272cfc183..b1f94fb88 100644 --- a/lectures/finite_markov.md +++ b/lectures/finite_markov.md @@ -997,8 +997,8 @@ Premultiplication by $(I - \beta P)^{-1}$ amounts to "applying the **resolvent o ## Exercises -(mc_ex1)= -### Exercise 1 +```{exercise} +:label: fm_ex1 According to the discussion {ref}`above `, if a worker's employment dynamics obey the stochastic matrix @@ -1033,9 +1033,12 @@ it is close to $p$. You will see that this statement is true regardless of the choice of initial condition or the values of $\alpha, \beta$, provided both lie in $(0, 1)$. 
+``` + -(mc_ex2)= -### Exercise 2 +```{exercise-start} +:label: fm_ex2 +``` A topic of interest for economics and many other disciplines is *ranking*. @@ -1059,7 +1062,6 @@ is known as [PageRank](https://en.wikipedia.org/wiki/PageRank). To illustrate the idea, consider the following diagram ```{figure} /_static/lecture_specific/finite_markov/web_graph.png - ``` Imagine that this is a miniature version of the WWW, with @@ -1197,8 +1199,12 @@ re.findall('\w', 'a ^^ b &&& $$ c') When you solve for the ranking, you will find that the highest ranked node is in fact `g`, while the lowest is `a`. -(mc_ex3)= -### Exercise 3 +```{exercise-end} +``` + + +```{exercise} +:label: fm_ex3 In numerical work, it is sometimes convenient to replace a continuous model with a discrete one. @@ -1260,10 +1266,13 @@ $\{x_0, \ldots, x_{n-1}\} \subset \mathbb R$ and $n \times n$ matrix $P$ as described above. * Even better, write a function that returns an instance of [QuantEcon.py's](http://quantecon.org/quantecon-py) MarkovChain class. +``` ## Solutions -### Exercise 1 +```{solution-start} fm_ex1 +:class: dropdown +``` We will address this exercise graphically. @@ -1301,7 +1310,13 @@ ax.legend(loc='upper right') plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} fm_ex2 +:class: dropdown +``` ```{code-cell} python3 """ @@ -1340,10 +1355,16 @@ for name, rank in sorted(ranked_pages.items(), key=itemgetter(1), reverse=1): print(f'{name}: {rank:.4}') ``` -### Exercise 3 +```{solution-end} +``` + + +```{solution} fm_ex3 +:class: dropdown A solution from the [QuantEcon.py](http://quantecon.org/quantecon-py) library can be found [here](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/markov/approximation.py). -[^pm]: Hint: First show that if $P$ and $Q$ are stochastic matrices then so is their product --- to check the row sums, try post multiplying by a column vector of ones. Finally, argue that $P^n$ is a stochastic matrix using induction. 
+``` +[^pm]: Hint: First show that if $P$ and $Q$ are stochastic matrices then so is their product --- to check the row sums, try post multiplying by a column vector of ones. Finally, argue that $P^n$ is a stochastic matrix using induction. diff --git a/lectures/harrison_kreps.md b/lectures/harrison_kreps.md index e90bbeb7a..4b124128d 100644 --- a/lectures/harrison_kreps.md +++ b/lectures/harrison_kreps.md @@ -516,7 +516,9 @@ He emphasizes how limiting short sales and limiting leverage have opposite effec ## Exercises -### Exercise 1 +```{exercise-start} +:label: hk_ex1 +``` This exercise invites you to recreate the summary table using the functions we have built above. @@ -563,9 +565,14 @@ $$ We'll use these transition matrices when we present our solution of exercise 1 below. +```{exercise-end} +``` + ## Solutions -### Exercise 1 +```{solution-start} hk_ex1 +:class: dropdown +``` First, we will obtain equilibrium price vectors with homogeneous beliefs, including when all investors are optimistic or pessimistic. @@ -611,5 +618,7 @@ for p, label in zip(opt_beliefs, labels): Notice that the equilibrium price with heterogeneous beliefs is equal to the price under single beliefs with **permanently optimistic** investors - this is due to the marginal investor in the heterogeneous beliefs equilibrium always being the type who is temporarily optimistic. -[^f1]: By assuming that both types of agents always have "deep enough pockets" to purchase all of the asset, the model takes wealth dynamics off the table. The Harrison-Kreps model generates high trading volume when the state changes either from 0 to 1 or from 1 to 0. +```{solution-end} +``` +[^f1]: By assuming that both types of agents always have "deep enough pockets" to purchase all of the asset, the model takes wealth dynamics off the table. The Harrison-Kreps model generates high trading volume when the state changes either from 0 to 1 or from 1 to 0. 
\ No newline at end of file diff --git a/lectures/heavy_tails.md b/lectures/heavy_tails.md index 345135054..4eb6a447a 100644 --- a/lectures/heavy_tails.md +++ b/lectures/heavy_tails.md @@ -365,18 +365,25 @@ You are asked to reproduce this figure in the exercises. ## Exercises -### Exercise 1 +```{exercise} +:label: ht_ex1 Replicate {ref}`the figure presented above ` that compares normal and Cauchy draws. Use `np.random.seed(11)` to set the seed. +``` + -### Exercise 2 +```{exercise} +:label: ht_ex2 Prove: If $X$ has a Pareto tail with tail index $\alpha$, then $\mathbb E[X^r] = \infty$ for all $r \geq \alpha$. +``` -### Exercise 3 + +```{exercise} +:label: ht_ex3 Repeat exercise 1, but replace the three distributions (two normal, one Cauchy) with three Pareto distributions using different choices of @@ -385,16 +392,21 @@ $\alpha$. For $\alpha$, try 1.15, 1.5 and 1.75. Use `np.random.seed(11)` to set the seed. +``` + -### Exercise 4 +```{exercise} +:label: ht_ex4 Replicate the rank-size plot figure {ref}`presented above `. If you like you can use the function `qe.rank_size` from the `quantecon` library to generate the plots. Use `np.random.seed(13)` to set the seed. +``` -### Exercise 5 +```{exercise} +:label: ht_ex5 There is an ongoing argument about whether the firm size distribution should be modeled as a Pareto distribution or a lognormal distribution (see, e.g., @@ -443,10 +455,13 @@ What differences do you observe? (Note: a better approach to this problem would be to model firm dynamics and try to track individual firms given the current distribution. We will discuss firm dynamics in later lectures.) +``` ## Solutions -### Exercise 1 +```{solution-start} ht_ex1 +:class: dropdown +``` ```{code-cell} python3 n = 120 @@ -477,7 +492,12 @@ plt.subplots_adjust(hspace=0.25) plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution} ht_ex2 +:class: dropdown Let $X$ have a Pareto tail with tail index $\alpha$ and let $F$ be its cdf. 
@@ -501,8 +521,12 @@ $$ We know that $\int_{\bar x}^\infty x^{r-\alpha-1} \, dx = \infty$ whenever $r - \alpha - 1 \geq -1$. Since $r \geq \alpha$, we have $\mathbb E X^r = \infty$. +``` + -### Exercise 3 +```{solution-start} ht_ex3 +:class: dropdown +``` ```{code-cell} ipython3 from scipy.stats import pareto @@ -526,7 +550,13 @@ plt.subplots_adjust(hspace=0.4) plt.show() ``` -### Exercise 4 +```{solution-end} +``` + + +```{solution-start} ht_ex4 +:class: dropdown +``` First let's generate the data for the plots: @@ -564,7 +594,14 @@ fig.subplots_adjust(hspace=0.4) plt.show() ``` -### Exercise 5 +```{solution-end} +``` + + + +```{solution-start} ht_ex5 +:class: dropdown +``` To do the exercise, we need to choose the parameters $\mu$ and $\sigma$ of the lognormal distribution to match the mean and median @@ -672,3 +709,5 @@ tax_rev_lognorm.mean(), tax_rev_lognorm.std() Looking at the output of the code, our main conclusion is that the Pareto assumption leads to a lower mean and greater dispersion. +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/ifp.md b/lectures/ifp.md index 03cd0b039..5151bfd90 100644 --- a/lectures/ifp.md +++ b/lectures/ifp.md @@ -544,14 +544,15 @@ Success! ## Exercises -### Exercise 1 +```{exercise-start} +:label: ifp_ex1 +``` Let's consider how the interest rate affects consumption. Reproduce the following figure, which shows (approximately) optimal consumption policies for different interest rates ```{figure} /_static/lecture_specific/ifp/ifp_policies.png - ``` * Other than `r`, all parameters are at their default values. @@ -560,8 +561,13 @@ Reproduce the following figure, which shows (approximately) optimal consumption The figure shows that higher interest rates boost savings and hence suppress consumption. -(ifp_lrex)= -### Exercise 2 +```{exercise-end} +``` + + +```{exercise-start} +:label: ifp_ex2 +``` Now let's consider the long run asset levels held by households under the default parameters.
@@ -614,7 +620,12 @@ Your task is to generate such a histogram. z_0)$ will not matter. * You might find it helpful to use the `MarkovChain` class from `quantecon`. -### Exercise 3 +```{exercise-end} +``` + + +```{exercise} +:label: ifp_ex3 Following on from exercises 1 and 2, let's look at how savings and aggregate asset holdings vary with the interest rate @@ -634,10 +645,14 @@ Following tradition, put the price (i.e., interest rate) on the vertical axis. On the horizontal axis put aggregate capital, computed as the mean of the stationary distribution given the interest rate. +``` + ## Solutions -### Exercise 1 +```{solution-start} ifp_ex1 +:class: dropdown +``` Here's one solution: @@ -655,7 +670,13 @@ ax.legend() plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} ifp_ex2 +:class: dropdown +``` First we write a function to compute a long asset series. @@ -704,7 +725,13 @@ Here it is left skewed when in reality it has a long right tail. In a {doc}`subsequent lecture ` we will rectify this by adding more realistic features to the model. -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} ifp_ex3 +:class: dropdown +``` Here's one solution @@ -728,3 +755,5 @@ plt.show() As expected, aggregate savings increases with the interest rate. +```{solution-end} +``` diff --git a/lectures/ifp_advanced.md b/lectures/ifp_advanced.md index 903a3a0b4..3edb1bbd0 100644 --- a/lectures/ifp_advanced.md +++ b/lectures/ifp_advanced.md @@ -591,9 +591,10 @@ diverge even in the highest state. ## Exercises -### Exercise 1 +```{exercise} +:label: ifpa_ex1 -Let's repeat our {ref}`earlier exercise ` on the long-run +Let's repeat our {ref}`earlier exercise ` on the long-run cross sectional distribution of assets. In that exercise, we used a relatively simple income fluctuation model. 
@@ -605,10 +606,13 @@ In particular, we failed to match the long right tail of the wealth distribution Your task is to try again, repeating the exercise, but now with our more sophisticated model. Use the default parameters. +``` ## Solutions -### Exercise 1 +```{solution-start} ifpa_ex1 +:class: dropdown +``` First we write a function to compute a long asset series. @@ -678,3 +682,5 @@ ax.set(xlabel='assets') plt.show() ``` +```{solution-end} +``` diff --git a/lectures/inventory_dynamics.md b/lectures/inventory_dynamics.md index 0118eb0d4..5473c33b4 100644 --- a/lectures/inventory_dynamics.md +++ b/lectures/inventory_dynamics.md @@ -278,7 +278,8 @@ histogram just above. ## Exercises -### Exercise 1 +```{exercise} +:label: id_ex1 This model is asymptotically stationary, with a unique stationary distribution. @@ -300,17 +301,23 @@ distribution.) You should see convergence, in the sense that differences between successive distributions are getting smaller. Try different initial conditions to verify that, in the long run, the distribution is invariant across initial conditions. +``` + -### Exercise 2 +```{exercise} +:label: id_ex2 Using simulation, calculate the probability that firms that start with $X_0 = 70$ need to order twice or more in the first 50 periods. You will need a large sample size to get an accurate reading. +``` ## Solutions -### Exercise 1 +```{solution-start} id_ex1 +:class: dropdown +``` Below is one possible solution: @@ -377,7 +384,12 @@ testing a few of them. For example, try rerunning the code above with all firms starting at $X_0 = 20$ or $X_0 = 80$. -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} id_ex2 +``` Here is one solution. @@ -426,3 +438,5 @@ Depending on your system, the difference can be substantial. (On our desktop machine, the speed up is by a factor of 5.)
+```{solution-end} +``` \ No newline at end of file diff --git a/lectures/jv.md b/lectures/jv.md index 1f75bf578..2a309dafc 100644 --- a/lectures/jv.md +++ b/lectures/jv.md @@ -413,8 +413,9 @@ Overall, the policies match well with our predictions from {ref}`above ` we see that $x_t \approx 1$, $s_t = s(x_t) \approx 0$ and $\phi_t = \phi(x_t) \approx 0.6$. -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} jv_ex2 +:class: dropdown +``` The figure can be produced as follows @@ -546,7 +560,7 @@ plt.show() Observe that the maximizer is around 0.6. This is similar to the long-run value for $\phi$ obtained in -exercise 1. +{ref}`jv_ex1`. Hence the behavior of the infinitely patient worker is similar to that of the worker with $\beta = 0.96$. @@ -554,3 +568,5 @@ of the worker with $\beta = 0.96$. This seems reasonable and helps us confirm that our dynamic programming solutions are probably correct. +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/kalman.md b/lectures/kalman.md index ccda4560c..782b190e5 100644 --- a/lectures/kalman.md +++ b/lectures/kalman.md @@ -544,8 +544,9 @@ You can view the program [on GitHub](https://github.com/QuantEcon/QuantEcon.py/b ## Exercises -(kalman_ex1)= -### Exercise 1 +```{exercise-start} +:label: kalman_ex1 +``` Consider the following simple application of the Kalman filter, loosely based on {cite}`Ljungqvist2012`, section 2.9.2. In the simulation, take $\theta = 10$, $\hat x_0 = 8$ and $\Sigma_0 = 1$. Your figure should -- modulo randomness -- look something like this ```{figure} /_static/lecture_specific/kalman/kl_ex1_fig.png +``` +```{exercise-end} ``` -(kalman_ex2)= -### Exercise 2 +```{exercise-start} +:label: kalman_ex2 +``` The preceding figure gives some support to the idea that probability mass converges to $\theta$. Plot $z_t$ against $T$, setting $\epsilon = 0.1$ and $T = 600$.
Your figure should show error erratically declining something like this ```{figure} /_static/lecture_specific/kalman/kl_ex2_fig.png +``` +```{exercise-end} ``` -(kalman_ex3)= -### Exercise 3 + +```{exercise-start} +:label: kalman_ex3 +``` As discussed {ref}`above `, if the shock sequence $\{w_t\}$ is not degenerate, then it is not in general possible to predict $x_t$ without error at time $t-1$ (and this would be the case even if we could observe $x_{t-1}$). @@ -650,23 +658,29 @@ Finally, set $x_0 = (0, 0)$. You should end up with a figure similar to the following (modulo randomness) ```{figure} /_static/lecture_specific/kalman/kalman_ex3.png - ``` Observe how, after an initial learning period, the Kalman filter performs quite well, even relative to the competitor who predicts optimally with knowledge of the latent state. -(kalman_ex4)= -### Exercise 4 +```{exercise-end} +``` + + +```{exercise} +:label: kalman_ex4 Try varying the coefficient $0.3$ in $Q = 0.3 I$ up and down. Observe how the diagonal values in the stationary solution $\Sigma$ (see {eq}`kalman_dare`) increase and decrease in line with this coefficient. The interpretation is that more randomness in the law of motion for $x_t$ causes more (permanent) uncertainty in prediction. +``` ## Solutions -### Exercise 1 +```{solution-start} kalman_ex1 +:class: dropdown +``` ```{code-cell} python3 # Parameters @@ -699,7 +713,13 @@ ax.legend(loc='upper left') plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} kalman_ex2 +:class: dropdown +``` ```{code-cell} python3 ϵ = 0.1 @@ -733,7 +753,13 @@ ax.fill_between(range(T), np.zeros(T), z, color="blue", alpha=0.2) plt.show() ``` -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} kalman_ex3 +:class: dropdown +``` ```{code-cell} python3 # Define A, C, G, H @@ -786,5 +812,8 @@ ax.legend() plt.show() ``` +```{solution-end} +``` + [^f1]: See, for example, page 93 of {cite}`Bishop2006`. 
To get from his expressions to the ones used above, you will also need to apply the [Woodbury matrix identity](https://en.wikipedia.org/wiki/Woodbury_matrix_identity). diff --git a/lectures/kesten_processes.md b/lectures/kesten_processes.md index c387aefe9..81d1332b2 100644 --- a/lectures/kesten_processes.md +++ b/lectures/kesten_processes.md @@ -431,7 +431,8 @@ quantitative analysis. ## Exercises -### Exercise 1 +```{exercise} +:label: kp_ex1 Simulate and plot 15 years of daily returns (consider each year as having 250 working days) using the GARCH(1, 1) process in {eq}`garch11v`--{eq}`garch11r`. @@ -443,16 +444,20 @@ Set $\alpha_0 = 0.00001, \alpha_1 = 0.1, \beta = 0.9$ and $\sigma_0 = 0$. Compare visually with the Nasdaq Composite Index returns {ref}`shown above `. While the time path differs, you should see bursts of high volatility. +``` -### Exercise 2 +```{exercise} +:label: kp_ex2 In our discussion of firm dynamics, it was claimed that {eq}`firm_dynam` is more consistent with the empirical literature than Gibrat's law in {eq}`firm_dynam_gb`. (The empirical literature was reviewed immediately above {eq}`firm_dynam`.) In what sense is this true (or false)? +``` -### Exercise 3 +```{exercise} +:label: kp_ex3 Consider an arbitrary Kesten process as given in {eq}`kesproc`. @@ -469,8 +474,12 @@ only if $\mu < 0$. Obtain the value of $\alpha$ that makes the Kesten--Goldie conditions hold. +``` -### Exercise 4 + +```{exercise-start} +:label: kp_ex4 +``` One unrealistic aspect of the firm dynamics specified in {eq}`firm_dynam` is that it ignores entry and exit. 
@@ -549,9 +558,14 @@ M = 1_000_000 # number of firms s_init = 1.0 # initial condition for each firm ``` +```{exercise-end} +``` + ## Solutions -### Exercise 1 +```{solution-start} kp_ex1 +:class: dropdown +``` Here is one solution: @@ -582,7 +596,13 @@ ax.set(xlabel='time', ylabel='$\\sigma_t^2$') plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} kp_ex2 +:class: dropdown +``` The empirical findings are that @@ -618,7 +638,13 @@ Both of these decline with firm size $s$, consistent with the data. Moreover, the law of motion {eq}`firm_dynam_2` clearly approaches Gibrat's law {eq}`firm_dynam_gb` as $s_t$ gets large. -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} kp_ex3 +:class: dropdown +``` Since $a_t$ has a density it is nonarithmetic. @@ -643,7 +669,13 @@ $$ Solving for $\alpha$ gives $\alpha = -2\mu / \sigma^2$. -### Exercise 4 +```{solution-end} +``` + + +```{solution-start} kp_ex4 +:class: dropdown +``` Here's one solution. First we generate the observations: @@ -697,3 +729,5 @@ plt.show() The plot produces a straight line, consistent with a Pareto tail. +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/lake_model.md b/lectures/lake_model.md index a2fe3d887..85b29e8f8 100644 --- a/lectures/lake_model.md +++ b/lectures/lake_model.md @@ -896,7 +896,8 @@ The level that maximizes steady state welfare is approximately 62. ## Exercises -### Exercise 1 +```{exercise} +:label: lm_ex1 In the Lake Model, there is derived data such as $A$ which depends on primitives like $\alpha$ and $\lambda$. @@ -918,8 +919,11 @@ This is safer and means we don't need to create a fresh instance for every new p In this exercise, your task is to arrange the `LakeModel` class by using descriptors and decorators such as `@property`. (If you need to refresh your understanding of how these work, consult [this lecture](https://python-programming.quantecon.org/python_advanced_features.html).) 
+``` + -### Exercise 2 +```{exercise} +:label: lm_ex2 Consider an economy with an initial stock of workers $N_0 = 100$ at the steady state level of employment in the baseline parameterization @@ -942,8 +946,11 @@ How long does the economy take to converge to its new steady state? What is the new steady state level of employment? Note: it may be easier to use the class created in exercise 1 to help with changing variables. +``` + -### Exercise 3 +```{exercise} +:label: lm_ex3 Consider an economy with an initial stock of workers $N_0 = 100$ at the steady state level of employment in the baseline parameterization. @@ -955,10 +962,13 @@ Plot the transition dynamics of the unemployment and employment stocks for 50 pe Plot the transition dynamics for the rates. How long does the economy take to return to its original steady state? +``` ## Solutions -### Exercise 1 +```{solution-start} lm_ex1 +:class: dropdown +``` ```{code-cell} python3 class LakeModelModified: @@ -1102,7 +1112,13 @@ class LakeModelModified: x = self.A_hat @ x ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} lm_ex2 +:class: dropdown +``` We begin by constructing the class containing the default parameters and assigning the steady state values to `x0` @@ -1172,7 +1188,14 @@ plt.show() We see that it takes 20 periods for the economy to converge to its new steady state levels. -### Exercise 3 +```{solution-end} +``` + + + +```{solution-start} lm_ex3 +:class: dropdown +``` This next exercise has the economy experiencing a boom in entrances to the labor market and then later returning to the original levels. @@ -1258,3 +1281,5 @@ plt.tight_layout() plt.show() ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/linear_algebra.md b/lectures/linear_algebra.md index 0bd311baa..8619b2fca 100644 --- a/lectures/linear_algebra.md +++ b/lectures/linear_algebra.md @@ -1151,7 +1151,7 @@ Then 1. $\frac{\partial y'B z}{\partial y} = B z$ 1. 
$\frac{\partial y'B z}{\partial B} = y z'$ -Exercise 1 below asks you to apply these formulas. +{ref}`la_ex1` below asks you to apply these formulas. ### Further Reading @@ -1165,7 +1165,9 @@ is {cite}`Janich1994`. ## Exercises -### Exercise 1 +```{exercise-start} +:label: la_ex1 +``` Let $x$ be a given $n \times 1$ vector and consider the problem @@ -1209,9 +1211,14 @@ As we will see, in economic contexts Lagrange multipliers often are shadow price If we don't care about the Lagrange multipliers, we can substitute the constraint into the objective function, and then just maximize $-(Ax + Bu)'P (Ax + Bu) - u' Q u$ with respect to $u$. You can verify that this leads to the same maximizer. ``` +```{exercise-end} +``` + ## Solutions -### Solution to Exercise 1 +```{solution-start} la_ex1 +:class: dropdown +``` We have an optimization problem: @@ -1361,8 +1368,10 @@ Therefore, the solution to the optimization problem $v(x) = -x' \tilde{P}x$ follows the above result by denoting $\tilde{P} := A'PA - A'PB(Q + B'PB)^{-1}B'PA$ +```{solution-end} +``` + [^fn_mdt]: Although there is a specialized matrix data type defined in NumPy, it's more standard to work with ordinary NumPy arrays. See [this discussion](https://python-programming.quantecon.org/numpy.html#matrix-multiplication). [^cfn]: Suppose that $\|S \| < 1$. Take any nonzero vector $x$, and let $r := \|x\|$. We have $\| Sx \| = r \| S (x/r) \| \leq r \| S \| < r = \| x\|$. Hence every point is pulled towards the origin. - diff --git a/lectures/linear_models.md b/lectures/linear_models.md index cc48009d2..e2e6a4f85 100644 --- a/lectures/linear_models.md +++ b/lectures/linear_models.md @@ -1345,8 +1345,8 @@ Examples of usage are given in the solutions to the exercises. ## Exercises -(lss_ex1)= -### Exercise 1 +```{exercise} +:label: lss_ex1 In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear state-space system {eq}`st_space_rep`. 
@@ -1375,10 +1375,12 @@ $$ $$ what must the modulus for every eigenvalue of $A$ be less than? +``` ## Solutions -### Exercise 1 +```{solution} lss_ex1 +:class: dropdown Suppose that every eigenvalue of $A$ has modulus strictly less than $\frac{1}{\beta}$. @@ -1401,11 +1403,12 @@ $$ = G[I - \beta A]^{-1} x_t $$ +``` + [^foot1]: The eigenvalues of $A$ are $(1,-1, i,-i)$. [^fn_ag]: The correct way to argue this is by induction. Suppose that $x_t$ is Gaussian. Then {eq}`st_space_rep` and {eq}`lss_glig` imply that $x_{t+1}$ is Gaussian. Since $x_0$ is assumed to be Gaussian, it follows that every $x_t$ is Gaussian. -Evidently, this implies that each $y_t$ is Gaussian. - +Evidently, this implies that each $y_t$ is Gaussian. \ No newline at end of file diff --git a/lectures/lln_clt.md b/lectures/lln_clt.md index 6236dbd10..190174651 100644 --- a/lectures/lln_clt.md +++ b/lectures/lln_clt.md @@ -631,8 +631,10 @@ n \to \infty ## Exercises -(lln_ex1)= -### Exercise 1 + +```{exercise-start} +:label: lln_ex1 +``` One very useful consequence of the central limit theorem is as follows. @@ -663,8 +665,13 @@ What happens when you replace $[0, \pi / 2]$ with $[0, \pi]$? What is the source of the problem? -(lln_ex2)= -### Exercise 2 +```{exercise-end} +``` + + +```{exercise-start} +:label: lln_ex2 +``` Here's a result that's often used in developing statistical tests, and is connected to the multivariate central limit theorem. @@ -769,9 +776,14 @@ Hints: 1. `scipy.linalg.sqrtm(A)` computes the square root of `A`. You still need to invert it. 1. You should be able to work out $\Sigma$ from the preceding information. +```{exercise-end} +``` + ## Solutions -### Exercise 1 +```{solution-start} lln_ex1 +:class: dropdown +``` Here is one solution @@ -816,7 +828,13 @@ $\pi/2$, and since $g' = \cos$, we have $g'(\mu) = 0$. Hence the conditions of the delta theorem are not satisfied. 
-### Exercise 2 +```{solution-end} +``` + + +```{solution-start} lln_ex2 +:class: dropdown +``` First we want to verify the claim that @@ -908,3 +926,5 @@ ax.hist(chisq_obs, bins=50, density=True) plt.show() ``` +```{solution-end} +``` diff --git a/lectures/lq_inventories.md b/lectures/lq_inventories.md index 261e94eec..ad5d4dcf2 100644 --- a/lectures/lq_inventories.md +++ b/lectures/lq_inventories.md @@ -706,7 +706,8 @@ ex6.simulate(x03, T=20) Please try to analyze some inventory sales smoothing problems using the `SmoothingExample` class. -### Exercise 1 +```{exercise} +:label: lqi_ex1 Assume that the demand shock follows AR(2) process below: @@ -726,16 +727,25 @@ $\epsilon_t$. Compute the stationary states $\bar{x}$ by simulating for a long period. Then try to add shocks with different magnitude to $\bar{\nu}_t$ and simulate paths. You should see how firms respond differently by staring at the production plans. +``` + -### Exercise 2 +```{exercise} +:label: lqi_ex2 Change parameters of $C(Q_t)$ and $d(I_t, S_t)$. 1. Make production more costly, by setting $c_2=5$. 1. Increase the cost of having inventories deviate from sales, by setting $d_2=5$. +``` -### Solution 1 + +## Solutions + +```{solution-start} lqi_ex1 +:class: dropdown +``` ```{code-cell} python3 # set parameters @@ -795,7 +805,13 @@ x_bar1[2] += 10 ex1_no_noise.simulate(x_bar1, T=T) ``` -### Solution 2 +```{solution-end} +``` + + +```{solution-start} lqi_ex2 +:class: dropdown +``` ```{code-cell} python3 x0 = [0, 1, 0] @@ -809,3 +825,5 @@ SmoothingExample(c2=5).simulate(x0) SmoothingExample(d2=5).simulate(x0) ``` +```{solution-end} +``` diff --git a/lectures/lqcontrol.md b/lectures/lqcontrol.md index 32a7a6b39..3c147d691 100644 --- a/lectures/lqcontrol.md +++ b/lectures/lqcontrol.md @@ -1058,7 +1058,7 @@ Once again, smooth consumption is a dominant feature of the sample paths. The asset path exhibits dynamics consistent with standard life cycle theory. 
-Exercise 1 gives the full set of parameters used here and asks you to replicate the figure. +{ref}`lqc_ex1` gives the full set of parameters used here and asks you to replicate the figure. (lq_nsi2)= ### Application 2: A Permanent Income Model with Retirement @@ -1124,7 +1124,7 @@ The next figure shows one simulation based on this procedure. ``` -The full set of parameters used in the simulation is discussed in {ref}`Exercise 2 `, where you are asked to replicate the figure. +The full set of parameters used in the simulation is discussed in {ref}`lqc_ex2`, where you are asked to replicate the figure. Once again, the dominant feature observable in the simulation is consumption smoothing. @@ -1250,19 +1250,22 @@ It's now relatively straightforward to find $R$ and $Q$ such that Furthermore, the matrices $A, B$ and $C$ from {eq}`lq_lom` can be found by writing down the dynamics of each element of the state. -{ref}`Exercise 3 ` asks you to complete this process, and reproduce the preceding figures. +{ref}`lqc_ex3` asks you to complete this process, and reproduce the preceding figures. ## Exercises -(lqc_ex1)= -### Exercise 1 + +```{exercise} +:label: lqc_ex1 Replicate the figure with polynomial income {ref}`shown above `. The parameters are $r = 0.05, \beta = 1 / (1 + r), \bar c = 1.5, \mu = 2, \sigma = 0.15, T = 50$ and $q = 10^4$. +``` -(lqc_ex2)= -### Exercise 2 + +```{exercise} +:label: lqc_ex2 Replicate the figure on work and retirement {ref}`shown above `. @@ -1290,19 +1293,24 @@ back to the start of retirement. With some careful footwork, the simulation can be generated by patching together the simulations from these two separate models. +``` -(lqc_ex3)= -### Exercise 3 + +```{exercise} +:label: lqc_ex3 Reproduce the figures from the monopolist application {ref}`given above `. For parameters, use $a_0 = 5, a_1 = 0.5, \sigma = 0.15, \rho = 0.9, \beta = 0.95$ and $c = 2$, while $\gamma$ varies between 1 and 50 (see figures). 
+``` ## Solutions -### Exercise 1 +```{solution-start} lqc_ex1 +:class: dropdown +``` Here’s one solution. @@ -1390,7 +1398,13 @@ for ax in axes: plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} lqc_ex2 +:class: dropdown +``` This is a permanent income / life-cycle model with polynomial growth in income over working life followed by a fixed retirement income. @@ -1498,7 +1512,13 @@ for ax in axes: plt.show() ``` -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} lqc_ex3 +:class: dropdown +``` The first task is to find the matrices $A, B, C, Q, R$ that define the LQ problem. @@ -1592,3 +1612,5 @@ ax.text(max(time) * 0.6, 1 * q_bar.max(), s, fontsize=14) plt.show() ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/markov_asset.md b/lectures/markov_asset.md index 4223517db..372d8e721 100644 --- a/lectures/markov_asset.md +++ b/lectures/markov_asset.md @@ -309,7 +309,7 @@ You can think of The next figure shows a simulation, where -* $\{X_t\}$ evolves as a discretized AR1 process produced using {ref}`Tauchen's method `. +* $\{X_t\}$ evolves as a discretized AR1 process produced using {ref}`Tauchen's method `. * $g_t = \exp(X_t)$, so that $\ln g_t = X_t$ is the growth rate. ```{code-cell} ipython @@ -397,7 +397,7 @@ v = (I - \beta K)^{-1} \beta K{\mathbb 1} Let's calculate and plot the price-dividend ratio at some parameters. -As before, we'll generate $\{X_t\}$ as a {ref}`discretized AR1 process ` and set $g_t = \exp(X_t)$. +As before, we'll generate $\{X_t\}$ as a {ref}`discretized AR1 process ` and set $g_t = \exp(X_t)$. Here's the code, including a test of the spectral radius condition @@ -915,7 +915,8 @@ Then $m_1 = \beta M$, and $m_{j+1} = M m_j$ for $j \geq 1$. ## Exercises -### Exercise 1 +```{exercise} +:label: ma_ex1 In the lecture, we considered **ex-dividend assets**. @@ -929,8 +930,12 @@ price of a cum-dividend asset? 
With a growing, non-random dividend process $d_t = g d_t$ where $0 < g \beta < 1$, what is the equilibrium price of a cum-dividend asset?
+```
+

-### Exercise 2
+```{exercise-start}
+:label: ma_ex2
+```

Consider the following primitives

@@ -953,7 +958,11 @@ Do the same for

* the price of the risk-free consol when $\zeta = 1$
* the call option on the consol when $\zeta = 1$ and $p_S = 150.0$

-### Exercise 3
+```{exercise-end}
+```
+
+```{exercise}
+:label: ma_ex3

Let's consider finite horizon call options, which are more common than infinite
horizon ones.

@@ -991,13 +1000,15 @@ $$

Write a function that computes $w_k$ for any given $k$.

-Compute the value of the option with `k = 5` and `k = 25` using parameter values as in Exercise 1.
+Compute the value of the option with `k = 5` and `k = 25` using parameter values as in {ref}`ma_ex2`.

Is one higher than the other? Can you give intuition?
+```

## Solutions

-### Exercise 1
+```{solution} ma_ex1
+:class: dropdown

For a cum-dividend asset, the basic risk-neutral asset pricing equation is

@@ -1017,8 +1028,12 @@ With a growing, non-random dividend process, the equilibrium price is

$$
p_t = \frac{1}{1 - \beta g} d_t
$$
+```

-### Exercise 2
+
+```{solution-start} ma_ex2
+:class: dropdown
+```

First, let's enter the parameters:

@@ -1066,7 +1081,13 @@ ax.legend()
plt.show()
```

-### Exercise 3
+```{solution-end}
+```
+
+
+```{solution-start} ma_ex3
+:class: dropdown
+```

Here's a suitable function:

@@ -1109,3 +1130,5 @@ Not surprisingly, options with larger $k$ are worth more.

This is because an owner has a longer horizon over which the option can be
exercised.

+```{solution-end}
+``` \ No newline at end of file
diff --git a/lectures/markov_perf.md b/lectures/markov_perf.md
index ae5040447..b88c96690 100644
--- a/lectures/markov_perf.md
+++ b/lectures/markov_perf.md
@@ -512,15 +512,20 @@ As expected, output is higher and prices are lower under duopoly than monopoly.
## Exercises

-### Exercise 1
+```{exercise}
+:label: mp_ex1

Replicate the {ref}`pair of figures ` showing the comparison of output and prices for the monopolist and duopoly under MPE.

Parameters are as in duopoly_mpe.py and you can use that code to compute MPE policies under duopoly.

The optimal policy in the monopolist case can be computed using [QuantEcon.py](http://quantecon.org/quantecon-py)'s LQ class.
+```
+

-### Exercise 2
+```{exercise-start}
+:label: mp_ex2
+```

In this exercise, we consider a slightly more sophisticated duopoly problem.

@@ -600,7 +605,6 @@ e1 = e2 = np.array([10, 10, 3])
```

```{figure} /_static/lecture_specific/markov_perf/judd_fig2.png
-
```

Inventories trend to a common steady state.

@@ -610,12 +614,16 @@ If we increase the depreciation rate to $\delta = 0.05$, then we expect steady s
This is indeed the case, as the next figure shows

```{figure} /_static/lecture_specific/markov_perf/judd_fig1.png
+```
+```{exercise-end}
```

## Solutions

-### Exercise 1
+```{solution-start} mp_ex1
+:class: dropdown
+```

First, let's compute the duopoly MPE under the stated parameters

@@ -728,7 +737,13 @@ ax.legend(loc='upper right', frameon=0)
plt.show()
```

-### Exercise 2
+```{solution-end}
+```
+
+
+```{solution-start} mp_ex2
+:class: dropdown
+```

We treat the case $\delta = 0.02$

@@ -833,3 +848,5 @@ ax.legend()
plt.show()
```

+```{solution-end}
+``` \ No newline at end of file
diff --git a/lectures/mccall_correlated.md b/lectures/mccall_correlated.md
index 94b0e37a4..fafc73265 100644
--- a/lectures/mccall_correlated.md
+++ b/lectures/mccall_correlated.md
@@ -416,15 +416,19 @@ This is because the value of waiting increases with unemployment compensation.

## Exercises

-### Exercise 1
+```{exercise}
+:label: mc_ex1

Investigate how mean unemployment duration varies with the discount factor $\beta$.

* What is your prior expectation?
* Do your results match up?
+``` ## Solutions -### Exercise 1 +```{solution-start} mc_ex1 +:class: dropdown +``` Here is one solution. @@ -447,3 +452,5 @@ plt.show() The figure shows that more patient individuals tend to wait longer before accepting an offer. +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/mccall_fitted_vfi.md b/lectures/mccall_fitted_vfi.md index d671e2a3d..3ae845267 100644 --- a/lectures/mccall_fitted_vfi.md +++ b/lectures/mccall_fitted_vfi.md @@ -315,7 +315,8 @@ The exercises ask you to explore the solution and how it changes with parameters ## Exercises -### Exercise 1 +```{exercise} +:label: mfv_ex1 Use the code above to explore what happens to the reservation wage when the wage parameter $\mu$ changes. @@ -323,8 +324,11 @@ changes. Use the default parameters and $\mu$ in `mu_vals = np.linspace(0.0, 2.0, 15)`. Is the impact on the reservation wage as you expected? +``` + -### Exercise 2 +```{exercise} +:label: mfv_ex2 Let us now consider how the agent responds to an increase in volatility. @@ -341,10 +345,12 @@ Use `s_vals = np.linspace(1.0, 2.0, 15)` and `m = 2.0`. State how you expect the reservation wage to vary with $s$. Now compute it. Is this as you expected? +``` ## Solutions -### Exercise 1 +```{solution-start} mfv_ex1 +``` Here is one solution. @@ -370,7 +376,12 @@ plt.show() Not surprisingly, the agent is more inclined to wait when the distribution of offers shifts to the right. -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} mfv_ex2 +``` Here is one solution. @@ -405,3 +416,5 @@ But job search is like holding an option: the worker is only exposed to upside r More volatility means higher upside potential, which encourages the agent to wait. +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/mccall_model.md b/lectures/mccall_model.md index ec9605034..90d9fd531 100644 --- a/lectures/mccall_model.md +++ b/lectures/mccall_model.md @@ -650,7 +650,8 @@ You can use this code to solve the exercise below. 
## Exercises -### Exercise 1 +```{exercise} +:label: mm_ex1 Compute the average duration of unemployment when $\beta=0.99$ and $c$ takes the following values @@ -663,8 +664,12 @@ given the parameters, and then simulate to see how long it takes to accept. Repeat a large number of times and take the average. Plot mean unemployment duration as a function of $c$ in `c_vals`. +``` + -### Exercise 2 +```{exercise-start} +:label: mm_ex2 +``` The purpose of this exercise is to show how to replace the discrete wage offer distribution used above with a continuous distribution. @@ -719,9 +724,14 @@ For default parameters, use `c=25, β=0.99, σ=0.5, μ=2.5`. Once your code is working, investigate how the reservation wage changes with $c$ and $\beta$. +```{exercise-end} +``` + ## Solutions -### Exercise 1 +```{solution-start} mm_ex1 +:class: dropdown +``` Here's one solution @@ -767,7 +777,13 @@ ax.legend() plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} mm_ex2 +:class: dropdown +``` ```{code-cell} python3 mccall_data_continuous = [ @@ -851,3 +867,5 @@ ax.ticklabel_format(useOffset=False) plt.show() ``` +```{solution-end} +``` diff --git a/lectures/mccall_model_with_separation.md b/lectures/mccall_model_with_separation.md index 12026d907..880c47190 100644 --- a/lectures/mccall_model_with_separation.md +++ b/lectures/mccall_model_with_separation.md @@ -497,7 +497,9 @@ Hence the reservation wage is lower. ## Exercises -### Exercise 1 +```{exercise-start} +:label: mmws_ex1 +``` Reproduce all the reservation wage figures shown above. @@ -510,9 +512,14 @@ beta_vals = np.linspace(0.8, 0.99, grid_size) # discount factors alpha_vals = np.linspace(0.05, 0.5, grid_size) # separation rate ``` +```{exercise-end} +``` + ## Solutions -### Exercise 1 +```{solution-start} mmws_ex1 +:class: dropdown +``` Here's the first figure. 
@@ -570,3 +577,6 @@ ax.legend()
plt.show()
```

+```{solution-end}
+```
+
diff --git a/lectures/mle.md b/lectures/mle.md
index 9aa097db4..203b0ac31 100644
--- a/lectures/mle.md
+++ b/lectures/mle.md
@@ -777,7 +777,9 @@ example notebook can be found

## Exercises

-### Exercise 1
+
+```{exercise}
+:label: mle_ex1

Suppose we wanted to estimate the probability of an event $y_i$ occurring, given some observations.

@@ -805,8 +807,12 @@ Hessian.

-The `scipy` module `stats.norm` contains the functions needed to
-compute the cmf and pmf of the normal distribution.
+The `scipy` module `stats.norm` contains the functions needed to
+compute the cdf and pdf of the normal distribution.
+```
+

-### Exercise 2
+```{exercise-start}
+:label: mle_ex2
+```

Use the following dataset and initial values of $\boldsymbol{\beta}$ to
estimate the MLE with the Newton-Raphson algorithm developed earlier in
the lecture

@@ -850,9 +856,14 @@ Note that the simple Newton-Raphson algorithm developed in this lecture is very
sensitive to initial values, and therefore you may fail to achieve
convergence with different starting values.

+```{exercise-end}
+```
+

## Solutions

-### Exercise 1
+```{solution-start} mle_ex1
+:class: dropdown
+```

The log-likelihood can be written as

@@ -930,7 +941,13 @@ class ProbitRegression:

return -(ϕ * (y * a + (1 - y) * b) * X.T) @ X
```

-### Exercise 2
+```{solution-end}
+```
+
+
+```{solution-start} mle_ex2
+:class: dropdown
+```

```{code-cell} python3
X = np.array([[1, 2, 4],

@@ -957,3 +974,5 @@ newton_raphson(prob)

print(Probit(y, X).fit().summary())
```

+```{solution-end}
+``` \ No newline at end of file
diff --git a/lectures/odu.md b/lectures/odu.md
index 5cefe5958..81a4eae76 100644
--- a/lectures/odu.md
+++ b/lectures/odu.md
@@ -654,7 +654,8 @@ $\bar w$.

## Exercises

-### Exercise 1
+```{exercise}
+:label: odu_ex1

Use the default parameters and `Q_factory` to compute an optimal policy.

Try to interpret the optimal policy.

Compare it with the optimal reservation wage
policy [shown above](#Take-1:-Solution-by-VFI).

Try experimenting with different parameters, and confirm that the
change in the optimal policy coincides with your intuition.
+``` ## Solutions -### Exercise 1 +```{solution-start} odu_ex1 +:class: dropdown +``` This code solves the “Offer Distribution Unknown” model by iterating on a guess of the reservation wage function. @@ -731,6 +735,9 @@ ax.grid() plt.show() ``` +```{solution-end} +``` + ## Appendix A The next piece of code generates a fun simulation to see what the effect diff --git a/lectures/ols.md b/lectures/ols.md index f87b45d8e..70178c6c7 100644 --- a/lectures/ols.md +++ b/lectures/ols.md @@ -556,7 +556,8 @@ If you are familiar with R, you may want to use the [formula interface](http://w ## Exercises -### Exercise 1 +```{exercise} +:label: ols_ex1 In the lecture, we think the original model suffers from endogeneity bias due to the likely effect income has on institutional development. @@ -596,8 +597,11 @@ endogenous. Using the above information, estimate a Hausman test and interpret your results. +``` + -### Exercise 2 +```{exercise} +:label: ols_ex2 The OLS parameter $\beta$ can also be estimated using matrix algebra and `numpy` (you may need to review the @@ -634,10 +638,13 @@ $$ Using the above information, compute $\hat{\beta}$ from model 1 using `numpy` - your results should be the same as those in the `statsmodels` output from earlier in the lecture. +``` ## Solutions -### Exercise 1 +```{solution-start} ols_ex1 +:class: dropdown +``` ```{code-cell} python3 # Load in data @@ -665,7 +672,13 @@ print(reg2.summary()) The output shows that the coefficient on the residuals is statistically significant, indicating $avexpr_i$ is endogenous. -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} ols_ex2 +:class: dropdown +``` ```{code-cell} python3 # Load in data @@ -691,3 +704,5 @@ It is also possible to use `np.linalg.inv(X.T @ X) @ X.T @ y` to solve for $\beta$, however `.solve()` is preferred as it involves fewer computations. 
+```{solution-end} +``` \ No newline at end of file diff --git a/lectures/optgrowth.md b/lectures/optgrowth.md index f7b632284..c95dbb93c 100644 --- a/lectures/optgrowth.md +++ b/lectures/optgrowth.md @@ -757,8 +757,9 @@ the true policy. ## Exercises -(ogex1)= -### Exercise 1 + +```{exercise} +:label: og_ex1 A common choice for utility function in this kind of work is the CRRA specification @@ -774,9 +775,11 @@ utility specification. Setting $\gamma = 1.5$, compute and plot an estimate of the optimal policy. Time how long this function takes to run, so you can compare it to faster code developed in the {doc}`next lecture `. +``` + -(og_ex2)= -### Exercise 2 +```{exercise} +:label: og_ex2 Time how long it takes to iterate with the Bellman operator 20 times, starting from initial condition $v(y) = u(y)$. @@ -784,10 +787,13 @@ Time how long it takes to iterate with the Bellman operator Use the model specification in the previous exercise. (As before, we will compare this number with that for the faster code developed in the {doc}`next lecture `.) +``` ## Solutions -### Exercise 1 +```{solution-start} og_ex1 +:class: dropdown +``` Here we set up the model. @@ -819,7 +825,13 @@ ax.legend() plt.show() ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} og_ex2 +:class: dropdown +``` Let's set up: @@ -838,3 +850,5 @@ for i in range(20): v = v_new ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/optgrowth_fast.md b/lectures/optgrowth_fast.md index 1e0c2a1c9..0fac0392a 100644 --- a/lectures/optgrowth_fast.md +++ b/lectures/optgrowth_fast.md @@ -236,15 +236,18 @@ np.max(np.abs(v_greedy - σ_star(og.grid, og.α, og.β))) ## Exercises -(ogfast_ex1)= -### Exercise 1 +```{exercise} +:label: ogfast_ex1 Time how long it takes to iterate with the Bellman operator 20 times, starting from initial condition $v(y) = u(y)$. Use the default parameterization. 
+```

-### Exercise 2
+```{exercise}
+:label: ogfast_ex2

Modify the optimal growth model to use the CRRA utility specification.

@@ -258,13 +261,16 @@ Set `γ = 1.5` as the default value and maintaining other specifications.
have to copy the class and change the relevant parameters and methods.)

Compute an estimate of the optimal policy, plot it and compare visually with
-the same plot from the {ref}`analogous exercise ` in the first optimal
+the same plot from the {ref}`analogous exercise ` in the first optimal
growth lecture.

Compare execution time as well.
+```

-(ogfast_ex3)=
-### Exercise 3
+
+```{exercise-start}
+:label: ogfast_ex3
+```

In this exercise we return to the original log utility specification.

@@ -278,7 +284,6 @@ The next figure shows a simulation of 100 elements of this sequence for three
different discount factors (and hence three different policies).

```{figure} /_static/lecture_specific/optgrowth/solution_og_ex2.png
-
```

In each sequence, the initial condition is $y_0 = 0.1$.

@@ -293,9 +298,14 @@ Notice that more patient agents typically have higher wealth.

Replicate the figure modulo randomness.

+```{exercise-end}
+```
+

## Solutions

-### Exercise 1
+```{solution-start} ogfast_ex1
+:class: dropdown
+```

Let's set up the initial condition.

@@ -316,7 +326,13 @@ for i in range(20):

Compared with our {ref}`timing ` for the non-compiled version of value function iteration, the JIT-compiled code is usually an order of magnitude faster.

-### Exercise 2
+```{solution-end}
+```
+
+
+```{solution-start} ogfast_ex2
+:class: dropdown
+```

Here's our CRRA version of `OptimalGrowthModel`:

@@ -351,11 +367,17 @@ plt.show()
```

This matches the solution that we obtained in our non-jitted code,
-{ref}`in the exercises `.
+{ref}`in the exercises `.

Execution time is an order of magnitude faster.
-### Exercise 3 +```{solution-end} +``` + + +```{solution-start} ogfast_ex3 +:class: dropdown +``` Here's one solution: @@ -390,3 +412,5 @@ ax.legend(loc='lower right') plt.show() ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/pandas_panel.md b/lectures/pandas_panel.md index a9f5dce5b..43ca6148c 100644 --- a/lectures/pandas_panel.md +++ b/lectures/pandas_panel.md @@ -492,7 +492,9 @@ extends pandas to N-dimensional data structures. ## Exercises -### Exercise 1 +```{exercise-start} +:label: pp_ex1 +``` In these exercises, you'll work with a dataset of employment rates in Europe by age and sex from [Eurostat](http://ec.europa.eu/eurostat/data/database). @@ -511,7 +513,12 @@ Start off by exploring the dataframe and the variables available in the Write a program that quickly returns all values in the `MultiIndex`. -### Exercise 2 +```{exercise-end} +``` + + +```{exercise} +:label: pp_ex2 Filter the above dataframe to only include employment as a percentage of 'active population'. @@ -520,10 +527,13 @@ Create a grouped boxplot using `seaborn` of employment rates in 2015 by age group and sex. **Hint:** `GEO` includes both areas and countries. 
+``` ## Solutions -### Exercise 1 +```{solution-start} pp_ex1 +:class: dropdown +``` ```{code-cell} python3 employ = pd.read_csv(url3) @@ -548,7 +558,13 @@ for name in employ.columns.names: print(name, employ.columns.get_level_values(name).unique()) ``` -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} pp_ex2 +:class: dropdown +``` To easily filter by country, swap `GEO` to the top level and sort the `MultiIndex` @@ -597,3 +613,5 @@ plt.legend(bbox_to_anchor=(1,0.5)) plt.show() ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/prob_meaning.md b/lectures/prob_meaning.md index 0ea0bdbdf..0e866fe21 100644 --- a/lectures/prob_meaning.md +++ b/lectures/prob_meaning.md @@ -121,7 +121,8 @@ The probability $\textrm{Prob}(X = k | \theta)$ answers the following question As usual, a law of large numbers justifies this answer. -**Exercise 1:** +```{exercise} +:label: pm_ex1 1. Please write a Python class to compute $f_k^I$ @@ -129,10 +130,10 @@ As usual, a law of large numbers justifies this answer. $\textrm{Prob}(X = k | \theta)$ for various values of $\theta, n$ and $I$ 3. With the Law of Large numbers in mind, use your code to say something +``` -+++ - -**Answer Code:** +```{solution-start} pm_ex1 +``` ```{code-cell} ipython3 class frequentist: @@ -203,6 +204,9 @@ freq.compare() From the table above, can you see the law of large numbers at work? +```{solution-end} +``` + Let's do some more calculations. **Comparison with different $\theta$** @@ -357,7 +361,8 @@ $$ where $B(\alpha, \beta)$ is a **beta function** , so that $P(\theta)$ is a **beta distribution** with parameters $\alpha, \beta$. -**Exercise 2:** +```{exercise} +:label: pm_ex2 **a)** Please write down the **likelihood function** for a sample of length $n$ from a binomial distribution with parameter $\theta$. @@ -376,8 +381,11 @@ a **beta distribution** with parameters $\alpha, \beta$. 
**h)** Please compute the Posterior probabililty that $\theta \in [.45, .55]$ for various values of sample size $n$. **i)** Please use your Python class to study what happens to the posterior distribution as $n \rightarrow + \infty$, again assuming that the true value of $\theta = .4$, though it is unknown to the person doing the updating via Bayes' Law. +``` + -**Answer:** +```{solution-start} pm_ex2 +``` **a)** Please write down the **likelihood function** and the **posterior** distribution for $\theta$ after observing one flip of our coin. @@ -615,6 +623,9 @@ ax[1].set_xlabel('Number of Observations', fontsize=11) plt.show() ``` +```{solution-end} +``` + How shall we interpret the patterns above? The answer is encoded in the Bayesian updating formulas. @@ -652,7 +663,7 @@ The random variables $k$ and $N-k$ are governed by Binomial Distribution with $\ Call this the true data generating process. -According to the Law of Large Numbers, for a large number of observations, observed frequencies of $k$ and $N-k$ will be described by the true data generating process, i.e., the population probability distribution that we assumed when generating the observations on the computer. (See Exercise $1$). +According to the Law of Large Numbers, for a large number of observations, observed frequencies of $k$ and $N-k$ will be described by the true data generating process, i.e., the population probability distribution that we assumed when generating the observations on the computer. (See {ref}`pm_ex1`). Consequently, the mean of the posterior distribution converges to $0.4$ and the variance withers to zero. diff --git a/lectures/rational_expectations.md b/lectures/rational_expectations.md index 940aeba99..42415c0c9 100644 --- a/lectures/rational_expectations.md +++ b/lectures/rational_expectations.md @@ -544,8 +544,8 @@ $(\kappa_0, \kappa_1, h_0, h_1, h_2)$ in {eq}`ree_hlom2`--{eq}`ree_ex5`. 
## Exercises

-(ree_ex1)=
-### Exercise 1
+```{exercise}
+:label: ree_ex1

Consider the firm problem {ref}`described above `.

@@ -562,27 +566,31 @@ $$

Express the solution of the firm's problem in the form {eq}`ree_ex5` and give the values for each $h_j$.

If there were a unit measure of identical competitive firms all behaving according to {eq}`ree_ex5`, what would {eq}`ree_ex5` imply for the *actual* law of motion {eq}`ree_hlom` for market supply.
+```
+

-(ree_ex2)=
-### Exercise 2
+```{exercise}
+:label: ree_ex2

Consider the following $\kappa_0, \kappa_1$ pairs as candidates for the
aggregate law of motion component of a rational expectations equilibrium (see {eq}`ree_hlom2`).

-Extending the program that you wrote for exercise 1, determine which if any
+Extending the program that you wrote for {ref}`ree_ex1`, determine which if any
satisfy {ref}`the definition ` of a rational expectations equilibrium

* (94.0886298678, 0.923409232937)
* (93.2119845412, 0.984323478873)
* (95.0818452486, 0.952459076301)

-Describe an iterative algorithm that uses the program that you wrote for exercise 1 to compute a rational expectations equilibrium.
+Describe an iterative algorithm that uses the program that you wrote for {ref}`ree_ex1` to compute a rational expectations equilibrium.

(You are not being asked actually to use the algorithm you are suggesting)
+```

-(ree_ex3)=
-### Exercise 3
+```{exercise}
+:label: ree_ex3

Recall the planner's problem {ref}`described above `

@@ -591,9 +595,11 @@ Recall the planner's problem {ref}`described above `
* $a_0= 100, a_1= 0.05, \beta = 0.95, \gamma=10$

1. Represent the solution in the form $Y_{t+1} = \kappa_0 + \kappa_1 Y_t$.
-1. Compare your answer with the results from exercise 2.
+1. Compare your answer with the results from {ref}`ree_ex2`.
+``` + -(ree_ex4)= -### Exercise 4 +```{exercise} +:label: ree_ex4 A monopolist faces the industry demand curve {eq}`ree_comp3d` and chooses $\{Y_t\}$ to maximize $\sum_{t=0}^{\infty} \beta^t r_t$ where @@ -603,7 +609,7 @@ $$ Formulate this problem as an LQ problem. -Compute the optimal policy using the same parameters as the previous exercise. +Compute the optimal policy using the same parameters as {ref}`ree_ex2`. In particular, solve for the parameters in @@ -611,11 +617,15 @@ $$ Y_{t+1} = m_0 + m_1 Y_t $$ -Compare your results with the previous exercise -- comment. +Compare your results with {ref}`ree_ex2` -- comment. +``` ## Solutions -### Exercise 1 + +```{solution-start} ree_ex1 +:class: dropdown +``` To map a problem into a [discounted optimal linear control problem](https://python.quantecon.org/lqcontrol.html), we need to define @@ -727,7 +737,13 @@ Y_{t+1} = n 96.949 + (1 - n 0.046) Y_t $$ -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} ree_ex2 +:class: dropdown +``` To determine whether a $\kappa_0, \kappa_1$ pair forms the aggregate law of motion component of a rational expectations @@ -790,7 +806,13 @@ lecture. (There is in general no guarantee that this iterative process will converge to a rational expectations equilibrium) -### Exercise 3 +```{solution-end} +``` + + +```{solution-start} ree_ex3 +:class: dropdown +``` We are asked to write the planner problem as an LQ problem. @@ -853,7 +875,13 @@ print(κ0, κ1) The output yields the same $(\kappa_0, \kappa_1)$ pair obtained as an equilibrium from the previous exercise. -### Exercise 4 +```{solution-end} +``` + + +```{solution-start} ree_ex4 +:class: dropdown +``` The monopolist's LQ problem is almost identical to the planner's problem from the previous exercise, except that @@ -899,6 +927,9 @@ implying a higher market price. 
This is analogous to the elementary static-case results

+```{solution-end}
+```
+

[^fn_im]: A literature that studies whether models populated with agents who
learn can converge to rational expectations equilibria features
iterations on a modification of the mapping $\Phi$ that can be
diff --git a/lectures/scalar_dynam.md b/lectures/scalar_dynam.md
index 3a3ada3d9..f3acdbe14 100644
--- a/lectures/scalar_dynam.md
+++ b/lectures/scalar_dynam.md
@@ -436,6 +436,7 @@ ts_plot(g, xmin, xmax, x0, ts_length=20)

## Exercises

-### Exercise 1
+```{exercise}
+:label: sd_ex1

Consider again the linear model $x_{t+1} = a x_t + b$ with $a \not=1$.

@@ -452,10 +454,13 @@ What differences do you notice in the cases $a \in (-1, 0)$ and $a
Use $a=0.5$ and then $a=-0.5$ and study the trajectories

Set $b=1$ throughout.
+```

## Solutions

-### Exercise 1
+```{solution-start} sd_ex1
+:class: dropdown
+```

We will start with the case $a=0.5$.

@@ -513,3 +518,5 @@ and back again.

In the current context, the series is said to exhibit
**damped oscillations**.

+```{solution-end}
+``` \ No newline at end of file
diff --git a/lectures/schelling.md b/lectures/schelling.md
index 0da2fcc78..fa1090872 100644
--- a/lectures/schelling.md
+++ b/lectures/schelling.md
@@ -135,8 +135,9 @@ Even with these preferences, the outcome is a high degree of segregation.

## Exercises

-(schelling_ex1)=
-### Exercise 1
+```{exercise-start}
+:label: schelling_ex1
+```

Implement and run this simulation for yourself.

@@ -171,9 +172,14 @@ while agents are still moving

Use 250 agents of each type.

+```{exercise-end}
+```
+

## Solutions

-### Exercise 1
+```{solution-start} schelling_ex1
+:class: dropdown
+```

Here's one solution that does the job we want.
@@ -272,3 +278,5 @@ while True: print('Converged, terminating.') ``` +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/short_path.md b/lectures/short_path.md index 07cfba503..b51190beb 100644 --- a/lectures/short_path.md +++ b/lectures/short_path.md @@ -252,8 +252,10 @@ But, importantly, we now have a methodology for tackling large graphs. ## Exercises -(short_path_ex1)= -### Exercise 1 + +```{exercise-start} +:label: short_path_ex1 +``` The text below describes a weighted directed graph. @@ -376,9 +378,15 @@ node98, node99 0.33 node99, ``` +```{exercise-end} +``` + ## Solutions -### Exercise 1 + +```{solution-start} short_path_ex1 +:class: dropdown +``` First let's write a function that reads in the graph data above and builds a distance matrix. @@ -477,3 +485,6 @@ The total cost of the path should agree with $J[0]$ so let's check this. ```{code-cell} python3 J[0] ``` + +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/uncertainty_traps.md b/lectures/uncertainty_traps.md index 1361d51b0..2a1ca2328 100644 --- a/lectures/uncertainty_traps.md +++ b/lectures/uncertainty_traps.md @@ -322,7 +322,6 @@ To get a clearer idea of the dynamics, let's look at all the main time series at once, for a given set of shocks ```{figure} /_static/lecture_specific/uncertainty_traps/uncertainty_traps_sim.png - ``` Notice how the traps only take hold after a sequence of bad draws for the fundamental. @@ -331,8 +330,8 @@ Thus, the model gives us a *propagation mechanism* that maps bad random draws in ## Exercises -(uncertainty_traps_ex1)= -### Exercise 1 +```{exercise} +:label: uncertainty_traps_ex1 Fill in the details behind {eq}`update_mean` and {eq}`update_prec` based on the following standard result (see, e.g., p. 24 of {cite}`young2005`). 
@@ -354,16 +353,21 @@ $$ \quad \text{and} \quad \gamma_0 = \gamma + M \gamma_x $$ +``` -### Exercise 2 + +```{exercise} +:label: uncertainty_traps_ex2 Modulo randomness, replicate the simulation figures shown above. * Use the parameter values listed as defaults in the __init__ method of the UncertaintyTrapEcon class. +``` ## Solutions -### Exercise 1 +```{solution} uncertainty_traps_ex1 +:class: dropdown This exercise asked you to validate the laws of motion for $\gamma$ and $\mu$ given in the lecture, based on the stated @@ -387,8 +391,12 @@ If we take a random variable $\theta$ with this distribution and then evaluate the distribution of $\rho \theta + \sigma_\theta w$ where $w$ is independent and standard normal, we get the expressions for $\mu'$ and $\gamma'$ given in the lecture. +``` + -### Exercise 2 +```{solution-start} uncertainty_traps_ex2 +:class: dropdown +``` First, let's replicate the plot that illustrates the law of motion for precision, which is @@ -509,3 +517,5 @@ series. distributions for the shocks, but this is a big exercise since it takes us outside the world of the standard Kalman filter) +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/wealth_dynamics.md b/lectures/wealth_dynamics.md index 2d8cde333..579910a87 100644 --- a/lectures/wealth_dynamics.md +++ b/lectures/wealth_dynamics.md @@ -529,7 +529,8 @@ We see that greater volatility has the effect of increasing inequality in this m ## Exercises -### Exercise 1 +```{exercise} +:label: wd_ex1 For a wealth or income distribution with Pareto tail, a higher tail index suggests lower inequality. @@ -547,8 +548,11 @@ Use sample of size 1,000 for each $a$ and the sampling method for generating Par To the extent that you can, interpret the monotone relationship between the Gini index and $a$. +``` -### Exercise 2 +```{exercise-start} +:label: wd_ex2 +``` The wealth process {eq}`wealth_dynam_ah` is similar to a {doc}`Kesten process `. 
@@ -580,12 +584,17 @@ T = 500 # shift forward T periods z_0 = wdy.z_mean ``` +```{exercise-end} +``` + ## Solutions Here is one solution, which produces a good match between theory and simulation. -### Exercise 1 +```{solution-start} wd_ex1 +:class: dropdown +``` ```{code-cell} ipython3 a_vals = np.linspace(1, 10, 25) # Pareto tail index @@ -609,7 +618,13 @@ This means less extreme values for wealth and hence more equality. More equality translates to a lower Gini index. -### Exercise 2 +```{solution-end} +``` + + +```{solution-start} wd_ex2 +:class: dropdown +``` First let's generate the distribution: @@ -635,3 +650,5 @@ ax.set_ylabel("log size") plt.show() ``` +```{solution-end} +``` \ No newline at end of file