diff --git a/lectures/mccall_model_with_sep_markov.md b/lectures/mccall_model_with_sep_markov.md
index ca7ac8f35..fc9f1cb0b 100644
--- a/lectures/mccall_model_with_sep_markov.md
+++ b/lectures/mccall_model_with_sep_markov.md
@@ -96,18 +96,20 @@ $$
 
 where $\{Z_t\}$ is IID and standard normal.
 
-Informally, we set $W_t = \exp(Z_t)$.
-
-In practice, we
-
-* discretize the AR1 process using {ref}`Tauchen's method ` and
-* take the exponential of the resulting wage offer values.
 
 Below we will always choose $\rho \in (0, 1)$.
 
 This means that the wage process will be positively correlated: the higher
 the current wage offer, the more likely we are to get a high offer tomorrow.
 
+To go from the AR1 process to the wage offer process, we set $W_t = \exp(X_t)$.
+
+In practice, we approximate this wage process as follows:
+
+* discretize the AR1 process using {ref}`Tauchen's method ` and
+* take the exponential of the resulting wage offer values.
+
+
 ### Value Functions
 
@@ -259,9 +261,9 @@ def T(v: jnp.ndarray, model: Model) -> jnp.ndarray:
     """
     n, w_vals, P, P_cumsum, β, c, α, γ = model
     d = 1 / (1 - β * (1 - α))
-    accept = d * (u(w_vals, γ) + α * β * P @ v)
-    reject = u(c, γ) + β * P @ v
-    return jnp.maximum(accept, reject)
+    v_e = d * (u(w_vals, γ) + α * β * P @ v)
+    h = u(c, γ) + β * P @ v
+    return jnp.maximum(v_e, h)
 ```
 
 Here's a routine for value function iteration.
@@ -312,10 +314,10 @@ def get_reservation_wage(v: jnp.ndarray, model: Model) -> float:
     # Compute accept and reject values
     d = 1 / (1 - β * (1 - α))
     v_e = d * (u(w_vals, γ) + α * β * P @ v)
-    continuation_value = u(c, γ) + β * P @ v
+    continuation_values = u(c, γ) + β * P @ v
 
     # Find where acceptance becomes optimal
-    accept_indices = v_e >= continuation_value
+    accept_indices = v_e >= continuation_values
     first_accept_idx = jnp.argmax(accept_indices)  # index of first True
 
     # If no acceptance (all False), return infinity
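
The patched text in the first hunk describes a two-step approximation: discretize the AR1 process with Tauchen's method, then exponentiate the grid points to obtain wage offers. Here is a minimal self-contained sketch of that step, assuming the underlying process is $X_{t+1} = \rho X_t + \nu Z_{t+1}$; the helper `tauchen` below and the parameter values are illustrative, written out by hand rather than taken from the lecture or from the `quantecon` package:

```python
import numpy as np
from scipy.stats import norm

def tauchen(n, ρ, ν, n_std=3):
    "Discretize X' = ρX + νZ' on an n-point grid using Tauchen's method."
    σ_x = ν / np.sqrt(1 - ρ**2)               # stationary std of X
    x = np.linspace(-n_std * σ_x, n_std * σ_x, n)
    half = (x[1] - x[0]) / 2                  # half the grid spacing
    P = np.empty((n, n))
    for i in range(n):
        # Mass that ρ x[i] + ν Z' assigns to each grid cell
        upper = norm.cdf((x + half - ρ * x[i]) / ν)
        lower = norm.cdf((x - half - ρ * x[i]) / ν)
        upper[-1], lower[0] = 1.0, 0.0        # absorb the tails
        P[i, :] = upper - lower
    return x, P

x_vals, P = tauchen(n=51, ρ=0.9, ν=0.2)
w_vals = np.exp(x_vals)                       # wage offers W = exp(X)
assert np.allclose(P.sum(axis=1), 1.0)        # each row is a distribution
```

Since $\rho \in (0, 1)$, each row of `P` puts most of its mass near $\rho x_i$, which is the positive-correlation property the text emphasizes.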
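
In the second and third hunks, the constant `d = 1 / (1 - β * (1 - α))` multiplies the flow value of employment. The algebra behind it, implied by the patched code, is worth recording: with separation rate $\alpha$, the value $v_e(w)$ of being employed at wage $w$ satisfies

$$
v_e(w) = u(w) + \beta \left[ (1 - \alpha) \, v_e(w) + \alpha \, (Pv)(w) \right],
$$

and solving this linear equation for $v_e(w)$ gives

$$
v_e(w) = \frac{u(w) + \alpha \beta \, (Pv)(w)}{1 - \beta (1 - \alpha)},
$$

which is exactly `v_e = d * (u(w_vals, γ) + α * β * P @ v)` in the renamed code.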
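
One detail in the third hunk deserves a note: `jnp.argmax` over the boolean array `accept_indices` returns the index of the first `True`, because `True > False` and ties resolve to the earliest position. A minimal check, using only `jax.numpy`:

```python
import jax.numpy as jnp

flags = jnp.array([False, False, True, True])
print(jnp.argmax(flags))                     # 2: index of the first True

# Caveat: on an all-False array, argmax returns 0, not an error,
# which is why the function must handle the no-acceptance case
# separately (returning infinity, per the final comment in the hunk).
print(jnp.argmax(jnp.zeros(4, dtype=bool)))  # 0
```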