diff --git a/lectures/mccall_fitted_vfi.md b/lectures/mccall_fitted_vfi.md
index e82b138d8..63b27a551 100644
--- a/lectures/mccall_fitted_vfi.md
+++ b/lectures/mccall_fitted_vfi.md
@@ -3,6 +3,8 @@ jupytext:
   text_representation:
     extension: .md
     format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.17.2
 kernelspec:
   display_name: Python 3
   language: python
@@ -26,19 +28,29 @@ kernelspec:

 ## Overview

-In this lecture we again study the {doc}`McCall job search model with separation `, but now with a continuous wage distribution.
+This lecture follows on from the job search model with separation presented in the {doc}`previous lecture <mccall_model_with_sep_markov>`.

-While we already considered continuous wage distributions briefly in the
-exercises of the {doc}`first job search lecture `,
-the change was relatively trivial in that case.
+That lecture combined exogenous job separation events with a Markov wage offer process.

-This is because we were able to reduce the problem to solving for a single
-scalar value (the continuation value).
+In this lecture we allow the wage offer process to be continuous rather than discrete.

-Here, with separation, the change is less trivial, since a continuous wage distribution leads to an uncountably infinite state space.
+In particular,

-The infinite state space leads to additional challenges, particularly when it
-comes to applying value function iteration (VFI).
+$$
+    W_t = \exp(X_t)
+    \quad \text{where} \quad
+    X_{t+1} = \rho X_t + \nu Z_{t+1}
+$$
+
+and $\{Z_t\}$ is IID and standard normal.
+
+While we already considered continuous wage distributions briefly in Exercise {ref}`mm_ex2` of the {doc}`first job search lecture <mccall_model>`, the change was relatively trivial in that case.
+
+The reason is that we were able to reduce the problem to solving for a single scalar value (the continuation value).
+
+Here, in our Markov setting, the change is less trivial, since a continuous wage distribution leads to an uncountably infinite state space.
+
+The infinite state space leads to additional challenges, particularly when it comes to applying value function iteration (VFI).

 These challenges will lead us to modify VFI by adding an interpolation step.

@@ -46,72 +58,96 @@ The combination of VFI and this interpolation step is called **fitted value func

 Fitted VFI is very common in practice, so we will take some time to work through the details.

+In addition to what's in Anaconda, this lecture will need the following libraries:
+
+```{code-cell} ipython3
+:tags: [hide-output]
+
+!pip install quantecon
+```
+
 We will use the following imports:

 ```{code-cell} ipython3
 import matplotlib.pyplot as plt
 import jax
 import jax.numpy as jnp
+from jax import lax
 from typing import NamedTuple
 import quantecon as qe
-
-# Set JAX to use CPU
-jax.config.update('jax_platform_name', 'cpu')
 ```

-## The algorithm
+## Model

-The model is the same as the McCall model with job separation that we {doc}`studied before `, except that the wage offer distribution is continuous.
-
-We are going to start with the two Bellman equations we obtained for the model with job separation after {ref}`a simplifying transformation `.
+The model is the same as in the {doc}`discrete case <mccall_model_with_sep_markov>`, except that wage offers are draws from the continuous Markov process described above.
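+
+To build intuition for this wage offer process, let's simulate a sample path (a minimal sketch, for illustration only, using $\rho = 0.9$ and $\nu = 0.2$, the default values adopted below):
+
+```{code-cell} ipython3
+# Simulate X' = ρ X + ν Z' and plot W = exp(X)  (illustration only)
+key = jax.random.PRNGKey(0)
+ρ, ν, T_sim = 0.9, 0.2, 250
+z = jax.random.normal(key, (T_sim,))
+x = jnp.zeros(T_sim)
+for t in range(T_sim - 1):
+    x = x.at[t + 1].set(ρ * x[t] + ν * z[t + 1])
+
+fig, ax = plt.subplots()
+ax.plot(jnp.exp(x))
+ax.set_xlabel("$t$")
+ax.set_ylabel("$W_t$")
+plt.show()
+```
+
+Notice that offers are persistent: a high offer today tends to be followed by high offers tomorrow.
+
+The features of the model are as follows: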
+- Each period, an unemployed agent receives a wage offer $w$
+- Wage offers follow a continuous Markov process: $W_t = \exp(X_t)$ where $X_{t+1} = \rho X_t + \nu Z_{t+1}$
+- $\{Z_t\}$ is IID and standard normal
+- Jobs terminate with probability $\alpha$ each period (separation rate)
+- Unemployed workers receive compensation $c$ per period
+- Workers have CRRA utility $u(c) = \frac{c^{1-\gamma} - 1}{1-\gamma}$
+- Future payoffs are discounted by factor $\beta \in (0,1)$

-Modified to accommodate continuous wage draws, they take the following form:
+## The algorithm

-```{math}
-:label: bell1mcmc
-d = \int \max \left\{ v(w'), \, u(c) + \beta d \right\} q(w') d w'
-```
+### Value function iteration

-and
+In the {doc}`discrete case <mccall_model_with_sep_markov>`, we ended up iterating on the Bellman operator

 ```{math}
 :label: bell2mcmc
-v(w) = u(w) + \beta
-    \left[
-        (1-\alpha)v(w) + \alpha d
-    \right]
+    (Tv_u)(w) =
+    \max
+    \left\{
+        \frac{1}{1-\beta(1-\alpha)} \cdot
+        \left(
+            u(w) + \alpha\beta (Pv_u)(w)
+        \right),
+        u(c) + \beta(Pv_u)(w)
+    \right\}
 ```

-The unknowns here are the function $v$ and the scalar $d$.
+where

-The differences between these and the pair of Bellman equations we previously worked on are
+$$
+    (P v_u)(w) := \sum_{w'} v_u(w') P(w, w')
+$$

-1. In {eq}`bell1mcmc`, what used to be a sum over a finite number of wage values is an integral over an infinite set.
-1. The function $v$ in {eq}`bell2mcmc` is defined over all $w \in \mathbb R_+$.
+Here we iterate with the same operator after changing the definition of $P$ to

-The function $q$ in {eq}`bell1mcmc` is the density of the wage offer distribution.
+$$
+    (P v_u)(w) := \int v_u(w') p(w, w') d w'
+$$

-Its support is taken as equal to $\mathbb R_+$.
+where $p(w, \cdot)$ is the conditional density of $w'$ given $w$.

-### Value function iteration
+We can write this more explicitly as
+
+$$
+    (P v_u)(w) := \int v_u( w^\rho \exp(\nu z) ) \psi(z) dz,
+$$
+
+where $\psi$ is the standard normal density.
+
+Here we are thinking of $v_u$ as a function on all of $\mathbb R_+$.
+
+### Fitting

 In theory, we should now proceed as follows:

-1. Begin with a guess $v, d$ for the solutions to {eq}`bell1mcmc`--{eq}`bell2mcmc`.
-1. Plug $v, d$ into the right hand side of {eq}`bell1mcmc`--{eq}`bell2mcmc` and
-   compute the left hand side to obtain updates $v', d'$
-1. Unless some stopping condition is satisfied, set $(v, d) = (v', d')$
-   and go to step 2.
+1. Begin with a guess $v$.
+1. Apply $T$ to obtain the update $v' = Tv$.
+1. Unless some stopping condition is satisfied, set $v = v'$ and go to step 2.

-However, there is a problem we must confront before we implement this procedure:
-The iterates of the value function can neither be calculated exactly nor stored on a computer.
+However, there is a problem we must confront before we implement this procedure: the iterates of the value function can neither be calculated exactly nor stored on a computer.

 To see the issue, consider {eq}`bell2mcmc`.

-Even if $v$ is a known function, the only way to store its update $v'$
-is to record its value $v'(w)$ for every $w \in \mathbb R_+$.
+Even if $v$ is a known function, the only way to store its update $v'$ is to record its value $v'(w)$ for every $w \in \mathbb R_+$.

 Clearly, this is impossible.

@@ -123,8 +159,7 @@ The procedure is as follows:

 Let a current guess $v$ be given.

-Now we record the value of the function $v'$ at only
-finitely many "grid" points $w_1 < w_2 < \cdots < w_I$ and then reconstruct $v'$ from this information when required.
+Now we record the value of the function $v'$ at only finitely many "grid" points $w_1 < w_2 < \cdots < w_I$ and then reconstruct $v'$ from this information when required. More precisely, the algorithm will be @@ -138,8 +173,7 @@ How should we go about step 2? This is a problem of function approximation, and there are many ways to approach it. -What's important here is that the function approximation scheme must not only -produce a good approximation to each $v$, but also that it combines well with the broader iteration algorithm described above. +What's important here is that the function approximation scheme must not only produce a good approximation to each $v$, but also that it combines well with the broader iteration algorithm described above. One good choice from both respects is continuous piecewise linear interpolation. @@ -151,10 +185,9 @@ This method Linear interpolation will be implemented using JAX's interpolation function `jnp.interp`. -The next figure illustrates piecewise linear interpolation of an arbitrary -function on grid points $0, 0.2, 0.4, 0.6, 0.8, 1$. +The next figure illustrates piecewise linear interpolation of an arbitrary function on grid points $0, 0.2, 0.4, 0.6, 0.8, 1$. -```{code-cell} python3 +```{code-cell} ipython3 def f(x): y1 = 2 * jnp.cos(6 * x) + jnp.sin(14 * x) return y1 + 2.5 @@ -181,127 +214,218 @@ plt.show() The first step is to build a JAX-compatible structure for the McCall model with separation and a continuous wage offer distribution. -We will take the utility function to be the log function for this application, with $u(c) = \ln c$. +The key computational challenge is evaluating the conditional expectation $(Pv_u)(w) = \int v_u(w') p(w, w') dw'$ at each wage grid point. -We will adopt the lognormal distribution for wages, with $w = \exp(\mu + \sigma z)$ -when $z$ is standard normal and $\mu, \sigma$ are parameters. +From the equation above, we have: -```{code-cell} python3 -def lognormal_draws(n=1000, μ=2.5, σ=0.5, seed=1234): - key = jax.random.PRNGKey(seed) - z = jax.random.normal(key, (n,)) - w_draws = jnp.exp(μ + σ * z) - return w_draws +$$ +(Pv_u)(w) = \int v_u(w^\rho \exp(\nu z)) \psi(z) dz +$$ + +where $\psi$ is the standard normal density. + +We approximate this integral using Monte Carlo integration with draws from the standard normal distribution: + +$$ +(Pv_u)(w) \approx \frac{1}{N} \sum_{i=1}^N v_u(w^\rho \exp(\nu z_i)) +$$ + +We use the same CRRA utility function as in the discrete case: + +```{code-cell} ipython3 +def u(c, γ): + return (c**(1 - γ) - 1) / (1 - γ) ``` Here's our model structure using a NamedTuple. 
-```{code-cell} python3 -class McCallModelContinuous(NamedTuple): +```{code-cell} ipython3 +class Model(NamedTuple): c: float # unemployment compensation α: float # job separation rate β: float # discount factor + ρ: float # wage persistence + ν: float # wage volatility + γ: float # utility parameter w_grid: jnp.ndarray # grid of points for fitted VFI - w_draws: jnp.ndarray # draws of wages for Monte Carlo - -def create_mccall_model(c=1, - α=0.1, - β=0.96, - grid_min=1e-10, - grid_max=5, - grid_size=100, - μ=2.5, - σ=0.5, - mc_size=1000, - seed=1234, - w_draws=None): + z_draws: jnp.ndarray # draws from the standard normal distribution + +def create_mccall_model( + c: float = 1.0, + α: float = 0.1, + β: float = 0.96, + ρ: float = 0.9, + ν: float = 0.2, + γ: float = 1.5, + grid_size: int = 100, + mc_size: int = 1000, + seed: int = 1234 + ): """Factory function to create a McCall model instance.""" - if w_draws is None: - # Generate wage draws if not provided - w_draws = lognormal_draws(n=mc_size, μ=μ, σ=σ, seed=seed) + key = jax.random.PRNGKey(seed) + z_draws = jax.random.normal(key, (mc_size,)) - w_grid = jnp.linspace(grid_min, grid_max, grid_size) - return McCallModelContinuous(c=c, α=α, β=β, w_grid=w_grid, w_draws=w_draws) + # Discretize just to get a suitable wage grid for interpolation + mc = qe.markov.tauchen(grid_size, ρ, ν) + w_grid = jnp.exp(jnp.array(mc.state_values)) -@jax.jit -def update(model, v, d): - """Update value function and continuation value.""" + return Model(c=c, α=α, β=β, ρ=ρ, ν=ν, γ=γ, w_grid=w_grid, z_draws=z_draws) +``` + +```{code-cell} ipython3 +def T(model, v): + """Update the value function.""" # Unpack model parameters - c, α, β, w_grid, w_draws = model - u = jnp.log - + c, α, β, ρ, ν, γ, w_grid, z_draws = model + # Interpolate array represented value function vf = lambda x: jnp.interp(x, w_grid, v) - - # Update d using Monte Carlo to evaluate integral - d_new = jnp.mean(jnp.maximum(vf(w_draws), u(c) + β * d)) - - # Update v - v_new = u(w_grid) + β * ((1 - α) * v + α * d) - - return v_new, d_new + + def compute_expectation(w): + # Use Monte Carlo to evaluate integral (P v)(w) + # Compute E[v(w' | w)] where w' = w^ρ * exp(ν * z) + w_next = w**ρ * jnp.exp(ν * z_draws) + return jnp.mean(vf(w_next)) + + compute_exp_all = jax.vmap(compute_expectation) + Pv = compute_exp_all(w_grid) + + d = 1 / (1 - β * (1 - α)) + accept = d * (u(w_grid, γ) + α * β * Pv) + reject = u(c, γ) + β * Pv + return jnp.maximum(accept, reject) ``` -We then return the current iterate as an approximate solution. 
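+Before iterating to convergence, let's apply $T$ once or twice as a sanity check (an illustrative aside, not part of the solution method; `create_mccall_model` and `T` are as defined above):
+
+```{code-cell} ipython3
+test_model = create_mccall_model()
+v0 = jnp.zeros_like(test_model.w_grid)   # arbitrary initial guess
+v1 = T(test_model, v0)
+v2 = T(test_model, v1)
+
+# With the default c = 1 we have u(c) = 0, so T v0 ≥ v0 pointwise;
+# since T is monotone, the iterates should then be nondecreasing
+print(v1.shape)                # one value per grid point
+print(bool(jnp.all(v2 >= v1)))
+```
+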
+Here's the solver: -```{code-cell} python3 +```{code-cell} ipython3 @jax.jit -def solve_model(model, tol=1e-5, max_iter=2000): - """ - Iterates to convergence on the Bellman equations +def vfi( + model: Model, + tolerance: float = 1e-6, # Error tolerance + max_iter: int = 100_000, # Max iteration bound + ): + + v_init = jnp.zeros(model.w_grid.shape) + + def cond(loop_state): + v, error, i = loop_state + return (error > tolerance) & (i <= max_iter) + + def update(loop_state): + v, error, i = loop_state + v_new = T(model, v) + error = jnp.max(jnp.abs(v_new - v)) + new_loop_state = v_new, error, i + 1 + return new_loop_state + + initial_state = (v_init, tolerance + 1, 1) + final_loop_state = lax.while_loop(cond, update, initial_state) + v_final, error, i = final_loop_state + + return v_final +``` - * model is an instance of McCallModelContinuous - """ - - # Initial guesses - v = jnp.ones_like(model.w_grid) - d = 1.0 - - def body_fun(state): - v, d, i, error = state - v_new, d_new = update(model, v, d) - error_1 = jnp.max(jnp.abs(v_new - v)) - error_2 = jnp.abs(d_new - d) - error = jnp.maximum(error_1, error_2) - return v_new, d_new, i + 1, error - - def cond_fun(state): - _, _, i, error = state - return (error > tol) & (i < max_iter) - - initial_state = (v, d, 0, tol + 1) - v_final, d_final, _, _ = jax.lax.while_loop(cond_fun, body_fun, initial_state) - - return v_final, d_final +The next function computes the optimal policy under the assumption that $v$ is +the value function: + +```{code-cell} ipython3 +def get_greedy(v: jnp.ndarray, model: Model) -> jnp.ndarray: + """Get a v-greedy policy.""" + c, α, β, ρ, ν, γ, w_grid, z_draws = model + + # Interpolate value function + vf = lambda x: jnp.interp(x, w_grid, v) + + def compute_expectation(w): + # Use Monte Carlo to evaluate integral (P v)(w) + # Compute E[v(w' | w)] where w' = w^ρ * exp(ν * z) + w_next = w**ρ * jnp.exp(ν * z_draws) + return jnp.mean(vf(w_next)) + + compute_exp_all = jax.vmap(compute_expectation) + Pv = compute_exp_all(w_grid) + + d = 1 / (1 - β * (1 - α)) + accept = d * (u(w_grid, γ) + α * β * Pv) + reject = u(c, γ) + β * Pv + σ = accept >= reject + return σ ``` -Here's a function `compute_reservation_wage` that takes an instance of `McCallModelContinuous` +Here's a function that takes an instance of `Model` and returns the associated reservation wage. -If $v(w) < h$ for all $w$, then the function returns `jnp.inf` - -```{code-cell} python3 +```{code-cell} ipython3 @jax.jit -def compute_reservation_wage(model): +def get_reservation_wage(σ: jnp.ndarray, model: Model) -> float: """ - Computes the reservation wage of an instance of the McCall model - by finding the smallest w such that v(w) >= h. + Calculate the reservation wage from a given policy. + + Parameters: + - σ: Policy array where σ[i] = True means accept wage w_grid[i] + - model: Model instance containing wage values - If no such w exists, then w_bar is set to inf. 
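+
+    Note: this relies on the reservation wage property of the policy:
+    σ is False below a cutoff and True above it, so the index of the
+    first True entry identifies the reservation wage.
+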
+    Returns:
+    - Reservation wage (lowest wage for which policy indicates acceptance)
    """
-    c, α, β, w_grid, w_draws = model
-    u = jnp.log
-
-    v, d = solve_model(model)
-    h = u(c) + β * d
-
-    # Find the first wage where v(w) >= h
-    indices = jnp.where(v >= h, size=1, fill_value=-1)
-    w_bar = jnp.where(indices[0] >= 0, w_grid[indices[0]], jnp.inf)
-
-    return w_bar
+    c, α, β, ρ, ν, γ, w_grid, z_draws = model
+
+    # Find the first index where policy indicates acceptance
+    # σ is a boolean array, argmax returns the first True value
+    first_accept_idx = jnp.argmax(σ)
+
+    # If no acceptance (all False), return infinity
+    # Otherwise return the wage at the first acceptance index
+    return jnp.where(jnp.any(σ), w_grid[first_accept_idx], jnp.inf)
+```
+
+## Computing the solution
+
+Let's solve the model:
+
+```{code-cell} ipython3
+model = create_mccall_model()
+c, α, β, ρ, ν, γ, w_grid, z_draws = model
+v_star = vfi(model)
+σ_star = get_greedy(v_star, model)
+```
+
+Next we compute some related quantities, including the reservation wage.
+
+```{code-cell} ipython3
+# Interpolate the value function for computing expectations
+vf = lambda x: jnp.interp(x, w_grid, v_star)
+
+def compute_expectation(w):
+    # Use Monte Carlo to evaluate the integral (P v)(w),
+    # i.e. E[v(w') | w] where w' = w**ρ * exp(ν * z)
+    w_next = w**ρ * jnp.exp(ν * z_draws)
+    return jnp.mean(vf(w_next))
+
+compute_exp_all = jax.vmap(compute_expectation)
+Pv = compute_exp_all(w_grid)
+
+d = 1 / (1 - β * (1 - α))
+accept = d * (u(w_grid, γ) + α * β * Pv)
+h_star = u(c, γ) + β * Pv
+w_star = get_reservation_wage(σ_star, model)
+```
+
+Let's plot our results.
+
+```{code-cell} ipython3
+fig, ax = plt.subplots(figsize=(9, 5.2))
+ax.plot(w_grid, h_star, linewidth=4, ls="--", alpha=0.4,
+        label="continuation value")
+ax.plot(w_grid, accept, linewidth=4, ls="--", alpha=0.4,
+        label="stopping value")
+ax.plot(w_grid, v_star, "k-", alpha=0.7, label=r"$v_u^*(w)$")
+ax.legend(frameon=False)
+ax.set_xlabel(r"$w$")
+plt.show()
+```

 The exercises ask you to explore the solution and how it changes with parameters.

@@ -311,12 +435,8 @@ The exercises ask you to explore the solution and how it changes with parameters.

 ## Exercises

 ```{exercise}
 :label: mfv_ex1

-Use the code above to explore what happens to the reservation wage when the wage parameter $\mu$
-changes.
-
-Use the default parameters and $\mu$ in `μ_vals = jnp.linspace(0.0, 2.0, 15)`.
+Use the code above to explore what happens to the reservation wage when the unemployment compensation $c$ changes.

-Is the impact on the reservation wage as you expected?
+Use the default parameters and $c$ in `c_vals = jnp.linspace(0.0, 2.0, 15)`.
+
+Is the impact on the reservation wage as you expected?
 ```

@@ -325,24 +445,27 @@ Is the impact on the reservation wage as you expected?

 ```{solution-start} mfv_ex1
 :class: dropdown
 ```
 Here is one solution

-```{code-cell} python3
-def compute_res_wage_given_μ(μ):
-    model = create_mccall_model(μ=μ)
-    w_bar = compute_reservation_wage(model)
+```{code-cell} ipython3
+def compute_res_wage_given_c(c):
+    model = create_mccall_model(c=c)
+    v_star = vfi(model)
+    σ_star = get_greedy(v_star, model)
+    w_bar = get_reservation_wage(σ_star, model)
     return w_bar

-μ_vals = jnp.linspace(0.0, 2.0, 15)
-w_bar_vals = jax.vmap(compute_res_wage_given_μ)(μ_vals)
+c_vals = jnp.linspace(0.0, 2.0, 15)
+w_bar_vals = jax.vmap(compute_res_wage_given_c)(c_vals)

 fig, ax = plt.subplots()
-ax.set(xlabel='mean', ylabel='reservation wage')
-ax.plot(μ_vals, w_bar_vals, label=r'$\bar w$ as a function of $\mu$')
+ax.set(xlabel='unemployment compensation', ylabel='reservation wage')
+ax.plot(c_vals, w_bar_vals, label=r'$\bar w$ as a function of $c$')
 ax.legend()
 plt.show()
 ```

-Not surprisingly, the agent is more inclined to wait when the distribution of
-offers shifts to the right.
+As unemployment compensation increases, the reservation wage also increases.
+
+This makes economic sense: when the value of being unemployed rises (through higher $c$), workers become more selective about which job offers to accept.

 ```{solution-end}
 ```

@@ -352,11 +475,9 @@

 Let us now consider how the agent responds to an increase in volatility.

-To try to understand this, compute the reservation wage when the wage offer
-distribution is uniform on $(m - s, m + s)$ and $s$ varies.
+To try to understand this, compute the reservation wage when the wage offer distribution is uniform on $(m - s, m + s)$ and $s$ varies.

-The idea here is that we are holding the mean constant and spreading the
-support.
+The idea here is that we are holding the mean constant and spreading the support.

 (This is a form of *mean-preserving spread*.)

@@ -371,42 +492,54 @@ Now compute it - is this as you expected?

 :class: dropdown
 ```

-Here is one solution
+The uniform-offer experiment above comes from the IID version of the model; in our Markov setting the offer process is pinned down by $\rho$ and $\nu$, so it has no exact analogue.
+
+As a rough substitute, we can raise the volatility parameter $\nu$ and recompute the reservation wage.
+
+(Raising $\nu$ also raises the mean of the lognormal offer distribution, so this is not a pure mean-preserving spread.)

-```{code-cell} python3
-def compute_res_wage_given_s(s, m=2.0, seed=1234):
-    a, b = m - s, m + s
-    key = jax.random.PRNGKey(seed)
-    uniform_draws = jax.random.uniform(key, shape=(10_000,), minval=a, maxval=b)
-
-    # Create model with default parameters but replace wage draws
-    model = create_mccall_model(w_draws=uniform_draws)
-    w_bar = compute_reservation_wage(model)
-    return w_bar
-
-s_vals = jnp.linspace(1.0, 2.0, 15)
-
-# Use vmap with different seeds for each s value
-seeds = jnp.arange(len(s_vals))
-compute_vectorized = jax.vmap(compute_res_wage_given_s, in_axes=(0, None, 0))
-w_bar_vals = compute_vectorized(s_vals, 2.0, seeds)
+```{code-cell} ipython3
+def compute_res_wage_given_ν(ν):
+    model = create_mccall_model(ν=ν)
+    v_star = vfi(model)
+    σ_star = get_greedy(v_star, model)
+    return get_reservation_wage(σ_star, model)
+
+# Loop rather than vmap, since building the wage grid inside
+# create_mccall_model calls tauchen and is not traceable
+ν_vals = jnp.linspace(0.1, 0.4, 8)
+w_bar_vals = jnp.array([compute_res_wage_given_ν(float(ν)) for ν in ν_vals])

 fig, ax = plt.subplots()
 ax.set(xlabel='volatility', ylabel='reservation wage')
-ax.plot(s_vals, w_bar_vals, label=r'$\bar w$ as a function of wage volatility')
+ax.plot(ν_vals, w_bar_vals, label=r'$\bar w$ as a function of $\nu$')
 ax.legend()
 plt.show()
 ```

+Job search is like holding an option: the worker is only exposed to upside risk, so more volatile offers tend to raise the value of waiting and hence the reservation wage.
+
+```{solution-end}
+```
+
+```{exercise}
+:label: mfv_ex3
+
+Create a plot that shows how the reservation wage changes with the risk aversion parameter $\gamma$.
+
+Use `γ_vals = jnp.linspace(1.2, 2.5, 15)` and keep all other parameters at their default values.
+
+How do you expect the reservation wage to vary with $\gamma$? Why?
+
+```

-The reservation wage increases with volatility.
+```{solution-start} mfv_ex3
+:class: dropdown
+```
+
+We compute the reservation wage for different values of the risk aversion parameter:
+
+```{code-cell} ipython3
+γ_vals = jnp.linspace(1.2, 2.5, 15)
+w_star_vec = jnp.empty_like(γ_vals)
+
+for i, γ in enumerate(γ_vals):
+    model = create_mccall_model(γ=γ)
+    v_star = vfi(model)
+    σ_star = get_greedy(v_star, model)
+    w_star = get_reservation_wage(σ_star, model)
+    w_star_vec = w_star_vec.at[i].set(w_star)
+
+fig, ax = plt.subplots(figsize=(9, 5.2))
+ax.plot(γ_vals, w_star_vec, linewidth=2, alpha=0.6,
+        label='reservation wage')
+ax.legend(frameon=False)
+ax.set_xlabel(r'$\gamma$')
+ax.set_ylabel(r'$w^*$')
+ax.set_title('Reservation wage as a function of risk aversion')
+plt.show()
+```

-One might think that higher volatility would make the agent more inclined to
-take a given offer, since doing so represents certainty and waiting represents
-risk.
+As risk aversion ($\gamma$) increases, the reservation wage decreases.

-But job search is like holding an option: the worker is only exposed to upside risk (since, in a free market, no one can force them to take a bad offer).
+This occurs because more risk-averse workers place higher value on the security of employment relative to the uncertainty of continued search.

-More volatility means higher upside potential, which encourages the agent to wait.
+With higher $\gamma$, the utility cost of unemployment (foregone consumption) becomes more severe, making workers more willing to accept lower wages rather than continue searching.

 ```{solution-end}
 ```

diff --git a/lectures/mccall_model_with_sep_markov.md b/lectures/mccall_model_with_sep_markov.md
index 8ba89fa60..f096daebe 100644
--- a/lectures/mccall_model_with_sep_markov.md
+++ b/lectures/mccall_model_with_sep_markov.md
@@ -89,14 +91,14 @@ When unemployed and receiving wage offer $w$, the agent chooses between:

 The unemployed worker's value function satisfies the Bellman equation

 $$
-    v_u(w) = \max\{v_e(w), c + \beta \sum_{w'} v_u(w') P(w,w')\}
+    v_u(w) = \max\{v_e(w), u(c) + \beta \sum_{w'} v_u(w') P(w,w')\}
 $$

 The employed worker's value function satisfies the Bellman equation

 $$
     v_e(w) =
-    w + \beta
+    u(w) + \beta
     \left[
         \alpha \sum_{w'} v_u(w') P(w,w')
         + (1-\alpha) v_e(w)
     \right]
 $$

@@ -114,7 +116,7 @@ We use the following approach to solve this problem.

 $$
     v_e(w) =
-    \frac{1}{1-\beta(1-\alpha)} \cdot (w + \alpha\beta(Pv_u)(w))
+    \frac{1}{1-\beta(1-\alpha)} \cdot (u(w) + \alpha\beta(Pv_u)(w))
 $$

 2. Substitute into the unemployed agent's Bellman equation to get:

@@ -124,20 +126,27 @@

 $$
     v_u(w) =
     \max
     \left\{
-            \frac{1}{1-\beta(1-\alpha)} \cdot (w + \alpha\beta(Pv_u)(w)),
-            c + \beta(Pv_u)(w)
+            \frac{1}{1-\beta(1-\alpha)} \cdot (u(w) + \alpha\beta(Pv_u)(w)),
+            u(c) + \beta(Pv_u)(w)
     \right\}
 $$

 3. Use value function iteration to solve for $v_u$

-4. Compute optimal policy: accept if $v_e(w) ≥ c + β(Pv_u)(w)$
+4. 
Compute the optimal policy: accept if $v_e(w) \geq u(c) + \beta(Pv_u)(w)$

 The optimal policy turns out to be a reservation wage strategy: accept all wages above some threshold.

 ## Code

+The default utility function is a CRRA utility function:
+
+```{code-cell} ipython3
+def u(c, γ):
+    return (c**(1 - γ) - 1) / (1 - γ)
+```
+
 Let's set up a `Model` class to store information needed to solve the model.

 We include `P_cumsum`, the row-wise cumulative sum of the transition matrix, to
@@ -152,10 +161,16 @@ class Model(NamedTuple):
     β: float
     c: float
     α: float
+    γ: float
 ```

+The wage offer process is formed as the exponential of a discretized AR(1) process:
+
+* we discretize a Gaussian AR(1) process of the form $X' = \rho X + \nu Z'$ using Tauchen's method
+* we take the exponential of the resulting state values
+
 The function below holds default values and creates a `Model` instance:

 ```{code-cell} ipython3
 def create_js_with_sep_model(
     n: int = 200,          # wage grid size
@@ -163,7 +178,8 @@ def create_js_with_sep_model(
     ν: float = 0.2,        # wage volatility
     β: float = 0.96,       # discount factor
     α: float = 0.05,       # separation rate
-    c: float = 1.0         # unemployment compensation
+    c: float = 1.0,        # unemployment compensation
+    γ: float = 1.5         # utility parameter
 ) -> Model:
     """
     Creates an instance of the job search model with separation.
     """
     mc = tauchen(n, ρ, ν)
     w_vals, P = jnp.exp(jnp.array(mc.state_values)), jnp.array(mc.P)
     P_cumsum = jnp.cumsum(P, axis=1)
-    return Model(n, w_vals, P, P_cumsum, β, c, α)
+    return Model(n, w_vals, P, P_cumsum, β, c, α, γ)
 ```

 Here's the Bellman operator for the unemployed worker's value function:

@@ -180,10 +196,10 @@ Here's the Bellman operator for the unemployed worker's value function:
 ```{code-cell} ipython3
 def T(v: jnp.ndarray, model: Model) -> jnp.ndarray:
     """The Bellman operator for the value of being unemployed."""
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model
     d = 1 / (1 - β * (1 - α))
-    accept = d * (w_vals + α * β * P @ v)
-    reject = c + β * P @ v
+    accept = d * (u(w_vals, γ) + α * β * P @ v)
+    reject = u(c, γ) + β * P @ v
     return jnp.maximum(accept, reject)
 ```

@@ -193,10 +209,10 @@ the value function:
 ```{code-cell} ipython3
 def get_greedy(v: jnp.ndarray, model: Model) -> jnp.ndarray:
     """Get a v-greedy policy."""
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model
     d = 1 / (1 - β * (1 - α))
-    accept = d * (w_vals + α * β * P @ v)
-    reject = c + β * P @ v
+    accept = d * (u(w_vals, γ) + α * β * P @ v)
+    reject = u(c, γ) + β * P @ v
     σ = accept >= reject
     return σ
 ```

@@ -247,7 +263,7 @@ def get_reservation_wage(σ: jnp.ndarray, model: Model) -> float:
     Returns:
     - Reservation wage (lowest wage for which policy indicates acceptance)
     """
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model

     # Find the first index where policy indicates acceptance
     # σ is a boolean array, argmax returns the first True value
@@ -264,7 +280,7 @@ Let's solve the model:

 ```{code-cell} ipython3
 model = create_js_with_sep_model()
-n, w_vals, P, P_cumsum, β, c, α = model
+n, w_vals, P, P_cumsum, β, c, α, γ = model
 v_star = vfi(model)
 σ_star = get_greedy(v_star, model)
 ```

@@ -273,8 +289,8 @@ Next we compute some related quantities, including the reservation wage.
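+
+But first, as a quick sanity check (an illustrative aside), let's verify that the computed policy has the reservation wage structure that `get_reservation_wage` assumes:
+
+```{code-cell} ipython3
+# The policy should be False below some cutoff and True above it
+first = int(jnp.argmax(σ_star))
+print(bool(jnp.all(~σ_star[:first])))   # no acceptances below the cutoff
+print(bool(jnp.all(σ_star[first:])))    # acceptance everywhere above it
+```
+
+With that structure confirmed, we turn to the related quantities.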
```{code-cell} ipython3 d = 1 / (1 - β * (1 - α)) -accept = d * (w_vals + α * β * P @ v_star) -h_star = c + β * P @ v_star +accept = d * (u(w_vals, γ) + α * β * P @ v_star) +h_star = u(c, γ) + β * P @ v_star w_star = get_reservation_wage(σ_star, model) ``` @@ -346,7 +362,7 @@ def update_agent(key, is_employed, wage_idx, model, σ): period via the probabilites in P(w, .) """ - n, w_vals, P, P_cumsum, β, c, α = model + n, w_vals, P, P_cumsum, β, c, α, γ = model key1, key2 = jax.random.split(key) # Use precomputed cumulative sum for efficient sampling @@ -390,7 +406,7 @@ def simulate_employment_path( """ key = jax.random.PRNGKey(seed) # Unpack model - n, w_vals, P, P_cumsum, β, c, α = model + n, w_vals, P, P_cumsum, β, c, α, γ = model # Initial conditions is_employed = 0 @@ -556,7 +572,7 @@ def _simulate_cross_section_compiled( ): """JIT-compiled core simulation loop using lax.fori_loop. Returns only the final employment state to save memory.""" - n, w_vals, P, P_cumsum, β, c, α = model + n, w_vals, P, P_cumsum, β, c, α, γ = model # Initialize arrays wage_indices = jnp.zeros(n_agents, dtype=jnp.int32) @@ -699,7 +715,6 @@ model_low_c = create_js_with_sep_model(c=0.5) plot_cross_sectional_unemployment(model_low_c) ``` - ## Exercises ```{exercise-start}