Minor updates of readme and notebooks
mikkelpm committed Oct 13, 2023
1 parent 2fde580 commit 31bc665
Showing 7 changed files with 35 additions and 33 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

- Copyright (c) 2021 Matthew D. Cocci and Mikkel Plagborg-Moller
+ Copyright (c) 2023 Matthew D. Cocci and Mikkel Plagborg-Moller

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
8 changes: 4 additions & 4 deletions README.md
@@ -5,9 +5,9 @@ Python package that computes worst-case standard errors (SE) for minimum distanc
The computed worst-case SE for the estimated parameters are sharp upper bounds on the true SE (which depend on the unknown moment correlation structure). For over-identified models, the package also computes the efficient moment selection that minimizes the worst-case SE. Additionally, the package can carry out tests of parameter restrictions or over-identifying restrictions.
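For intuition, the key bound behind these worst-case SE is simple: if a parameter estimate is (to first order) a linear combination of the empirical moments with sensitivities x, its standard error is largest when the moments are perfectly correlated, giving the bound sum_j |x_j|*sigma_j (Lemma 1 of the paper). Below is a minimal NumPy sketch of that bound; the sensitivity vector `x` and the moment standard errors `sigma` are purely illustrative numbers, not taken from any particular model.

```python
import numpy as np

# Illustrative delta-method sensitivities of a parameter estimate w.r.t. the moments
x = np.array([0.5, -1.0, 2.0])
# Illustrative standard errors of the individual empirical moments
sigma = np.array([0.1, 0.2, 0.05])

# Worst-case SE: attained when the moments are perfectly correlated
worst_case_se = np.abs(x) @ sigma

# For comparison, the SE obtained by (incorrectly) assuming independent moments
independence_se = np.sqrt((x**2) @ sigma**2)

print(worst_case_se, independence_se)  # the worst-case bound is never smaller
```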

**Reference:**
- Cocci, Matthew D., and Mikkel Plagborg-Møller (2021), "Standard Errors for Calibrated Parameters", [arXiv:2109.08109](https://arxiv.org/abs/2109.08109)
+ Cocci, Matthew D., and Mikkel Plagborg-Møller (2023), "Standard Errors for Calibrated Parameters", [arXiv:2109.08109](https://arxiv.org/abs/2109.08109)

- Tested in: Python 3.8.11 (Anaconda distribution) on Windows 10 PC
+ Tested in: Python 3.8.18 (Anaconda distribution) on Windows 10 PC with NumPy version 1.24.3.

Other versions: [Matlab](https://github.com/mikkelpm/stderr_calibration_matlab)

@@ -19,9 +19,9 @@ Other versions: [Matlab](https://github.com/mikkelpm/stderr_calibration_matlab)

- [stderr_calibration](stderr_calibration): Python package for minimum distance estimation, standard errors, and testing

- - [estimate_hank.py](estimate_hank.py): Empirical application to estimation of a heterogeneous agent New Keynesian macro model, using impulse response estimates from [Chang, Chen & Schorfheide (2021)](https://cpb-us-w2.wpmucdn.com/web.sas.upenn.edu/dist/e/242/files/2021/05/EvalHAmodels_v6_pub.pdf) and [Miranda-Agrippino & Ricco (2021)](https://doi.org/10.1257/mac.20180124), which are stored in the [data](data) folder
+ - [estimate_hank.py](estimate_hank.py): Empirical application to estimation of a heterogeneous agent New Keynesian macro model, using impulse response estimates from [Chang, Chen & Schorfheide (2023)](https://web.sas.upenn.edu/schorf/files/2023/09/EvalHAmodels_v15_nocolor.pdf) and [Miranda-Agrippino & Ricco (2021)](https://doi.org/10.1257/mac.20180124), which are stored in the [data](data) folder

- - [sequence_jacobian](sequence_jacobian): Copy of the [Sequence-Space Jacobian](https://github.com/shade-econ/sequence-jacobian) package developed by [Auclert, Bardóczy, Rognlie & Straub (2021)](http://web.stanford.edu/~aauclert/sequence_space_jacobian.pdf), with minor changes made to the file [hank.py](sequence_jacobian/hank.py)
+ - [sequence_jacobian](sequence_jacobian): Copy of the [Sequence-Space Jacobian](https://github.com/shade-econ/sequence-jacobian) package developed by [Auclert, Bardóczy, Rognlie & Straub (2021)](https://doi.org/10.3982/ECTA17434), with minor changes made to the file [hank.py](sequence_jacobian/hank.py)

- [tests](tests): Unit tests intended for use with the [pytest](https://docs.pytest.org/) framework

29 changes: 15 additions & 14 deletions docs/example.html
@@ -36,6 +36,7 @@
.highlight .m { color: var(--jp-mirror-editor-number-color) } /* Literal.Number */
.highlight .s { color: var(--jp-mirror-editor-string-color) } /* Literal.String */
.highlight .ow { color: var(--jp-mirror-editor-operator-color); font-weight: bold } /* Operator.Word */
+ .highlight .pm { color: var(--jp-mirror-editor-punctuation-color) } /* Punctuation.Marker */
.highlight .w { color: var(--jp-mirror-editor-variable-color) } /* Text.Whitespace */
.highlight .mb { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Bin */
.highlight .mf { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Float */
@@ -13976,7 +13977,7 @@ <h1 id="Standard-errors-for-calibrated-parameters:-Example">Standard errors for
<p>We observe the noisy estimates $(\hat{\mu}_1,\hat{\mu}_2,\hat{\mu}_3) = (1.1,0.8,-0.1)$ of the true moments. The standard errors of the three empirical moments are $(\hat{\sigma}_1,\hat{\sigma}_2,\hat{\sigma}_3)=(0.1,0.2,0.05)$.</p>
<p>We will estimate the parameters $(\theta_1,\theta_2)$ by minimum distance, matching the model-implied moments $h(\theta_1,\theta_2)$ to the empirical moments:
$$\hat{\theta} = \text{argmin}_{\theta}\; (\hat{\mu}-h(\theta))'\hat{W}(\hat{\mu}-h(\theta)).$$</p>
- <p>To compute standard errors for the estimated parameters, test hypotheses, and compute the efficient weight matrix $\hat{W}$, we use the formulas in <a href="https://scholar.princeton.edu/mikkelpm/calibration">Cocci &amp; Plagborg-Møller (2021)</a>, which do not require knowledge of the correlation structure of the empirical moments.</p>
+ <p>To compute standard errors for the estimated parameters, test hypotheses, and compute the efficient weight matrix $\hat{W}$, we use the formulas in <a href="https://arxiv.org/abs/2109.08109">Cocci &amp; Plagborg-Møller (2023)</a>, which do not require knowledge of the correlation structure of the empirical moments.</p>
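As a concrete sketch of the minimization step itself: the notebook's actual moment function is defined in the collapsed code cell below, so the `h` and the diagonal weight matrix used here are purely illustrative assumptions, not the notebook's choices.

```python
import numpy as np
from scipy.optimize import minimize

mu_hat = np.array([1.1, 0.8, -0.1])     # empirical moments from the text
sigma_hat = np.array([0.1, 0.2, 0.05])  # their standard errors

# Hypothetical model-implied moment function h(theta); the notebook defines its own
def h(theta):
    theta1, theta2 = theta
    return np.array([theta1, theta1 + theta2, theta2])

W = np.diag(1 / sigma_hat**2)           # illustrative diagonal weight matrix

def objective(theta):
    resid = mu_hat - h(theta)
    return resid @ W @ resid

theta_hat = minimize(objective, x0=np.zeros(2)).x  # numerical argmin
print(theta_hat)
```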
<h2 id="Define-the-model">Define the model<a class="anchor-link" href="#Define-the-model">&#182;</a></h2><p>We first import relevant packages and define the model and data.</p>

</div>
@@ -14119,12 +14120,12 @@ <h2 id="Test-of-parameter-restrictions">Test of parameter restrictions<a class="
<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
<pre> pcost dcost gap pres dres k/t
0: 2.0000e+00 2.0000e+00 1e+00 9e-02 0e+00 1e+00
- 1: 2.8317e+00 2.7762e+00 4e-01 3e-02 2e-17 3e-01
- 2: 2.8651e+00 2.8553e+00 6e-02 4e-03 5e-17 4e-02
- 3: 2.8870e+00 2.8845e+00 1e-02 6e-04 7e-17 5e-03
- 4: 2.8889e+00 2.8888e+00 4e-04 2e-05 4e-18 2e-04
- 5: 2.8889e+00 2.8889e+00 9e-06 6e-07 2e-17 4e-06
- 6: 2.8889e+00 2.8889e+00 2e-07 1e-08 1e-17 9e-08
+ 1: 2.8317e+00 2.7762e+00 4e-01 3e-02 8e-18 3e-01
+ 2: 2.8651e+00 2.8553e+00 6e-02 4e-03 1e-17 4e-02
+ 3: 2.8870e+00 2.8845e+00 1e-02 6e-04 4e-17 5e-03
+ 4: 2.8889e+00 2.8888e+00 4e-04 2e-05 3e-17 2e-04
+ 5: 2.8889e+00 2.8889e+00 9e-06 6e-07 8e-18 4e-06
+ 6: 2.8889e+00 2.8889e+00 2e-07 1e-08 7e-18 9e-08
Optimal solution found.

t-statistics for testing individual parameters
@@ -14178,16 +14179,16 @@ <h2 id="Test-of-parameter-restrictions">Test of parameter restrictions<a class="
<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
<pre> pcost dcost gap pres dres k/t
0: 2.0000e+00 2.0000e+00 1e+00 9e-02 0e+00 1e+00
- 1: 2.8317e+00 2.7762e+00 4e-01 3e-02 2e-17 3e-01
- 2: 2.8651e+00 2.8553e+00 6e-02 4e-03 5e-17 4e-02
- 3: 2.8870e+00 2.8845e+00 1e-02 6e-04 7e-17 5e-03
- 4: 2.8889e+00 2.8888e+00 4e-04 2e-05 4e-18 2e-04
- 5: 2.8889e+00 2.8889e+00 9e-06 6e-07 2e-17 4e-06
- 6: 2.8889e+00 2.8889e+00 2e-07 1e-08 1e-17 9e-08
+ 1: 2.8317e+00 2.7762e+00 4e-01 3e-02 8e-18 3e-01
+ 2: 2.8651e+00 2.8553e+00 6e-02 4e-03 1e-17 4e-02
+ 3: 2.8870e+00 2.8845e+00 1e-02 6e-04 4e-17 5e-03
+ 4: 2.8889e+00 2.8888e+00 4e-04 2e-05 3e-17 2e-04
+ 5: 2.8889e+00 2.8889e+00 9e-06 6e-07 8e-18 4e-06
+ 6: 2.8889e+00 2.8889e+00 2e-07 1e-08 7e-18 9e-08
Optimal solution found.

p-value of joint test
- 0.19901704910658857
+ 0.19901704910658835
</pre>
</div>
</div>
15 changes: 8 additions & 7 deletions docs/example_ngm.html
@@ -36,6 +36,7 @@
.highlight .m { color: var(--jp-mirror-editor-number-color) } /* Literal.Number */
.highlight .s { color: var(--jp-mirror-editor-string-color) } /* Literal.String */
.highlight .ow { color: var(--jp-mirror-editor-operator-color); font-weight: bold } /* Operator.Word */
+ .highlight .pm { color: var(--jp-mirror-editor-punctuation-color) } /* Punctuation.Marker */
.highlight .w { color: var(--jp-mirror-editor-variable-color) } /* Text.Whitespace */
.highlight .mb { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Bin */
.highlight .mf { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Float */
@@ -13972,7 +13973,7 @@
<div class="jp-Cell-inputWrapper"><div class="jp-InputPrompt jp-InputArea-prompt">
</div><div class="jp-RenderedHTMLCommon jp-RenderedMarkdown jp-MarkdownOutput " data-mime-type="text/markdown">
<h1 id="Standard-errors-for-calibrated-parameters:-Neoclassical-Growth-Model-example">Standard errors for calibrated parameters: Neoclassical Growth Model example<a class="anchor-link" href="#Standard-errors-for-calibrated-parameters:-Neoclassical-Growth-Model-example">&#182;</a></h1><p><em>We are grateful to Ben Moll for suggesting this example. Any errors are our own.</em></p>
- <p>In this notebook we will work through the basic logic of <a href="https://scholar.princeton.edu/mikkelpm/calibration">Cocci &amp; Plagborg-Møller (2021)</a> in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, arguably more realistic, empirical applications.</p>
+ <p>In this notebook we will work through the basic logic of <a href="https://arxiv.org/abs/2109.08109">Cocci &amp; Plagborg-Møller (2023)</a> in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, arguably more realistic, empirical applications.</p>
<h2 id="Model">Model<a class="anchor-link" href="#Model">&#182;</a></h2><p>We consider the simplest version of the NGM without population growth or technological growth. As explained in section 3.4 of <a href="https://perhuaman.files.wordpress.com/2014/06/macrotheory-dirk-krueger.pdf">Dirk Krueger's lecture notes</a> (note the difference in notation), this model implies three key steady-state equations:</p>
<ol>
<li><strong>Euler equation:</strong> $r = \rho$, where $\rho$ is the household discount rate and $r$ is the real interest rate.</li>
@@ -14012,7 +14013,7 @@ <h2 id="Limited-information-inference">Limited-information inference<a class="an
where
$$\hat{x}_1=\hat{x}_2=\widehat{\frac{K}{Y}},\quad \hat{x}_3=\left(\hat{r}+\widehat{\frac{I}{K}} \right).$$
Notice that this upper bound only depends on things that we know: the sample averages themselves and their individual standard errors (but not the correlations across moments).</p>
- <p>It's impossible to improve the bound without further knowledge of the correlation structure: The bound turns out to equal the actual standard error when the three sample averages are perfectly correlated with each other. This is proved in Lemma 1 in <a href="https://scholar.princeton.edu/mikkelpm/calibration">our paper</a>. For this reason, we refer to the standard error bound as the <em>worst-case standard error</em>.</p>
+ <p>It's impossible to improve the bound without further knowledge of the correlation structure: The bound turns out to equal the actual standard error when the three sample averages are perfectly correlated with each other. This is proved in Lemma 1 in <a href="https://arxiv.org/abs/2109.08109">our paper</a>. For this reason, we refer to the standard error bound as the <em>worst-case standard error</em>.</p>
<h2 id="Numerical-example">Numerical example<a class="anchor-link" href="#Numerical-example">&#182;</a></h2><p>Our software package makes it easy to calculate worst-case standard errors. As an illustration, suppose the sample averages (with standard errors in parentheses) equal
$$\hat{r}=0.02\;(0.002), \quad \widehat{\frac{I}{K}}=0.08\;(0.01), \quad \widehat{\frac{K}{Y}} = 3\;(0.1).$$
We define the model equations and data as follows. Let $\theta=(\rho,\delta,\alpha)$ and $\mu=(r,\frac{I}{K},\frac{K}{Y})$ denote the vectors of parameters and moments, respectively.</p>
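Before turning to the package output, the worst-case SE for $\hat{\alpha}$ can be reproduced by hand from the sensitivities $\hat{x}_1=\hat{x}_2=\widehat{K/Y}$ and $\hat{x}_3=\hat{r}+\widehat{I/K}$ given above. This is a back-of-the-envelope sketch that assumes the steady-state relation $\alpha=(r+I/K)\cdot K/Y$ for the capital share; the notebook's own model block (collapsed here) defines the equations it actually uses.

```python
import numpy as np

r, IK, KY = 0.02, 0.08, 3.0          # sample averages of r, I/K, K/Y
se = np.array([0.002, 0.01, 0.1])    # their standard errors

alpha_hat = (r + IK) * KY            # implied capital share: 0.3 (assumed relation)

# Delta-method sensitivities of alpha_hat w.r.t. (r, I/K, K/Y), as in the text
x = np.array([KY, KY, r + IK])

worst_case_se = np.abs(x) @ se       # 3*0.002 + 3*0.01 + 0.1*0.1 = 0.046
print(alpha_hat, worst_case_se)
```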
@@ -14146,10 +14147,10 @@ <h2 id="Over-identification-test">Over-identification test<a class="anchor-link"
<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
<pre> pcost dcost gap pres dres k/t
0: 1.0118e-29 1.0118e-29 1e-02 2e+00 0e+00 1e+00
- 1: -4.4391e-18 1.0118e-29 1e-04 2e-02 1e-17 1e-02
- 2: -8.6524e-20 1.0118e-29 1e-06 2e-04 2e-18 1e-04
- 3: -6.3766e-22 1.0118e-29 1e-08 2e-06 7e-18 1e-06
- 4: -5.9271e-24 1.0118e-29 1e-10 2e-08 2e-18 1e-08
+ 1: -4.4391e-18 1.0118e-29 1e-04 2e-02 7e-18 1e-02
+ 2: -7.3161e-20 1.0118e-29 1e-06 2e-04 3e-18 1e-04
+ 3: -8.8737e-22 1.0118e-29 1e-08 2e-06 5e-18 1e-06
+ 4: -9.2425e-24 1.0118e-29 1e-10 2e-08 7e-18 1e-08
Optimal solution found.
Error in matching non-targeted moment
-0.09999999999999998
@@ -14170,7 +14171,7 @@
</div><div class="jp-RenderedHTMLCommon jp-RenderedMarkdown jp-MarkdownOutput " data-mime-type="text/markdown">
<p>Since the absolute value of the t-statistic lies between 1.64 and 1.96, we can reject the validity of the model at the 10% significance level, but not at the 5% level.</p>
<p>The over-identification test checks how different the estimate $\hat{\alpha}$ would have been if we had instead computed it as the sample average capital share of income $1-0.6=0.4$. Is the difference in parameter estimates between the two calibration strategies too large to be explained by statistical noise?</p>
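As a quick sanity check on the printed output above, the error of magnitude roughly $0.1$ in matching the non-targeted moment corresponds to the gap between the two implied capital shares, $0.3$ versus $0.4$ (using the same assumed steady-state relation as in the sketch above):

```python
alpha_targeted = (0.02 + 0.08) * 3.0  # capital share implied by the targeted moments: 0.3
alpha_labor = 1.0 - 0.6               # capital share implied by the labor-share moment: 0.4
print(alpha_targeted - alpha_labor)   # about -0.1, matching the magnitude printed above
```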
<h2 id="Other-features-in-the-paper">Other features in the paper<a class="anchor-link" href="#Other-features-in-the-paper">&#182;</a></h2><p>The above NGM example is very simple and stylized. In <a href="https://scholar.princeton.edu/mikkelpm/calibration">our paper</a> we extend the basic ideas along various dimensions that are relevant for applied research. For example:</p>
<h2 id="Other-features-in-the-paper">Other features in the paper<a class="anchor-link" href="#Other-features-in-the-paper">&#182;</a></h2><p>The above NGM example is very simple and stylized. In <a href="https://arxiv.org/abs/2109.08109">our paper</a> we extend the basic ideas along various dimensions that are relevant for applied research. For example:</p>
<ul>
<li>The matched moments need not be simple sample averages, but could be regression coefficients, quantiles, etc. The moments need not be related to steady-state quantities, but could involve essentially any feature of the available data.</li>
<li>The calibration (method-of-moments) estimator need not be available in closed form (usually one would obtain it by numerical optimization).</li>
4 changes: 2 additions & 2 deletions example.ipynb
@@ -15,7 +15,7 @@
"We will estimate the parameters $(\\theta_1,\\theta_2)$ by minimum distance, matching the model-implied moments $h(\\theta_1,\\theta_2)$ to the empirical moments:\n",
"$$\\hat{\\theta} = \\text{argmin}_{\\theta}\\; (\\hat{\\mu}-h(\\theta))'\\hat{W}(\\hat{\\mu}-h(\\theta)).$$\n",
"\n",
"To compute standard errors for the estimated parameters, test hypotheses, and compute the efficient weight matrix $\\hat{W}$, we use the formulas in [Cocci & Plagborg-Møller (2021)](https://scholar.princeton.edu/mikkelpm/calibration), which do not require knowledge of the correlation structure of the empirical moments.\n",
"To compute standard errors for the estimated parameters, test hypotheses, and compute the efficient weight matrix $\\hat{W}$, we use the formulas in [Cocci & Plagborg-Møller (2023)](https://arxiv.org/abs/2109.08109), which do not require knowledge of the correlation structure of the empirical moments.\n",
"\n",
"## Define the model\n",
"We first import relevant packages and define the model and data."
@@ -374,7 +374,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.11"
"version": "3.8.18"
}
},
"nbformat": 4,
8 changes: 4 additions & 4 deletions example_ngm.ipynb
@@ -10,7 +10,7 @@
"*We are grateful to Ben Moll for suggesting this example. Any errors are our own.*\n",
"\n",
"\n",
"In this notebook we will work through the basic logic of [Cocci & Plagborg-Møller (2021)](https://scholar.princeton.edu/mikkelpm/calibration) in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, arguably more realistic, empirical applications.\n",
"In this notebook we will work through the basic logic of [Cocci & Plagborg-Møller (2023)](https://arxiv.org/abs/2109.08109) in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, arguably more realistic, empirical applications.\n",
"\n",
"\n",
"## Model\n",
@@ -68,7 +68,7 @@
"$$\\hat{x}_1=\\hat{x}_2=\\widehat{\\frac{K}{Y}},\\quad \\hat{x}_3=\\left(\\hat{r}+\\widehat{\\frac{I}{K}} \\right).$$\n",
"Notice that this upper bound only depends on things that we know: the sample averages themselves and their individual standard errors (but not the correlations across moments).\n",
"\n",
"It's impossible to improve the bound without further knowledge of the correlation structure: The bound turns out to equal the actual standard error when the three sample averages are perfectly correlated with each other. This is proved in Lemma 1 in [our paper](https://scholar.princeton.edu/mikkelpm/calibration). For this reason, we refer to the standard error bound as the *worst-case standard error*.\n",
"It's impossible to improve the bound without further knowledge of the correlation structure: The bound turns out to equal the actual standard error when the three sample averages are perfectly correlated with each other. This is proved in Lemma 1 in [our paper](https://arxiv.org/abs/2109.08109). For this reason, we refer to the standard error bound as the *worst-case standard error*.\n",
"\n",
"\n",
"## Numerical example\n",
@@ -176,7 +176,7 @@
"\n",
"## Other features in the paper\n",
"\n",
"The above NGM example is very simple and stylized. In [our paper](https://scholar.princeton.edu/mikkelpm/calibration) we extend the basic ideas along various dimensions that are relevant for applied research. For example:\n",
"The above NGM example is very simple and stylized. In [our paper](https://arxiv.org/abs/2109.08109) we extend the basic ideas along various dimensions that are relevant for applied research. For example:\n",
"- The matched moments need not be simple sample averages, but could be regression coefficients, quantiles, etc. The moments need not be related to steady-state quantities, but could involve essentially any feature of the available data.\n",
"- The calibration (method-of-moments) estimator need not be available in closed form (usually one would obtain it by numerical optimization).\n",
"- If some, but not all, of the correlations between the empirical moments are known, this can be exploited to sharpen inference.\n",
@@ -203,7 +203,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.11"
"version": "3.8.18"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion stderr_calibration/worstcase_se.py
@@ -18,7 +18,7 @@
Reference:
Cocci, Matthew D. & Mikkel Plagborg-Moller, "Standard Errors for Calibrated Parameters"
- https://scholar.princeton.edu/mikkelpm/calibration
+ https://arxiv.org/abs/2109.08109
"""

