Functions to run GMM (generalized method of moments) or CMD (classical minimum distance) estimation. (Preliminary/In progress.)
Note: you need to install a slightly customized version of LsqFit.jl from https://github.com/Gkreindler/LsqFit.jl. In the Julia package REPL, run:
- `]remove LsqFit`
- `]add https://github.com/Gkreindler/LsqFit.jl`
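Equivalently, the same two steps from a script via the standard Pkg API (shown here for convenience; not from the package docs):

```julia
using Pkg
Pkg.rm("LsqFit")                                        # remove the registered version (if installed)
Pkg.add(url="https://github.com/Gkreindler/LsqFit.jl")  # add the customized fork
```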
The user provides two objects:
- A function `moments(theta, data)` that returns an `N`×`M` matrix, where `theta` is the parameter vector, `N` is the number of observations, and `M` is the number of moments.
- An object `data`. This can be anything; by default it is a `Dict{String, Any}` whose values are vectors or matrices with first dimension of size `N`. This convention ensures that the "slow" bootstrap works automatically, because observations can be resampled along the first dimension (see the sketch below).
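To make the interface concrete, here is a minimal hand-rolled sketch of a `moments`/`data` pair for a linear IV model, plus a row-resampling helper of the kind the "slow" bootstrap relies on. Everything here (the model, the variable names, `resample_data`) is illustrative and not part of the package:

```julia
using Random

# Hypothetical setup: linear IV model y = x*theta + u with one instrument z,
# giving the single (M = 1) moment condition E[z * (y - x*theta)] = 0.
Random.seed!(1)
N = 1_000
x = randn(N)
z = x .+ 0.5 .* randn(N)
y = 1.5 .* x .+ randn(N)        # true theta = 1.5

data = Dict{String, Any}("y" => y, "x" => x, "z" => z)

# Returns the N x M matrix of per-observation moment contributions.
function moments(theta, data)
    resid = data["y"] .- data["x"] .* theta[1]
    return reshape(data["z"] .* resid, :, 1)
end

# Because every value in `data` has first dimension N, one "slow" bootstrap
# draw is just a resample of rows, using the same indices for every object.
function resample_data(data, rng=Random.default_rng())
    n = size(first(values(data)), 1)
    idx = rand(rng, 1:n, n)
    return Dict{String, Any}(k => (v isa AbstractVector ? v[idx] : v[idx, :])
                             for (k, v) in data)
end
```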
See examples in `example_serial.jl` and `example_distributed.jl`.
- (DONE) run GMM (or classical minimum distance) one-step, or two-step with optimal weight matrix (see the sketch after this list)
- (DONE) parameter box constraints
- (DONE) optimize using multiple initial conditions (serial or embarrassingly parallel with Distributed.jl)
- (DONE) asymptotic variance-covariance matrix
- (DONE) “quick” bootstrap
- (DONE) “slow” bootstrap (serial or embarrassingly parallel)
- (DONE) output estimation results as text
- (Pending) output estimation results as LaTeX
- (Pending) save results to files; restart estimation from incomplete results (e.g. when bootstrap run #63 fails after many hours of running!)
- (Pending) compute the sensitivity measure of Andrews, Gentzkow, and Shapiro (2017)
- (Pending) (using user-provided function to generate data from model) Monte Carlo simulation to compute size and power.
- (Pending) (using user-provided function to generate data from model) Monte Carlo simulation of estimation finite sample properties (simulate data for random parameter values ⇒ run GMM ⇒ compare estimated parameters with underlying true parameters)
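As a reference for the two-step item above, the second-step optimal weight matrix can be sketched as follows. `optimal_weight_matrix` is a hypothetical helper, not this package's API; `moments` is the user function from the sketch above:

```julia
using Statistics, LinearAlgebra

# Second-step weight matrix: W_opt = S^{-1}, where S is the sample covariance
# of the moment contributions evaluated at the first-step estimate theta_hat1
# (the first step typically uses the identity weight matrix).
function optimal_weight_matrix(theta_hat1, data)
    g = moments(theta_hat1, data)     # N x M
    g = g .- mean(g, dims=1)          # centering; a common (though optional) choice
    S = (g' * g) / size(g, 1)
    return inv(Symmetric(S))
end
```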
- the optimizer is a slightly modified version of LsqFit.jl, because the GMM/CMD objective can be written as a sum of squares using the Cholesky decomposition of the weighting matrix (see the sketch below). In principle, other optimizers can be used.
- in optimization, the gradient is currently computed using finite differences (automatic differentiation could be added)
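For concreteness, the Cholesky reformulation works as follows: writing the weighting matrix as W = C'C, the objective gbar(theta)' W gbar(theta) equals the sum of squares of the residual vector C * gbar(theta), which is exactly the form a least-squares solver expects. A minimal sketch (names illustrative):

```julia
using Statistics, LinearAlgebra

# r(theta) such that sum(abs2, r(theta)) equals the GMM objective
# gbar(theta)' * W * gbar(theta), where gbar is the column mean of moments().
function gmm_residuals(theta, data, W)
    gbar = vec(mean(moments(theta, data), dims=1))  # length-M vector
    C = cholesky(Symmetric(W)).U                    # W = C'C
    return C * gbar                                 # least-squares residual vector
end
```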