marginal effects in discrete choice do not have standard errors defined #393

Closed
jseabold opened this Issue Jul 17, 2012 · 3 comments


@jseabold (Member)

No description provided.

@jseabold (Member)

I figured this out, but I don't have time to clean it up and commit right now. The asymptotic variance-covariance matrix of the marginal effects is given by

[d_margeff / d_params].dot(V).dot((d_margeff / d_params).T)
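That is, the standard delta method: Var(margeff) = J V J', where J = d_margeff / d_params is the Jacobian of the marginal effects with respect to the model parameters and V is the estimated covariance matrix of the parameter estimates.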

We can either program the derivatives by hand, which is a bit tricky when X isn't a vector (and is where I got stuck in the first place), or use numerical differentiation. The numerical derivatives agree with Stata to at least 8 decimal places, which is expected since these functions are pretty simple. Something like the following is general for all the discrete choice models (evaluation at the means or medians, or at any other vector X, is in the commented-out lines), though I think it needs to be done equation by equation for MNLogit.

import numpy as np
import statsmodels.api as sm
from statsmodels.sandbox.regression import numdiff

data = sm.datasets.spector.load()
data.exog = sm.add_constant(data.exog)

res1 = sm.Probit(data.endog, data.exog).fit(method="newton", disp=0)

def dmargdparams(params, exog):
    # marginal effects dF/dX at exog, as a function of the parameters
    return res1.model._derivative_exog(params, exog).squeeze()

#X = np.median(res1.model.exog, axis=0)  # at the medians
#X = np.mean(res1.model.exog, axis=0)  # at the means
X = res1.model.exog  # overall (average marginal effects)
params = res1.params
V = res1.cov_params()  # covariance of the parameter estimates
#mat = numdiff.approx_fprime1(params, dmargdparams, args=(X.T,), centered=True)
mat = numdiff.approx_fprime_cs(params, dmargdparams, args=(X,))  # much more accurate
mat = np.mean(mat, axis=1)  # average over observations if doing overall
margeff_se = np.sqrt(np.diag(mat.dot(V).dot(mat.T)))  # delta-method standard errors
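
For the simple probit case the hand-coded Jacobian is also manageable and can serve as a check on the numerical version. A rough sketch (the helper probit_margeff_jacobian is only illustrative, not existing statsmodels API; it reuses np, res1, and V from above), using dME_j/db_k = pdf(x'b) * (1{j=k} - (x'b) * b_j * x_k), averaged over the observations:

from scipy import stats

def probit_margeff_jacobian(params, exog):
    # Jacobian of the overall (average) probit marginal effects w.r.t. params:
    # d[ mean_i pdf(x_i'b) * b_j ] / d b_k
    #   = mean_i pdf(x_i'b) * (1{j=k} - (x_i'b) * b_j * x_ik)
    z = exog.dot(params)
    pdf = stats.norm.pdf(z)
    k = len(params)
    jac = np.zeros((k, k))
    for zi, pi, xi in zip(z, pdf, exog):
        jac += pi * (np.eye(k) - zi * np.outer(params, xi))
    return jac / exog.shape[0]

jac = probit_margeff_jacobian(res1.params, res1.model.exog)
margeff_se_by_hand = np.sqrt(np.diag(jac.dot(V).dot(jac.T)))  # should match margeff_se above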
@josef-pkt closed this Sep 12, 2013