Joint one-sided p-value #36

Open · dvmasterov opened this issue Aug 4, 2020 · 4 comments
@dvmasterov

Would it be possible to add a joint standardized one-sided p-value for H0: diff >= 0 against Ha: diff < 0? Alternatively, could you specify how one might calculate this?

@bquistorff (Owner)

The e(pval_joint_post_t) result is the standardized joint test that is implemented. Since it is based on the RMSPE, the test already compares non-negative values and so is inherently one-sided. One could envision other joint tests, such as an "average post-period effect", for which the one- and two-sided versions would differ. That can be done on the user side by collapsing all post-period outcomes to a single average for each unit and then running synth, as in the sketch below.
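
A minimal sketch of that user-side collapse, in case it helps. The dataset, the variable names y/unit/year, the 2005 treatment date, and trunit(1) are all hypothetical placeholders here, not package conventions:

```stata
* Sketch only: average all post-period outcomes into a single post "period"
* per unit, so the joint test reduces to a test on one number per unit.
use paneldata, clear                     // hypothetical dataset
gen byte post = year >= 2005             // flag post-treatment years
egen y_postavg = mean(cond(post, y, .)), by(unit)
replace y = y_postavg if post            // every post year now holds the average
keep if !post | year == 2005             // keep only one post "period"
tsset unit year
synth y y(2000) y(2002) y(2004), trunit(1) trperiod(2005)
```

With a single post period, a one-sided comparison of the treated effect against the placebo effects then becomes straightforward.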

Feel free to re-open if I didn't understand your question.

@dvmasterov (Author)

Thanks for being so patient with my query.

Let me try to explain better. The reason this is important for me is that I have a primary metric that is a short-run outcome, for which a two-sided test makes sense, and I see strong positive results there. I also have a secondary surrogate metric where I want to rule out that the policy change made things worse; I don't care how much better it is, since the primary effect is positive. When I use a two-sided joint p-value, I often reject the null, yet the treated unit lies on top of all the FP (false placebo) effects, with very low individual p-values. This leads me to think that the two-sided joint p-value is for the wrong hypothesis.

To make things simple, let's assume there are only two post periods. As I understand the method, the joint p-value is for the null H0: effect_1 = 0 and effect_2 = 0 against the two-sided alternative that effect_1 != 0 or effect_2 != 0. The e(pval_joint_post_std) result is the proportion of control units whose post-treatment RMSPE, standardized by the corresponding pre-treatment RMSPE, is at least as great as the treated unit's. If the treatment effect is very large in absolute value in either of the two periods (regardless of its sign), you might reject the joint null because the statistic lands in the right tail of the FP scaled-RMSPE distribution. If the treatment effect is a bit smaller but present in both periods, you might also reject, since the two periods' contributions to the RMSPE add up.
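
To fix ideas, here is a minimal Mata sketch of that proportion under this reading (my paraphrase, not the package's code; the vectors post_rmspe and pre_rmspe are hypothetical placeholders, with the treated unit in row 1):

```stata
mata:
// Hypothetical inputs: (J+1) x 1 vectors of post- and pre-treatment RMSPEs,
// treated unit in row 1, the J placebo (control) units below it.
ratio    = post_rmspe :/ pre_rmspe          // standardized statistic per unit
treated  = ratio[1]
placebos = ratio[2::rows(ratio)]
pval     = mean(placebos :>= treated)       // share at least as extreme as treated
pval
end
```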

With a one-sided superiority test, the null becomes H0: effect_1 >= 0 and effect_2 >= 0 against the one-sided alternative that effect_1 < 0 or effect_2 < 0. To be conservative, we focus on the p-value at the most extreme point of the null hypothesis, the one closest to the alternative parameter space, which is effect_t = 0. So we run the same false placebo tests as before, except that a big negative effect still counts against the null while a big positive effect is ignored (even if both produce the same RMSPE). Or perhaps the RMSPE inherits the sign of the effect (even if that's mathematically dubious). A consistent pattern of modestly negative results might also lead you to reject the null. A sketch of one possible construction follows.
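
A hedged sketch of one way to code that, using the signed average post-period effect as the joint statistic (eff and pre_rmspe are hypothetical placeholders; this is my construction, not something synth_runner reports):

```stata
mata:
// Hypothetical inputs: eff is a (J+1) x T_post matrix of post-period effects
// (treated unit in row 1); pre_rmspe is the matching (J+1) x 1 pre-fit vector.
stat = (rowsum(eff) :/ cols(eff)) :/ pre_rmspe      // mean post effect, standardized
pval_left = mean(stat[2::rows(stat)] :<= stat[1])   // one-sided: Ha is diff < 0
pval_left
end
```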

Am I missing something here? Or is this a reasonable approach to constructing a joint one-sided p-value for the superiority null?

@dvmasterov (Author)

I believe I cannot re-open my own issue if a repo collaborator closed it.

@bquistorff (Owner)

I see now what you are thinking about. Yes, I think you could construct such a test from the placebo distribution. There are likely many different hypotheses one might want to test, so the functionality should probably stay on the user side. I think the easiest way to do this would be to allow synth_runner to leave the placebo distribution (the Mata variable do_effects_p) in memory; it is normally cleaned up at the end by the program cleanup_mata. I probably won't be able to push code that makes this a standard option, but in the meantime you should be able to just comment out this line and then use the Mata variable yourself, along the lines sketched below.
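
For concreteness, a sketch of what that user-side step might look like once the cleanup line is commented out (the synth_runner arguments are placeholders, and the exact layout and accessibility of do_effects_p are assumptions worth verifying by inspection):

```stata
* Placeholder call; substitute your own outcome, predictors, and options.
synth_runner y x1 x2, trunit(1) trperiod(2005)
mata:
// With the cleanup line commented out, do_effects_p should remain in memory.
// Print its dimensions first to confirm the layout and locate the treated unit.
eff = do_effects_p
rows(eff), cols(eff)
// ...then feed it into a one-sided joint statistic as sketched earlier.
end
```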

I'll re-open this as an issue to make this functionality standard.

bquistorff reopened this Aug 6, 2020