Which kind of inference method is preferable? #76
The nonparametric bootstrap is infeasible when the number of treated units
is small -- they simply would not appear in some of the bootstrapped
samples.
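A quick numeric illustration of the point above (a hedged sketch, not code from the package; the unit counts are hypothetical): with 50 units of which only 2 are treated, a unit-level nonparametric bootstrap draw contains no treated units with probability ((50-2)/50)^50, which is about 0.13.

```python
import random

# Hypothetical setup: 50 units, only 2 of them treated.
random.seed(0)
n_units, n_treated = 50, 2
treated = set(range(n_treated))  # units 0 and 1 are treated

# Nonparametric bootstrap: resample whole units with replacement and count
# the draws that contain no treated unit at all.
n_boot = 10_000
empty = sum(
    1 for _ in range(n_boot)
    if not treated.intersection(random.choices(range(n_units), k=n_units))
)
share_without_treated = empty / n_boot

# The empirical share should be close to the analytic value
# ((n_units - n_treated) / n_units) ** n_units, roughly 0.13 here.
print(share_without_treated)
```

Roughly one in eight bootstrap samples would contain no treated unit, so the treatment effect is simply not estimable in those draws.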
On Fri, Feb 4, 2022 at 9:44 AM ccepeda10 wrote:
Hello there,
I was wondering about the criteria for preferring one inference method
over another (parametric/nonparametric bootstrap or jackknife). The package
recommends using parametric bootstraps or jackknife for small samples, but
there isn't much guidance for bigger samples. I also read the paper, but I
couldn't find much information about this either, except from a brief
comment about the validity of parametric bootstrapping under some
conditions (which are not specified). I know this is a niche subject, but
is there any reference about this matter?
Thanks
--
Yiqing Xu
Assistant Professor
Department of Political Science
Stanford University
https://yiqingxu.org/
Thanks for the answer!
Inference using the nonparametric bootstrap is unconditional on the factors and loadings and is thus preferable in my view. It is also easier to interpret.
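A minimal sketch of why the unit-level bootstrap is unconditional (illustrative only; the per-unit effect estimates below are made-up numbers, and real usage would re-estimate the full model, including factors and loadings, on each resample):

```python
import random
import statistics

random.seed(1)
# Hypothetical per-unit treatment-effect estimates.
unit_effects = [0.8, 1.1, 1.3, 0.9, 1.0, 1.2, 0.7, 1.4]

# Resample whole units with replacement; everything downstream of the draw
# is recomputed, so nothing is held fixed at its estimated value.
boot_means = []
for _ in range(2000):
    resample = random.choices(unit_effects, k=len(unit_effects))
    boot_means.append(statistics.mean(resample))

# Percentile confidence interval from the bootstrap distribution
# (approximately the 2.5% and 97.5% quantiles).
boot_means.sort()
ci = (boot_means[50], boot_means[1949])
print(ci)
```

Because whole units are redrawn and the quantity of interest is recomputed per draw, the resulting interval reflects sampling variability in the units themselves rather than being conditional on one set of estimated factors and loadings.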
On Fri, Feb 4, 2022 at 9:50 AM ccepeda10 wrote:
Thanks for the answer!
What about bigger samples, though? Which one would be preferable? Or is it relatively indifferent?
Thanks a lot for the help and the prompt answer! I'm going to use the nonparametric bootstrap then.