Added anova.afex_aov() and nice() option to adjust F-tests for multiple comparisons #3
Conversation
Hi Frederik,

Perhaps @EJWagenmakers can chip in as well, but as far as I see it, I would suggest applying the correction indiscriminately to all reported p-values, independently of the actual design (which would also be somewhat more in line with the paper).

Cheers,
Yes, I think the issue holds regardless of the design.

Eric-Jan Wagenmakers
Web: ejwagenmakers.com
“Man follows only phantoms.”
You are, of course, correct that the adjustment does not depend on the presence of within-subject factors, and I'm not arguing that it does. This is a technical issue with the current implementation of `summary.afex_aov()`.

In the current implementation, regardless of the research design, the adjustment is applied in `anova.afex_aov()` and `nice()`. In both cases, possible sphericity corrections are applied before p-values are adjusted according to the specified `p.adjust.method`. The technical difficulty arises with `summary.afex_aov()`, which for within or mixed designs prints the full `univariate` output, and this is precisely where the difficulty with implementing adjusted p-values arises. As is detailed nicely in the paper, for some adjustment methods the resulting p-value is contingent on the other p-values in the family. It is thus not possible to provide adjusted p-values if we don't know which p-values are included in the set (uncorrected, GG, or HF). As I see it, this leaves two options:
I chose the first option because I did not want to add more elaborate changes to the design of afex in the same pull request, not because it's the best option. I've also put a little thought into how the design of the …
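To illustrate the family-dependence point above with base R's `p.adjust()` (the p-values here are made up for illustration): the Holm-adjusted value for a given test changes depending on which other p-values are included in the family, so the family must be fixed before adjustment is possible.

```r
p_family <- c(0.01, 0.02, 0.04)

# Holm adjustment over the full family of three tests:
p.adjust(p_family, method = "holm")
# -> 0.03 0.04 0.04

# The same first p-value adjusted in a family of one:
p.adjust(p_family[1], method = "holm")
# -> 0.01
```

The first test's adjusted p-value is 0.03 in one family and 0.01 in the other, which is why a choice between the uncorrected, GG, or HF sets has to be made first.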
Hi Frederik,
Hi Henrik,

I have added an option `p.adjust.method` to `anova.afex_aov()` and `nice()` that allows the user to adjust the results of ANOVA F-tests for multiple comparisons as per:

Cramer, A. O. J., van Ravenzwaaij, D., Matzke, D., Steingroever, H., Wetzels, R., Grasman, R. P. P. P., … Wagenmakers, E.-J. (2015). Hidden multiplicity in exploratory multiway ANOVA: Prevalence and remedies. Psychonomic Bulletin & Review, 1–8. http://doi.org/10.3758/s13423-015-0913-5 (Open access)

The option can be passed in the `anova_table` list when calling `aov_car` and is added as an attribute named `p.adjust.method` to `anova_table` objects produced by `anova.afex_aov()` (and thus also to `afex_aov$anova_table`). All model objects such as `anova`, `Anova.mlm`, `aov`, `lm`, and the output when `return = "univariate"` are not affected. `anova.afex_aov()` and `nice()` default to the `p.adjust.method` specified in the call to `aov_car()` (no adjustment if unspecified), but it can be changed by passing a new value.

When calling `summary.afex_aov()`, the displayed p-values are (un-)adjusted as specified in the call to `aov_car()` if it is a purely between-subjects ANOVA (of class `anova`). This is not easily achieved for within or mixed designs (Should we correct for the intercept F-test? Which sphericity correction should we use?). Thus, the method returns the untouched `univariate` object (as it previously did), but if the user asked for an adjustment when calling `aov_car()`, a message notifies them that the displayed results are not adjusted.

I have also updated the documentation, added a unit test to `test-aov_car-basic.R`, and tested the new functionality more extensively by hand. I think it should give you no trouble. Let me know what you think.

Best regards,
Frederik
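A usage sketch of the option described in this PR (assuming the `obk.long` example dataset that ships with afex; the argument spellings follow the description above):

```r
library(afex)

# Request Holm-adjusted p-values for the family of ANOVA F-tests
# via the anova_table list passed to aov_car():
a1 <- aov_car(value ~ treatment * gender + Error(id/(phase * hour)),
              data = obk.long,
              anova_table = list(p.adjust.method = "holm"))

# anova.afex_aov() and nice() default to the method chosen above,
# but a new value can be passed to override it:
nice(a1, p.adjust.method = "bonferroni")
anova(a1, p.adjust.method = "none")  # back to unadjusted p-values
```

Since `p.adjust.method` is stored as an attribute on `afex_aov$anova_table`, later calls can recover which adjustment (if any) was applied.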