@@ -472,12 +472,12 @@ 2017-05-25
##
## Linear Hypotheses:
## Estimate Std. Error t value Pr(>|t|)
-## MP - MT == 0 10.831 4.612 2.349 0.068859 .
-## MP - AC == 0 18.100 4.612 3.925 0.000777 ***
+## MP - MT == 0 10.831 4.612 2.349 0.068834 .
+## MP - AC == 0 18.100 4.612 3.925 0.000885 ***
## MP - DA == 0 4.556 4.612 0.988 0.325273
## MT - AC == 0 7.269 4.612 1.576 0.281590
## MT - DA == 0 -6.275 4.612 -1.361 0.296932
-## AC - DA == 0 -13.544 4.612 -2.937 0.017800 *
+## AC - DA == 0 -13.544 4.612 -2.937 0.017607 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- free method)
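The adjusted p values above come from multcomp's "free" method. As a sketch of how this adjustment is requested (here `m` is an illustrative name for an lsmeans object holding these cell means, not an object defined in this document):

```r
library(lsmeans)   # pairs() and as.glht() for lsmeans objects
library(multcomp)  # adjusted() for multiplicity control

# Pairwise comparisons of the cell means in m, with p values
# adjusted via the "free" method, which exploits logical
# dependencies among the hypotheses.
summary(as.glht(pairs(m)), test = adjusted("free"))
```

The "free" method tends to be less conservative than Bonferroni-style corrections while still controlling the familywise error rate.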
@@ -610,7 +610,7 @@ 2017-05-25
Plotting
Function lsmip from package lsmeans can be used for plotting the data directly from an afex_aov object. As stated initially, we are interested in the three-way interaction of inference, plausibility, and instruction. A plot of this interaction could be the following:
lsmip(a1, instruction ~ inference|plausibility)
-
+
@@ -649,7 +649,7 @@ 2017-05-25
at = 1:4,
labels = c("pl:v", "im:v", "pl:i", "im:i")
)))
-
+
We see the critical predicted cross-over interaction in the left panel of those two graphs. For valid but implausible problems (im:v), deductive responses are larger than probabilistic responses. The opposite is true for invalid but plausible problems (pl:i). We now test these differences at each of the four x-axis ticks in each plot using custom contrasts (diff_1 to diff_4). Furthermore, we test for a validity effect and a plausibility effect in both conditions.
(m4 <- lsmeans(a2, ~instruction+plausibility+validity|what))
## what = affirmation:
@@ -718,10 +718,10 @@ 2017-05-25
##
## Linear Hypotheses:
## Estimate Std. Error t value Pr(>|t|)
-## diff_1 == 0 4.175 8.500 0.491 0.62387
-## diff_2 == 0 34.925 8.500 4.109 0.00023 ***
-## diff_3 == 0 -23.600 8.500 -2.777 0.01733 *
-## diff_4 == 0 -8.100 8.500 -0.953 0.56474
+## diff_1 == 0 4.175 8.500 0.491 0.623874
+## diff_2 == 0 34.925 8.500 4.109 0.000263 ***
+## diff_3 == 0 -23.600 8.500 -2.777 0.017272 *
+## diff_4 == 0 -8.100 8.500 -0.953 0.564739
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- free method)
@@ -733,7 +733,7 @@ 2017-05-25
##
## Linear Hypotheses:
## Estimate Std. Error t value Pr(>|t|)
-## diff_1 == 0 -22.425 8.500 -2.638 0.0331 *
+## diff_1 == 0 -22.425 8.500 -2.638 0.0332 *
## diff_2 == 0 -2.700 8.500 -0.318 0.9554
## diff_3 == 0 -0.925 8.500 -0.109 0.9554
## diff_4 == 0 -3.650 8.500 -0.429 0.9554
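The custom contrasts diff_1 to diff_4 each test the deductive-minus-probabilistic difference within one plausibility-validity cell. A sketch of how such contrasts could be set up, assuming the eight rows of m4 (per `what` level) are ordered with instruction varying fastest within plausibility and validity; verify the actual row order of m4 before using these coefficient vectors:

```r
library(lsmeans)
library(multcomp)

# Illustrative coefficient vectors over the 8 cells of m4;
# each contrast is deductive minus probabilistic within one cell.
diffs <- list(
  diff_1 = c(1, -1, 0, 0, 0, 0, 0, 0),  # plausible & valid     (pl:v)
  diff_2 = c(0, 0, 1, -1, 0, 0, 0, 0),  # implausible & valid   (im:v)
  diff_3 = c(0, 0, 0, 0, 1, -1, 0, 0),  # plausible & invalid   (pl:i)
  diff_4 = c(0, 0, 0, 0, 0, 0, 1, -1)   # implausible & invalid (im:i)
)
summary(as.glht(contrast(m4, diffs)), test = adjusted("free"))
```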
diff --git a/inst/doc/afex_mixed_example.html b/inst/doc/afex_mixed_example.html
index f250c0d..a0a9f36 100644
--- a/inst/doc/afex_mixed_example.html
+++ b/inst/doc/afex_mixed_example.html
@@ -164,7 +164,7 @@ 2017-05-25
fhch_long <- fhch %>% gather("rt_type", "rt", rt, log_rt)
histogram(~rt|rt_type, fhch_long, breaks = "Scott", type = "density",
scale = list(x = list(relation = "free")))
-
+
Descriptive Analysis
@@ -181,7 +181,7 @@ 2017-05-25
panel.points(tmp$x, tmp$y, pch = 13, cex =1.5)
}) +
bwplot(mean ~ density:frequency|task+stimulus, agg_p, pch="|", do.out = FALSE)
-
+
Now we plot the same data but aggregated across items:
agg_i <- fhch %>% group_by(item, task, stimulus, density, frequency) %>%
summarise(mean = mean(log_rt)) %>%
@@ -194,7 +194,7 @@ 2017-05-25
panel.points(tmp$x, tmp$y, pch = 13, cex =1.5)
}) +
bwplot(mean ~ density:frequency|task+stimulus, agg_i, pch="|", do.out = FALSE)
-
+
These two plots show a very similar pattern and suggest several things:
- Responses to nonwords appear slower than responses to words, at least for the naming task.