I am running diagnostic tests for a GLMM using the DHARMa package, and I did not detect any singularity, overdispersion, or zero inflation.
However, when I check the quantile residual plots using two different methods, a) and b), I obtain two different results.
a) simulationOutput <- simulateResiduals(fittedModel = m1)
plot(simulationOutput)
b) testQuantiles(simulationOutput)
With method a) no significant quantile deviation is detected (Fig 2, right plot), while it is detected with method b) (Fig 3). However, I was expecting a) and b) to give the same results, since the R documentation for testQuantiles {DHARMa} explains:
"the quantile test is automatically performed in
plot(simulationOutput)
plotResiduals(simulationOutput)"
I understand that method a) plots rank-transformed model predictions while b) doesn't, but I wouldn't expect this to affect the p-value. Am I correct?
I also understand that method a) automatically adds some noise to the residuals to maintain a uniform response (see the integerResponse parameter). Does method b) do the same? If it doesn't, could that be the reason for the different results?
The difference between the two methods is indeed that plot() rank-transforms the x axis by default, while testQuantiles() works by default on the raw predicted values.
This can indeed create differences in the fitted quantile regressions if the distribution of predicted values is far from uniform, as is obviously the case here. It seems that in your case there are a few very low predicted values around 0.01, which get more weight in the untransformed version, hence the significant quantile line.
The randomisation is standardised, i.e. the same for both plots, so the difference is really only the x axis.
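To see the two defaults side by side, the following sketch (assuming a fitted model m1 as in the question) runs the quantile regressions once on the rank-transformed predictions and once on the raw values, via the rank argument of plotResiduals():

```r
library(DHARMa)

simulationOutput <- simulateResiduals(fittedModel = m1)

# Default: x axis is rank-transformed, quantile test runs on the ranks
plotResiduals(simulationOutput, rank = TRUE)

# Same residuals, but quantile regressions fitted on the raw predicted
# values, which matches what testQuantiles() does by default
plotResiduals(simulationOutput, rank = FALSE)
testQuantiles(simulationOutput)
```

If the p-values of the second plot and of testQuantiles() agree while the first differs, the discrepancy is the rank transformation of the x axis, not the randomisation of the residuals.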
Practical advice:
In principle, regardless of how you plot x, there should be no pattern, so what you are seeing here is a slight misfit.
The misfit is relatively small, and I don't think it necessarily requires action, so you could just leave it as is. Alternatively, you could check whether adding a nonlinear term for your important predictors improves the results (it looks a bit as if you are missing a nonlinearity). You could also plot the residuals against the predictors; maybe this gives you an idea of which predictor is responsible for the nonlinearity.
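The last suggestion can be done directly in DHARMa, which accepts a predictor vector for both functions; myData$x1 here is a hypothetical placeholder for one of your predictors:

```r
# Plot residuals against a specific predictor instead of the predictions
plotResiduals(simulationOutput, form = myData$x1)

# Run the quantile test against that same predictor
testQuantiles(simulationOutput, predictor = myData$x1)
```

Repeating this for each predictor should reveal which one drives the pattern.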