
plot() and testQuantiles() give different results #191

Closed
florianhartig opened this issue Jul 8, 2020 · 1 comment


florianhartig commented Jul 8, 2020

Question from a user

I am running some diagnostic tests for a GLMM using the DHARMa package and I did not detect any singularity, overdispersion or zero inflation.

When I check the quantile residual plots, though, using two different methods a) and b), I obtain two different results.

a)

```r
simulationOutput <- simulateResiduals(fittedModel = m1)
plot(simulationOutput)
```

b)

```r
testQuantiles(simulationOutput)
```

With method a) no significant quantile deviation is detected (Fig 2, right plot), while it is detected with method b) (Fig 3). However, I was expecting a) and b) to give the same results, since the R documentation for testQuantiles {DHARMa} explains: "the quantile test is automatically performed in plot(simulationOutput) / plotResiduals(simulationOutput)".

I understand that method a) plots rank-transformed model predictions, while b) doesn't, but I wouldn't think this affects the p-value. Am I correct?
I also understand that method a) automatically adds some noise to the residuals to maintain a uniform response (see the integerResponse parameter). Does method b) do the same? If it doesn't, could that be the reason for the different results?

[Two attached figures: the standard DHARMa residual plots (Fig 2) and the testQuantiles() output (Fig 3)]

florianhartig (Owner) replied:

Hi Alessandra,

the difference between the two methods is indeed that plot() rank-transforms the x axis by default, while testQuantiles() works by default on the raw predicted values.

This can indeed create differences in the fitted quantile functions if the distribution of predicted values is far from uniform, as is obviously the case here. It seems as if in your case there are very few (but lower) values around 0.01, which get more weight in the untransformed version, hence the significant quantile line.

The randomisation is standardised, i.e. the same for both plots, so the difference is really only in the x axis.
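To confirm that the rank transformation alone explains the discrepancy, you can run testQuantiles() on the rank-transformed predictions yourself and compare. This is just a sketch: it assumes a DHARMa object named `simulationOutput` from simulateResiduals(), and that the predicted values are stored in the `fittedPredictedResponse` slot (as in current DHARMa versions).

```r
library(DHARMa)

# Default: quantile regressions fitted against the raw predicted values
testQuantiles(simulationOutput)

# Same test on rank-transformed predictions, i.e. the x axis that
# plot() / plotResiduals() use by default -- this should reproduce
# the result shown in the standard plot
predRank <- rank(simulationOutput$fittedPredictedResponse,
                 ties.method = "average")
predRank <- predRank / max(predRank)  # scale ranks to [0, 1], as in the plot
testQuantiles(simulationOutput, predictor = predRank)
```

If the second call is non-significant while the first is significant, the rank transformation is indeed the source of the difference.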

Practical advice:

  • In principle, regardless of how you plot x, there should be no pattern, so what you are seeing here is a slight misfit.

  • The misfit is relatively small. I don't think it necessarily requires action, so you could just leave it as is. Alternatively, you could check whether adding a nonlinear term for your important predictors improves the results (it looks a bit as if you are missing a nonlinearity). You could also plot residuals against individual predictors; that may tell you which predictor is responsible for the nonlinearity.
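The last suggestion can be sketched as follows. The data frame and predictor names (`mydata`, `x1`, `x2`) are placeholders for your own model data; plotResiduals() accepts a predictor via its `form` argument in recent DHARMa versions (older versions took the predictor as the first positional argument).

```r
# Sketch: plot DHARMa residuals against each predictor to locate
# the source of the nonlinearity. `mydata`, `x1`, `x2` are placeholders.
plotResiduals(simulationOutput, form = mydata$x1)
plotResiduals(simulationOutput, form = mydata$x2)
```

A quantile pattern against one specific predictor would suggest adding a nonlinear term (e.g. a polynomial or spline) for that predictor to the model formula.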

Best
Florian
