Warn user when having nonsensical random effects #397
Comments
I agree that it would be good to alert the user to this type of ill-considered model specification. It turns out to be difficult to detect without throwing up a lot of false positives.
@dmbates, do you have failed attempts you could point me to? I'd be willing to at least think about possible strategies ... |
smathot commented Sep 26, 2016
I'm afraid that providing a concrete solution is beyond me, but as a suggestion: if the general problem of warning the user against nonsensical random effects is too difficult to solve (and I imagine it is really difficult), you could identify the most common situations in which this happens and warn against those specifically. The situation above probably ranks number one among them (although I may be wrong). These specific situations could probably be detected relatively easily and could trigger a warning, so that the user is at least alerted to the most common mistakes. Theoretically perhaps not very satisfying, but practically very useful.
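The heuristic suggested here — detect the common case rather than solve the general problem — amounts to checking whether a random-slope variable ever varies within the levels of its grouping factor. A minimal sketch of that check (the function name, data layout, and pandas-based implementation are all my assumptions, not code from lme4 or this thread):

```python
import pandas as pd

def check_random_slope(df, slope_var, group_var):
    """Return False (after warning) when slope_var is constant within every
    level of group_var, so a random slope (slope_var | group_var) would be
    unidentifiable; return True otherwise.

    Hypothetical helper illustrating the heuristic discussed in the thread.
    """
    # Number of distinct slope_var values observed within each group level.
    n_levels = df.groupby(group_var)[slope_var].nunique()
    if (n_levels <= 1).all():
        print(f"warning: '{slope_var}' never varies within levels of "
              f"'{group_var}'; the random slope ({slope_var} | {group_var}) "
              f"cannot be estimated")
        return False
    return True

# Toy data in the spirit of the example: each word belongs to exactly one
# category, while each subject sees both categories.
df = pd.DataFrame({
    "word":     ["w1", "w1", "w2", "w2", "w3", "w3"],
    "subject":  ["s1", "s2", "s1", "s2", "s1", "s2"],
    "category": ["A",  "A",  "A",  "A",  "B",  "B"],
})

check_random_slope(df, "category", "word")     # warns, returns False
check_random_slope(df, "category", "subject")  # returns True
```

As dmbates notes above, a real implementation would need care to avoid false positives (e.g. partially crossed designs, continuous covariates); this only flags the clear-cut case where the slope variable is perfectly nested in the grouping factor.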
smathot commented Sep 26, 2016
Imagine an experiment with two categories of words (so each word occurs only in one category). For several subjects, you measure the reaction time (RT) to each word. You're interested in finding an effect of word category on RT. To do so, you might be tempted to create a model like this:
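The formula itself was not preserved in this excerpt; based on the description that follows (random intercepts and slopes for both subjects and words), a plausible reconstruction in lme4-style syntax — with `RT`, `category`, `subject`, and `word` as assumed variable names — would be:

```
RT ~ category + (1 + category | subject) + (1 + category | word)
```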
In other words, you have random intercepts and slopes for both subjects and words. But for words this doesn't make sense: each word belongs to only one category, so category never varies within a word, and a by-word random slope for category cannot be estimated — you cannot 'control' for differences between words the way you can for subjects. Do I understand that correctly?
So, assuming that my understanding is correct, the model should (or at least could) be:
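This formula was also lost from the excerpt; keeping the same assumed names, the corrected specification would presumably drop the by-word slope while retaining the by-word intercept:

```
RT ~ category + (1 + category | subject) + (1 | word)
```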
I suspect that this is a common mistake; it's certainly one that I made, even though I'd say that my understanding of and vigilance for these things is, while imperfect, above average. So it would make sense to throw a warning or error message in this case!
Cheers!
Sebastiaan