Automatic regularisation of input variables option in `lmer/glmer` #428
Comments
bachlaw commented Jul 8, 2017

At least from a Bayesian standpoint, this sounds to me like the equivalent of introducing a ridge or similar prior. Wouldn't the blme package (which extends lme4) meet this requirement?
Jonathan
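The ridge/Gaussian-prior equivalence alluded to here is standard: the ridge estimate is the MAP estimate under an independent Gaussian prior on the coefficients. A minimal closed-form sketch (plain Python, two predictors; the design matrix and data are made up for illustration):

```python
# Ridge regression in closed form for a two-column design matrix:
# beta = (X'X + lam*I)^{-1} X'y. With lam = 0 this is ordinary least
# squares; lam > 0 corresponds to a Gaussian (ridge) prior and shrinks
# the coefficients toward zero.

def ridge(X, y, lam):
    # X'X (2x2, plus lam on the diagonal) and X'y (2x1)
    a = sum(r[0] * r[0] for r in X) + lam
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X) + lam
    xty0 = sum(r[0] * yi for r, yi in zip(X, y))
    xty1 = sum(r[1] * yi for r, yi in zip(X, y))
    # invert the 2x2 matrix and multiply
    det = a * d - b * b
    return [(d * xty0 - b * xty1) / det, (a * xty1 - b * xty0) / det]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]

print(ridge(X, y, 0.0))   # → [1.0, 2.0] (ordinary least squares)
print(ridge(X, y, 10.0))  # coefficients shrink toward zero
```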
hadjipantelis commented Jul 8, 2017

@bachlaw Thank you for your comment, but I think it rests on a slight misinterpretation of what I asked. For a very simple case, what I described is effectively something like:
This already sort of exists, but it's undocumented/experimental and doesn't actually seem to be working at the moment. In principle, …
hadjipantelis commented Jul 12, 2017

@bbolker Thank you for your answer. I fully agree; orthonormalising the matrix would potentially be the most proper method. I suggested …
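For concreteness, the "orthonormalise the design matrix" idea can be sketched as classical Gram-Schmidt on the columns of X. This is a plain-Python illustration only; in practice one would use a QR decomposition (e.g. `qr()` in R), and the example matrix is made up:

```python
# Replace the columns of X with an orthonormal basis for their span
# via classical Gram-Schmidt. Assumes the columns are linearly
# independent (so no norm is zero).

def orthonormalise_columns(X):
    """Return X (list of rows) with orthonormalised columns."""
    n, p = len(X), len(X[0])
    cols = [[row[j] for row in X] for j in range(p)]
    Q = []
    for v in cols:
        w = v[:]
        # subtract projections onto the already-built basis vectors
        for q in Q:
            dot = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        Q.append([wi / norm for wi in w])
    # back to row-major layout
    return [[Q[j][i] for j in range(p)] for i in range(n)]

X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]  # intercept + trend column
Q = orthonormalise_columns(X)
# the columns of Q now have unit norm and are mutually orthogonal
```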
hadjipantelis commented Jul 8, 2017

I think `lmer` is great, thank you for your work on it! I have an enhancement suggestion:

Would it be possible to introduce an input argument `normalise` such that if `TRUE`, each fixed-effects variable is standardised to have unit L2 norm, and otherwise it is left alone? (As for example in `lars::lars`, `glmnet::glmnet`, etc.) The default should/would be `FALSE` to ensure backwards compatibility, but I think it would be helpful because:

1. It potentially makes results easier to interpret.
2. If users need to rescale variables because of optimisation warnings, they can do it automatically.
3. It draws some attention to the relation between mixed-effects regression and standard L2-regularised regression approaches.
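The proposed `normalise=TRUE` behaviour amounts to rescaling each fixed-effects column of the design matrix to unit L2 norm, as `lars::lars` and `glmnet::glmnet` do. A minimal sketch in plain Python (the argument name `normalise` is the suggestion above, not an existing lme4 option, and the data are made up):

```python
# Scale each column of the design matrix X to unit L2 norm.
# Coefficients fitted on the rescaled matrix are comparable across
# predictors; dividing each coefficient by the corresponding column
# norm recovers the original scale.

def normalise_columns(X):
    """Scale each column of X (list of rows) to unit L2 norm."""
    p = len(X[0])
    norms = [sum(row[j] ** 2 for row in X) ** 0.5 for j in range(p)]
    return [[row[j] / norms[j] for j in range(p)] for row in X]

X = [[3.0, 1.0], [4.0, 2.0], [0.0, 2.0]]
Xn = normalise_columns(X)
print(Xn[0])  # each column of Xn now has L2 norm 1
```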