glmer memory gets out of hand #392
Comments
Just so this doesn't go completely ignored ... I sympathize, but diagnosing and solving this could take a while. What exactly do you mean by "each iteration of the stepwise backward reduction process"? Do you mean that with (...)
themeo commented Sep 20, 2016
Ok, I did some testing and simplified the problem. For both lmer() and glmer() I ran the same code snippet:

```r
print(gc())
for (i in 1:10) {
  print(i)
  # lm = lmer(f, data=asum, REML=T)
  # lm = glmer(f, data=asum, family=binomial(link="logit"), control=glmerControl(optimizer="bobyqa"))
  print(gc())
}
```

In the first run I uncommented the first model line, in the second run the second. Of course, I restarted R before each run. The formula was the same in both runs (except for the dependent variable) and included 9 fixed effects, a few by-item and by-subject random effects, plus some random interactions. The data comprised 12031 data points. For lmer(), memory usage did not increase with each run (or rather, it stabilized after the 2nd run):
(...)
However, with glmer() the situation was different:
... and over the course of the 10 runs, Vcells memory usage rose from 49Mb to 220Mb. Interestingly, the memory increase was not deterministic -- sometimes it occurred, sometimes not. So it looks like there is a memory leak in glmer/bobyqa. Please let me know if you need any more information. Aha, this is sessionInfo():
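For anyone trying to reproduce this, the Vcells figures can be collected programmatically rather than read off the printed gc() tables. A minimal sketch; the glmer() call is left commented out exactly as in the snippet above, and `f` / `asum` stand in for the reporter's (unshown) formula and data:

```r
# Record Vcells usage in Mb after each iteration, so growth across
# repeated fits can be inspected as a numeric series.
vcells_mb <- numeric(10)
for (i in 1:10) {
  # m <- glmer(f, data = asum, family = binomial(link = "logit"),
  #            control = glmerControl(optimizer = "bobyqa"))
  g <- gc()                        # gc() returns a matrix with rows Ncells/Vcells
  vcells_mb[i] <- g["Vcells", 2]   # column 2 is the "(Mb)" used column
}
print(vcells_mb)  # a steadily rising series here would confirm the leak
```

With the model line uncommented, a run that leaks should show vcells_mb climbing across iterations rather than stabilizing after the first one or two.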
thanks,
OK, I will try with valgrind (sigh) and see what I can find out.
themeo commented Aug 20, 2016
I am comparing a lot of mixed-effects (ME) models via backward regression, and I use the 'doParallel' package to compute many reduced models for comparison at the same time. When I use lmer() everything is fine, but when I use glmer() with a logit link function and the bobyqa optimizer, things quickly get out of hand: the memory used by the master process running all the parallelization doubles with each iteration of the stepwise backward-reduction process.
Aside from using glmer() instead of lmer(), the code is identical.
At each step I call gc() to encourage garbage collection, but this doesn't help.
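The parallel setup described above follows the standard doParallel/foreach pattern. A minimal runnable sketch of that pattern, with the expensive glmer() call replaced by a stub so it runs without lme4 or the reporter's data; the term names and the fit_reduced() helper are hypothetical, not the reporter's actual code:

```r
# Sketch: evaluate several reduced models in parallel with doParallel,
# as in the backward-regression loop described above.
library(doParallel)

cl <- makeCluster(2)
registerDoParallel(cl)

terms_to_drop <- c("x1", "x2", "x3")  # hypothetical fixed-effect names
fit_reduced <- function(term) {
  # In the real code this would be something like:
  #   glmer(update.formula(f, paste(". ~ . -", term)), data = asum,
  #         family = binomial("logit"),
  #         control = glmerControl(optimizer = "bobyqa"))
  paste("dropped", term)  # stand-in for the fitted-model summary
}
results <- foreach(t = terms_to_drop, .combine = c) %dopar% fit_reduced(t)
stopCluster(cl)
print(results)
```

One design note relevant to the leak: foreach ships results back to the master process, so returning only small summaries (AIC, logLik) instead of full merMod objects keeps the master's memory footprint down regardless of whether glmer itself leaks.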