
loo crashes R when dealing with large log-likelihood matrices #35

Closed
daltonhance opened this issue Oct 26, 2016 · 5 comments

@daltonhance

loo keeps crashing R when I call loo() on a log-likelihood matrix of around 100 MB. The size is 5250 iterations by 2552 observations of log-likelihood. I've played around with reducing the size by limiting the number of rows and I am able to get it to run without crashing. It also seems to happen more often in RStudio than in standalone R.

I suspect an issue with running out of memory?

Session info:
R version 3.3.1 (2016-06-21)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

8 core laptop with 64 MB of RAM.

@jgabry
Member

jgabry commented Oct 27, 2016

Yeah it's probably a memory issue, but with as much memory as you have (I assume you meant 64 GB) that's a bit surprising. Can you try manually setting cores=1 when you call loo? It might have something to do with extra copies being unnecessarily made when parallelizing.
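[Editor's note: a minimal sketch of the cores=1 suggestion above; `log_lik` stands in for the 5250 × 2552 log-likelihood matrix from the original report and is not taken from the thread.]

```r
library(loo)

# Force single-core processing so no parallel workers copy the
# ~100 MB log-likelihood matrix:
loo_result <- loo(log_lik, cores = 1)
print(loo_result)
```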

I think we can make big improvements to the loo internals to better handle memory consumption. The loo.function method is supposed to be better than loo.matrix, so you could try that, but even the function method can be improved in that regard. This is on the short-term to-do list.
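[Editor's note: a hypothetical sketch of the loo.function method mentioned above. Instead of a precomputed matrix, loo is given a function that evaluates the log-likelihood one observation at a time, so the full matrix never has to sit in memory at once. The model (`mu`, `sigma`, normal likelihood) and the `data`/`draws` objects are illustrative, not from the thread.]

```r
library(loo)

# Evaluates log p(y_i | theta) for observation i across all posterior
# draws; loo calls this once per observation instead of storing the
# whole iterations-by-observations matrix.
llfun <- function(data_i, draws) {
  dnorm(data_i$y, mean = draws$mu, sd = draws$sigma, log = TRUE)
}

loo_result <- loo(llfun, data = data, draws = draws, cores = 1)
```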


@daltonhance
Author

Hey Jonah,

Sorry for the delay in getting back to you. Yep. I meant 64 GB of RAM.

I was able to find a moment to restore my workspace and set cores to 1 using a global options statement. R (in RStudio) crashed on the second call to loo. (I'm working with 16 log-likelihood matrices, each 102.2 MB in size, so it crashed on the second one of these.)

I was able to get loo to run successfully using the standard R console by making sure I ran each call to loo one at a time (i.e. pressing Ctrl-R on one line and waiting for loo to finish before providing the next command).
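[Editor's note: one way to script the "one call at a time" workflow described above, freeing each matrix before the next call; `log_lik_files` is an illustrative list of saved matrices, not from the thread.]

```r
library(loo)

results <- list()
for (f in log_lik_files) {
  log_lik <- readRDS(f)               # load one ~100 MB matrix
  results[[f]] <- loo(log_lik, cores = 1)
  rm(log_lik)                         # drop the matrix explicitly
  gc()                                # return memory before the next call
}
```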

@cfhammill
Contributor

The issue here is probably related to #53: your 8-core machine was making 8 forks of your R session, and depending on what was in the session to begin with, 8 forks could mean a lot of memory. Setting mc.cores would not have fixed this; psislw looks at loo.cores instead. Switching from RStudio to R in the terminal probably fixed it by reducing overhead, possibly by not restoring the full environment.
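[Editor's note: a minimal sketch of the workaround implied by the comment above; the option name `loo.cores` is taken from the comment, and `log_lik` is illustrative.]

```r
library(loo)

# psislw consults the loo.cores option rather than mc.cores,
# so set it explicitly before calling loo:
options(loo.cores = 1)
loo_result <- loo(log_lik)
```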

@jgabry jgabry closed this as completed Apr 22, 2018
@AdrianEcology

Hi everyone, I'm currently experiencing the same issue. However, I don't know how to manually set the number of cores to 1 when running loo.

Any suggestions on how to work around this?
