NUTS sampler gets 'stuck' for very long periods #776
Comments
This is actually quite useful for us. Any chance you could post the full model and data somewhere? I would love to dig into this and optimize the slow parts. I suspect that if NUTS goes from fast to slow, it has begun to build very large trees, which we can probably fix somehow.
I'd be happy to - I'm currently running the other model (the one that ran out of memory) in a loop overnight on my workstation to see exactly where the leak is. I don't know how reproducible that 'frozen' state actually is, though.
Great, let me know when you're able to post the model/data, and I'll take a look.
I sent you an e-mail with the subset of the data.
Thanks :)
I'm having the same issue, also on a model similar to that described above. My data are pretty well balanced.
Any data/NB to reproduce the problem?
I had e-mailed the data to John as soon as he asked for it, but didn't hear back. I also ran into a limitation of the pymc3 API I could not work around.
@FedericoV sure. Can you send me the data as well? firstname.lastname@gmail.com
I think this is related to the model being close to underdetermined. You place a hyperprior on the three beta coefficients, which is not a whole lot of data from which to infer mu and sd. Here is an example that illustrates the problem:

```python
import pymc3 as pm

with pm.Model():
    mu = pm.Normal('mu', 0, 1)
    sd = pm.HalfCauchy('sd', 1)
    obs = pm.Normal('obs', mu=mu, sd=sd, observed=[0.1, -0.1])
    start = pm.find_MAP()
    step = pm.NUTS(scaling=start)
    trace = pm.sample(2000, step, start=start)
```

Now I'm not sure whether this is a bug or a property of a posterior space that is just extremely flat. In any case, if I remove the hyperprior from your model, it converges just fine. If I keep the hyperprior but sample with Metropolis and then use the last sample as the starting point for NUTS, it also works well (although there are some convergence instabilities). @jwjohnson314 How many coefficients are in your hierarchical model?
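To see how flat that posterior actually is without running pymc3 at all, here is a numpy-only sketch (my own illustration, not code from this thread): it evaluates the log-posterior of `sd` for the two observations `[0.1, -0.1]` under a HalfCauchy(1) prior on a grid, and shows that a several-fold range of `sd` values lies within 1 nat of the maximum.

```python
import numpy as np

# Two observations, as in the example above; grid of candidate sd values.
data = np.array([0.1, -0.1])
sd = np.logspace(-2, 2, 1000)  # 0.01 .. 100

# Log-likelihood of Normal(0, sd) summed over both points.
loglik = np.sum(
    -0.5 * np.log(2 * np.pi * sd[:, None] ** 2)
    - data ** 2 / (2 * sd[:, None] ** 2),
    axis=1,
)
# HalfCauchy(1) log-density: log(2/pi) - log(1 + sd^2).
logprior = np.log(2 / np.pi) - np.log1p(sd ** 2)
logpost = loglik + logprior

# The region within 1 nat of the maximum spans a wide range of sd,
# which is the kind of flat geometry where NUTS builds very deep trees.
near_max = sd[logpost > logpost.max() - 1.0]
print(near_max.min(), near_max.max())
```

With only two data points, the 1-nat region covers roughly a factor of four to five in `sd`, which is consistent with the "extremely flat posterior" reading of the slowdown.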
68 - it's a hierarchical model similar to the radon model with varying …
Have you tried sampling with Metropolis or Slice?
No. I'll give it a go and let you know how it does.
Was this error resolved? Maybe we could add it to some sort of user guide, or just add a warning somewhere.
With the new init (#1523) and some model tweaks this works pretty well:
Hi!
Sorry to keep opening issues - but just noticed this:
Working on a hierarchical model, again very similar to the standard model described by Thomas Wiecki:
After a few loops that mostly took this timing:
It's currently stuck like so:
I suspect this is largely because of this line:
del_idx is a boolean array of length n, while gs_code is an array with 3 possible values (0, 1, 2), and the values are quite imbalanced:
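For readers without the data, here is a small numpy sketch of the setup described above. The sizes and proportions are made up; `del_idx` and `gs_code` are stand-ins for the arrays in the original notebook.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical stand-ins: del_idx flags rows to drop, gs_code assigns
# each row to one of 3 heavily imbalanced groups (values 0, 1, 2).
del_idx = rng.random(n) < 0.05
gs_code = rng.choice(3, size=n, p=[0.90, 0.07, 0.03])

print(np.bincount(gs_code, minlength=3))  # imbalanced group sizes
gs_kept = gs_code[~del_idx]               # boolean mask drops flagged rows
print(gs_kept.shape)
```

With group sizes this skewed, the smallest group carries very little information about its coefficient, which matches the slowdowns reported in this thread.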
I also had to simplify the model quite a bit, because when I was fitting an alpha term independently (as in the original Wiecki notebook) the iteration times were incredibly long. So I just did some independent mean centering for each condition as a pre-processing step - which is of course far less precise.
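The per-condition mean centering mentioned above could look roughly like this - a sketch with hypothetical variable names, not the original preprocessing code:

```python
import numpy as np

# y: observations; cond: integer condition label per observation (hypothetical)
y = np.array([1.0, 2.0, 3.0, 10.0, 12.0])
cond = np.array([0, 0, 0, 1, 1])

# Subtract each condition's own mean as a preprocessing step,
# in place of fitting an independent alpha (intercept) per condition.
cond_means = np.array([y[cond == c].mean() for c in np.unique(cond)])
y_centered = y - cond_means[cond]
print(y_centered)  # each condition now averages to zero
```

This removes the per-condition intercepts from the model, at the cost of ignoring the uncertainty in those means - which is why it is less precise than fitting alpha jointly.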