Merge 3.1 branch #1658
Conversation
…tting (e.g. nanguardmode) for Theano functions
* Started to write Base class for pymc3.models
* moved `add_var` to public api
* Added some docstrings
* Added some docstrings
* added getitem and fixed a typo
* added assertion check
* added resolve var method
* decided not to add resolve method
* Added linear component
* Docs fix
* patsy's intercept is inited properly now
* refactored code
* updated docs
* added possibility to init coefficients with random variables
* added glm
* refactored api, fixed formula init
* refactored linear model, extended acceptable types
* moved useful matrix and labels creation to utils file
* code style
* removed redundant evaluation of shape
* refactored resolver for constructing matrix and labels
* changed error message
* changed signature of init
* simplified utils any_to_tensor_and_labels code
* tests for `any_to_tensor_and_labels`
* added docstring for `any_to_tensor_and_labels` util
* forgot to document return type in `any_to_tensor_and_labels`
* refactored code for dict
* dict tests fix (do not check labels there)
* added access to random vars of model
* added a shortcut for all variables so there is a unified way to get them
* added default priors for linear model
* update docs for linear
* refactored UserModel api, made it more similar to pm.Model class
* Lots of refactoring, tests for base class, more plain api design
* deleted unused module variable
* fixed some typos in docstring
* Refactored pm.Model class, now it is ready for inheritance
* Added documentation for Model class
* Small typo in docstring
* nested contains for treedict (needed for add_random_variable)
* More accurate duplicate implementation of treedict/treelist
* refactored treedict/treelist
* changed `__imul__` of treelist
* added `root` property and `isroot` indicator for base model
* protect `parent` and `model` attributes from violation
* travis' python2 did not fail on bad syntax (maybe it's too new), fixed
* decided not to use functools wrapper; unfortunately the functools wrapper fails when decorating built-in methods, so some bad but needed tricks were implemented
* Added models package to setup script
* Refactor utils
* Fix some typos in pm.model
* refactored GARCH and added Mv(Gaussian/StudentT)RandomWalk
* refactored garch logp
* added docs
* fix typo
* even more typos
* better description for tau
MAINT Replace discrete RV check.
In 7f95425 I fixed the cyclic import; you can do the same.
Thanks @ferrine. Not sure why the exact-sample test for HMC is failing here. @ColCarroll, ideas?
I feel like I've seen this exact failure before -- in the travis logs the traces look exactly the same (though I guess only the first 23 entries are actually the same). I can look more closely later, but we probably need to print the full traces.
Most of them differ by about 1e-20, but I saw one differing on the order of 0.1.
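The distinction above can be sketched with NumPy's tolerance-based comparison -- a minimal illustration with made-up trace values, not the actual HMC output: differences near 1e-20 are pure floating-point noise (they vanish entirely next to O(1) values), while a 0.1 discrepancy fails any reasonable tolerance and signals a real algorithmic change.

```python
import numpy as np

# Hypothetical stored reference trace (illustrative values only).
reference = np.array([0.5, -1.2, 0.3])

# A 1e-20 perturbation is far below float64 precision at this scale,
# so it is indistinguishable from the reference:
noisy = reference + 1e-20
np.testing.assert_allclose(noisy, reference, rtol=1e-7)  # passes

# A 0.1 discrepancy, by contrast, is a real divergence:
changed = reference.copy()
changed[1] += 0.1
try:
    np.testing.assert_allclose(changed, reference, rtol=1e-7)
except AssertionError:
    print("trace mismatch")
```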
If we are going to add a Also, we have things like
Can we try
Ah, update on HMC -- I looked into it for a while, and took a larger sample from a working SHA and HEAD. The larger samples had the same behavior: they'd be equal for a while, then all of a sudden be different, then converge again, though there was no readily apparent pattern in how different they were. It is interesting because
The algorithm still seems fine -- my working theory is that the calls to the numpy random number generator are getting evaluated in a different order sometimes, or some other function is making a draw from the RNG. I'm not against replacing the test with the current trace, but I'll probably keep looking at this, if only because I'm intrigued now.
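The working theory above -- an extra or reordered draw consuming values from a shared RNG stream -- can be sketched as follows. This is a toy example, not PyMC3's sampler code; the function, seed, and the "extra call" are all hypothetical.

```python
import numpy as np

def draw_trace(extra_call: bool) -> list:
    """Three draws from a seeded stream. A hypothetical extra call
    (e.g. an acceptance check added elsewhere in the code path)
    consumes values and shifts every subsequent draw."""
    rng = np.random.RandomState(123)
    trace = []
    for _ in range(3):
        if extra_call:
            rng.uniform()  # one extra draw from the shared stream
        trace.append(rng.standard_normal())
    return trace

same = draw_trace(False) == draw_trace(False)    # True: identical call order
shifted = draw_trace(False) == draw_trace(True)  # False: the stream shifted
print(same, shifted)
```

With a fixed seed the trace is exactly reproducible only while the *order and count* of RNG calls stays identical, which is why a seemingly unrelated code change can break an exact-sample test.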
In particular, I think the other call to an RNG is for the metropolis acceptance step, so it seems like that must be it...
@ColCarroll If that's the issue, does it suggest that the Metropolis accept does not use the correct seed? How do we fix it?
It'll take more testing to figure this out -- everything uses On master:
In current branch: On master:
This would explain why it is usually in sync, but goes into and out of sync.
OK, so if we change the stored samples we should be good?
Yeah, seems like. The statistical check on HMC still passes. I'll rest easier if I know why it changed, though.
@ferrine Can you push the update? Maybe we can decouple the easy resting from making travis happy. At least you looked deep enough that there aren't clear bugs.
I've merged my PR with
wowza -- after some close checking, there is a transposed negative sign in the hmc sampler. Since hmc rarely rejects samples, the differences were very small. I'll add that change in.
Was computing negative energy for HMC.
The Metropolis tuning algorithm needed just a little nudge in the right direction.
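A minimal sketch of why a transposed sign in the HMC energy is so hard to spot; the function names and energy values below are hypothetical, not PyMC3's actual implementation. Because leapfrog integration nearly conserves energy, the correct and sign-flipped acceptance probabilities are both close to 1 on almost every step, so the traces stay almost identical.

```python
import numpy as np

def accept_prob(energy_start: float, energy_end: float) -> float:
    """Metropolis acceptance for HMC: min(1, exp(E_start - E_end))."""
    return min(1.0, float(np.exp(energy_start - energy_end)))

def accept_prob_flipped(energy_start: float, energy_end: float) -> float:
    """Hypothetical sign-transposed version of the same ratio."""
    return min(1.0, float(np.exp(energy_end - energy_start)))

# Leapfrog nearly conserves energy, so the error is tiny and
# both versions accept almost every proposal:
print(accept_prob(10.0, 10.001))          # ~0.999
print(accept_prob_flipped(10.0, 10.001))  # 1.0
# Only a large energy error exposes the transposed sign:
print(accept_prob(10.0, 12.0))            # ~0.135
print(accept_prob_flipped(10.0, 12.0))    # 1.0
```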
@ColCarroll Wow, those tests really paid off here!
OK, I'm going to merge this so that we get out of this limbo.
Nice work, guys!