48 changes: 32 additions & 16 deletions RELEASE-NOTES.md
@@ -3,9 +3,11 @@

We are proud and excited to release the first stable version of PyMC3, the product of more than [5 years](https://github.com/pymc-devs/pymc3/commit/85c7e06b6771c0d99cbc09cb68885cda8f7785cb) of ongoing development and contributions from over 80 individuals. PyMC3 is a Python module for Bayesian modeling which focuses on modern Bayesian computational methods, primarily gradient-based (Hamiltonian) MCMC sampling and variational inference. Models are specified in Python, which allows for great flexibility. The main technological difference in PyMC3 relative to previous versions is the reliance on Theano for the computational backend, rather than on Fortran extensions.

### New features

Since the beta release last year, the following improvements have been implemented:

-* Added `variational` submodule, which features the automatic differentiation variational inference (ADVI) fitting method. Much of this work was due to the efforts of Taku Yoshioka, and important guidance was provided by the Stan team (specifically Alp Kucukelbir and Daniel Lee).
+* Added `variational` submodule, which features the automatic differentiation variational inference (ADVI) fitting method. Also supports mini-batch ADVI for large data sets. Much of this work was due to the efforts of Taku Yoshioka, and important guidance was provided by the Stan team (specifically Alp Kucukelbir and Daniel Lee).

* Added model checking utility functions, including leave-one-out (LOO) cross-validation, BPIC, WAIC, and DIC.

@@ -21,15 +23,29 @@ Since the beta release last year, the following improvements have been implement

* Refactored test suite for better efficiency.

* Added von Mises, zero-inflated negative binomial, and Lewandowski, Kurowicka and Joe (LKJ) distributions.

* Adopted `joblib` for managing parallel computation of chains.

* Added contributor guidelines, contributor code of conduct and governance document.
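
Among the additions above, the model-checking criteria deserve a quick illustration. Here is a minimal numpy sketch of WAIC computed from a matrix of pointwise log-likelihoods — a sketch of the standard formula under the usual conventions, not PyMC3's own implementation:

```python
import numpy as np

def waic(log_lik):
    """WAIC from pointwise log-likelihoods of shape (n_samples, n_obs).

    lppd is the log pointwise predictive density; p_waic penalizes by
    the variance of the log-likelihood across posterior samples.
    """
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)

# Fake pointwise log-likelihoods: 1000 posterior samples, 50 observations.
rng = np.random.RandomState(0)
print(waic(rng.normal(-1.0, 0.1, size=(1000, 50))) > 0)  # True
```

Lower WAIC indicates better estimated out-of-sample predictive fit; LOO, DIC, and BPIC play the same role with different penalty terms.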

### Deprecations

* Argument order of tau and sd was switched for distributions of the normal family:
- `Normal()`
- `Lognormal()`
- `HalfNormal()`

Old: `Normal(name, mu, tau)`
New: `Normal(name, mu, sd)` (supplying keyword arguments is unaffected).

* `MvNormal` calling signature changed:
Old: `MvNormal(name, mu, tau)`
New: `MvNormal(name, mu, cov)` (supplying keyword arguments is unaffected).
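
Code written against the old signatures can be ported mechanically: the two scalar parameterizations are related by tau = 1/sd**2, and for `MvNormal` the covariance is the inverse of the precision matrix. A minimal sketch in plain numpy (illustration only, not PyMC3 API):

```python
import numpy as np

def tau_to_sd(tau):
    # precision -> standard deviation: sd = 1 / sqrt(tau)
    return 1.0 / np.sqrt(tau)

def sd_to_tau(sd):
    # standard deviation -> precision: tau = 1 / sd**2
    return 1.0 / sd**2

# Porting an old positional call Normal(name, mu, 400.) to the sd-based order:
print(tau_to_sd(400.0))  # 0.05

# For MvNormal, covariance is the inverse of the precision matrix:
tau_matrix = np.array([[2.0, 0.0], [0.0, 4.0]])
cov = np.linalg.inv(tau_matrix)
print(np.allclose(cov, np.diag([0.5, 0.25])))  # True
```

Supplying keyword arguments (`tau=...` or `sd=...`) continues to work unchanged, so keyword calls are the safest way to stay compatible across versions.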

We on the PyMC3 core team would like to thank everyone for contributing and now feel that this is ready for the big time. We look forward to hearing about all the cool stuff you use PyMC3 for, and look forward to continued development on the package.

-## Contributors
+### Contributors

A Kuz <for.akuz@gmail.com>
A. Flaxman <abie@alum.mit.edu>
@@ -38,48 +54,48 @@ Alexey Goldin <alexey.goldin@gmail.com>
Anand Patil <anand.prabhakar.patil@gmail.com>
Andrea Zonca <code@andreazonca.com>
Andreas Klostermann <andreasklostermann@googlemail.com>
Andres Asensio Ramos
Andrew Clegg <andrew.clegg@pearson.com>
Anjum48
AustinRochford <arochford@monetate.com>
Benjamin Edwards <bedwards@cs.unm.edu>
Boris Avdeev <borisaqua@gmail.com>
Brian Naughton <briannaughton@gmail.com>
Byron Smith
Chad Heyne <chadheyne@gmail.com>
Chris Fonnesbeck <chris.fonnesbeck@vanderbilt.edu>
Colin
Corey Farwell <coreyf@rwell.org>
David Huard <david.huard@gmail.com>
David Huard <huardda@angus.meteo.mcgill.ca>
David Stück <dstuck@users.noreply.github.com>
DeliciousHair <mshepit@gmail.com>
Dustin Tran
Eigenblutwurst <Hannes.Bathke@gmx.net>
Gideon Wulfsohn <gideon.wulfsohn@gmail.com>
Gil Raphaelli <g@raphaelli.com>
Gogs <gogitservice@gmail.com>
Ilan Man
Imri Sofer <imrisofer@gmail.com>
Jake Biesinger <jake.biesinger@gmail.com>
James Webber <jamestwebber@gmail.com>
John McDonnell <john.v.mcdonnell@gmail.com>
John Salvatier <jsalvatier@gmail.com>
Jordi Diaz
Jordi Warmenhoven <jordi.warmenhoven@gmail.com>
Karlson Pfannschmidt <kiudee@mail.uni-paderborn.de>
Kyle Bishop <citizenphnix@gmail.com>
Kyle Meyer <kyle@kyleam.com>
Lin Xiao
Mack Sweeney <mackenzie.sweeney@gmail.com>
Matthew Emmett <memmett@unc.edu>
Maxim
Michael Gallaspy <gallaspy.michael@gmail.com>
Nick <nalourie@example.com>
Osvaldo Martin <aloctavodia@gmail.com>
Patricio Benavente <patbenavente@gmail.com>
Peadar Coyle (springcoil) <peadarcoyle@googlemail.com>
Raymond Roberts
Rodrigo Benenson <rodrigo.benenson@gmail.com>
Sergei Lebedev <superbobry@gmail.com>
Skipper Seabold <chris.fonnesbeck@vanderbilt.edu>
@@ -88,8 +104,8 @@ The Gitter Badger <badger@gitter.im>
Thomas Kluyver <takowl@gmail.com>
Thomas Wiecki <thomas.wiecki@gmail.com>
Tobias Knuth <mail@tobiasknuth.de>
Volodymyr
Volodymyr Kazantsev
Wes McKinney <wesmckinn@gmail.com>
Zach Ploskey <zploskey@gmail.com>
akuz <for.akuz@gmail.com>
7 changes: 4 additions & 3 deletions docs/source/notebooks/dp_mix.ipynb
@@ -580,8 +580,8 @@
"\n",
" tau = pm.Gamma('tau', 1., 1., shape=K)\n",
" lambda_ = pm.Uniform('lambda', 0, 5, shape=K)\n",
-" mu = pm.Normal('mu', 0, lambda_ * tau, shape=K)\n",
-" obs = pm.Normal('obs', mu[component], lambda_[component] * tau[component],\n",
+" mu = pm.Normal('mu', 0, tau=lambda_ * tau, shape=K)\n",
+" obs = pm.Normal('obs', mu[component], tau=lambda_[component] * tau[component],\n",
" observed=old_faithful_df.std_waiting.values)"
]
},
@@ -1188,8 +1188,9 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
5 changes: 3 additions & 2 deletions docs/source/notebooks/pmf-pymc.ipynb
@@ -1529,8 +1529,9 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -1544,7 +1545,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
"version": "3.5.2"
}
},
"nbformat": 4,
9 changes: 5 additions & 4 deletions docs/source/notebooks/rugby_analytics.ipynb
@@ -243,10 +243,10 @@
"model = pm.Model()\n",
"with pm.Model() as model:\n",
" # global model parameters\n",
-" home = pm.Normal('home', 0, .0001)\n",
+" home = pm.Normal('home', 0, tau=.0001)\n",
" tau_att = pm.Gamma('tau_att', .1, .1)\n",
" tau_def = pm.Gamma('tau_def', .1, .1)\n",
-" intercept = pm.Normal('intercept', 0, .0001)\n",
+" intercept = pm.Normal('intercept', 0, tau=.0001)\n",
" \n",
" # team-specific model parameters\n",
" atts_star = pm.Normal(\"atts_star\", \n",
@@ -464,8 +464,9 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -479,7 +480,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
"version": "3.5.2"
}
},
"nbformat": 4,
67 changes: 45 additions & 22 deletions pymc3/distributions/continuous.py
@@ -174,18 +174,40 @@ class Normal(Continuous):
----------
mu : float
Mean.
-tau : float
-Precision (tau > 0).
sd : float
Standard deviation (sd > 0).
+tau : float
+Precision (tau > 0).
"""

-def __init__(self, mu=0.0, tau=None, sd=None, *args, **kwargs):
-super(Normal, self).__init__(*args, **kwargs)
+def __init__(self, *args, **kwargs):
# FIXME: in order to warn when Normal('x', 0, .1) is called with
# positional arguments in the old order, we have to fetch args and
# kwargs manually. After a deprecation period we should revert to
# an explicit calling signature.

if len(args) == 1:
mu = args[0]
sd = kwargs.pop('sd', None)
tau = kwargs.pop('tau', None)
elif len(args) == 2:
warnings.warn(('The order of positional arguments to Normal() '
'has changed. The new signature is '
'Normal(name, mu, sd) instead of Normal(name, mu, tau).'),
DeprecationWarning)
mu, sd = args
tau = kwargs.pop('tau', None)
else:
mu = kwargs.pop('mu', 0.)
sd = kwargs.pop('sd', None)
tau = kwargs.pop('tau', None)

self.mean = self.median = self.mode = self.mu = mu
self.tau, self.sd = get_tau_sd(tau=tau, sd=sd)
self.variance = 1. / self.tau

super(Normal, self).__init__(**kwargs)

def random(self, point=None, size=None, repeat=None):
mu, tau, sd = draw_values([self.mu, self.tau, self.sd],
point=point)
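
The refactored `__init__` delegates the tau/sd bookkeeping to a `get_tau_sd` helper. A standalone sketch of the contract such a helper has to satisfy — an assumption about its behavior, not the verbatim PyMC3 function:

```python
def get_tau_sd(tau=None, sd=None):
    """Return a consistent (tau, sd) pair; at most one may be given.

    Defaults to the standard normal parameterization (tau = sd = 1).
    """
    if tau is None and sd is None:
        return 1.0, 1.0
    if tau is not None and sd is not None:
        raise ValueError("Can't pass both tau and sd.")
    if tau is None:
        return sd ** -2, sd  # tau = 1 / sd**2
    return tau, tau ** -0.5  # sd = 1 / sqrt(tau)

print(get_tau_sd(sd=0.5))  # (4.0, 0.5)
```

Centralizing the conversion lets every distribution in the normal family accept either parameterization while storing both consistently.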
@@ -219,21 +241,21 @@ class HalfNormal(PositiveContinuous):

Parameters
----------
-tau : float
-Precision (tau > 0).
sd : float
Standard deviation (sd > 0).
+tau : float
+Precision (tau > 0).
"""

-def __init__(self, tau=None, sd=None, *args, **kwargs):
+def __init__(self, sd=None, tau=None, *args, **kwargs):
super(HalfNormal, self).__init__(*args, **kwargs)
self.tau, self.sd = get_tau_sd(tau=tau, sd=sd)
self.mean = tt.sqrt(2 / (np.pi * self.tau))
self.variance = (1. - 2 / np.pi) / self.tau

def random(self, point=None, size=None, repeat=None):
-tau = draw_values([self.tau], point=point)
-return generate_samples(stats.halfnorm.rvs, loc=0., scale=tau**-0.5,
+sd = draw_values([self.sd], point=point)
+return generate_samples(stats.halfnorm.rvs, loc=0., scale=sd,
dist_shape=self.shape,
size=size)
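
The `random()` change passes `scale=sd` instead of `scale=tau**-0.5`; these are the same number. A quick scipy check of the equivalence and of the mean formula stored above — a numerical sanity check, not PyMC3 code:

```python
import numpy as np
from scipy import stats

tau = 4.0
sd = tau ** -0.5
# scale=sd and scale=tau**-0.5 are the same parameterization...
assert sd == tau ** -0.5
# ...and the mean formula sqrt(2 / (pi * tau)) matches scipy's half-normal:
print(np.isclose(stats.halfnorm.mean(scale=sd), np.sqrt(2 / (np.pi * tau))))  # True
```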

@@ -382,7 +404,7 @@ class Beta(UnitContinuous):
\alpha &= \mu \kappa \\
\beta &= (1 - \mu) \kappa

\text{where } \kappa = \frac{\mu(1-\mu)}{\sigma^2} - 1

Parameters
----------
@@ -554,15 +576,16 @@ class Lognormal(PositiveContinuous):
Scale parameter (tau > 0).
"""

-def __init__(self, mu=0, tau=1, *args, **kwargs):
+def __init__(self, mu=0, sd=None, tau=None, *args, **kwargs):
super(Lognormal, self).__init__(*args, **kwargs)

self.mu = mu
-self.tau = tau
-self.mean = tt.exp(mu + 1. / (2 * tau))
+self.tau, self.sd = get_tau_sd(tau=tau, sd=sd)
+
+self.mean = tt.exp(mu + 1. / (2 * self.tau))
self.median = tt.exp(mu)
-self.mode = tt.exp(mu - 1. / tau)
-self.variance = (tt.exp(1. / tau) - 1) * tt.exp(2 * mu + 1. / tau)
+self.mode = tt.exp(mu - 1. / self.tau)
+self.variance = (tt.exp(1. / self.tau) - 1) * tt.exp(2 * mu + 1. / self.tau)

def _random(self, mu, tau, size=None):
samples = np.random.normal(size=size)
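
The moment formulas being rewritten here are easy to check numerically: with an underlying Normal(mu, tau) where tau is the precision, the lognormal mean is exp(mu + 1/(2 tau)) and the variance is (exp(1/tau) - 1) exp(2 mu + 1/tau). A quick Monte Carlo sanity check in plain numpy:

```python
import numpy as np

mu, tau = 0.5, 4.0
rng = np.random.RandomState(42)
# Exponentiate draws from the underlying normal (sd = tau**-0.5).
samples = np.exp(rng.normal(mu, tau ** -0.5, size=1_000_000))

analytic_mean = np.exp(mu + 1. / (2 * tau))
analytic_var = (np.exp(1. / tau) - 1) * np.exp(2 * mu + 1. / tau)
print(abs(samples.mean() - analytic_mean) < 0.01)  # True
print(abs(samples.var() - analytic_var) < 0.05)    # True
```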
@@ -1199,7 +1222,7 @@ class VonMises(Continuous):
R"""
Univariate VonMises log-likelihood.
.. math::
f(x \mid \mu, \kappa) =
\frac{e^{\kappa\cos(x-\mu)}}{2\pi I_0(\kappa)}

where :math:`I_0` is the modified Bessel function of order 0.
@@ -1244,7 +1267,7 @@ class SkewNormal(Continuous):
R"""
Univariate skew-normal log-likelihood.
.. math::
f(x \mid \mu, \tau, \alpha) =
2 \Phi((x-\mu)\sqrt{\tau}\alpha) \phi(x,\mu,\tau)
======== ==========================================
Support :math:`x \in \mathbb{R}`
@@ -1266,13 +1289,13 @@
Alternative scale parameter (tau > 0).
alpha : float
Skewness parameter.

Notes
-----
When alpha=0 we recover the Normal distribution and mu becomes the mean,
tau the precision and sd the standard deviation. In the limit of alpha
approaching plus/minus infinity we get a half-normal distribution.

"""
def __init__(self, mu=0.0, sd=None, tau=None, alpha=1, *args, **kwargs):
super(SkewNormal, self).__init__(*args, **kwargs)
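
The note above — alpha = 0 recovers the normal distribution — is easy to confirm against scipy's skew-normal implementation (a sanity check independent of PyMC3):

```python
from scipy import stats

# With shape parameter a = 0 the skew-normal pdf reduces to the normal pdf:
# 2 * Phi(0) * phi(x) = phi(x).
x = 0.7
print(abs(stats.skewnorm.pdf(x, a=0) - stats.norm.pdf(x)) < 1e-12)  # True
```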