Commit

Merge branch 'master' into plot_bpv
aloctavodia committed Jun 22, 2020
2 parents b021105 + e5e4eab commit 0765d73
Showing 51 changed files with 39,969 additions and 2,427 deletions.
2 changes: 2 additions & 0 deletions .azure-pipelines/azure-pipelines-external.yml
@@ -43,6 +43,8 @@ jobs:
displayName: 'Debug information'
- script: |
sudo apt-get update
sudo apt-get install jags
python -m pip install --upgrade pip
if [ "$(pytorch.version)" = "latest" ]; then
14 changes: 10 additions & 4 deletions .azure-pipelines/azure-pipelines-wheel.yml
@@ -34,7 +34,8 @@ jobs:
- script: |
python -m pip install --upgrade pip
python -m pip install --no-cache-dir -r requirements.txt
pip install wheel
python -m pip install wheel
python -m pip install twine
displayName: 'Install requirements'
- script: |
@@ -43,9 +44,10 @@
displayName: 'Build a wheel'
- script: |
cd dist
ls -lh
ls | grep *.whl | xargs python -m pip install
mkdir install_test
cd install_test
ls -lh ../dist
python -m pip install ../dist/*.whl
python -c "import arviz as az; print(az);print(az.summary(az.load_arviz_data('non_centered_eight')))"
cd ..
displayName: 'Install and test the wheel'
@@ -60,3 +62,7 @@ jobs:
pathtoPublish: 'dist'
artifactName: 'arviz_wheel_dist'
displayName: 'Publish the wheel'

- script: |
python -m twine upload -u __token__ -p $(PYPI_PASSWORD) --skip-existing dist/*
displayName: 'Upload wheel to PyPI'
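
The pipeline above builds a wheel, installs it from a separate directory, smoke-tests the import, and uploads with twine. A rough local equivalent of those steps (a sketch, not the pipeline itself; assumes a source tree with a `setup.py` and a PyPI API token exported as `PYPI_TOKEN`):

```shell
# Build the wheel into dist/
python -m pip install --upgrade pip wheel twine
python setup.py bdist_wheel

# Install from a separate directory so the local source tree cannot
# shadow the freshly installed package during the smoke test.
mkdir -p install_test
cd install_test
python -m pip install ../dist/*.whl
python -c "import arviz as az; print(az.summary(az.load_arviz_data('non_centered_eight')))"
cd ..

# --skip-existing makes re-runs idempotent when the version is already on PyPI.
python -m twine upload -u __token__ -p "$PYPI_TOKEN" --skip-existing dist/*
```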
27 changes: 24 additions & 3 deletions CHANGELOG.md
@@ -3,16 +3,38 @@
## v0.x.x Unreleased

### New features
* loo-pit plot. The kde is computed over the data interval (this could be shorter than [0, 1]). The hdi is computed analitically (#1215)
* loo-pit plot. The KDE is computed over the data interval (this could be shorter than [0, 1]). The HDI is computed analytically (#1215)
* Added `html_repr` of InferenceData objects for jupyter notebooks. (#1217)
* Added support for PyJAGS via the function `from_pyjags`. (#1219 and #1245)
* `from_pymc3` can now retrieve `coords` and `dims` from model context (#1228, #1240 and #1249)
* `plot_trace` now supports multiple aesthetics to identify chain and variable
shape and support matplotlib aliases (#1253)
* `plot_hdi` can now take already computed HDI values (#1241)

### Maintenance and fixes

* Include data from `MultiObservedRV` to `observed_data` when using
`from_pymc3` (#1098)
* Added a note on `plot_pair` when trying to use `plot_kde` on `InferenceData`
objects. (#1218)
* Added `log_likelihood` argument to `from_pyro` and a warning if log likelihood cannot be obtained (#1227)
* Skip tests on matplotlib animations if ffmpeg is not installed (#1227)
* Fix hpd bug where arguments were being ignored (#1236)
* Remove false positive warning in `plot_hdi` and fixed matplotlib axes generation (#1241)
* Change the default `zorder` of scatter points from `0` to `0.6` in `plot_pair` (#1246)
* Update `get_bins` for numpy 1.19 compatibility (#1256)
* Fixes to `rug`, `divergences` arguments in `plot_trace` (#1253)

### Deprecation
* Using `from_pymc3` without a model context available now raises a
`FutureWarning` and will be deprecated in a future version (#1227)
* In `plot_trace`, `chain_prop` and `compact_prop` as tuples will now raise a
`FutureWarning` (#1253)
* `hdi` with 2d data raises a FutureWarning (#1241)
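
The `FutureWarning` deprecations listed above follow the standard library warning mechanism. A minimal sketch of the pattern (hypothetical simplified function, not ArviZ's actual converter):

```python
import warnings

def from_pymc3(trace, model=None):
    """Hypothetical sketch: warn when no model (context) is available."""
    if model is None:
        # Mirrors the FutureWarning described in #1227: the call still works
        # today, but its behaviour will change in a future release.
        warnings.warn(
            "Using from_pymc3 without the model will be deprecated in a future release",
            FutureWarning,
            stacklevel=2,
        )
    return {"trace": trace, "model": model}
```

Calling the sketch without a model emits a `FutureWarning` while still returning a result, which is what lets downstream code keep working during the deprecation window.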

### Documentation
* A section has been added to the documentation at InferenceDataCookbook.ipynb illustrating the use of ArviZ in conjunction with PyJAGS. (#1219 and #1245)
* Fixed inconsistent capitalization in `plot_hdi` docstring (#1221)
* Fixed and extended `InferenceData.map` docs (#1255)

## v0.8.3 (2020 May 28)
### Maintenance and fixes
@@ -281,4 +303,3 @@
## v0.3.0 (2018 Dec 14)

* First Beta Release

23 changes: 18 additions & 5 deletions GOVERNANCE.md
@@ -93,8 +93,12 @@ Council Members will have the responsibility of
* Make decisions when regular community discussion doesn’t produce consensus on an issue in a reasonable time frame.
* Make decisions about strategic collaborations with other organizations or individuals.
* Make decisions about the overall scope, vision and direction of the project.
* Develop funding sources
* Decide how to disburse funds in consultation with Core Contributors

Note that each individual council member does not have the power to unilaterally wield these responsibilities, but the council as a whole must jointly make these decisions. In other words, Council Members are first and foremost Core Contributors, but only when needed they can collectively make decisions for the health of the project.
The council may choose to delegate these responsibilities to sub-committees. If so, Council members must update this document to make the delegation clear.

Note that no individual council member has the power to unilaterally wield these responsibilities; the council as a whole must jointly make these decisions. In other words, Council Members are first and foremost Core Contributors, but when needed they can collectively make decisions for the health of the project.

ArviZ will be holding its first election to determine its initial council in the coming weeks and this document will be updated.

@@ -182,9 +186,9 @@ Each voter can vote zero or more times, once per each candidate. As this is not

#### Voting Criteria For Future Elections
Voting for the first election is restricted in order to establish stable governance and to defer major decisions to elected leaders
* For the first election only the folks in Slack can vote (excluding GSOC students)
* For the first election only the people registered following the guidelines in elections/ArviZ_2020.md can vote
* In the first year, the council must determine voting eligibility for future elections between two criteria:
* Those with commit bits
* Core contributors
* The contributing community at large

### Core Contributors
@@ -197,9 +201,18 @@ Current Core Contributors can nominate candidates for consideration by the counc
can make the determination for acceptance with a process of their choosing.

#### Current Core Contributors
* Will be updated with Core Contributor list during first election
* Oriol Abril-Pla (@OriolAbril)
* Alex Andorra (@AlexAndorra)
* Seth Axen (@sethaxen)
* Colin Carroll (@ColCarroll)
* Robert P. Goldman (@rpgoldman)
* Ari Hartikainen (@ahartikainen)
* Ravin Kumar (@canyon289)
* Osvaldo Martin (@aloctavodia)
* Mitzi Morris (@mitzimorris)
* Du Phan (@fehiepsi)
* Aki Vehtari (@avehtari)

#### Core Contributor Responsibilities
* Enforce code of conduct
* Maintain a check against Council

2 changes: 2 additions & 0 deletions arviz/data/__init__.py
@@ -7,6 +7,7 @@
from .io_cmdstan import from_cmdstan
from .io_cmdstanpy import from_cmdstanpy
from .io_dict import from_dict
from .io_pyjags import from_pyjags
from .io_pymc3 import from_pymc3, from_pymc3_predictions
from .io_pystan import from_pystan
from .io_emcee import from_emcee
@@ -24,6 +25,7 @@
"dict_to_dataset",
"convert_to_dataset",
"convert_to_inference_data",
"from_pyjags",
"from_pymc3",
"from_pymc3_predictions",
"from_pystan",
90 changes: 78 additions & 12 deletions arviz/data/inference_data.py
@@ -3,13 +3,16 @@
from collections.abc import Sequence
from copy import copy as ccopy, deepcopy
from datetime import datetime
from html import escape
import warnings
import uuid

import netCDF4 as nc
import numpy as np
import xarray as xr
from xarray.core.options import OPTIONS

from ..utils import _subset_list
from ..utils import _subset_list, HtmlTemplate
from ..rcparams import rcParams

SUPPORTED_GROUPS = [
@@ -125,14 +128,39 @@ def __init__(self, **kwargs):
self._groups_warmup.append(key)

def __repr__(self):
"""Make string representation of object."""
"""Make string representation of InferenceData object."""
msg = "Inference data with groups:\n\t> {options}".format(
options="\n\t> ".join(self._groups)
)
if self._groups_warmup:
msg += "\n\nWarmup iterations saved ({}*).".format(WARMUP_TAG)
return msg

def _repr_html_(self):
"""Make html representation of InferenceData object."""
display_style = OPTIONS["display_style"]
if display_style == "text":
html_repr = f"<pre>{escape(repr(self))}</pre>"
else:
elements = "".join(
[
HtmlTemplate.element_template.format(
group_id=group + str(uuid.uuid4()),
group=group,
xr_data=getattr( # pylint: disable=protected-access
self, group
)._repr_html_(),
)
for group in self._groups_all
]
)
formatted_html_template = HtmlTemplate.html_template.format( # pylint: disable=possibly-unused-variable
elements
)
css_template = HtmlTemplate.css_template # pylint: disable=possibly-unused-variable
html_repr = "%(formatted_html_template)s%(css_template)s" % locals()
return html_repr
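
The text/html switch above follows xarray's `display_style` option. A standalone sketch of the same fallback pattern (a simplified stand-in class, not the real `InferenceData`, and taking the style as an argument rather than reading the option):

```python
from html import escape

class Displayable:
    """Minimal stand-in showing the plain-text vs. HTML repr fallback."""

    def __init__(self, groups):
        self.groups = list(groups)

    def __repr__(self):
        return "Inference data with groups:\n\t> " + "\n\t> ".join(self.groups)

    def _repr_html_(self, display_style="html"):
        if display_style == "text":
            # Escape and wrap the text repr so notebooks render it verbatim.
            return f"<pre>{escape(repr(self))}</pre>"
        # Otherwise emit real markup, one element per group.
        items = "".join(f"<li>{escape(group)}</li>" for group in self.groups)
        return f"<ul>{items}</ul>"
```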

def __delattr__(self, group):
"""Delete a group from the InferenceData object."""
if group in self._groups:
@@ -346,20 +374,28 @@ def _group_names(self, groups, filter_groups=None):
def map(self, fun, groups=None, filter_groups=None, inplace=False, args=None, **kwargs):
"""Apply a function to multiple groups.
Applies ``fun`` groupwise to the selected ``InferenceData`` groups and overwrites the
group with the result of the function.
Parameters
----------
fun: callable
Function to be applied to each group.
groups: str or list of str, optional
fun : callable
Function to be applied to each group. Assumes the function is called as
``fun(dataset, *args, **kwargs)``.
groups : str or list of str, optional
Groups where the selection is to be applied. Can either be group names
or metagroup names.
inplace: bool, optional
filter_groups : {None, "like", "regex"}, optional
If `None` (default), interpret ``groups`` as the exact group or metagroup names. If
"like", interpret ``groups`` as substrings of the real group names. If "regex",
interpret ``groups`` as regular expressions matched against the real group names,
in the style of `pandas.filter`.
inplace : bool, optional
If ``True``, modify the InferenceData object inplace,
otherwise, return the modified copy.
args: array_like, optional
Positional arguments passed to ``fun``. Assumes the function is called as
``fun(dataset, *args, **kwargs)``.
**kwargs: mapping, optional
args : array_like, optional
Positional arguments passed to ``fun``.
**kwargs : mapping, optional
Keyword arguments passed to ``fun``.
Returns
@@ -376,10 +412,40 @@ def map(self, fun, groups=None, filter_groups=None, inplace=False, args=None, **
In [1]: import arviz as az
...: idata = az.load_arviz_data("non_centered_eight")
...: idata_shifted_obs = idata.map(lambda x: x + 3, groups="observed_RVs")
...: idata_shifted_obs = idata.map(lambda x: x + 3, groups="observed_vars")
...: print(idata_shifted_obs.observed_data)
...: print(idata_shifted_obs.posterior_predictive)
Rename and update the coordinate values in both posterior and prior groups.
.. ipython::
In [1]: idata = az.load_arviz_data("radon")
...: idata = idata.map(
...: lambda ds: ds.rename({"gamma_dim_0": "uranium_coefs"}).assign(
...: uranium_coefs=["intercept", "u_slope", "xbar_slope"]
...: ),
...: groups=["posterior", "prior"]
...: )
...: idata.posterior
Add extra coordinates to all groups containing observed variables
.. ipython::
In [1]: idata = az.load_arviz_data("rugby")
...: home_team, away_team = np.array([
...: m.split() for m in idata.observed_data.match.values
...: ]).T
...: idata = idata.map(
...: lambda ds, **kwargs: ds.assign_coords(**kwargs),
...: groups="observed_vars",
...: home_team=("match", home_team),
...: away_team=("match", away_team),
...: )
...: print(idata.posterior_predictive)
...: print(idata.observed_data)
"""
if args is None:
args = []
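
The groupwise-apply behaviour documented above can be sketched with plain dictionaries (a hypothetical minimal version; the real method also resolves metagroups, applies `filter_groups`, and supports `inplace`):

```python
def map_groups(data, fun, groups=None, *args, **kwargs):
    """Apply ``fun`` to selected entries of a dict of group -> dataset.

    Each selected group is replaced by ``fun(value, *args, **kwargs)``,
    mirroring how InferenceData.map overwrites groups with the result.
    """
    if groups is None:
        groups = list(data)
    elif isinstance(groups, str):
        groups = [groups]
    out = dict(data)  # shallow copy: untouched groups are kept as-is
    for name in groups:
        out[name] = fun(data[name], *args, **kwargs)
    return out

# Usage mirroring the docstring example: shift observed values by 3.
idata = {"observed_data": [1, 2], "posterior": [0.5]}
shifted = map_groups(idata, lambda xs: [x + 3 for x in xs], "observed_data")
```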
@@ -427,7 +493,7 @@ def _wrap_xarray_method(
In [1]: import arviz as az
...: idata = az.load_arviz_data("non_centered_eight")
...: idata_means = idata._wrap_xarray_method("mean", groups="latent_RVs")
...: idata_means = idata._wrap_xarray_method("mean", groups="latent_vars")
...: print(idata_means.posterior)
...: print(idata_means.observed_data)