
refactor variational module, add histogram approximation #1904

Merged
merged 9 commits into pymc-devs:master from the histogram_approx branch on Mar 17, 2017

Conversation

ferrine (Member) commented Mar 15, 2017

Here I implement a Histogram approximation with the same interface as the other approximations.

ferrine (Member, Author) commented Mar 15, 2017

Histogram is needed to put a trace into the Theano graph for general-purpose use.

twiecki (Member) commented Mar 15, 2017

What's Histogram? Is there a reference?

ferrine (Member, Author) commented Mar 15, 2017

Not yet

ferrine (Member, Author) commented Mar 15, 2017

@twiecki, do you mean a paper on arXiv by "reference"?

twiecki (Member) commented Mar 15, 2017

@ferrine Anything that makes me understand what this is useful for.

ferrine (Member, Author) commented Mar 15, 2017

@twiecki I'll use the posterior widely for minimizing Bayesian risk.

  1. Suppose you have a model that, for given inputs, yields a posterior predictive distribution: X -> p(y|theta).
  2. You know the desired output, call it y_target, and a loss function L(y_target, y_true).
  3. Your decision is to choose the X that minimizes L.

So the objective is E_theta[L(y_target, y_true)] -> min over X.

Here I do not need to know anything about p(theta|D); I only want to sample from it. At the moment I can do that only with a variational approximation. After implementing Histogram I can use SVGD/NUTS to build a trace and sample from it.
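
To make that workflow concrete, here is a minimal sketch. The import path, the `Histogram(trace)` constructor, and the `sample_vp` call are assumptions based on this thread and the commit messages, not a verified excerpt from the merged code:

```python
import numpy as np
import pymc3 as pm
# assumed import path; the commits in this PR touch `approximations.py` in the variational module
from pymc3.variational.approximations import Histogram

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0., sd=1.)
    pm.Normal('obs', mu=mu, sd=1., observed=np.random.randn(100))
    trace = pm.sample(1000)            # any step-method trace (e.g. NUTS)

    # wrap the trace so it exposes the same interface as MeanField/FullRank
    approx = Histogram(trace)          # constructor signature assumed
    posterior_draws = approx.sample_vp(500)   # proposed later in this thread to become .sample()
```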

twiecki (Member) commented Mar 15, 2017

OK, what's an example application? Also, do you assume theta is already estimated?

ferrine (Member, Author) commented Mar 16, 2017

Yes, the posterior is already estimated. You will be able to use any PyMC3 posterior for cost-benefit optimization. At my work I do exactly what I described: I want to minimize the cost of the inputs while achieving my target, and be confident that I can do it. I can't give more details because of an NDA :(
I'm planning to implement all that stuff in a few weeks and write a post with toy examples of:

  • Maximizing the stochastic output of a model: E_{p(theta|D)}[f(X)] -> max over X
  • Minimizing the cost of the inputs to achieve a target: E_{p(y|X, theta, D)}[L(y_target, X, y)] -> min over X
  • Minimizing Bayesian risk: E_{p(y|X, theta, D)}[L(y_p, y)] -> min over y_p

I'm going to use Adam for the optimization (a rough sketch of the last case is below).
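
As an illustration of the Bayesian-risk case only: the toy "posterior" samples, the squared-error loss, and the hand-rolled Adam loop below are all invented for this sketch and are not taken from the PR.

```python
import numpy as np

# stand-in for posterior draws of theta that would come from a PyMC3 trace / Histogram
rng = np.random.default_rng(0)
theta_samples = rng.normal(loc=2.0, scale=0.5, size=2000)

def expected_loss_grad(y_p, theta):
    # Monte Carlo estimate of d/dy_p E_theta[(y_p - theta)^2]
    return np.mean(2.0 * (y_p - theta))

# plain Adam loop minimizing the Monte Carlo Bayesian risk over the decision y_p
y_p, m, v = 0.0, 0.0, 0.0
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = expected_loss_grad(y_p, theta_samples)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    y_p -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)

print(y_p)  # for squared-error loss this approaches the posterior mean (~2.0)
```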

ferrine mentioned this pull request Mar 16, 2017
ferrine (Member, Author) commented Mar 16, 2017

@twiecki I want to merge Histogram tomorrow once I'm sure the tests pass; I have ~95% coverage. I also think the sample_vp method should be renamed to sample, since here I use traces from step methods. I want to do the same for the other approximations so that there is a consistent interface for sampling. Your thoughts?

ferrine merged commit 0f45720 into pymc-devs:master on Mar 17, 2017
ferrine added a commit that referenced this pull request Mar 17, 2017
ferrine (Member, Author) commented Mar 17, 2017

I forgot the docs for the class :( I'll add them in a separate PR.

fonnesbeck (Member) commented

Can we get a working example of this, either as a notebook (if you have time to do a fully documented case study) or as a model in the pymc3/examples folder?

ferrine mentioned this pull request Mar 26, 2017
ferrine deleted the histogram_approx branch Mar 26, 2017, 21:42
davidbrochart pushed a commit to davidbrochart/pymc3 that referenced this pull request Mar 27, 2017
refactor variational module, add histogram approximation (#1904)

* refactor module, add histogram

* add more tests

* refactor some code concerning AEVB histogram

* fix test for histogram

* use mean as deterministic point in Histogram

* remove unused import

* change names of shortcuts

* add names to shared params

* add new line at the end of `approximations.py`
twiecki pushed a commit that referenced this pull request Mar 27, 2017
* Added live_traceplot function

* Cosmetic change

* Changed the API to pm.sample(..., live_plot=True)

* Don't include `-np.inf` in calculating average ELBO (#1880)

* Adds an infmean for advi reporting

* fixing typo

* Add tutorial to detect sampling problems (#1866)

* Expand sampler-stats.ipynb example

include model diagnose from case study example in Stan http://mc-stan.org/documentation/case-studies/divergences_and_bias.html

* Sampler Diagnose for NUTS

* descriptive annotation and axis labels

* Fix typos

* PEP8 styling

* minor updates

1, add example to examples.rst
2, original content in Markdown code block

* Make install scripts idempotent (#1879)

* DOC Change heading names.

* Add examples of censored data models (#1870)

* Raise TypeError on non-data values of observed (#1872)

* Raise TypeError on non-data values of observed

* Added check for observed TypeError

* Make exponential mode have the correct shape

* Fix support of LKJCorr

* Added tutorial notebook on updating priors

* Fixed y-axis bug in forestplot; added transform argument to summary

* Style cleanup

* Made small changes and executed the notebook

* Added probit and invprobit functions

* Added carriage return to end of file

* Fixed indentation

* Changed probit test to use assert_allclose

* Fix tests for LKJCorr

* Added warning for ignoring init arguments in sample

* Kill stray tab

* Improve performance of transformations

* DOC Add new features

* Bump version.

* Added docs and scripts to MANIFEST

* WIP: Implement opvi (#1694)

* migrate useful functions from previous PR

(cherry picked from commit 9f61ab4)

* opvi draft

(cherry picked from commit d0997ff)

* made some test work

(cherry picked from commit b1a87d5)

* refactored approximation to support aevb (without test)

* refactor opvi

delete unnecessary methods from operator, change method order

* change log_q_local computation

* add full rank approximation

* add more_params argument to ObjectiveFunction.updates (aevb case)

* refactor density computation in full rank approximation

* typo: cast dict values to list

* typo: cast dict values to list

* typo: undefined T in dist_math

* refactor gradient scaling as suggested in approximateinference.org/accepted/RoederEtAl2016.pdf

* implement Langevin-Stein (LS) operator

* fix docstring

* add blank line in docs

* refactor ObjectiveFunction

* add not working LS Op test

* experiments with not working LS Op

* change activations

* refactor networks

* add step_function

* remove Langevin Stein, done refactoring

* remove Langevin Stein, done refactoring

* change optimizers

* refactor init params

* implement tests

* implement Inference

* code style

* test fix

* add minibatch test (fails now)

* add more tests for minibatch training

* add logdet to FullRank approximation

* add conversion of arrays to floatX

* tiny changes

* change number of iterations

* fix test and pylint check

* memoize functions in Objective function

* Optimize code a lot

* a bit more efficient pickling

* add docs

* Add MeanField -> FullRank parameter transfer

* refactor MeanField and FullRank a bit

* fix FullRank bug with shapes in random

* refactor Model.flatten (CC @taku-y)

* add `approximate` to inference

* rename approximate->fit

* change abbreviations

* Fix bug with scaling input variable in aevb

* fix theano bottleneck in graph

* more efficient scaling for local vars

* fix typo in local Q

* add aevb test

* refactor memoize to work with my objects

* add tests for numpy view usage

* pickle-hash fix

* pickle-hash fix again

* add node sampling + make up some code

* add notebook with example

* sample_proba explained

* Revert "small fix for multivariate mixture models"

* Added message about init only working with auto-assigned step methods

* doc(DiagInferDiv): formatting fix in blog post quote. Closes #1895. (#1909)

* delete unnecessary text and add some benchmarks (#1901)

* Add LKJCholeskyCov

* Added newline to MANIFEST

* Replaced package list with find_packages in setup.py; removed examples/data/__init__.py

* Fix log jacobian in LKJCholeskyCov

* Updated version to rc2

* Fixed stray version string

* Fix indexing traces with steps greater one

* refactor variational module, add histogram approximation (#1904)

* refactor module, add histogram

* add more tests

* refactor some code concerning AEVB histogram

* fix test for histogram

* use mean as deterministic point in Histogram

* remove unused import

* change names of shortcuts

* add names to shared params

* add new line at the end of `approximations.py`

* Add documentation for LKJCholeskyCov

* SVGD problems (#1916)

* fix some svgd problems

* switch -> ifelse

* except in record

* Histogram docs (#1914)

* add docs

* delete redundant code

* add usage example

* remove unused import

* Add expand_packed_triangular

* improve aesthetics

* Bump theano to 0.9.0rc4 (#1921)

* Add tests for LKJCholeskyCov

* Histogram: use only free RVs from trace (#1926)

* use only free RVs from trace

* use memoize in Histogram.histogram_logp

* Change tests for histogram

* Bump theano to be at least 0.9.0

* small fix to prevent a TypeError with the ufunc true_divide

* Fix tests for py2

* Add floatX wrappers in test_advi

* Changed the API to pm.sample(..., live_plot=True)

* Better formatting