support for pymc3 as a modeling language? #20

Closed
dustinvtran opened this issue Feb 23, 2016 · 17 comments

@dustinvtran
Member

dustinvtran commented Feb 23, 2016

We want to support modeling languages that are either popular or more useful for certain tasks than the alternatives we already support. With that in mind, pymc3 seems appealing for specifying large discrete latent variable models. You can't write them as a Stan program, and it could be rather annoying to code them up in raw TensorFlow or NumPy/SciPy.
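
As a concrete illustration of the kind of model in question (a hypothetical sketch, not from the thread), here is a two-component mixture with an explicit discrete assignment:

import pymc3 as pm

# Sketch: a mixture with a discrete latent assignment z. Stan has no
# integer parameters, so z would have to be marginalized out by hand
# before this could be written as a Stan program.
with pm.Model() as mixture:
    z = pm.Bernoulli('z', p=0.5)                   # discrete latent variable
    mu = pm.Normal('mu', mu=0., sd=10., shape=2)   # component means
    x = pm.Normal('x', mu=mu[z], sd=1., observed=1.2)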

On the other hand, it's one more modeling language to maintain; pymc3 uses Theano as a backend, which may lead to some bugs(?); and I don't know how popular pymc3 would be as a use case compared to the other supported languages.

@datnamer

datnamer commented May 6, 2016

@twiecki

Pymc3 has the most appealing DSL in my opinion.

@twiecki
Contributor

twiecki commented May 6, 2016

My biased opinion is that this is a great idea :). CC @jsalvatier @fonnesbeck.

@dustinvtran Are you in touch with Kevin Murphy by any chance?

We have discussed either allowing for an additional backend like TensorFlow (probably results in a pymc3 fork), or using something like https://github.com/dementrock/tensorfuse to support switching out backends.

For the first option, it shouldn't be too difficult to swap out the backend. Theano is used to compute the model logp and its gradient. The step methods and variational inference are pure Python (only some parts are sped up by Theano). The distributions should be easiest to port (see e.g. https://github.com/pymc-devs/pymc3/blob/master/pymc3/distributions/continuous.py). model.py then combines those into the full model graph (see e.g. https://github.com/pymc-devs/pymc3/blob/master/pymc3/model.py#L164).
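
To give a flavor of what porting a distribution would involve (my paraphrase of PyMC3's Normal.logp, not a verbatim copy):

import numpy as np
import theano.tensor as tt

def normal_logp(value, mu, tau):
    # Each PyMC3 distribution defines its log density as a Theano
    # expression like this; porting means rewriting it with another
    # backend's ops (e.g. tf.* instead of tt.*).
    return (-tau * (value - mu) ** 2 + tt.log(tau / (2. * np.pi))) / 2.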

Or were you imagining a different way of combining them?

@fonnesbeck

Keep in mind that PyMC isn't really another language -- one is essentially coding models in pure Python, by instantiating high-level PyMC objects. We have studiously avoided exposing most of Theano to users. The DSL (if you want to call it that) just uses standard Python idioms, for the most part (e.g. context managers for specifying models).
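
A minimal example of that idiom (a generic sketch, not from the thread):

import pymc3 as pm

# Each random variable created inside the `with` block is registered
# with `model` automatically -- plain Python, no new syntax.
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0., sd=1.)
    x = pm.Normal('x', mu=mu, sd=1., observed=[0.1, -0.3, 0.7])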

@dustinvtran
Member Author

dustinvtran commented May 6, 2016

Hi @twiecki @fonnesbeck. Thanks for the description! Yes, it would be great to have PyMC as a modeling language if it's not much effort to get it in. Generally, the library just needs a way to calculate the model's log joint density, log p(x, z), and its gradient.

The easiest way to get PyMC into the library is to treat the model's log density function as a single node in TensorFlow's graph. The node is a black box: it need only take the right inputs and produce the right output, along with a way to differentiate the density. If there's a way to get log density evaluations and their gradients from PyMC programs, then this can be achieved easily.

Further integration would be as you said: treat the PyMC program not as a single node but as its own corresponding graph, ported from Theano to TensorFlow.
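
In other words, something along these lines (a hypothetical sketch; the actual wrapper appears later in this thread):

import numpy as np
import tensorflow as tf

def log_joint(z):
    # Stand-in for a PyMC3 model's log p(x, z); any Python callable
    # returning a float works here.
    return np.float32(-0.5 * np.sum(z ** 2))

z = tf.placeholder(tf.float32, [None])
# tf.py_func embeds the callable as one opaque node in the graph;
# TensorFlow cannot differentiate through it on its own.
log_p = tf.py_func(log_joint, [z], [tf.float32])[0]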

@twiecki
Contributor

twiecki commented May 7, 2016

OK, that should be pretty simple actually, as we expose model.logp() and model.dlogp() directly.
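
Something like this, then (a sketch; exact API details may differ across PyMC3 versions):

import numpy as np
import pymc3 as pm

with pm.Model() as model:
    p = pm.Beta('p', 1, 1, transform=None)
    x = pm.Bernoulli('x', p, observed=np.array([0, 1, 0]))

point = {'p': np.array(0.4)}
print(model.logp(point))    # log joint density at `point`
dlogp = model.dlogp()       # compiled gradient function
print(dlogp(point))         # gradient w.r.t. the free parameters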

@dustinvtran
Member Author

Perfect. Then the first step would be to write a wrapper in the same way the one for Stan is written. It will take a vector of parameters from TensorFlow and convert it to the right dictionary format to plug into model.logp().

@akucukelbir
Contributor

nice!

@twiecki
Contributor

twiecki commented May 9, 2016

This seems to work:

import edward as ed
import numpy as np
import pymc3 as pm
import tensorflow as tf
import theano
from edward import Variational, Beta

# Shared Theano variable so observed data can be swapped in at runtime.
data_shared = theano.shared(np.zeros(1))

with pm.Model() as model:
    # transform=None keeps beta on the unit interval instead of
    # letting PyMC3 auto-transform it to the real line (see below).
    beta = pm.Beta('beta', 1, 1, transform=None)
    out = pm.Bernoulli('data',
                       beta,
                       observed=data_shared)

class PyMC3Model:
    """
    Model wrapper for models written in PyMC3.

    Parameters
    ----------
    model : pymc3.Model object
    observed : The shared theano tensor passed to the model likelihood
    """
    def __init__(self, model, observed):
        self.model = model
        self.observed = observed

        # PyMC3's free (continuous) parameters.
        vars = pm.inputvars(model.cont_vars)

        # Bijection between PyMC3's dict-of-arrays parameterization and
        # the flat parameter vector that Edward passes in.
        bij = pm.DictToArrayBijection(pm.ArrayOrdering(vars), model.test_point)
        self.logp = bij.mapf(model.fastlogp)
        self.dlogp = bij.mapf(model.fastdlogp(vars))

        self.num_vars = len(vars)

    def log_prob(self, xs, zs):
        # Embed the PyMC3 log density as a black-box node in the
        # TensorFlow graph.
        return tf.py_func(self._py_log_prob, [xs, zs], [tf.float32])[0]

    def _py_log_prob(self, xs, zs):
        # Evaluate the model's log joint density at each latent sample.
        n_minibatch = zs.shape[0]
        lp = np.zeros(n_minibatch, dtype=np.float32)
        self.observed.set_value(xs)
        for s in range(n_minibatch):
            lp[s] = self.logp(zs[s, :])

        return lp

# Ten coin flips as observed data.
data = ed.Data(np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1]))
m = PyMC3Model(model, data_shared)

# Mean-field VI with a Beta variational factor for the single latent.
variational = Variational()
variational.add(Beta(m.num_vars))

inference = ed.MFVI(m, variational, data)
inference.run(n_iter=10000)

@twiecki
Contributor

twiecki commented May 9, 2016

Does it not require the gradient somewhere?

@akucukelbir
Contributor

this is cool. i'll try to play around with this sometime soon.

@twiecki we implicitly end up calling tensorflow's autodiff in ed.MFVI

@twiecki
Contributor

twiecki commented May 9, 2016

But how can it compute the gradient over the pymc3 blackbox node?

@akucukelbir
Contributor

in this particular model, we use the score function gradient estimator. thus, we only need to evaluate _py_log_prob and the autodiff'ing happens on the Beta in variational.
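
For readers following along, a minimal NumPy sketch of that estimator (my illustration, not Edward's internals): the model's log density is only ever evaluated, never differentiated, which is why the black-box node is enough.

import numpy as np
from scipy.special import digamma
from scipy.stats import beta

def score_grad_a(log_p, a, b, n_samples=1000):
    # Monte Carlo score-function estimate of d ELBO / d a for a
    # Beta(a, b) variational distribution; `log_p` is any black-box
    # callable returning log p(x, z).
    z = beta.rvs(a, b, size=n_samples)
    log_q = beta.logpdf(z, a, b)
    # Analytic score of the Beta: d/da log q(z; a, b).
    score = np.log(z) - digamma(a) + digamma(a + b)
    return np.mean(score * (log_p(z) - log_q))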

@dustinvtran
Member Author

Awesome! This is excellent. Yes, for this Beta-Bernoulli model we just use the score function gradient, which does not require model gradients.

When I run the code, it seems to converge to roughly -9.6, while the other Beta-Bernoulli examples converge to around -6.2. Is this a bug, or something different in how this model is written compared to those examples?

@twiecki
Contributor

twiecki commented May 9, 2016

Ah, this is due to the autotransformation we apply to beta. Setting transform to None fixes it, and it converges to -6.2. I updated the code above.
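
For context (my gloss): by default PyMC3 maps bounded variables to the real line, so the logp the wrapper exposes is over the transformed variable, not the one the Beta variational factor expects. A quick way to see the difference:

import pymc3 as pm

with pm.Model() as m_auto:
    p = pm.Beta('p', 1, 1)                   # free variable is logit(p)
with pm.Model() as m_raw:
    p = pm.Beta('p', 1, 1, transform=None)   # free variable is p itself

print(m_auto.vars)   # e.g. [p_logodds] (name varies by version)
print(m_raw.vars)    # [p]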

Regarding the autodiff -- is there an API for the model gradient?

Also, is mini-batching supported right now? Seems like xs would also need to be a list of sub-samples?

@dustinvtran
Member Author

That's really cool. So I'm not too familiar with PyMC3 syntax. Is there a reason that data has to appear both when defining the model and when calculating the joint log density? Also, do you mind submitting a pull request so we can officially merge this in? :)

Regarding the autodiff -- is there an API for the model gradient?

It's mentioned in TensorFlow's API docs, but I haven't spent much time figuring out how to register gradients. It would be excellent to do this, though.
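
The usual workaround (a sketch of a known TensorFlow recipe; the names here are hypothetical) is to register a custom gradient and apply it to the py_func node via gradient_override_map; the gradient function could itself call model.dlogp() through another py_func:

import tensorflow as tf

def py_func_with_grad(func, inp, Tout, grad_fn, name):
    # grad_fn has the standard gradient signature (op, grad) -> list of
    # gradients, one per input of `op`.
    grad_name = 'PyFuncGrad_' + name
    tf.RegisterGradient(grad_name)(grad_fn)
    g = tf.get_default_graph()
    with g.gradient_override_map({'PyFunc': grad_name}):
        return tf.py_func(func, inp, Tout, name=name)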

Also, is mini-batching supported right now? Seems like xs would also need to be a list of sub-samples?

Yes, the default is that we subsample from the first slice of the data structure, so random elements from a vector in this case, or more generally rows in a matrix or matrices from a tensor. You can see this in edward/data.py.
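
That is, subsampling always happens along the first axis (my toy illustration):

import numpy as np

x = np.arange(12).reshape(6, 2)                       # 6 data points
idx = np.random.choice(x.shape[0], 3, replace=False)  # random indices
batch = x[idx]                                        # 3 rows, i.e. a minibatch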

@twiecki
Contributor

twiecki commented May 9, 2016

That's really cool. So I'm not too familiar with PyMC3 syntax. Is there a reason that data has to appear both when defining the model and when calculating the joint log density?

It's not required; you only have to set something to fix the shape. I changed the above example to do so.

Also, do you mind submitting a pull request so we can officially merge this in? :)

Yeah, I'll do that.

@twiecki
Contributor

twiecki commented May 9, 2016

Here we go: #75
