
Extend Rice distribution #3289

Merged: 9 commits merged into pymc-devs:master on Dec 6, 2018
Conversation

@nbud (Contributor) commented Dec 4, 2018

Caveat: this PR is currently stacked on top of #3287.

I'm trying to do some inference on the parameters of a Rice distribution:

import pymc3 as pm
import numpy as np
from scipy import stats

true_nu = 7.
true_sigma = 2.

np.random.seed(123)

# one observation
observed = stats.rice(b=true_nu/true_sigma, scale=true_sigma).rvs()

# multiple observations - not working at the moment
#observed = stats.rice(b=true_nu/true_sigma, scale=true_sigma).rvs(size=20)

with pm.Model() as model:
    nu = pm.Normal("nu", 5., sd=3.)
    sigma = pm.HalfNormal('sigma', sd=1)
    x = pm.Rice("y", nu=nu, sd=sigma, observed=observed)

with model:
    step = pm.NUTS()
    trace = pm.sample(1000, step=[step], chains=4, cores=1)

axes = pm.traceplot(trace)
axes[0,0].axvline(true_nu, linestyle="--", color="k")
axes[1,0].axvline(true_sigma, linestyle="--", color="k")

Initially, this code threw:

MethodNotDefined: ('grad', <class 'pymc3.distributions.dist_math.I0e'>, 'I0e')

I defined the gradient of pymc3.distributions.dist_math.i0e in this PR, and the single-observation inference now works.
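
Since i0e(x) = exp(-|x|) * I0(x) and I0'(x) = I1(x), the derivative is d/dx i0e(x) = i1e(x) - sign(x) * i0e(x). A minimal sketch of the scalar-level ops (names are illustrative; the exact code in dist_math may differ):

import scipy.special
import theano.scalar
from theano.scalar.basic import UnaryScalarOp, upgrade_to_float_no_complex

class I1e(UnaryScalarOp):
    """Exponentially scaled modified Bessel function of the first kind, order 1."""
    def impl(self, x):
        return scipy.special.i1e(x)

i1e_scalar = I1e(upgrade_to_float_no_complex, name="i1e")

class I0e(UnaryScalarOp):
    """Exponentially scaled modified Bessel function of the first kind, order 0."""
    def impl(self, x):
        return scipy.special.i0e(x)

    def grad(self, inputs, grads):
        x, = inputs
        gz, = grads
        # d/dx i0e(x) = i1e(x) - sign(x) * i0e(x); note that the *scalar* ops
        # are used here, not tensor-level (Elemwise) wrappers
        return gz * (i1e_scalar(x) - theano.scalar.sign(x) * i0e_scalar(x)),

i0e_scalar = I0e(upgrade_to_float_no_complex, name="i0e")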

However, if I use multiple observations I obtain:

TypeError: ('Variable type field must be a Scalar.', Elemwise{mul,no_inplace}.0, TensorType(float64, vector))

Any idea how to fix that?

@lucianopaz (Contributor) commented:

First of all, this is really great!

About the error you're getting, it's because the ops you wrote inherit from UnaryScalarOp and as such only work on scalars, not vectors. You should then also define the elemwise and inplace implementations of the op and use those instead. This should be really straightforward by using the @_scal_elemwise and @_scal_inplace decorators. You can take a look at how this PR did it.
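
For illustration, wrapping the scalar op with Elemwise directly would look roughly like this (a sketch; the decorators mentioned above do essentially this plus some registration bookkeeping):

import scipy.special
from theano.scalar.basic import UnaryScalarOp, upgrade_to_float_no_complex
from theano.tensor.elemwise import Elemwise

class I0e(UnaryScalarOp):
    """Exponentially scaled modified Bessel function of the first kind, order 0."""
    def impl(self, x):
        return scipy.special.i0e(x)

# scalar op: only accepts scalar inputs
i0e_scalar = I0e(upgrade_to_float_no_complex, name="i0e")
# elemwise op: broadcasts the scalar op over tensor inputs of any shape
i0e = Elemwise(i0e_scalar, name="Elemwise{i0e,no_inplace}")

The inplace variant would be built the same way and only saves memory, so it can be treated as a follow-up optimization.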

@nbud (Contributor, Author) commented Dec 5, 2018

@lucianopaz thank you for your reply. I tried to naively wrap the scalar ops with Elemwise (replicating the logic of @_scal_elemwise), but the grad (and maybe other parts!) is now broken:

================================== FAILURES ===================================
______________________________ TestI0e.test_grad ______________________________

self = <pymc3.tests.test_dist_math.TestI0e object at 0x000000000AAF2D30>

    @theano.configparser.change_flags(compute_test_value="ignore")
    def test_grad(self):
>       utt.verify_grad(i0e, [0.5])

pymc3\tests\test_dist_math.py:201:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tests\unittest_tools.py:92: in verify_grad
    T.verify_grad(op, pt, n_tests, rng, *args, **kwargs)
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gradient.py:1770: in verify_grad
    disconnected_inputs='ignore')
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gradient.py:605: in grad
    grad_dict, wrt, cost_name)
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gradient.py:1371: in _populate_grad_dict
    rval = [access_grad_cache(elem) for elem in wrt]
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gradient.py:1371: in <listcomp>
    rval = [access_grad_cache(elem) for elem in wrt]
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gradient.py:1326: in access_grad_cache
    term = access_term_cache(node)[idx]
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gradient.py:1162: in access_term_cache
    new_output_grads)
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:543: in L_op
    rval = self._bgrad(inputs, outs, ograds)
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:643: in _bgrad
    ret.append(transform(scalar_igrad))
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:635: in transform
    *[transform(ipt) for ipt in node.inputs])
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:635: in <listcomp>
    *[transform(ipt) for ipt in node.inputs])
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:635: in transform
    *[transform(ipt) for ipt in node.inputs])
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\gof\op.py:615: in __call__
    node = self.make_node(*inputs, **kwargs)
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:482: in make_node
    DimShuffle, *inputs)
C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\elemwise.py:424: in get_output_info
    for i in inputs])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <theano.tensor.basic.ScalarFromTensor object at 0x00000000069D5F60>
t = <float64>

    def make_node(self, t):
>       assert isinstance(t.type, TensorType)
E       AssertionError

C:\ProgramData\Anaconda3\envs\pymc3-dev\lib\site-packages\theano\tensor\basic.py:1127: AssertionError

Do you know how to fix this? Also, do I need the inplace op, and if so, where should it be used?

@nbud (Contributor, Author) commented Dec 6, 2018

Issue fixed. The Rice distribution now accepts multiple parameters and observations, and it is usable with NUTS (a quick sanity check is sketched below the example). Here is an example of inference on nu[0], nu[1], and sigma:

import pymc3 as pm
import numpy as np
from scipy import stats

true_nu = np.array([4., 7.])
true_sigma = 2.

np.random.seed(123)
observed = stats.rice(b=true_nu / true_sigma, scale=true_sigma).rvs(
    size=(1000, len(true_nu))
)

with pm.Model() as model:
    nu = pm.Uniform("nu", 0.1, 20., shape=len(true_nu))
    sigma = pm.Uniform("sigma", 0.1, 10.)
    x = pm.Rice("y", nu=nu, sd=sigma, observed=observed)

with model:
    step = pm.NUTS()
    trace = pm.sample(1000, step=[step], chains=4, cores=1, random_seed=123)

axes = pm.traceplot(trace)
for i, nu in enumerate(true_nu):
    axes[0, 0].axvline(nu, linestyle="--", color="k", label=f"true_nu[{i}]")
axes[0, 0].legend()
axes[1, 0].axvline(true_sigma, linestyle="--", color="k")

[Figure inference_nu: traceplot of the posteriors for nu and sigma; dashed lines mark the true values]
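
For reference, a quick sanity check along these lines (a rough sketch run against this branch, not part of the test suite) exercises both of the earlier failure modes, vector inputs and the gradient:

import numpy as np
import theano
import theano.tensor as tt
from theano.tests import unittest_tools as utt
from pymc3.distributions.dist_math import i0e  # the elemwise-wrapped op on this branch

# vector (and higher-dimensional) inputs now work
x = tt.dvector("x")
f = theano.function([x], i0e(x))
print(f(np.array([0.5, 1.0, 2.0])))

# and the gradient passes theano's finite-difference check
utt.verify_grad(i0e, [np.array([0.5, 1.0, 2.0])], rng=np.random.RandomState(42))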

Can someone please review this PR?

I'll rebase it on top of master as soon as preliminary PR #3287 is merged.

@nbud changed the title from "[WIP] Extend Rice distribution" to "Extend Rice distribution" on Dec 6, 2018

@@ -1178,7 +1178,9 @@ def test_multidimensional_beta_construction(self):

 def test_rice(self):
     self.pymc3_matches_scipy(Rice, Rplus, {'nu': Rplus, 'sd': Rplusbig},
-        lambda value, nu, sd: sp.rice.logpdf(value, b=nu, loc=0, scale=sd))
+        lambda value, nu, sd: sp.rice.logpdf(value, b=nu / sd, loc=0, scale=sd))
+    self.pymc3_matches_scipy(Rice, Rplus, {'b': Rplus, 'sd': Rplusbig},

Member:

Does this test work before your last commit with the elementwise fixes? If not, we might want to add one.

@nbud (Contributor, Author):

This test is actually part of #3287 and passes. The current PR (#3289) initially broke it when I tried to use elemwise, but it passes now. Once PR #3287 is merged, I'll rebase the current PR so the differences are easier to see.
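
For context, the two pymc3_matches_scipy calls correspond to the two parametrizations the Rice distribution now accepts, with b = nu / sd as in scipy.stats.rice. Roughly:

import pymc3 as pm

with pm.Model():
    # noncentrality/scale parametrization
    y1 = pm.Rice("y1", nu=7., sd=2.)
    # shape parametrization, with b = nu / sd matching scipy.stats.rice
    y2 = pm.Rice("y2", b=3.5, sd=2.)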

@twiecki merged commit 99dbf35 into pymc-devs:master on Dec 6, 2018
@twiecki (Member) commented Dec 6, 2018

Thanks, this looks like it wasn't trivial to figure out!
