
Upper lower #3424

Closed
wants to merge 16 commits

Conversation

chang111 (Contributor)

To fix issue #3248.

@chang111 (Contributor, Author)

@junpenglao All checks have passed. Please review.

@lucianopaz (Contributor) left a comment


The commit history is a bit weird and shows all the commits you made in your previous PR.

Nevertheless, you need to do 3 more things to get this done:

  1. Add a line to the 3.7 release notes under maintenance specifying this fix.
  2. Add tests to the test suite that explicitly check that the issue (Error for default params of NoneType in pm.distributions.distribution._draw_value(), #3248) is fixed and that all potential values for the bounds work; a hypothetical test is sketched right after this list.
  3. Be careful about the _normalization function, which relies on the bounds being None or not. You should change this to some isinf kind of control (sketched after this comment).
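A regression test for item 2 might look like the following sketch. The test name, the parametrization, and the use of x.random() are illustrative assumptions, not code from this PR:

```python
import numpy as np
import pytest
import pymc3 as pm

# Illustrative regression test for issue #3248: creating a TruncatedNormal
# and drawing from it should work for every bound combination, including
# the default None bounds that used to break _draw_value().
@pytest.mark.parametrize("lower, upper", [
    (None, None), (None, 1.0), (-1.0, None), (-1.0, 1.0),
    (-np.inf, np.inf), (-np.inf, 1.0), (-1.0, np.inf),
])
def test_truncated_normal_bounds(lower, upper):
    with pm.Model():
        x = pm.TruncatedNormal("x", mu=0.0, sigma=1.0, lower=lower, upper=upper)
        draws = x.random(size=10)  # exercises _draw_value with default params
    assert np.all(np.isfinite(draws))
```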

There was a previous PR, #3250, that did this but became stale and was never finished. You can look at it to get some ideas.
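For item 3, the isinf-style control could look roughly like this standalone sketch; normalization_sketch is a made-up name for illustration, and std_cdf mirrors pymc3.distributions.dist_math.std_cdf:

```python
import theano.tensor as tt

def std_cdf(x):
    # Standard normal CDF via erf, as in pymc3.distributions.dist_math.
    return 0.5 + 0.5 * tt.erf(x / tt.sqrt(2.0))

def normalization_sketch(mu, sigma, lower, upper):
    # Branch on finiteness of the (now always-tensor) bounds with tt.isinf
    # instead of on `lower is None` / `upper is None`.
    lcdf = std_cdf((lower - mu) / sigma)
    ucdf = std_cdf((upper - mu) / sigma)
    return tt.switch(
        tt.isinf(lower),
        tt.switch(tt.isinf(upper), 1.0, ucdf),                # lower infinite
        tt.switch(tt.isinf(upper), 1.0 - lcdf, ucdf - lcdf),  # lower finite
    )
```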

```diff
@@ -571,8 +571,8 @@ def __init__(self, mu=0, sigma=None, tau=None, lower=None, upper=None,
         tau, sigma = get_tau_sigma(tau=tau, sigma=sigma)
         self.sigma = self.sd = tt.as_tensor_variable(sigma)
         self.tau = tt.as_tensor_variable(tau)
-        self.lower = tt.as_tensor_variable(floatX(lower)) if lower is not None else lower
-        self.upper = tt.as_tensor_variable(floatX(upper)) if upper is not None else upper
+        self.lower = tt.as_tensor_variable(floatX(lower)) if lower is not None else tt.as_tensor_variable(-np.inf)
```
Contributor

You also have to apply floatX to np.inf.
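A sketch of what the suggested fix could look like (not the final committed code; it assumes the floatX helper already imported in this file):

```python
# Wrap the infinite sentinels in floatX too, so their dtype matches
# theano.config.floatX instead of defaulting to float64.
self.lower = tt.as_tensor_variable(
    floatX(lower) if lower is not None else floatX(-np.inf)
)
self.upper = tt.as_tensor_variable(
    floatX(upper) if upper is not None else floatX(np.inf)
)
```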

@chang111 (Contributor, Author)

Thanks for your reply. I will check PR #3250.

chang111 closed this Mar 25, 2019
twiecki (Member) commented Mar 25, 2019

@chang111 Did you mean to close this?

twiecki reopened this Mar 25, 2019
twiecki closed this Mar 25, 2019