Conversation

@bwengals (Contributor) commented Sep 4, 2017

Fix for issue #2466. Closing #2468 in favor of this one.

  • float32 support
  • Different argument: method="L-BFGS-B" instead of fmin=optimize.fmin_l_bfgs_b (see the sketch after this list)
  • Allows keyboard interrupt and saves the current value
  • Adds a tqdm progress bar
  • If the optimization result is bad (logp = inf, etc.), the last good optimizer value is returned. If this happens, a warning is printed instead of an error.
  • Fixes to tests
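
For illustration, a minimal sketch of the new call style. The toy model, variable names, and data below are assumptions and not part of the PR; only the method="L-BFGS-B" argument is what the list above describes:

    import numpy as np
    import pymc3 as pm

    with pm.Model():
        mu = pm.Normal('mu', mu=0., sd=10.)
        sd = pm.HalfNormal('sd', sd=1.)
        pm.Normal('obs', mu=mu, sd=sd, observed=np.random.randn(100))

        # the optimizer is now selected by name, as scipy.optimize.minimize expects,
        # rather than by passing the fmin function itself
        map_estimate = pm.find_MAP(method="L-BFGS-B")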

@bwengals mentioned this pull request Sep 4, 2017
@junpenglao (Member)

Is there a way to pass a custom optimizer?

@bwengals (Contributor, Author) commented Sep 4, 2017

Yes, by using scipy's minimize interface directly; see the scipy.optimize.minimize documentation, in the section called "Custom minimizers".
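
For reference, a minimal sketch of what a custom minimizer passed through that interface could look like. The callable signature and the OptimizeResult return value follow scipy's "Custom minimizers" documentation; the naive coordinate-search logic and the quadratic test objective are only illustrative, and how find_MAP forwards the method argument is assumed to match this interface:

    import numpy as np
    from scipy.optimize import minimize, OptimizeResult

    def custmin(fun, x0, args=(), maxiter=100, stepsize=0.1, callback=None, **options):
        # naive coordinate search; only meant to show the required interface
        bestx = np.asarray(x0, dtype=float).copy()
        besty = fun(bestx, *args)
        for _ in range(maxiter):
            improved = False
            for dim in range(bestx.size):
                for step in (stepsize, -stepsize):
                    testx = bestx.copy()
                    testx[dim] += step
                    testy = fun(testx, *args)
                    if testy < besty:
                        bestx, besty = testx, testy
                        improved = True
            if callback is not None:
                callback(bestx)
            if not improved:
                break
        return OptimizeResult(x=bestx, fun=besty, success=True)

    # scipy accepts the callable in place of a method name
    res = minimize(lambda x: np.sum((x - 3.0) ** 2), x0=np.zeros(2), method=custmin)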

@junpenglao (Member)

I see, thanks!

@bwengals (Contributor, Author) commented Sep 4, 2017

No problem! I'll see about the errors. I wasn't sure how to avoid computing the gradient with the logp_dlogp_func. I'd like to use that for this, if anyone knows how.

else:
    norm_grad = np.linalg.norm(grad)
    self.progress.set_description(self.desc.format(neg_value, norm_grad))

A Member left a review comment on this code:

stray empty lines.

@aseyboldt (Member)

@bwengals Computing logp without the grad via logp_dlogp_function isn't possible at the moment. It shouldn't be hard to add, though; I can do that if you like.

@bwengals (Contributor, Author) commented Sep 4, 2017

@aseyboldt Absolutely, if you don't mind. I could also give it a whack, but I'd want to ask you a couple of questions about ValueGradFunction first.

We could also work on that as a separate project after this PR is merged, depending on how tricky it turns out to be. What do folks think?

@junpenglao (Member)

Hey Bill, do you think you can squeeze the changes mentioned below into your PR?
#2531 #2545
Basically, we should output the non-transformed dictionary from find_MAP(). Even better would be an include_transformed kwarg like in https://github.com/pymc-devs/pymc3/blob/master/pymc3/stats.py#L719

@kyleabeauchamp (Contributor)

#2523 is the PR where I implemented the "output both" feature in master. I agree that a kwarg is even better.

@bwengals (Contributor, Author) commented Sep 7, 2017

Oh, so instead of outputting both, only output the human-readable version of the parameter values, with a kwarg option to include the *__ transformed ones? Good with me.
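
Roughly what that would look like in use, as a sketch assuming the kwarg lands as discussed; the model context and the 'sd_log__' name are illustrative:

    with model:
        # default: only the human-readable parameter values, e.g. {'mu': ..., 'sd': ...}
        map_estimate = pm.find_MAP()

        # also include the automatically transformed variables, e.g. 'sd_log__'
        map_estimate_full = pm.find_MAP(include_transformed=True)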

@michaelosthege (Member) commented Sep 11, 2017

I'd also like the kwarg option.
And I noticed that sampling currently also returns traces of the transformed variables. Do you think this should follow the same pattern?
EDIT: Oops, looks like I wasn't paying attention; the multitrace does include both.

@junpenglao (Member)

LGTM. Is it ready?

@bwengals (Contributor, Author)

If everyone's ok with it, I think so!

@junpenglao merged commit 848cf78 into pymc-devs:master Sep 20, 2017
@junpenglao (Member)

Thanks Bill!

ColCarroll pushed a commit that referenced this pull request Nov 9, 2017
* small typo fix in ValueGradFunction docstring, extra_args -> extra_vars

* added different interface to scipy optimizers, removed errors

* change error if optimization result is bad to warning

* remove test for optimization error, remove transform=None, update find_MAP call args

* update find_MAP call arg

* small docstring change

* remove blank links, remove extraneous callback arg

* remove unused imports

* test fail with precision a bit off, 9.996 vs. 10.  Switching to previous default method of BFGS.

* removed optimization check (since 'return_raw' is an option), added 'include_transformed'

* remove unused import

* need include_transformed=True