Add Fit.covar method #1922

Merged
merged 1 commit into from Nov 13, 2018

2 participants
@cdeil
Member

cdeil commented Nov 13, 2018

This PR adds a Fit.covar method. It doesn't implement this yet for scipy or sherpa; it just prepares the Fit interface.

The idea is to develop the fitting framework to have stateless backends, and for Fit to have separate compute tasks for optimize, covar, conf, profile and contour.

For now minuit is a stateful backend, so we hold on to the minuit instance in the fit class and pass it back in on each call (covar in this case).

I also made a small change to the optimize method to return a FitResult, and a small change to the run method to make it opinionated: clearly a combined optimize and covar, and nothing else. We could consider removing it completely or keeping it as-is; I'm -1 on extending it with other tasks like conf, profile and contour.
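The optimize/covar/run split with a stateful minuit backend could be sketched roughly like this. This is an illustrative sketch only, not the actual gammapy implementation; the placeholder backend object and return dicts are hypothetical:

```python
class Fit:
    """Sketch of a Fit class that keeps a stateful minuit backend
    between compute tasks (optimize, covar, ...)."""

    def __init__(self, backend="minuit"):
        self.backend = backend
        self._minuit = None  # stateful backend instance, kept between calls

    def optimize(self):
        # A real implementation would run migrad here and hold on to
        # the Minuit instance for later compute tasks.
        self._minuit = object()  # placeholder for the Minuit instance
        return {"success": True}

    def covar(self):
        # Pass the stored backend state back in for the covar task.
        if self._minuit is None:
            raise RuntimeError("Call optimize() before covar()")
        return {"backend_state": self._minuit}

    def run(self):
        # Opinionated: a combined optimize and covar, nothing else.
        result = self.optimize()
        result["covar"] = self.covar()
        return result
```

With stateless backends (the stated goal), covar would instead receive everything it needs as arguments rather than reading it from self._minuit.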

Just now I saw that I forgot to call hesse in iminuit_covar. This went unnoticed either because MINUIT often calls hesse automatically from migrad, or because we don't have a good test case yet.
I'll improve the tests and look at that tomorrow.

@cdeil cdeil added the feature label Nov 13, 2018

@cdeil cdeil added this to the 0.9 milestone Nov 13, 2018

@cdeil cdeil self-assigned this Nov 13, 2018

@cdeil cdeil added this to In progress in Modeling via automation Nov 13, 2018

@cdeil cdeil referenced this pull request Nov 13, 2018: iminuit #150 (Open)
@cdeil cdeil merged commit f9dfc39 into gammapy:master Nov 13, 2018

2 of 4 checks passed

continuous-integration/travis-ci/pr: The Travis CI build could not complete due to an error
Codacy/PR Quality Review: Not up to standards. This pull request quality could be better.
Scrutinizer: Analysis: 354 new issues, 8 updated code elements. Tests: passed
continuous-integration/appveyor/pr: AppVeyor build succeeded

Modeling automation moved this from In progress to Done Nov 13, 2018

@dtnguyen2

dtnguyen2 commented Nov 13, 2018

I don't recommend putting too much faith in the results returned by Sherpa's covar, because the internal implementation relies on a very sketchy numerical algorithm. Perhaps you may want to look into some very good autodiff tools, such as autograd if the model function is pure Python (see https://github.com/HIPS/autograd for details) or Adept if in C++ land (see https://github.com/rjhogan/Adept).
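The concern about numerical covariance estimates can be illustrated with a toy example: estimating the covariance of a fitted mean via a central finite-difference Hessian of the negative log-likelihood. This is a minimal pure-Python sketch (the function names and the step size are illustrative, not Sherpa's actual algorithm); the step size h is exactly the kind of knob that makes such estimates fragile, which is what autodiff avoids:

```python
def nll(mu, data):
    # Negative log-likelihood of N(mu, 1) samples, up to a constant.
    return 0.5 * sum((x - mu) ** 2 for x in data)

def fd_second_derivative(f, x, h=1e-4):
    # Central finite difference for f''(x). The choice of h trades
    # truncation error against floating-point cancellation, which is
    # the source of the numerical fragility discussed above.
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

data = [0.1, -0.3, 0.5, 0.2]
mu_hat = sum(data) / len(data)  # maximum-likelihood estimate of mu

hess = fd_second_derivative(lambda m: nll(m, data), mu_hat)
covar = 1.0 / hess  # variance of mu_hat; analytically 1/n = 0.25
```

For this quadratic likelihood the finite difference happens to be benign; for realistic models the result can degrade badly with the step size, whereas an autodiff Hessian is exact to machine precision.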

@cdeil

Member

cdeil commented Nov 14, 2018

@dtnguyen2

Perhaps you may want to look into some very good autodiff tools, such as autograd if the model function is pure Python (see https://github.com/HIPS/autograd for details) or Adept if in C++ land (see https://github.com/rjhogan/Adept).

I would like to look at NumPy-based autograd or TensorFlow-based implementations (e.g. covar estimation and confidence interval computations could probably be added to https://www.tensorflow.org/probability/). These should provide huge advantages: autodiff gradients, and in the TensorFlow case the ability to use whatever compute hardware one has without extra implementation effort. But we don't have the bandwidth to prototype this; at the moment we are just doing NumPy evaluations without gradients.
