Remove old sherpa backend from SpectrumFit #1605

Merged
merged 2 commits into gammapy:master from joleroi:remove_sherpa on Aug 2, 2018

Conversation

joleroi (Contributor) commented Jul 31, 2018

This PR removes the old sherpa backend from SpectrumFit. It was hardcoded to the specific case of a 1D spectrum fit and is superseded by the generic iminuit backend (and perhaps a generic sherpa backend to be added soon).

There is some loss of functionality, because the old sherpa backend could fit a background model to the off vector in a cash fit. Adding this back with our new generic backends requires quite some work, but I guess it has to be done anyway, and as far as I know nobody (except me) was using the background model fit.
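For context, the "generic backend" idea is that the optimizer only sees a function mapping parameter values to a fit statistic. The sketch below illustrates that pattern with a recent iminuit API; the data, the power-law prediction and the cash-like statistic are invented for illustration and this is not the actual SpectrumFit backend code.

import numpy as np
from iminuit import Minuit

# Made-up counts per energy bin and bin centers (TeV), just to have data to fit.
counts = np.array([52.0, 31.0, 17.0, 9.0, 4.0])
energy = np.array([0.5, 1.0, 2.0, 4.0, 8.0])

def cash_stat(index, amplitude):
    # Cash-like fit statistic for a simple power-law prediction (illustration only).
    npred = amplitude * energy ** (-index)
    return 2.0 * np.sum(npred - counts * np.log(npred))

# iminuit reads the parameter names from the function signature; the default
# errordef of 1 matches a statistic on the 2 * log-likelihood scale.
m = Minuit(cash_stat, index=2.0, amplitude=15.0)
m.limits['amplitude'] = (1e-10, None)  # keep the predicted counts positive
m.migrad()
print(m.values['index'], m.errors['index'])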

@joleroi joleroi changed the title from "Eemove old sherpa backend from SpectrumFit" to "Remove old sherpa backend from SpectrumFit" Jul 31, 2018

@joleroi joleroi force-pushed the joleroi:remove_sherpa branch from 3064e28 to 971087e Jul 31, 2018

@joleroi joleroi force-pushed the joleroi:remove_sherpa branch from 971087e to 1018bb1 Jul 31, 2018

@cdeil cdeil added the cleanup label Jul 31, 2018

@cdeil cdeil added this to In progress in Modeling via automation Jul 31, 2018

@cdeil cdeil added this to the 0.8 milestone Jul 31, 2018

cdeil (Member) left a comment

👍 to merge.

You can make a reminder issue concerning the spectrum background model if you like.

But we have to take this step towards cleaner and simpler code in a few places before re-building the fancy functionality in a better way.

registerrier (Contributor) left a comment

Most comments concern the tests and the need to add a tolerance, to avoid changing them for differences that are well below the expected systematic uncertainty.

@@ -110,10 +91,10 @@ def test_wstat(self):
        stat='wstat', forward_folded=False)
    fit.fit()
    assert_allclose(fit.result[0].model.parameters['index'].value,
-       1.997344538577775)
+       1.9973633924241248)

registerrier (Contributor) commented Aug 1, 2018

Globally, for most asserts here it would be better to pass an absolute or relative tolerance, to avoid having to change the test every time the code is updated.

joleroi (Author, Contributor) commented Aug 2, 2018

@registerrier What is a good relative tolerance? Here the changes are sometimes O(1e-3), but for most code changes I would expect them to be well below that.

cdeil (Member) commented Aug 2, 2018

I would suggest this: usually put a reference number with high precision (8 digits).
If it's a low-precision result, manually choose a looser rtol or atol. In this case rtol=1e-3 or rtol=1e-2 might be a good choice.

I explained this in the dev docs a few days ago here:
http://docs.gammapy.org/dev/development/howto.html#assert-convention
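To illustrate that convention with the numbers quoted in the diffs above (an illustrative snippet, not code added in this PR, reusing the existing test's fit object):

from numpy.testing import assert_allclose

# High-precision result: reference value rounded to ~8 significant digits,
# which is still well within assert_allclose's default rtol of 1e-7.
assert_allclose(fit.result[0].model.parameters['index'].value, 1.9973634)

# Low-precision result: keep the reference value but loosen the tolerance,
# so tiny numerical changes in the code don't break the test.
assert_allclose(fit.result[0].statval, 30.022320, rtol=1e-3)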

adonath (Member) commented Aug 2, 2018

I've also asked myself in the past how to choose a good rtol for tests. In some cases one could even make a "physical" choice: e.g. for spectral indices a relative error of 0.01 is perfectly fine, because the systematics will always be larger... but I don't really have a general suggestion.

joleroi (Author, Contributor) commented Aug 2, 2018

I won't add a tolerance for now, because I have already changed the test values, but I will keep it in mind for the future.

-       100244.89943081759)
-   assert_allclose(fit.result[0].statval, 30.022315611837342)
+       100247.95103265066)
+   assert_allclose(fit.result[0].statval, 30.022319652565265)

registerrier (Contributor) commented Aug 1, 2018

Same here

@@ -125,7 +106,7 @@ def test_joint(self):
        model=self.source_model, forward_folded=False)
    fit.fit()
    assert_allclose(fit.result[0].model.parameters['index'].value,
-       1.9964768894266016)
+       1.9977254068253105)

registerrier (Contributor) commented Aug 1, 2018

same

-   assert_allclose(result.model.parameters.error('index'), 0.09787747219456712)
-   assert_allclose(result.model.parameters.error('amplitude'), 2.1992645712596426e-12)
+   assert_allclose(result.model.parameters.error('index'), 0.0978669538795921)
+   assert_allclose(result.model.parameters.error('amplitude'), 2.199480205049676e-12)

registerrier (Contributor) commented Aug 1, 2018

same

        self.fit.result[0].to_table()

    def test_compound(self):
        fit = SpectrumFit(self.obs_list[0], self.pwl * 2)
        fit.fit()
        result = fit.result[0]
        pars = result.model.parameters
-       assert_quantity_allclose(pars['index'].value, 2.2542315426423283)
+       assert_quantity_allclose(pars['index'].value, 2.254163434607357)

registerrier (Contributor) commented Aug 1, 2018

same

from .sherpa_utils import SherpaExponentialCutoffPowerLaw
from sherpa.models import ArithmeticModel, modelCacher1d, Parameter

class SherpaExponentialCutoffPowerLaw(ArithmeticModel):

registerrier (Contributor) commented Aug 1, 2018

I don't know if this one is specifically needed. This is for sherpa users who want to use an exponential cutoff power-law model, right?
If we get rid of sherpa in most places, it would be better to leave this to be done and tested fully outside gammapy, no? I don't know how users would access this, but does it really belong in gammapy/spectrum/models?

cdeil (Member) commented Aug 1, 2018

Note that there's already this model here: https://github.com/zblz/naima/blob/master/naima/sherpa_models.py#L74

I would prefer to delete Sherpa models from Gammapy for now, and focus on developing our modeling.

Then we can still add things back to interface with Astropy or Sherpa, either for specific models or in a generic way like http://saba.readthedocs.io/en/latest/
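For anyone who does want such a model outside gammapy, a standalone sherpa user model is short to write. The sketch below is illustrative; the class name, parameter names and cutoff parametrization are assumptions, not the gammapy implementation.

import numpy as np
from sherpa.models import ArithmeticModel, Parameter

class MyCutoffPowerLaw(ArithmeticModel):
    # Illustrative exponential cutoff power law written as a sherpa user model.

    def __init__(self, name='mycutoffpl'):
        self.gamma = Parameter(name, 'gamma', 2.0)            # photon index
        self.ref = Parameter(name, 'ref', 1.0, frozen=True)   # reference energy
        self.ampl = Parameter(name, 'ampl', 1.0)              # amplitude at ref
        self.cutoff = Parameter(name, 'cutoff', 10.0)         # cutoff energy
        ArithmeticModel.__init__(
            self, name, (self.gamma, self.ref, self.ampl, self.cutoff))

    def calc(self, pars, x, *args, **kwargs):
        gamma, ref, ampl, cutoff = pars
        x = np.asarray(x, dtype=float)
        return ampl * (x / ref) ** (-gamma) * np.exp(-x / cutoff)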

joleroi (Author, Contributor) commented Aug 2, 2018

@registerrier I left this in only for you 😆 I thought you liked to fit our data directly with sherpa. I'm happy to remove it. At the moment all our spectral models have a to_sherpa() method, which returns an equivalent sherpa model. So I will get rid of this method for all spectral models, OK?

adonath (Member) commented Aug 2, 2018

👍 to remove all the to_sherpa() methods.
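A possible replacement recipe once to_sherpa() is gone is to build the equivalent sherpa model by hand. The sketch below is for a power law; the numerical values stand in for the gammapy index, reference and amplitude, and unit handling is left to the user.

from sherpa.models import PowLaw1D

# Copy the gammapy PowerLaw parameter values into sherpa's built-in PowLaw1D.
pwl = PowLaw1D('pwl')
pwl.gamma = 2.0    # gammapy 'index'
pwl.ref = 1.0      # gammapy 'reference', in the energy unit used for the fit
pwl.ampl = 1e-12   # gammapy 'amplitude', in matching flux units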

@joleroi joleroi force-pushed the joleroi:remove_sherpa branch from 7f595e4 to 2fd648b Aug 2, 2018

@joleroi joleroi merged commit 2463fd8 into gammapy:master Aug 2, 2018

1 of 2 checks passed:
continuous-integration/travis-ci/pr: The Travis CI build is in progress
continuous-integration/appveyor/pr: AppVeyor build succeeded

Modeling automation moved this from In progress to Done Aug 2, 2018

@joleroi joleroi deleted the joleroi:remove_sherpa branch Aug 2, 2018

cdeil (Member) commented Aug 2, 2018

@joleroi - there are 7 test failures in master now because of the tolerances (for me locally):
https://gist.github.com/cdeil/c621d75a952a1e1c472871c1ec8a0597
I will adjust those tolerances via a commit in master now.

cdeil (Member) commented Aug 2, 2018

I've adjusted the assert tolerances in 647cfde.

There were some wild ones, even bare asserts on floats in two cases.

joleroi (Author, Contributor) commented Aug 3, 2018

Thanks!
