
Fix parameter limit handling in fitting #1904

Merged
merged 5 commits into from Nov 7, 2018

@cdeil
Member

cdeil commented Nov 7, 2018

This PR fixes the parameter limit handling for iminuit and sherpa in gammapy.utils.fitting.

This was reported in #1878, and the changes here are what we discussed last week (see #1878 (comment)).

While working on this I noticed two other small, related issues, fixed in
1b25f8e (make the factor 10 autoscale behave as intended for scales smaller than 1) and in
b0a6505 (change get_sherpa_optimizer to be spelled with "z" instead of "s", for consistency with the spelling of "optimize" elsewhere in Gammapy).

@adonath @registerrier - Please review.

For the min / max handling I wasn't sure what to store as a data member and what to derive as a property.
For now, I left min and max as data members and added factor_min = min / scale and factor_max = max / scale as derived properties. I.e. it's the opposite of value = factor * scale, where factor is the data member.
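To make the two options concrete, here is a toy sketch of the approach in this PR (not the actual gammapy.utils.fitting.Parameter class; names and defaults are illustrative): min and max are stored directly, and the factor-space limits handed to the optimizer are derived properties.

```python
class Parameter:
    """Toy sketch: min/max stored as data, factor_min/factor_max derived."""

    def __init__(self, value, scale=1.0, min=float("-inf"), max=float("inf")):
        self.scale = scale
        self.factor = value / scale  # factor is the stored quantity for value
        self.min = min               # limits stored directly in value space
        self.max = max

    @property
    def value(self):
        return self.factor * self.scale

    @property
    def factor_min(self):
        # derived: the lower limit expressed in factor space
        return self.min / self.scale

    @property
    def factor_max(self):
        # derived: the upper limit expressed in factor space
        return self.max / self.scale
```

For example, with value=1.0, scale=0.5, min=-1 and max=2, the optimizer sees factor=2 with factor limits (-2, 4).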

I looked at http://cta.irap.omp.eu/gammalib-devel/doxygen/classGOptimizerPar.html#details and see that they always store the factor as the data member, with value, min and max all defined as factor-times-scale properties:

value    = m_factor_value    * m_scale
error    = m_factor_error    * m_scale
gradient = m_factor_gradient * m_scale
min      = m_factor_min      * m_scale
max      = m_factor_max      * m_scale

I think I like that a little bit better, because it's a bit more uniform.
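A toy sketch of that GammaLib-style alternative (again illustrative, not the actual gammapy or GammaLib code): the factor-space quantities are the stored data members, and min / max become uniform factor-times-scale properties, which does require setters.

```python
class Parameter:
    """Toy sketch: factor, factor_min, factor_max stored; min/max derived."""

    def __init__(self, value, scale=1.0, min=float("-inf"), max=float("inf")):
        self.scale = scale
        self.factor = value / scale
        self.factor_min = min / scale
        self.factor_max = max / scale

    @property
    def value(self):
        return self.factor * self.scale

    @property
    def min(self):
        return self.factor_min * self.scale

    @min.setter
    def min(self, val):
        # setter needed so users can keep assigning in value space
        self.factor_min = val / self.scale

    @property
    def max(self):
        return self.factor_max * self.scale

    @max.setter
    def max(self, val):
        self.factor_max = val / self.scale
```

The trade-off is exactly the one discussed below: a fully uniform scheme, at the cost of two extra property setters.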

@adonath @registerrier - Any thoughts / preference on this point?

cdeil added some commits Nov 6, 2018

Small fix to parameter autoscale.
Before, 2e-10 was split as 0.2 x 1e-9.
Now it's split as 2 x 1e-10, which is what the docstring already claimed.

The tests were improved to be easier to read and maintain
(fewer tests covering the important bits).
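The intended factor 10 autoscale rule can be sketched as follows (a hypothetical illustration, not the actual gammapy implementation): take the scale as the power of ten given by floor(log10(|value|)), so the factor always lands in [1, 10) and 2e-10 splits as 2 x 1e-10 rather than 0.2 x 1e-9.

```python
import math

def autoscale(value):
    """Split value into (factor, scale) with scale a power of 10.

    Illustrative sketch only. Using floor (not int truncation) makes
    values below 1 behave as intended, e.g. 2e-10 -> (2, 1e-10).
    """
    if value != 0:
        exponent = math.floor(math.log10(abs(value)))
        scale = 10.0 ** exponent
    else:
        scale = 1.0  # nothing to scale; keep factor == value == 0
    return value / scale, scale
```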

@cdeil cdeil added the bug label Nov 7, 2018

@cdeil cdeil added this to the 0.9 milestone Nov 7, 2018

@cdeil cdeil self-assigned this Nov 7, 2018

@cdeil cdeil added this to In progress in Modeling via automation Nov 7, 2018

@cdeil cdeil requested review from adonath and registerrier Nov 7, 2018

@registerrier

This looks good to me. Thanks.
I also prefer the option where factor_ is uniformly a member of the Parameter class.

@adonath


Member

adonath commented Nov 7, 2018

@cdeil I see the point of putting factor_ as the data member uniformly, but the API is not affected right? If changed, it would require implementing a setter for .min and .max and just create additional code. So I'm fine with the current solution, because it offers the same functionality with less code...

@adonath

adonath approved these changes Nov 7, 2018

Looks good to me, I have no further comments.

@cdeil


Member

cdeil commented Nov 7, 2018

@adonath @registerrier - Thanks for the review.

The CI failures are unrelated; I've created #1905 and #1906 to make that better.

Merging now.

About the factor_min data member question:

but the API is not affected right?

As long as we only get the min / max factor and never want to set it, yes.

If changed, it would require implementing a setter for .min and .max and just create additional code.
So I'm fine with the current solution, because it offers the same functionality with less code...

Yes, but it's only a few very simple extra lines for the two properties. I have a preference for uniformity.

But maybe let's see how the whole factor API evolves, I'll continue working on tests and changes related to factor covar and errors today.

@adonath - Could you please add parameter factor and fitting backend as an agenda item for the call on Friday? I see that currently there's a lot missing, probably most importantly scipy.optimize.minimize support, error estimation, and levmar with Sherpa. I think with very few days of work we could make big progress there for v0.9, so let's discuss a bit on Friday.

@cdeil cdeil merged commit dc8ebb7 into gammapy:master Nov 7, 2018

2 of 4 checks passed

continuous-integration/travis-ci/pr: The Travis CI build could not complete due to an error
Codacy/PR Quality Review: Not up to standards. This pull request quality could be better.
Scrutinizer: Analysis: 30 new issues, 11 updated code elements. Tests: passed
continuous-integration/appveyor/pr: AppVeyor build succeeded

Modeling automation moved this from In progress to Done Nov 7, 2018
