
Fix fit parameter bug for very large numbers #1883

Merged
merged 2 commits into gammapy:master Oct 19, 2018

Conversation

@cdeil cdeil commented Oct 19, 2018

I noticed that fitting very large parameters, like 1e35, fails in a weird way:
https://github.com/gammasky/joint-crab/pull/155

The problem arises because int ** int in Python creates big int numbers, and when passed to NumPy, those are stored as Python int objects in an array of dtype object:

>>> import numpy as np
>>> (10 ** 35).bit_length()
117
>>> np.array(10 ** 35)
array(100000000000000000000000000000000000, dtype=object)
>>> np.array(1e35)
array(1.e+35)

In Gammapy these types are created here:

power = int(np.log10(np.absolute(value)))
scale = 10 ** power
self.factor = value / scale
self.scale = scale
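The scaling logic above can be reproduced in isolation (a minimal standalone sketch mirroring the quoted autoscale code, not Gammapy itself):

```python
import numpy as np

value = 1e35
power = int(np.log10(np.absolute(value)))  # 35
scale = 10 ** power                        # int ** int: a Python big int, not a float
factor = value / scale

print(type(scale))         # <class 'int'>
print(scale.bit_length())  # 117 bits, too wide for int64
```

Because `scale` needs 117 bits, it cannot be stored in a 64-bit integer dtype, so NumPy falls back to dtype object when it ends up in an array.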

And then a covariance matrix of dtype object is created here:

def set_covariance_factors(self, matrix):
"""Set covariance from factor covariance matrix.
Used in the optimiser interface.
"""
scales = np.array([par.scale for par in self.parameters])
scale_matrix = scales[:, np.newaxis] * scales
self.covariance = scale_matrix * matrix

And then operations that assume float numbers fail.
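A hypothetical example of such a failure, with scales as the buggy autoscale produced them (Python big ints instead of floats):

```python
import numpy as np

# Big int scales force NumPy into dtype=object
scales = np.array([10 ** 35, 10 ** 2])         # dtype=object: first element needs 117 bits
scale_matrix = scales[:, np.newaxis] * scales  # outer product, still dtype=object
covariance = scale_matrix * np.eye(2)          # object * float64 stays dtype=object

print(scale_matrix.dtype)  # object
try:
    np.linalg.cholesky(covariance)  # linalg routines support float dtypes only
except TypeError as exc:
    print("fails:", exc)
```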

One simple solution is to always convert par.factor and par.scale to float, the Python 64-bit float type. That would be my suggestion. Currently there is a property setter, but it also lets other number types through.
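A float-coercing property setter could look like this (a hypothetical sketch of the idea, not the actual Gammapy Parameter class):

```python
class Parameter:
    """Minimal parameter sketch with factor and scale coerced to float."""

    def __init__(self, factor, scale=1.0):
        self.factor = factor
        self.scale = scale

    @property
    def factor(self):
        return self._factor

    @factor.setter
    def factor(self, value):
        self._factor = float(value)  # always Python 64-bit float

    @property
    def scale(self):
        return self._scale

    @scale.setter
    def scale(self, value):
        self._scale = float(value)  # big ints are converted here

    @property
    def value(self):
        return self._factor * self._scale


par = Parameter(factor=1.0, scale=10 ** 35)  # big int coerced on assignment
print(type(par.scale))  # <class 'float'>
```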

@cdeil cdeil added the bug label Oct 19, 2018
@cdeil cdeil added this to the 0.9 milestone Oct 19, 2018
@cdeil cdeil self-assigned this Oct 19, 2018
@cdeil cdeil added this to To do in gammapy.modeling via automation Oct 19, 2018

@cdeil cdeil commented Oct 19, 2018

The one build fail is unrelated to this PR, I've filed an issue here: lebigot/uncertainties#89

Merging this now.

@cdeil cdeil merged commit 022ea5e into gammapy:master Oct 19, 2018
3 of 4 checks passed
continuous-integration/travis-ci/pr: The Travis CI build could not complete due to an error
Codacy/PR Quality Review: Up to standards. A positive pull request.
Scrutinizer: Analysis: 808 new issues, 3 updated code elements. Tests: passed
continuous-integration/appveyor/pr: AppVeyor build succeeded
gammapy.modeling automation moved this from To do to Done Oct 19, 2018
@registerrier registerrier left a comment

Looks good to me. Sorry for the late comments.

@@ -173,7 +173,7 @@ def autoscale(self, method="scale10"):
         value = self.value
         if value != 0:
             power = int(np.log10(np.absolute(value)))
-            scale = 10 ** power
+            scale = float(10 ** power)


@registerrier registerrier (Contributor) commented Oct 19, 2018

Note, you could also use numpy.float_power there. Or even more simply (10.0 ** power).
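For illustration, all three variants avoid the big int (a quick check, assuming np is NumPy; the 10.0 ** power route may differ from the others by one ulp for large powers):

```python
import numpy as np

power = 35
a = float(10 ** power)         # exact big int, converted to float at the end
b = 10.0 ** power              # float arithmetic throughout
c = np.float_power(10, power)  # NumPy ufunc that always returns floats

print(type(a), type(b), type(c))
```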


@cdeil cdeil (Author, Member) commented Oct 19, 2018

Thanks for the review. I put 10. ** power in b41621f.

@cdeil cdeil changed the title from "Force parameter numbers to float" to "Fix fit parameter bug for very large numbers" Nov 23, 2018