
DM-23045: Validate linearity correction #30

Merged
11 commits merged into master on Apr 21, 2020

Conversation

@plazas (Contributor) commented Apr 14, 2020

No description provided.

Parameters from n-order polynomial fit to meanSignalVector vs exposureTimeVector.

fracNonLinearityResidual : `list` of `float`
Fractial residuals from the meanSignal vs exposureTime curve with respect to linear part of
Collaborator:

Fractial → Fractional

Comment on lines +803 to +804
signalIdeal = parsFit[0] + parsFit[1]*timeRange
signalUncorrected = self.funcPolynomial(parsFit, timeRange)
Collaborator:

Good catch on fixing that, not sure how that was ever OK...
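
For context, here is a minimal, self-contained sketch of how the two quantities quoted above could feed the fractional non-linearity residual described in the docstring; the percentage convention, the sign, and the stand-in for self.funcPolynomial are assumptions, since the actual expression is not quoted in this thread.

    import numpy as np

    # Illustrative inputs: exposure times and polynomial fit coefficients (lowest order first).
    timeRange = np.linspace(1.0, 100.0, 50)
    parsFit = np.array([10.0, 500.0, -0.05])  # offset, linear term, small quadratic term

    signalIdeal = parsFit[0] + parsFit[1]*timeRange                           # linear part of the fit
    signalUncorrected = np.polynomial.polynomial.polyval(timeRange, parsFit)  # full polynomial model

    # Hypothetical fractional residual (in percent) with respect to the linear part of the fit.
    fracNonLinearityResidual = 100.0*(signalIdeal - signalUncorrected)/signalIdeal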

Comment on lines 832 to 833
  return (polynomialLinearizerCoefficients, c0, linearizerTableRow, linResidual, parsFit, parsFitErr,
-         reducedChiSquaredNonLinearityFit)
+         fracNonLinearityResidual, reducedChiSquaredNonLinearityFit)
Collaborator:

We have got to get away from making the return types massive tuples; it's just way too fragile.
Could you at least make this a named tuple so that people can't get it wrong, and code accessing components of the return won't be broken?

Also, just a general point: not putting the new returned item last is unnecessarily dangerous, as other code relying on the order will accidentally grab the wrong item here (if it doesn't crash because of it), whereas putting it at the end wouldn't have that same effect.
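
A minimal sketch of the named-tuple version, using the field names visible in the return statement quoted above (the enclosing method body is not shown here):

    from collections import namedtuple

    # Hypothetical container replacing the bare tuple return; field order matches the
    # existing return statement, so positional unpacking would still work.
    LinearityFitResult = namedtuple(
        "LinearityFitResult",
        ["polynomialLinearizerCoefficients", "c0", "linearizerTableRow", "linResidual",
         "parsFit", "parsFitErr", "fracNonLinearityResidual", "reducedChiSquaredNonLinearityFit"])

Callers could then access result.fracNonLinearityResidual by name instead of relying on tuple position.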

Collaborator:

And actually, having looked further, I don't see any calls to calculateLinearityResidualAndLinearizers() in the tests, which means this is likely a breaking change that should have been caught. Could you please add something that tests this bit of code too?

Member:

You could also create a quick dataclass instead of a named tuple. They are also lightweight, but much clearer to set up and use.
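
A sketch of the dataclass variant; the field names come from the return statement quoted above, while the type annotations are assumptions:

    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class LinearityFitResult:
        """Hypothetical container for the values currently returned as a bare tuple."""
        polynomialLinearizerCoefficients: np.ndarray
        c0: float
        linearizerTableRow: np.ndarray
        linResidual: np.ndarray
        parsFit: np.ndarray
        parsFitErr: np.ndarray
        fracNonLinearityResidual: np.ndarray
        reducedChiSquaredNonLinearityFit: float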

Collaborator:

Yeah, Tim is right, this has got dataclass written all over it.

for i in np.arange(len(localDataset.nonLinearityResiduals[ampName])):
    self.assertAlmostEqual(localDataset.nonLinearityResiduals[ampName][i],
                           self.inputNonLinearityResiduals[i])
for i in np.arange(len(returnedDataset.nonLinearityResiduals[ampName])):
Collaborator:

I wonder if there's a more elegant way of doing that?

Collaborator:

Maybe something like this?
for calc, truth in zip(returnedDataset.nonLinearityResiduals[ampName], inputNonLinearityResiduals):
    self.assertAlmostEqual(calc, truth)
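
Another option that drops the explicit loop entirely, assuming the residuals are plain numeric sequences, is numpy's element-wise comparison helper (names follow the test code quoted above):

    import numpy as np

    # Compares the computed residuals against the expected ones element-wise,
    # with a relative tolerance; roughly equivalent to the assertAlmostEqual loop.
    np.testing.assert_allclose(returnedDataset.nonLinearityResiduals[ampName],
                               self.inputNonLinearityResiduals, rtol=1e-7)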

@plazas merged commit 1e51306 into master on Apr 21, 2020