DM-30172: Define BFK tests for cp_verify #109
Conversation
Force-pushed from 701db01 to 7e8a921
Force-pushed from 7e8a921 to bd6ad88
    # If we drop an element of ``scaledCorrList``
    # (which is what this does), we need to ensure we
    # drop the flux entry as well.
    fluxes = np.delete(fluxes, xcorrNum)
Don't you also need to delete the corresponding entry from `mask` and `variances` to keep everything in step?
No, this delete is needed so that the lengths of the cross-correlations and fluxes match when we pass them to the quadratic solver. The `mask` and `variances` aren't passed anywhere, so they can be left alone.
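The alignment requirement can be sketched in miniature. The names below (`scaledCorrList`, `fluxes`, `xcorrNum`) follow the diff, but the data and the surrounding code are made up for illustration; the point is only that the flux array and the correlation list must be shortened at the same index before being handed to the solver.

```python
import numpy as np

# Hypothetical parallel data: one flux per cross-correlation entry.
fluxes = np.array([100.0, 200.0, 300.0, 400.0])
scaledCorrList = [f"corr_{i}" for i in range(4)]  # stand-in correlation entries

xcorrNum = 2  # index rejected by some quality cut
del scaledCorrList[xcorrNum]            # drop the correlation...
fluxes = np.delete(fluxes, xcorrNum)    # ...and the matching flux, keeping lengths equal

assert len(fluxes) == len(scaledCorrList)
```

If the two containers fell out of step, a length-matched solver would pair fluxes with the wrong correlations rather than fail loudly, which is why the deletion has to happen in both places at once.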
@@ -274,6 +279,7 @@ def run(self, inputPtc, dummy, camera, inputDims):
    if (xcorrCheck > self.config.xcorrCheckRejectLevel) or not (np.isfinite(xcorrCheck)):
        self.log.warning("Amp: %s %d skipped due to value of triangle-inequality sum %f",
                         ampName, xcorrNum, xcorrCheck)
        fluxes = np.delete(fluxes, xcorrNum)
And if so, same for here.
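For reference, the rejection logic in this hunk can be sketched in vectorized form. This is not the pipeline code: the arrays and the `rejectLevel` threshold are invented, and the real task deletes entries one at a time inside a loop, but a boolean mask applied to both arrays is an equivalent way to keep them in step.

```python
import numpy as np

fluxes = np.array([100.0, 200.0, 300.0])
xcorrChecks = np.array([0.1, np.nan, 5.0])  # per-correlation triangle-inequality sums
rejectLevel = 2.0                           # stand-in for config.xcorrCheckRejectLevel

# Keep only entries whose check is finite and below the rejection level.
keep = np.isfinite(xcorrChecks) & (xcorrChecks <= rejectLevel)
fluxes = fluxes[keep]  # same effect as np.delete on each rejected index
```

Building the mask once and applying it to every parallel array (fluxes, correlations, and anything else indexed by `xcorrNum`) avoids the index-shifting pitfalls of repeated `np.delete` calls during iteration.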
        expectation = np.array([[1.55337977e-30, 3.13979520e-30, 4.60562292e-30, 5.19037630e-30,
                                 4.60562292e-30, 3.13979520e-30, 1.55337977e-30],
                                [3.07372386e-30, 6.40017812e-30, 1.00923202e-29, 1.15502594e-29,
                                 1.00923202e-29, 6.40017812e-30, 3.07372386e-30],
                                [4.34133757e-30, 9.45605038e-30, 1.43881780e-29, 1.39759388e-29,
                                 1.43881780e-29, 9.45605038e-30, 4.34133757e-30],
                                [4.83557602e-30, 1.06596290e-29, 1.71783205e-29, 1.55771296e-29,
                                 1.71783205e-29, 1.06596290e-29, 4.83557602e-30],
                                [4.34133757e-30, 9.45605038e-30, 1.43881780e-29, 1.39759388e-29,
                                 1.43881780e-29, 9.45605038e-30, 4.34133757e-30],
                                [3.07372386e-30, 6.40017812e-30, 1.00923202e-29, 1.15502594e-29,
                                 1.00923202e-29, 6.40017812e-30, 3.07372386e-30],
                                [1.55337977e-30, 3.13979520e-30, 4.60562292e-30, 5.19037630e-30,
                                 4.60562292e-30, 3.13979520e-30, 1.55337977e-30]])
These values are very different (at least, by ratio!) - I trust that's correct though 🙂
The one bit I'm not sure I understand here is the `atol=1e-5` on the testing line that follows: 1) is that correct (I think it was this type of testing that bit us before on that top-hat integrator thing, right?), and 2) aren't these new values consistent with the old ones at the `atol=1e-5` level, i.e. isn't this change unnecessary?
(I'm not saying you shouldn't change the values to be the right ones, of course! Just that it looks, at first glance at least, like the old values would also have worked, right?)
I'm reasonably sure the values are correct. I don't think the previous solution was calling the quadratic code correctly, and I think the values look so odd because the input data is made up.
I don't think the tolerance would have passed with the old values; if it had, I likely would have left them alone. I believe the test failed, and that's why I updated the expected solution.
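A side note on the tolerance question: for values of order `1e-30`, an absolute tolerance of `1e-5` is so much larger than the values themselves that `np.allclose` passes for *any* pair of such arrays, so only a relative tolerance actually constrains them. The numbers below are made up to illustrate this, except the first entries which are taken from the expectation above.

```python
import numpy as np

a = np.array([1.55337977e-30, 3.13979520e-30])
b = np.array([9.99000000e-30, 1.00000000e-31])  # wildly different by ratio

# With atol=1e-5 the comparison is vacuous: |a - b| ~ 1e-30 << 1e-5.
assert np.allclose(a, b, atol=1e-5)

# A pure relative tolerance catches the discrepancy.
assert not np.allclose(a, b, rtol=1e-5, atol=0.0)
```

This is consistent with the reviewer's concern: at `atol=1e-5` the old and new expectation arrays would both compare equal to almost anything of this magnitude, so a test that discriminates between them must be leaning on `rtol` (or a much smaller `atol`).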
Force-pushed from bd6ad88 to cfff96d
Force-pushed from cfff96d to 52c56b1
No description provided.