DM-36337: Brighter-fatter kernels cannot be converted for disk due to length error #232
Conversation
It seems that the changes will address the problem at hand, thanks! I'd like to see a bit more information in the error message, maybe the shapes that could not be repacked or something that helps determine the problem a bit better. And I'd also like for us to think a bit more about whether we need to do this type of repacking in other arrays as well at the same time, or not.
        # Repack data.
        self.repackCorrelations(amp, correlationShape)
    else:
        raise ValueError("Could not coerce rawXcorrs into appropriate shape.")
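For context, here is a minimal stand-alone sketch of the shape check that leads to this branch. The function name, the NaN-padding, and the `nExpectedPairs` parameter are assumptions for illustration; in the PR itself the repack is done by the class method `repackCorrelations`.

```python
import numpy as np

def coerceRawXcorrs(rawXcorrs, nExpectedPairs):
    """Sketch of the check around the snippet above: accept a matching
    length, repack a shorter one, and otherwise raise.  Hypothetical
    stand-in for the class method in the PR, not the actual code."""
    rawXcorrs = np.asarray(rawXcorrs)
    correlationShape = rawXcorrs.shape
    if correlationShape[0] == nExpectedPairs:
        return rawXcorrs
    elif correlationShape[0] < nExpectedPairs:
        # Repack data (the PR calls self.repackCorrelations here);
        # sketched as NaN-padding up to the expected length.
        padded = np.full((nExpectedPairs,) + correlationShape[1:], np.nan)
        padded[:correlationShape[0]] = rawXcorrs
        return padded
    else:
        raise ValueError("Could not coerce rawXcorrs into appropriate shape.")
```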
Should the error be more informative?
I've updated the message to note how many covariances we have (correlationShape[0]) and how many we expect from the mask (np.sum(self.expIdMask[amp])). I think this will be a rare/never-hit error, as we'll either have all the entries (and need to mask them) or we'll have the pre-masked set (and will need to repad here).
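The two cases described here (masking down a full set versus repadding a pre-masked set) and the expanded error can be sketched as follows. This is an illustration only: the function name and the NaN-repadding are my assumptions, while expIdMask and the count comparison follow the comment.

```python
import numpy as np

def checkCovarianceLength(rawXcorrs, expIdMask):
    """Illustrative check: compare the number of covariances we have
    against the exposure mask, and report both counts on failure.
    Hypothetical sketch, not the PR's actual code."""
    rawXcorrs = np.asarray(rawXcorrs)
    mask = np.asarray(expIdMask, dtype=bool)
    nFound = rawXcorrs.shape[0]
    nExpected = int(np.sum(mask))
    if nFound == len(mask):
        # All entries present: apply the mask.
        return rawXcorrs[mask]
    elif nFound == nExpected:
        # Pre-masked set: repad back to the full length with NaNs.
        full = np.full((len(mask),) + rawXcorrs.shape[1:], np.nan)
        full[mask] = rawXcorrs
        return full
    else:
        raise ValueError(
            "Could not coerce rawXcorrs into appropriate shape: "
            f"found {nFound} covariances, expected {nExpected} from the mask."
        )
```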
@@ -504,6 +524,27 @@ def toTable(self):

        return tableList

    def repackCorrelations(self, amp, correlationShape):
It's nice to have a helper function for repacking the correlations. My general worry is: wouldn't we also need to repack other quantities, like the fluxes (means), at the same time? Are we getting out of sync somewhere between the different arrays? The PTC code has an associated "covariance weights" array, although it seems that it is not used in the BFK code.
For the means and variances, the BF code uses the raw versions. Previously, the covariances had the same length as those, which is likely why I used the raw values. I've filed a ticket, DM-36344, to put more thought into this choice; if the final values now have the same length as the covariances, those may be the more natural choice.
What are the weights used for in the PTC? I'd have to think more about the BF algorithm to determine if we should be using those as well.
The "covariance weights" in the PTC code are calculated from Pierre's code (using the variance-of-the-variance formula 2*Var/Npix), and then used in the fit to the FULLCOVARIANCE model.
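For illustration, the quoted estimate can be turned into fit weights roughly as below. The function name is hypothetical and the inverse-sigma weighting is my assumption; only the 2*Var/Npix expression comes from the comment, so this is a sketch rather than the actual PTC implementation.

```python
import numpy as np

def covarianceWeights(variance, nPix):
    """Hypothetical helper: variance of the measured variance from the
    formula quoted above (2*Var/Npix), turned into inverse-sigma
    weights for the FULLCOVARIANCE fit.  Illustration only."""
    varOfVar = 2.0 * np.asarray(variance, dtype=float) / nPix
    return 1.0 / np.sqrt(varOfVar)
```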
        # Repack data.
        self.repackCorrelations(amp, correlationShape)
    else:
        raise ValueError("Could not coerce rawXcorrs into appropriate shape.")
Do you need to expand the error message here too?
Ah, good catch. Thanks!
Force-pushed from 50d362f to 6c78cf7
Repack correlations if needed to ensure consistent lengths.