DM-24704: Make brighter-fatter correction a subclass of lsst.ip.isr.IsrCalib #175
Conversation
Some minor comments from a quick look. It seems like there's some code duplication. There is a lot of dict manipulation, with keys that have to match across multiple methods, but I'm not entirely sure how to clean that up.
```python
        Other keyword parameters to set in the metadata.
        """
        kwargs['LEVEL'] = self.level
        kwargs['KERNEL_DX'] = self.shape[0]
```
I learned the other day that if we say `"HIERARCH KERNEL_DX"` for the key names, astropy will no longer warn like the world is coming to an end. The astropy FITS reader is clever enough to let you read that back as `"KERNEL_DX"`. I think astropy is already doing that (and complaining), so that suggests that somewhere in the bit of code that writes these headers out to FITS we should be explicitly adding the `HIERARCH` to stop the complaints.

Lowercase keys in FITS are also forced to `HIERARCH` by the FITS standard.
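As a small sketch of the behavior being described (using plain `astropy.io.fits`; the actual header-writing code in `ip_isr` may differ), adding the `HIERARCH` prefix up front avoids the keyword-length warning, and astropy normalizes the key so it reads back without the prefix:

```python
from astropy.io import fits

hdr = fits.Header()

# Writing a >8-character keyword like 'KERNEL_DX' directly makes
# astropy emit a VerifyWarning suggesting HIERARCH; spelling the
# prefix out explicitly keeps it quiet.
hdr['HIERARCH KERNEL_DX'] = 17

# astropy strips the prefix on lookup, so this reads back cleanly.
print(hdr['KERNEL_DX'])  # 17
```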
""" | ||
calib = cls() | ||
|
||
if calib._OBSTYPE != dictionary['metadata']['OBSTYPE']: |
You might be able to say:

```python
if calib._OBSTYPE != (found := dictionary['metadata']['OBSTYPE']):
    raise RuntimeError(f"... found {found}")
```

to stop the repetition.
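For illustration (with hypothetical stand-in values, not the real calib class), the walrus operator binds the looked-up value so the error message can reuse it without repeating the dict access:

```python
def checkObsType(expected, metadata):
    # := binds the fetched value to `found`, so the f-string can
    # reuse it without a second dictionary lookup.
    if expected != (found := metadata['OBSTYPE']):
        raise RuntimeError(f"Expected {expected}, found {found}")
    return found

print(checkObsType('bfk', {'OBSTYPE': 'bfk'}))  # bfk
```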
```python
smallShape = (int((calib.shape[0] - 1) / 2), int((calib.shape[1] - 1) / 2))
nObservations = set([len(calib.means[amp]) for amp in calib.means])
if len(nObservations) != 1:
    raise RuntimeError("Inconsistent number of observations found.")
```
Maybe:

```python
if (nobs := len(nObservations)) != 1:
    raise RuntimeError(f"Inconsistent number of observations found ({nobs} != 1).")
```
```python
tableList = []
self.updateMetadata()

# Lengths
```
Isn't this chunk of code duplicated above? Please refactor.
Replaced with a new method, so that in all three places these numbers are used, they're calculated consistently.
```
detectorKernel is the kernel generated for a detector as a
whole, as generated by having level == 'DETECTOR'

makeDetectorKernelFromAmpwiseKernels is a method to generate the
kernel for a detector, constructed by averaging together the
ampwise kernels in the detector.
```
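A minimal sketch of that averaging step, using hypothetical names and plain nested lists (the real method works on the calibration's per-amp kernel dict):

```python
def averageKernels(ampKernels):
    """Element-wise mean of equally shaped per-amp kernels."""
    amps = list(ampKernels)
    first = ampKernels[amps[0]]
    nRows, nCols = len(first), len(first[0])
    return [
        [sum(ampKernels[amp][r][c] for amp in amps) / len(amps)
         for c in range(nCols)]
        for r in range(nRows)
    ]

# Two toy 2x2 "amp" kernels averaged into one "detector" kernel.
detectorKernel = averageKernels({
    'C00': [[1.0, 2.0], [3.0, 4.0]],
    'C01': [[3.0, 4.0], [5.0, 6.0]],
})
print(detectorKernel)  # [[2.0, 3.0], [4.0, 5.0]]
```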
`makeDetectorKernelFromAmpwiseKernels`: in terms of usage, do we get this when selecting `AMP`, or when selecting `DETECTOR`, or in both cases? Perhaps it would be good to clarify in the docstring.
Upon reading further (`isrTask.py`), I see that there are two options to control this: `brighterFatterKernel.level` and `self.config.brighterFatterLevel`, and that `makeDetectorKernelFromAmpwiseKernels` happens when `self.config.brighterFatterLevel == 'DETECTOR'` and `brighterFatterKernel.level == 'AMP'`. So maybe you can add this info here too?
I've updated the docstring. At some point, there will probably be questions about why the application code expects `level == 'DETECTOR'`, but the generation code defaults to `level == 'AMP'`.
```python
# Lengths for reshape:
_, smallShape, nObs = calib.getLengths()

calib.rawXcorrs = {amp: np.array(dictionary['rawXcorrs'][amp]).reshape((nObs,
```
I'm still a bit confused by the reshaping that is happening here. If, let's say, the original shape is `(8, 8)` (the default) and `nObs` is, say, `20`, then the `rawXcorrs` total length is `20*8*8 = 1280`. However, `smallShape` here (which I think has a more appropriate name inside the function, as it is `smallLength`) would be `int((8-1)*(8-1)/4) = 12`, so the reshape here would be `1280 --> 20*12*12 = 2880`? I'm probably missing something, sorry!
And why the reshape of the covariance matrices from PTC? In preparation for the tiling when doing the convolution?
The `calib.shape` is the shape of the kernel, which has been tiled to fill all quadrants. As in the previous comment, this is 17x17 for an 8x8 covariance. So for 20 observations, the returned lengths will be `kernelLength = 17*17 = 289`, `smallLength = ((17-1)*(17-1))/4 = 8*8 = 64`, `nObs = 20`.

The reshape is here just because the code stores the covariances for future validation, and they need to be persistable to FITS.

However, in looking at this block again, I made a mistake on the code deduplication ticket. As you point out, `smallLength` is the better name, as it's just a single value. However, this is the "to FITS" reshape size, so in constructing the calibration ("from FITS"), it needs to use the `sqrt` of this value as the shapes.

I think this also implies I need to update the unit tests as well, as a read/write test would have found that error.
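Working through those numbers with a sketch of a `getLengths`-style helper (hypothetical, not the actual implementation):

```python
def getLengths(shape):
    # Flattened length of the full (tiled) kernel, and the
    # single-quadrant length used for the covariance reshape.
    kernelLength = shape[0] * shape[1]
    smallLength = int((shape[0] - 1) * (shape[1] - 1) / 4)
    return kernelLength, smallLength

# A 17x17 kernel tiled from an 8x8 covariance:
kernelLength, smallLength = getLengths((17, 17))
print(kernelLength, smallLength)  # 289 64

# Reading back ("from FITS"), the side of each covariance block
# is the square root of smallLength.
side = int(smallLength ** 0.5)
print(side)  # 8
```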
I'm mostly requesting clarifications (especially with regard to the reshaping of the covariance matrices and the multiple options for the levels at which the kernels are calculated).
- Fix flake8. Clear unused code. Fix kernel averaging.
- Fix typos in read method.
- Remove duplicated calculations of reshape lengths. This now has a method.
- Use the `:=` operator to make log message checks easier.
- Switch to generic schema name.
- Update docstrings and comments to be clearer.
- Fix faulty valid/badAmp logic.
- Add `__eq__` handler.
- Add IsrCalib subclass for Brighter-Fatter Kernels, allowing the existing numpy kernels to continue to be used where they exist.