DM-36994: Add additional metrics to ip_isr #245
Conversation
All very minor comments.
python/lsst/ip/isr/isrStatistics.py (outdated diff)
```python
if self.config.bandingUseHalfDetector:
    fullLength = len(outputStats['AMP_BANDING'])
    outputStats['DET_BANDING'] = float(np.median(outputStats['AMP_BANDING'][0:fullLength//2]))
```
I wonder if we need a nanmedian for the case of bad amps?
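The concern above is that `np.median` propagates any NaN from a bad amplifier into the detector-level statistic, whereas `np.nanmedian` ignores it. A minimal sketch (the values are hypothetical, not from the PR):

```python
import numpy as np

# Hypothetical per-amp banding values; one bad amp reported as NaN.
amp_banding = np.array([0.12, 0.15, np.nan, 0.11])

# np.median propagates the NaN; np.nanmedian ignores it.
print(np.median(amp_banding))     # nan
print(np.nanmedian(amp_banding))  # 0.12
```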
Also, the amps do iterate in order, right? If not, cutting this list in half like that doesn't necessarily give a physical half of the chip.
The cut here yields the "upper" amplifiers C10-C17. nanmedian makes sense, and I consistently forget it exists.
I'm sure it does, my worry was that it needn't necessarily. I'm sure it's fine for now though, as is that other loop without an amp id.
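The slicing assumption under discussion can be sketched as follows. This assumes an iteration order yielding C10-C17 before C00-C07, which is what makes the first half of the list a physical half of the chip; the actual order comes from the detector object, so this is illustrative only:

```python
# Assumed amp iteration order: upper row C10..C17 first, then lower row C00..C07.
amp_names = [f"C1{i}" for i in range(8)] + [f"C0{i}" for i in range(8)]

# Cutting the list in half then picks out the upper amplifiers only
# if the iteration really does follow this order.
half = len(amp_names) // 2
print(amp_names[:half])  # ['C10', 'C11', 'C12', 'C13', 'C14', 'C15', 'C16', 'C17']
```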
```python
def test_bandingStatistics(self):
    """Look at the banding on a raw mock image.

    The value obtained is far larger than we expect in real data.
    """
```
Yeah, I'll say! Do you know where that comes from? This is with MEDIAN_PER_ROW on as well... I guess we put some really weird stuff in the mocks? I'm guessing you ran this code on those images we were playing with (the ones Dan took). Do you have the results to hand by any chance?
There's a large gradient added to the input on the mock image, which isn't removed by the overscan (as it's "astronomical"). The tests I ran on actual images matched the supplied values for those exposures.
Great, if the numbers matched that's perfect. Kind of interested as to why the overscan doesn't take out the gradient though, even if it is "astronomical"...
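The reason the overscan misses the gradient can be sketched with a toy model: a signal gradient in the imaging section never reaches the overscan pixels, so a per-row median overscan correction removes only the shared bias level and leaves the gradient intact (an illustration under simplified assumptions, not the ip_isr implementation):

```python
import numpy as np

ny, nx_data, nx_over = 100, 50, 10

# Flat bias level everywhere, plus an "astronomical" gradient
# that exists only in the imaging section.
bias = 1000.0
data = np.full((ny, nx_data), bias) + np.linspace(0, 50, ny)[:, None]
overscan = np.full((ny, nx_over), bias)

# Per-row median overscan correction: subtract each row's overscan median.
corrected = data - np.median(overscan, axis=1)[:, None]

# The bias is gone, but the gradient survives: it never hit the overscan.
print(corrected[0, 0], corrected[-1, 0])  # 0.0 50.0
```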
Force-pushed 3e44da6 to e0539f9
Force-pushed e0539f9 to af98c79
Add banding metrics, full amplifier projection extraction, and projection FFTs.