
DM-14182: subtract fit from overscan; add option to set variance empirically #53

Merged
merged 2 commits into master from tickets/DM-14182 on Apr 25, 2018

Conversation

PaulPrice (Contributor):

No description provided.

@mrawls (Contributor) left a comment:

This new functionality makes sense, and it's good to return the overscanFit as well as the offImage for downstream use. Thanks for doing Python 2 support cleanup as well as adding some numpydoc along the way. The changes are mostly fine, but please see my individual comments.


The ``ampMaskedImage`` and ``overscanImage`` are modified, with the fit
subtracted. Note that the ``overscanImage`` should not be a subimage of
the ``ampMaskedImage``, to avoid being subtracted twice.
mrawls (Contributor):

This is a good point. Is it possible to add a test to check that overscanImage is not a subimage of ampMaskedImage? You say in your commit message that it's not clear how to detect this case, but I think it's rather important if it's doable.

PaulPrice (Author):

I don't believe it's possible to test, since we have no way of determining whether two images share the same physical pixels. I therefore consider that the only way to deal with this is to document the possibility.
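
For illustration, a minimal sketch of the calling pattern that avoids the overlap, mirroring the IsrTask usage quoted further down (it assumes exposure and amp are already in scope):

maskedImage = exposure.getMaskedImage()
# Views into disjoint regions of the amp: the imaging area and the serial overscan.
dataView = maskedImage.Factory(maskedImage, amp.getRawDataBBox())
overscanImage = maskedImage.Factory(maskedImage, amp.getRawHorizontalOverscanBBox())
# Because the two views share no pixels, the overscan fit is subtracted
# from each region exactly once.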

doEmpiricalReadNoise = pexConfig.Field(
    dtype=bool,
    default=False,
    doc="Calculate empirical read noise instead of value from AmpInfo data?"
)
mrawls (Contributor):

Clarify: "instead of using value from amp.getReadNoise?" This also makes me wonder what sets the read noise value in the amplifier info in the first place. Where does it usually come from?

PaulPrice (Author):

The read noise is hard-wired into the camera configuration, often in the form of entries in FITS files.

if self.config.doEmpiricalReadNoise and overscanImage is not None:
    stats = afwMath.StatisticsControl()
    stats.setAndMask(overscanImage.mask.getPlaneBitMask(maskPlanes))
    readNoise = afwMath.makeStatistics(overscanImage, afwMath.STDEVCLIP, stats).getValue()
mrawls (Contributor):

I'm assuming a plain old .getValue() is adequate here, even though earlier uses of makeStatistics to get, e.g., the mean specify .getValue(afwMath.MEAN).

PaulPrice (Author):

getValue() works if you requested only a single statistic: https://github.com/lsst/afw/blob/master/include/lsst/afw/math/Statistics.h#L277-L282
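
For illustration, a small sketch of that distinction (assumes overscanImage, maskPlanes, and lsst.afw.math imported as afwMath are in scope, as in the snippet above):

stats = afwMath.StatisticsControl()
stats.setAndMask(overscanImage.mask.getPlaneBitMask(maskPlanes))

# One statistic requested: a bare getValue() is unambiguous.
readNoise = afwMath.makeStatistics(overscanImage, afwMath.STDEVCLIP, stats).getValue()

# Several statistics requested: each getValue() must name the one it wants.
result = afwMath.makeStatistics(overscanImage, afwMath.MEAN | afwMath.STDEVCLIP, stats)
mean = result.getValue(afwMath.MEAN)
sigmaClipped = result.getValue(afwMath.STDEVCLIP)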

@@ -887,17 +930,17 @@ def overscanCorrection(self, exposure, amp):

maskedImage = exposure.getMaskedImage()
dataView = maskedImage.Factory(maskedImage, amp.getRawDataBBox())
overscanImage = maskedImage.Factory(maskedImage, amp.getRawHorizontalOverscanBBox())
mrawls (Contributor):

Does this assume overscan is always and forever horizontal? I'm thinking of DECam, which has both horizontal and vertical overscan regions (and its own isr.py, of course, so this code doesn't break anything, but I still think it should be as generic as possible).

PaulPrice (Author):

My understanding is that the point of the overscanCorrection function is to correct for bias drift as the device is read out. It therefore uses the serial overclock ("horizontal overscan"), because that tracks the bias level as a function of time as the device is read out. I don't believe the parallel overclock ("vertical overscan") is useful for this because it's read out after all the science pixels; to my knowledge we've never used it (for anything). I'm not even sure what use it has unless you have zero bias drift and simply want more pixels to throw into your calculation.
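
For reference, both regions are exposed on the amplifier record; the code quoted above deliberately picks the serial one. A sketch, assuming amp is an afw cameraGeom amplifier record:

# Serial ("horizontal") overscan: tracks the bias level as a function of time
# during readout, so it is what the drift correction uses.
serialOverscanBBox = amp.getRawHorizontalOverscanBBox()

# Parallel ("vertical") overscan: read out only after all the science pixels,
# so it carries no time-resolved drift information.
parallelOverscanBBox = amp.getRawVerticalOverscanBBox()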


class EmpiricalVarianceTestCast(lsst.utils.tests.TestCase):
    def setUp(self):
        """Constructs a CCD with two amplifiers and prepares for ISR"""
mrawls (Contributor):

Could you use a real image from some testdata rather than building a fake one here? I'd be more inclined to believe the test result for a real-life exposure than for this minimalistic fake one.

PaulPrice (Author):

obs_decam tests include running processCcd.py on some real DECam data, and ci_hsc does similarly with real HSC data. Both ran fine under Jenkins.

I think leaving the tests on real data to other packages is the right balance: it keeps this code repo small, and simulated data lets us isolate the features being tested and know the exact right answer.


def testEmpiricalVariance(self):
    results = self.task.run(self.exposure)
    exposure = results.exposure
mrawls (Contributor):

Mildly confusing variable name (since there is a self.exposure floating around already, but this is an exposure created from the result of processing self.exposure). Perhaps call one raw and one post-ISR.
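
One possible renaming along those lines (hypothetical names, not what the PR uses):

rawExposure = self.exposure                       # built in setUp()
postIsrExposure = self.task.run(rawExposure).exposure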

@@ -44,6 +44,9 @@ def testOverscanCorrectionY(self, **kwargs):
maskedImage = afwImage.MaskedImageF(bbox)
maskedImage.set(10, 0x0, 1)

dataBox = afwGeom.Box2I(afwGeom.Point2I(0, 0), afwGeom.Extent2I(10, 10))
dataImage = afwImage.MaskedImageF(maskedImage, dataBox)
mrawls (Contributor):

Is the point of dataImage just to be a small chunk of maskedImage? Does it save a lot of time to run the overscan correction test on this rather than the entire maskedImage throughout?

PaulPrice (Author):

maskedImage includes the overscan, so if I were to operate on that I would be subtracting the overscan twice, which breaks the checks.
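
A sketch of the geometry being described, with a hypothetical overscan box placed to the right of the data box (the actual extents in the test may differ):

import lsst.afw.geom as afwGeom
import lsst.afw.image as afwImage

parent = afwImage.MaskedImageF(afwGeom.Box2I(afwGeom.Point2I(0, 0), afwGeom.Extent2I(13, 10)))
dataBox = afwGeom.Box2I(afwGeom.Point2I(0, 0), afwGeom.Extent2I(10, 10))
overscanBox = afwGeom.Box2I(afwGeom.Point2I(10, 0), afwGeom.Extent2I(3, 10))

dataImage = afwImage.MaskedImageF(parent, dataBox)          # science pixels only
overscanImage = afwImage.MaskedImageF(parent, overscanBox)  # overscan pixels only
# Correcting dataImage rather than the full parent keeps the overscan pixels out of
# the region being corrected, so the fit is not subtracted from them a second time.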

Commit message excerpts:

This allows downstream processing to inspect the fit results.

Because the overscan is now subtracted explicitly, this requires that the
overscan passed to this function not be contained within the image being
corrected (or the correction is applied twice). Unfortunately, it's not
clear how to detect this case; however, the primary user is IsrTask, which
pulls out the components explicitly and so shouldn't mix them.

Since this involved changes to the docstring, reformatted the docstring
for this function into numpydoc.
The read noise is measured from the overscan.

This might be useful when working on detectors whose read noise hasn't
been measured, or whose read noise drifts due to hardware tests.
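
A hedged usage sketch of the new option (the exact way the config is reached depends on how IsrTask is embedded in a pipeline):

from lsst.ip.isr import IsrTask

config = IsrTask.ConfigClass()
config.doEmpiricalReadNoise = True   # measure read noise from the clipped overscan stddev
task = IsrTask(config=config)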
@PaulPrice PaulPrice merged commit 9379bf4 into master Apr 25, 2018
@ktlim ktlim deleted the tickets/DM-14182 branch August 25, 2018 06:44