DM-16468: Use measured convergence to predict gain #235
Conversation
Handful of cleanups. Thanks for pair coding the test with me!
    "If ``baseGain`` is None, a conservative gain "
    "will be calculated from the number of subfilters. "
    "Small values imply slower convergence of the solution, but can "
    "help prevent overshooting and failures in the fit.",
Swap this and the previous sentence.
    self.log.warn("Coadd %s diverged before reaching maximum iterations or"
                  " desired convergence improvement of %s."
                  " Divergence: %s",
                  subBBox, self.config.convergenceThreshold, convergenceCheck)
Is this a "warning", or "error" state? If it diverges, should the code raise an exception, or can things later on continue with a divergent value?
I believe it should be just a warning, because the model will typically have converged well overall and should be fine to use for making templates for image differencing.
    The quality of fit metric from each previous iteration.
gainList : `list` of `float`
    The gains used in each previous iteration: appended with new gain
    value. Gains are numbers between ``self.config.baseGain`` and 1.
"appended with the new gain value"
    return self.config.baseGain
nIter = len(convergenceList)
if nIter != len(gainList) + 1:
    raise ValueError("convergenceList must be one element longer than gainList.")
Oh, we should have put the values in this error, something like: `"convergenceList (%d) must be one element longer than gainList (%d)." % (len(convergenceList), len(gainList))`
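A hedged sketch of the suggested fix, pulling the length check out into a standalone function for illustration (the helper name `checkLengths` is an assumption, not part of the PR):

```python
def checkLengths(convergenceList, gainList):
    # Include both list lengths in the message so a failing run
    # is immediately diagnosable, as suggested in the review.
    if len(convergenceList) != len(gainList) + 1:
        raise ValueError("convergenceList (%d elements) must be one element"
                         " longer than gainList (%d elements)."
                         % (len(convergenceList), len(gainList)))
```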
# weighted by the gains used in each previous iteration.
estFinalConv = [((1 + gainList[i])*convergenceList[i + 1] - convergenceList[i])/gainList[i]
                for i in range(nIter - 1)]
# WORDS about why negative is bad
Oh, we need WORDS here!
estFinalConv[estFinalConv < 0] = 0
# Because the estimate may slowly change over time, only use the most recent measurements.
estFinalConv = np.median(estFinalConv[max(nIter - 5, 0):])
lastGain = gainList[nIter - 2]
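For context, the extrapolation in the snippet can be read as inverting a gain-weighted average: if each new measurement is modeled as `c[i+1] = (c[i] + g[i]*cFinal)/(1 + g[i])`, solving for `cFinal` gives exactly the expression in the list comprehension. A small standalone sketch with illustrative numbers (the sample values are made up; note the result must be a NumPy array for the boolean-mask clamp to work):

```python
import numpy as np

# Illustrative convergence history and the gains that produced it.
convergenceList = [1.0, 0.6, 0.44]
gainList = [1.0, 1.0]
nIter = len(convergenceList)

# Invert c[i+1] = (c[i] + g[i]*cFinal)/(1 + g[i]) to estimate the final
# convergence cFinal from each pair of consecutive measurements.
estFinalConv = np.array([((1 + gainList[i])*convergenceList[i + 1] - convergenceList[i])/gainList[i]
                         for i in range(nIter - 1)])

# A negative estimate is unphysical for a convergence metric, so clamp to zero.
estFinalConv[estFinalConv < 0] = 0

# Use the median of (at most) the five most recent estimates to damp noise.
estimate = np.median(estFinalConv[max(nIter - 5, 0):])
```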
Why not `gainList[-1]`, to make it obvious it's the last value of the list? Similarly `convergenceList[-2]` and `convergenceList[-1]` for the other two?
import lsst.afw.image
import lsst.daf.persistence
from lsst.pipe.tasks.dcrAssembleCoadd import DcrAssembleCoaddTask, DcrAssembleCoaddConfig
We don't need `unittest.mock`, `lsst.afw.image`, or `lsst.daf.persistence` here. I'm not sure why flake8 didn't complain.
The implementation is essentially a Kalman filter. Include tests of API and two simple gain calculations.
The iterative forward modeling in `DcrAssembleCoadd` requires a gain term to weight the new solution and prevent oscillations. The correct gain can be difficult for a user to guess, but it can be calculated from the measured convergence of the model using a simple Kalman filter, which is added in this ticket.
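To make the idea concrete, here is a hedged sketch of a convergence-driven gain update in the spirit described. The function name `calculateGain` and the final update formula are illustrative assumptions only, not the actual `DcrAssembleCoadd` implementation; the extrapolation step and the `baseGain`-to-1 clamping follow the snippets quoted in the review above.

```python
import numpy as np


def calculateGain(convergenceList, gainList, baseGain):
    """Illustrative sketch: pick the next gain from measured convergence.

    This is NOT the production DcrAssembleCoaddTask code; it only
    demonstrates the Kalman-filter-like idea from the PR description.
    """
    if len(convergenceList) != len(gainList) + 1:
        raise ValueError("convergenceList (%d) must be one element longer"
                         " than gainList (%d)."
                         % (len(convergenceList), len(gainList)))
    if not gainList:
        # No history yet: fall back to the conservative base gain.
        return baseGain
    nIter = len(convergenceList)
    # Extrapolate the final convergence from each consecutive pair of
    # measurements, weighted by the gain used at that iteration.
    estFinalConv = np.array([((1 + gainList[i])*convergenceList[i + 1]
                              - convergenceList[i])/gainList[i]
                             for i in range(nIter - 1)])
    # Negative estimates are unphysical for a convergence metric; clamp.
    estFinalConv[estFinalConv < 0] = 0
    # Median of the most recent estimates damps measurement noise.
    estFinal = np.median(estFinalConv[max(nIter - 5, 0):])
    lastConv = convergenceList[-2]
    newConv = convergenceList[-1]
    # Assumed update rule: raise the gain when the latest measurement has
    # closed most of the remaining gap to the estimated final convergence.
    gap = lastConv - estFinal
    if gap <= 0:
        return baseGain
    newGain = 1 - abs((newConv - estFinal)/gap)
    # Keep the gain between baseGain and 1, as the docstring requires.
    return min(max(newGain, baseGain), 1.0)
```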