Tickets/dm-15636: Improve DcrModel regularization #221
Conversation
Force-pushed d63e663 to 55d188c
@@ -60,41 +60,46 @@ class DcrAssembleCoaddConfig(CompareWarpAssembleCoaddConfig):
        "If not set, skips calculating convergence and runs for ``maxNumIter`` iterations",
        default=True,
    )
    useProgressiveGain = pexConfig.Field(
        dtype=bool,
        doc="Allow the relative weight of the new model solution to increase with each iteration?",
It does not make sense to end this sentence fragment with a question mark. Either make it a proper statement or reword it as a proper question; I think our standard says the docs should be statements.
    clampFrequency = pexConfig.Field(
        dtype=float,
        doc="Maximum relative change of the model allowed between subfilters.",
        doc="Threshold to exclude noise-like pixels from regularization between iterations.",
Is the threshold positive or negative, i.e., are pixels above or below the threshold excluded?
This parameter has actually become unnecessary with the changes on this ticket, so I've simply removed it.
        dtype=float,
        doc="Maximum convergence-weighted gain to apply between forward modeling iterations.",
        default=2.,
        doc="Threshold to exclude noise-like pixels from regularization across subfilters.",
Same as above.
Also removed.
        dtype=float,
        doc="Minimum convergence-weighted gain to apply between forward modeling iterations.",
        default=0.5,
        doc="Maximum relative change of the model allowed between subfilters. Set to zero to disable.",
Relative change in what units?
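To make the units question concrete: one plausible reading, sketched below, treats the clamp as a dimensionless ratio against the mean over subfilters. The array names and the clipping rule are illustrative assumptions, not the actual DcrModel code:

```python
import numpy as np

regularizationFactor = 2.0  # dimensionless ratio, analogous to the config value
# Rows are hypothetical subfilter planes of the model; columns are pixels.
subfilterModels = np.array([[1.0, 10.0],
                            [1.0, 1.0],
                            [1.0, 0.1]])
meanModel = subfilterModels.mean(axis=0)

# Clamp each subfilter plane to within a factor of the mean across subfilters,
# so the "relative change" is unitless: a ratio of model values.
clamped = np.clip(subfilterModels,
                  meanModel / regularizationFactor,
                  meanModel * regularizationFactor)
```

Under that reading the config value needs no physical units, but the docstring should say so explicitly.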
@@ -537,14 +548,18 @@ def newModelFromResidual(self, dcrModels, residualGeneratorList, bbox, statsFlag
        Statistics control object for coadd
    weightList : `list` of `float`
        The weight to give each input exposure in the coadd
    mask : `lsst.afw.image.Mask`
        Mask of the initial template coadd.
This doc comment does not explain much; consider being more descriptive.
    dcrModels.conditionDcrModel(subfilter, newModel, bbox, gain=1.)
    # Catch any invalid values
    badPixels = ~np.isfinite(newModel.image.array)
    newModel.image.array[badPixels] = model[bbox].image.array[badPixels]
Is it better to replace these values, or to carry around a mask and exclude them from further calculations?
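As an aside, a minimal NumPy sketch contrasting the two options under discussion (replace vs. mask); the arrays here are hypothetical stand-ins for the model planes:

```python
import numpy as np

newModel = np.array([1.0, np.nan, 3.0, np.inf])   # hypothetical new solution
priorModel = np.array([0.9, 2.1, 2.8, 4.2])       # hypothetical prior model

# Option 1 (the approach in the diff): replace invalid pixels from the prior model
badPixels = ~np.isfinite(newModel)
replaced = newModel.copy()
replaced[badPixels] = priorModel[badPixels]
# replaced -> [1.0, 2.1, 3.0, 4.2]

# Option 2: keep a mask so invalid pixels drop out of later statistics
masked = np.ma.masked_invalid(newModel)
print(masked.mean())  # 2.0 (only the finite pixels contribute)
```

Replacing keeps downstream code simple but silently freezes those pixels at the old solution; masking is more honest but must be threaded through every later calculation.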
    gain : `float`
        Relative weight to give the new solution when updating the model.
    """
    if self.config.useProgressiveGain:
The particular values of these numbers do not seem well motivated. Can you add a comment explaining why these magic numbers and not others, or include a link to a paper or similar reference where they come from?
    if self.config.useProgressiveGain:
        baseGain = 0.5
        if modelIter < 2:
            return baseGain
baseGain is only used here; you could just return 0.5, or define it only inside this block if you want it named. Naming it does not explain why the value is 0.5 any better, so maybe err toward fewer lines of code.
        else:
            return np.log(modelIter)
    else:
        return 1.
This whole function could probably be:

    if self.config.useProgressiveGain:
        return 0.5 if modelIter < 2 else np.log(modelIter)
    return 1.
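For reference, a standalone sketch of the suggested simplification; the function name `calculateGain` and the keyword argument are assumptions made so the snippet runs outside the class:

```python
import numpy as np

def calculateGain(modelIter, useProgressiveGain=True):
    # Hypothetical standalone version of the compact form suggested above;
    # `self.config.useProgressiveGain` becomes a keyword argument here.
    if useProgressiveGain:
        return 0.5 if modelIter < 2 else np.log(modelIter)
    return 1.

# Gain schedule over the first few iterations:
print([round(calculateGain(i), 3) for i in range(5)])
# -> [0.5, 0.5, 0.693, 1.099, 1.386]
```

This makes the schedule easy to see: a fixed gain of 0.5 for the first two iterations, then a slowly growing log(modelIter), which is exactly the behavior the reviewer asks to have motivated in a comment.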
Force-pushed 55d188c to e05dbaf
Force-pushed e05dbaf to f5eee8c
These changes provide greater flexibility in applying regularization to DcrModels between iterations and between frequencies, and reduce artifacts in the final images.