
Tickets/dm-15636: Improve DcrModel regularization #221

Merged: 5 commits merged into master on Sep 26, 2018

Conversation

isullivan (Contributor):

These changes provide greater flexibility in applying regularization to DcrModels between iterations and between frequencies, and reduce artifacts in the final images.

@@ -60,41 +60,46 @@ class DcrAssembleCoaddConfig(CompareWarpAssembleCoaddConfig):
"If not set, skips calculating convergence and runs for ``maxNumIter`` iterations",
default=True,
)
useProgressiveGain = pexConfig.Field(
dtype=bool,
doc="Allow the relative weight of the new model solution to increase with each iteration?",
Contributor:

This sentence fragment does not make sense to end with a question mark. Either make it a proper statement or reword it to a proper question. I think our standard says the docs should be statements.

clampFrequency = pexConfig.Field(
dtype=float,
doc="Maximum relative change of the model allowed between subfilters.",
doc="Threshold to exclude noise-like pixels from regularization between iterations.",
Contributor:

Is the threshold positive or negative? I.e., are pixels above or below the threshold excluded?

Contributor Author:

This parameter has actually become unnecessary with the changes on this ticket, so I've simply removed it.

dtype=float,
doc="Maximum convergence-weighted gain to apply between forward modeling iterations.",
default=2.,
doc="Threshold to exclude noise-like pixels from regularization across subfilters.",
Contributor:

Same as above.

Contributor Author:

Also removed.

dtype=float,
doc="Minimum convergence-weighted gain to apply between forward modeling iterations.",
default=0.5,
doc="Maximum relative change of the model allowed between subfilters. Set to zero to disable.",
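The `maxGain`/`minGain` defaults above (2.0 and 0.5) bound the convergence-weighted gain between forward modeling iterations. A minimal sketch of how such a clamp could work, assuming a hypothetical `convergence_ratio` (previous metric over current metric; not the actual pipeline API):

```python
import numpy as np

# Hypothetical convergence-weighted gain: ratio of the previous convergence
# metric to the current one, so values > 1 indicate the solution is improving.
convergence_ratio = 3.2

max_gain = 2.0  # config default from the diff above
min_gain = 0.5  # config default from the diff above

# Clamp the gain to the configured bounds before applying it to the model update.
gain = float(np.clip(convergence_ratio, min_gain, max_gain))
```

Here a ratio of 3.2 would be clamped down to the configured maximum of 2.0, so a single very good iteration cannot dominate the accumulated model.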
Contributor:

Relative change in what units?

@@ -537,14 +548,18 @@ def newModelFromResidual(self, dcrModels, residualGeneratorList, bbox, statsFlag
Statistics control object for coadd
weightList : `list` of `float`
The weight to give each input exposure in the coadd
mask : `lsst.afw.image.Mask`
Mask of the initial template coadd.
Contributor:

This doc comment does not explain much; consider being more descriptive.

dcrModels.conditionDcrModel(subfilter, newModel, bbox, gain=1.)
# Catch any invalid values
badPixels = ~np.isfinite(newModel.image.array)
newModel.image.array[badPixels] = model[bbox].image.array[badPixels]
Contributor:

Is it better to replace these values, or to carry around a mask and exclude them from further calculations?
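The two options the reviewer raises can be sketched in a few lines of numpy. This is an illustrative toy example, not the pipeline's actual data structures:

```python
import numpy as np

# Toy 1-D stand-ins for the new model solution and the prior model in a bbox.
new_model = np.array([1.0, np.nan, 3.0, np.inf])
prior = np.array([0.9, 2.0, 2.9, 4.1])

# Option A (what the diff does): overwrite non-finite pixels with prior values.
bad = ~np.isfinite(new_model)
repaired = np.where(bad, prior, new_model)

# Option B: keep a masked array so invalid pixels are excluded from
# downstream statistics instead of being silently replaced.
masked = np.ma.masked_invalid(new_model)
```

Option A keeps the arrays dense and simple; Option B preserves the information that those pixels were invalid, at the cost of threading a mask through later calculations.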

gain : `float`
Relative weight to give the new solution when updating the model.
"""
if self.config.useProgressiveGain:
Contributor:

The particular values of these numbers do not seem well motivated. Can you add a comment explaining why these magic numbers and not others, or include a link to a paper or something where they come from?

if self.config.useProgressiveGain:
baseGain = 0.5
if modelIter < 2:
return baseGain
Contributor:

baseGain is only used here; you could just return 0.5, or only define it inside this block if you want it named. But naming it does not explain why the value is 0.5 any better, so maybe err toward fewer lines of code.

else:
return np.log(modelIter)
else:
return 1.
Contributor:

This whole function could probably be:

if self.config.useProgressiveGain:
    return 0.5 if modelIter < 2 else np.log(modelIter)
return 1.
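The resulting gain schedule can be seen by evaluating the logic above over a few iterations. A self-contained sketch (function name and keyword are illustrative, not the pipeline's actual signature):

```python
import numpy as np

def calculate_gain(model_iter, use_progressive_gain=True):
    """Relative weight to give the new solution at a forward modeling iteration.

    Mirrors the logic in the diff: a fixed gain of 0.5 for the first two
    iterations, then log(iteration) so later, better-converged solutions
    are weighted progressively more heavily.
    """
    if use_progressive_gain:
        return 0.5 if model_iter < 2 else float(np.log(model_iter))
    return 1.0

# Gain over the first five iterations: 0.5, 0.5, then log(2), log(3), log(4).
schedule = [calculate_gain(i) for i in range(5)]
```

Note the schedule is not monotonic at the transition: log(2) ≈ 0.69 only slightly exceeds the base gain of 0.5, which is part of why the reviewer asks for the motivation behind these particular constants.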

@isullivan isullivan merged commit 95848a1 into master Sep 26, 2018
@timj timj deleted the tickets/DM-15636 branch February 18, 2021 15:50