
DM-32252: Add variance_median and improve test coverage. #197

Merged
merged 1 commit into master from tickets/DM-32252 on Oct 19, 2021

Conversation

erykoff
Contributor

@erykoff erykoff commented Oct 18, 2021

No description provided.


We compare the instFlux inside and outside source Footprints on an
extremely high S/N image.
"""
# We choose a random seed which causes the test to pass.


Slightly scary comment - is it super sensitive to this? randomSeed=0 doesn't seem like you had to tune it too much, or just got very lucky. Can you make this sound a little less scary?

Contributor Author


This is just cut and paste from the original test. I actually don't know the story...

Comment on lines 95 to 98
# N.B. Next line checks that a random value is correct to a
# statistical 1-sigma prediction; some RNG seeds may cause it to
# fail (indeed, 67% should)
self.assertLess(record.get("test_NoiseReplacer_outside"), np.sqrt(sumVariance))


Ah, OK, I see. Is there a method that also returns many values, that you can then do a slightly more thorough test on, like checking their mean and stddev?
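The reviewer's idea, checking sample statistics over many values rather than a single one-sigma comparison, can be sketched in isolation. This is an illustration only, not the lsst test code: it assumes the outside-footprint measurements are approximately Gaussian with variance equal to the summed variance plane, and stands in for them with synthetic draws.

```python
import numpy as np

# Illustrative sketch: the real test would collect many
# "test_NoiseReplacer_outside" values (e.g. across seeds or sources).
# Here we stand in for them with draws from the assumed noise model.
rng = np.random.default_rng(0)
sum_variance = 250.0  # stand-in for the summed variance plane
samples = rng.normal(loc=0.0, scale=np.sqrt(sum_variance), size=10000)

# A mean/stddev check is far less seed-sensitive than one draw vs.
# one sigma: with N samples the sample mean should sit within a few
# standard errors of zero, and the sample stddev near sqrt(variance).
stderr = np.sqrt(sum_variance / samples.size)
assert abs(samples.mean()) < 3 * stderr
assert abs(samples.std() / np.sqrt(sum_variance) - 1.0) < 0.05
```

A test written this way fails only for genuinely miscalibrated noise, not for an unlucky single draw.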

Comment on lines 100 to 116
def testVarianceNoiseReplacer(self):
"""Test noise replacement in SFM with ''variance'' mode.
"""
# We choose a random seed which causes the test to pass.
task = self.makeSingleFrameMeasurementTask("test_NoiseReplacer")
task.config.noiseReplacer.noiseSource = 'variance'
exposure, catalog = self.dataset.realize(1.0, task.schema, randomSeed=0)
task.run(catalog, exposure)
sumVariance = exposure.getMaskedImage().getVariance().getArray().sum()
for record in catalog:
self.assertFloatsAlmostEqual(record.get("test_NoiseReplacer_inside"),
record.get("truth_instFlux"), rtol=1E-3)

# N.B. Next line checks that a random value is correct to a
# statistical 1-sigma prediction; some RNG seeds may cause it to
# fail (indeed, 67% should)
self.assertLess(record.get("test_NoiseReplacer_outside"), np.sqrt(sumVariance))


Isn't this just an exact copy and paste of the test above, meaning you could do the whole thing with

for noiseSource in ['measure', 'variance']:
    task.config.noiseReplacer.noiseSource = noiseSource
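The reviewer's deduplication suggestion could be fleshed out with `unittest.TestCase.subTest`, so each noise source reports failures independently. This is a hypothetical sketch of the pattern, not the actual test: `run_noise_replacer_check` stands in for the shared body (configure the task, realize the dataset, run measurement, compare fluxes).

```python
import unittest


class NoiseReplacerTestSketch(unittest.TestCase):
    """Sketch of one parametrized test replacing near-identical copies."""

    def run_noise_replacer_check(self, noise_source):
        # Hypothetical stand-in for the shared test body; the real
        # version would set task.config.noiseReplacer.noiseSource,
        # realize the dataset with randomSeed=0, and run measurement.
        self.assertIn(noise_source, ("measure", "variance", "variance_median"))

    def test_noise_sources(self):
        for noise_source in ("measure", "variance", "variance_median"):
            # subTest keeps failures attributed to the specific source.
            with self.subTest(noiseSource=noise_source):
                self.run_noise_replacer_check(noise_source)
```

With `subTest`, one bad noise source does not mask failures in the others, which a plain `for` loop inside the test would.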

# We choose a random seed which causes the test to pass.
task = self.makeSingleFrameMeasurementTask("test_NoiseReplacer")
# This selects the VariancePlaneNoiseReplacer.
task.config.noiseReplacer.noiseSource = 'variance_median'


This one also looks like it could go into the single parametrized test loop suggested above, right?

Comment on lines 137 to 139
# N.B. Next line checks that a random value is correct to a
# statistical 1-sigma prediction; some RNG seeds may cause it to
# fail (indeed, 67% should)


OK, so this comment claims that 67% should fail statistically, but there's actually been no fine tuning there - all the random seeds are zero, and all apparently pass. What's up with that? (Yes, it could be luck, but still, worth wondering here).
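A back-of-the-envelope Monte Carlo (my addition, not from the PR) suggests an answer: the "67% should fail" figure would apply to a two-sided |x| > sigma criterion, but `assertLess` is one-sided, and a one-sided check against +1 sigma passes roughly 84% of the time for zero-mean Gaussian noise, so passing seeds are the norm, not luck.

```python
import numpy as np

# Estimate how often assertLess(outside, sqrt(sumVariance)) passes
# if `outside` is distributed as N(0, sumVariance). The scale drops
# out, so unit variance suffices.
rng = np.random.default_rng(12345)
sigma = 1.0
outside = rng.normal(0.0, sigma, size=1_000_000)
pass_rate = np.mean(outside < sigma)  # one-sided, as in assertLess
# Phi(1) for a standard normal is about 0.841, so ~84% of seeds pass.
print(f"pass rate ~= {pass_rate:.3f}")
```

So the in-code comment likely overstates the fragility; the check is lenient, not lucky.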

@erykoff erykoff merged commit c490e7e into master Oct 19, 2021
@erykoff erykoff deleted the tickets/DM-32252 branch October 19, 2021 17:35