DM-13187 Fix selected refStar counting #65

Merged
merged 1 commit into master from tickets/DM-13187 on Jan 9, 2018

Conversation

@parejkoj (Collaborator) commented Jan 9, 2018

No description provided.

@r-owen (Contributor) left a comment


This looks good, but raises some questions.

My main concern is the test that loses all but 2 ref stars; should that test be changed, e.g. by using different inputs or different config parameters?

I'm also curious why some tests check more metrics than others.

/**
* @brief Return the number of fittedStars that have an associated refStar.
*/
size_t nFittedStarsWithAssociatedRefStar() {

r-owen (Contributor):

This should be a const method, since it does not alter the object (add const before the {).

Also, please consider moving the implementation to Associations.cc.

@@ -55,7 +55,7 @@ def test_jointcalTask_2_visits_photometry(self):
# NOTE: ndof==1 from 4 fit parameters (2 model, 2 fittedStar), and
# 5 degrees-of-freedom (3 star measurements, with 2 reference stars).
metrics = {'collected_photometry_refStars': 183,
-'selected_photometry_refStars': 183,
+'selected_photometry_refStars': 2,

r-owen (Contributor):

Yikes! Surely this suggests something is wrong.

parejkoj (Collaborator, author):

This is intended: the test is a minimal working example with a total of 3 stars across 2 CCDs, designed so that one can do the fitting math "by hand."

-'selected_astrometry_refStars': 825,
-'selected_photometry_refStars': 825,
+'selected_astrometry_refStars': 350,
+'selected_photometry_refStars': 350,

r-owen (Contributor):

Is it expected that jointcal can only use a smallish fraction of the reference stars?

parejkoj (Collaborator, author):

I've never dug into it, but I'm not surprised: the refStar list comes from loadSkyCircle, which returns stars outside the bbox of the input exposures. That, plus the cuts imposed by astrometrySourceSelector, means I'd be very surprised if even a majority of refStars were used in these tests (especially since the test data do not cover the full focal plane).
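
As a rough back-of-the-envelope illustration of why a large loss is plausible (this is my own sketch, not jointcal code; the detector size and circle radius below are made-up numbers): even before any source-selector cuts, a roughly uniform reference catalog loaded in a sky circle can only land on a detector bbox in proportion to the area ratio.

```python
import math

def max_in_bbox_fraction(width_deg, height_deg, circle_radius_deg):
    """Upper bound on the fraction of a uniformly distributed refStar
    catalog, loaded in a sky circle of the given radius, that can fall
    inside a single width x height detector bbox."""
    return (width_deg * height_deg) / (math.pi * circle_radius_deg**2)

# Hypothetical 0.2 x 0.2 deg detector; the smallest circle that still
# covers it has a radius of half the bbox diagonal.
half_diagonal = math.hypot(0.2, 0.2) / 2
print(max_in_bbox_fraction(0.2, 0.2, half_diagonal))  # 2/pi, about 0.64

# Any padding of the loaded circle, gaps between detectors, and the
# source-selector cuts all push the usable fraction lower still.
```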

'astrometry_final_chi2': 7929.656,
'astrometry_final_ndof': 14262,
'photometry_final_chi2': None,
'photometry_final_ndof': None

r-owen (Contributor):

Why the drastic pruning? It also makes me realize that some tests have all these metrics and others do not. Some explanation as a code comment would be welcome when standard metrics are missing. If it's a matter of needing a looser tolerance, this suggests the test framework is too rigid and should support a number and a tolerance, though that is arguably out of scope for this ticket.

parejkoj (Collaborator, author):

This particular change removes metrics that weren't being measured. Some tests are photometry-only or astrometry-only, so they don't have the other metrics.

JointcalTestBase._test_metrics() loops over the measurements, so if something is measured but doesn't have an expected value, the test will fail unless the expected value is explicitly defined as None.
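
For readers unfamiliar with that helper, the pattern is roughly the following. This is only a sketch of the behaviour described above, not the actual JointcalTestBase code; the function and argument names are illustrative.

```python
def check_metrics(testcase, measured, expected):
    """Sketch of the described pattern: compare every measured metric
    against an expected value, treating None as "measured but deliberately
    not checked".

    measured: dict of metric name -> value produced by the jointcal run.
    expected: dict of metric name -> expected value, or None.
    """
    for name, value in measured.items():
        if name not in expected:
            # A metric that is measured but has no expected value fails the test...
            testcase.fail("metric {!r} was measured but has no expected value".format(name))
        if expected[name] is None:
            # ...unless it is explicitly marked as unchecked with None.
            continue
        if isinstance(value, float):
            testcase.assertAlmostEqual(value, expected[name])
        else:
            testcase.assertEqual(value, expected[name])
```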

Update metric numbers to the new, correct, values.
parejkoj merged commit 2651e55 into master on Jan 9, 2018
@parejkoj (Collaborator, author) commented Jan 9, 2018

Thanks: I've replied to your concerns inline.

ktlim deleted the tickets/DM-13187 branch on August 25, 2018, 06:44