DM-30172: Define BFK tests for cp_verify #8
Conversation
catalogVerify['BRIGHT_SLOPE'] = False
catalogVerify['NUM_MATCHES'] = False
# These values need justification.
People should be rewarded for leaving comments like this, rather than punished, but reading it I still can't really help but ask... so, what's the justification for these values? (sorry!)
NUM_MATCHES > 10 exists because we need to be able to trust the fit. In any real science image, this should be trivially met (the majority of my tests had ~200-300).

BRIGHT_SLOPE < -0.5 is entirely arbitrary. alpha must be negative (smaller size difference at larger magnitudes/fainter fluxes), but making it simply negative ignores that the size change should be significant. If alpha =~ -0.01, we're essentially doing no correction. Every other constraint I can think of is circular, because it requires trying to figure out how much smaller the kernel should make things.

I am open to any suggestion on this.
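For concreteness, the two checks described above could be sketched roughly like this. The function name, catalog layout, and fitting helper are illustrative assumptions, not cp_verify's actual implementation; only the threshold values and result keys come from the diff.

```python
import numpy as np

# Hypothetical sketch of the two catalog checks discussed above.
# The thresholds (slope < -0.5, matches > 10) mirror the diff; the
# rest is an illustrative assumption, not the cp_verify code.
def check_bf_catalog(magnitudes, size_differences):
    """Return the BRIGHT_SLOPE and NUM_MATCHES test results."""
    num_matches = len(magnitudes)
    # Linear fit of size difference vs. magnitude; the slope (alpha)
    # should be negative, since fainter sources show a smaller size
    # difference after brighter-fatter correction.
    alpha, _ = np.polyfit(magnitudes, size_differences, 1)
    return {
        # alpha must be meaningfully negative; alpha ~ -0.01 would
        # mean the correction is doing essentially nothing.
        'BRIGHT_SLOPE': bool(alpha < -0.5),
        # Enough matched sources to trust the fit; real science
        # images easily exceed this (~200-300 in the tests above).
        'NUM_MATCHES': bool(num_matches > 10),
    }
```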
Yeah, I don't really have any great suggestions, just had to comment on the comment really. Maybe just pasting this into a comment and replacing the # These values need justification. line would do it. And that way, if there are verifications failing in future, we can see that perhaps these values should be adjusted, as opposed to tests failing. (The other potential problem I can imagine is that if there's not much BF present in the first place (due to low depth in the observations, for example), or different sensors (because we're camera-agnostic, haha), then this could cause a fail here, so noting that these values shouldn't be taken as gospel would be good.)
This adds catalog support to cp_verify, and then uses that to compare two ISR runs: one with and one without brighter-fatter correction. If the sizes trend in the correct direction, then the correction has improved things and is likely good.
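The comparison described here could be sketched as follows. This is a rough illustration under assumed inputs (matched magnitude and size arrays from the two ISR runs), not the actual cp_verify interface; the sign convention takes the size difference as uncorrected minus corrected, so that a working correction yields a clearly negative slope.

```python
import numpy as np

# Illustrative sketch of the two-run comparison described in the
# summary: one ISR run without brighter-fatter correction, one with.
# Function name and argument layout are assumptions for illustration.
def bf_correction_improved(mag, size_no_bf, size_with_bf,
                           slope_limit=-0.5, min_matches=10):
    """Check that the BF correction shrank sources in the right trend."""
    if len(mag) <= min_matches:
        return False  # too few matched sources to trust the fit
    # Positive delta = how much the correction shrank each source.
    delta = size_no_bf - size_with_bf
    # Bright (low-magnitude) sources should shrink the most, so delta
    # falls with magnitude and the fitted slope is clearly negative.
    slope, _ = np.polyfit(mag, delta, 1)
    return bool(slope < slope_limit)
```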