DM-41108: add associated diaSources to diaPipe output #217
Conversation
The second commit seems to be trying to hack around an underlying problem, possibly in the ApPipe definition; I'm happy to help with diagnosing and fixing it.
Oh, I forgot: when rebasing, please update https://github.com/lsst/ap_verify/blob/main/pipelines/_ingredients/MetricsRuntime.yaml#L276-L278 to account for ….
Thinking about this further, I believe we want the current timing metric to only include the tasks up through ….
Then the metric is no longer even pretending to measure the running time of ….
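For concreteness, entries in that file generally follow the pattern sketched below, assuming lsst.verify's TimingMetricTask is the mechanism in use; the task label, metric name, and target value here are illustrative placeholders, not the actual lines under discussion.

```yaml
# Hypothetical excerpt in the style of MetricsRuntime.yaml; the label,
# metric name, and target below are placeholders for illustration only.
tasks:
  timing_diaPipe:
    class: lsst.verify.tasks.commonMetrics.TimingMetricTask
    config:
      connections.labelName: diaPipe       # task whose metadata is read
      connections.package: ap_pipe         # package defining the metric
      connections.metric: DiaPipelineTime  # metric name within that package
      # The timed method, as recorded in the task's metadata
      # (format is assumed here; check the config docstring).
      target: diaPipe.run
```

Adjusting the target, and hence which task's metadata timer is read, is one place where the "tasks up through" boundary discussed above would be expressed.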
(force-pushed from 8fd7c36 to 7ee01af)
I don't feel like the timing metric is arbitrary; it's tracking the wall-clock time of the core AP tasks that contribute towards our timing requirements. It is clearly not the entire time, since it can't include data transfer (etc.), but it's the part we have the most direct control over. The analysis_tools "afterburner" can happen after we send alerts, so I don't believe we need to include it in the timing metric that tracks the core pipeline time. Would it help if I changed the text in …?
It's arbitrary for two reasons:
- The most natural place to constrain time-to-alerts would be in Prompt Processing itself (….
- …
OK, I completely agree on two points: any official "time-to-alerts" metric should only come from Prompt Processing (and can't be captured by …). Separately, I do also want a timing metric for "core" AP, so that we can monitor the timing of just that portion of the pipeline with our CI datasets. This would be just to monitor trends over time, and not anything we would officially report. That can be done on a different ticket, though.
No objections as things stand. Thanks for hearing me out!
Have you run the tests (via `scons` and/or `stack-os-matrix`)?
Have you run `ap_verify.py` on at least one of the standard datasets?
For changes to metrics, the `print_metricvalues` script from `lsst.verify` will be useful.
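For reference, a run of that kind might look like the sketch below; the dataset name, workspace path, and `print_metricvalues` arguments are assumptions for illustration, so check the ap_verify and lsst.verify documentation for the exact options.

```sh
# Run the AP pipeline against one of the standard datasets
# (dataset name and output path are illustrative).
ap_verify.py --dataset ci_hits2015 --output workspace/hits/

# Print the metric values written during the run; exact arguments
# may differ from this sketch (see lsst.verify for details).
print_metricvalues workspace/hits/repo
```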