DM-10779: Implement running time metric(s) #3
Conversation
Force-pushed from 1d37418 to 3e7cca1.
Just three high-level comments:
I don't want us wrapping ap_pipe libraries in a special ApPipe object. I'd rather we call the required library functions directly from run_ap_verify().
I think slightly more descriptive names for api.py and performance.py would be useful.
# see <http://www.lsstcorp.org/LegalNotices/>.
#

"""Code for measuring software performance metrics.
Could this be called something more descriptive than "api.py"?
I couldn't think of one, and still can't. This file's purpose is to provide a single point of entry to the measurements sub-package, so that ap_verify.py doesn't need to know about implementation details like "which measurements have actually been implemented". Is there a name that communicates that to you?
What about compute_metrics? "API" suggests to me a library with multiple methods, not an abstraction layer.
A top-level docstring that better indicates the purpose of the package would help here--the code for measuring metrics is not actually in this file.
import lsst.verify
Similarly, could this be called something more descriptive than "performance.py"?
Such as? measurements/performance.py seems to quickly summarize what the code is for.
It depends where you plan to put other metrics code--it's all "performance" at some level. If you're thinking of small packages, runtime_metrics.py would be clearer.
"runtime" to me sounds like "not compile-time", but I see your point. How about profiling.py
?
(I'd prefer to avoid 'metrics' or 'measurement' in the name to avoid Department of Redundancy Department).
Okay, that works for me.
I am willing to replace the … However, I must strongly object to dumping the …
Okay, I'll let you refactor the code to be more functional, but I don't agree that …
Force-pushed from a9e032f to a4f0c47.
Metadata will be used to recover Measurements from individual pipeline Tasks, and to compute "afterburner" metrics. The Pipeline API has been changed to support metadata, and a method has been added to Pipeline to support current plans for Measurement handling.
Basic parsing is delegated to daf.persistence.Policy. The config loading has been factored into a singleton class to ensure options are only loaded once while decoupling it from other ap_verify code.
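The singleton pattern described above might look roughly like the following. This is only an illustrative sketch: the class name, keys, and the stand-in `_load` method are hypothetical, and real parsing would be delegated to daf.persistence.Policy rather than returning a hard-coded dict.

```python
class Config:
    """Loads ap_verify configuration once and caches it (illustrative sketch)."""

    _instance = None

    def __new__(cls):
        # Reuse the single cached instance, so options are parsed only once
        # and no other ap_verify code needs to know how loading works.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._settings = cls._load()
        return cls._instance

    @staticmethod
    def _load():
        # Stand-in for delegating parsing to daf.persistence.Policy;
        # the keys and values here are purely hypothetical.
        return {"timeout_s": 60}

    def get(self, key):
        return self._settings[key]
```

With this shape, `Config() is Config()` always holds, which is what decouples the rest of the code from the loading step.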
The measurement code has been put in a subpackage, measurements, which is expected to have many other measurements added to it in the future.
Package was not ready to add before.
This change reduces the conceptual complexity of ap_verify, and makes the data flow more obvious. Dependencies on the ap_pipe API are still contained in a separate module, where they can't clutter up the top-level logic.
Force-pushed from a4f0c47 to 25eb227.
This PR adds a function for extracting running time from Task metadata, and adds some infrastructure to ap_verify to support it and future features.
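A minimal sketch of what "extracting running time from Task metadata" could look like, assuming the metadata records start and end CPU times under per-task keys. The function name and key naming convention here are assumptions for illustration, not ap_verify's actual API; the real code would return an lsst.verify Measurement rather than a bare float.

```python
def measure_runtime(metadata, task_name):
    """Return the CPU time in seconds a task took, or None if not recorded.

    `metadata` is treated as a flat mapping; the ".StartCpuTime" /
    ".EndCpuTime" key suffixes are a hypothetical convention.
    """
    start = metadata.get(task_name + ".StartCpuTime")
    end = metadata.get(task_name + ".EndCpuTime")
    if start is None or end is None:
        # The task never ran (or timing was not recorded), so there is
        # no measurement to report.
        return None
    return end - start

meta = {"diffim.StartCpuTime": 10.0, "diffim.EndCpuTime": 12.5}
measure_runtime(meta, "diffim")  # → 2.5
```

Returning None for missing tasks matches the PR's goal of letting ap_verify skip measurements that have not been implemented or did not run.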