[MAINTENANCE] Clean Up ValidationGraph API Usage, Improve Exception Handling for Metrics, Clean Up Type Hints #3399
Conversation
✔️ Deploy Preview for niobium-lead-7998 ready!
🔨 Explore the source changes: 60c0151
🔍 Inspect the deploy log: https://app.netlify.com/sites/niobium-lead-7998/deploys/61428e8eef27ce000746270d
😎 Browse the preview: https://deploy-preview-3399--niobium-lead-7998.netlify.app
HOWDY! This is your friendly 🤖 CHANGELOG bot 🤖. Please don't forget to add a clear and succinct description of your change under the Develop header. ✨ Thank you! ✨
Mostly this looks great! Just a few clarifying comments and questions.
elif self._caching and v.id in self._metric_cache:
    metric_dependencies[k] = self._metric_cache[v.id]
else:
    raise ge_exceptions.MetricError(
I'm not sure if I'm understanding this right, but if the metric has not been calculated yet (but may be later), would this raise a MetricError?
@anthonyburdi This code focuses on evaluating the given metric. To evaluate it, all of its dependencies must already have been calculated and stored/cached; hence the code reflects this logic.
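The cache-or-raise lookup discussed above can be sketched as follows. This is an illustrative outline, not the Great Expectations implementation: `MetricError`, `_caching`, and `_metric_cache` mirror names in the snippet, while `SimpleResolver` and `gather_dependencies` are hypothetical stand-ins.

```python
class MetricError(Exception):
    """Raised when a required metric dependency has not been resolved."""


class SimpleResolver:
    """Hypothetical sketch of the cache-or-raise dependency lookup."""

    def __init__(self, caching: bool = True):
        self._caching = caching
        self._metric_cache: dict = {}

    def gather_dependencies(self, requested: dict) -> dict:
        metric_dependencies: dict = {}
        for k, metric_id in requested.items():
            if self._caching and metric_id in self._metric_cache:
                # The dependency was already computed; reuse the cached value.
                metric_dependencies[k] = self._metric_cache[metric_id]
            else:
                # Evaluating a metric requires every dependency to be
                # resolved first, so a missing cache entry is an error here.
                raise MetricError(
                    f"Metric dependency {metric_id!r} has not been resolved."
                )
        return metric_dependencies
```

In this sketch, a cache miss is never "try again later": by the time a metric is evaluated, its dependency subgraph is expected to be fully resolved, so the miss is surfaced immediately as an exception.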
if len(metric_fn_bundle) > 0:
-    resolved_metrics.update(self.resolve_metric_bundle(metric_fn_bundle))
+    try:
+        new_resolved = self.resolve_metric_bundle(metric_fn_bundle)
Very cool! So if this line fails, then we don't update the resolved_metrics dict.
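The point made above can be illustrated with a minimal sketch (again, not the actual Great Expectations code; `resolve_all` and its parameters are hypothetical): because the bundle is resolved inside the `try` block and the dict is only updated afterward, a failure leaves `resolved_metrics` exactly as it was.

```python
def resolve_all(resolved_metrics: dict, metric_fn_bundle, resolve_metric_bundle):
    """Resolve a bundle of metric functions without corrupting prior results."""
    if len(metric_fn_bundle) > 0:
        try:
            # Resolution happens first, into a temporary dict.
            new_resolved = resolve_metric_bundle(metric_fn_bundle)
        except Exception as err:
            # On failure, resolved_metrics has not been touched.
            raise RuntimeError(f"Bundle resolution failed: {err}") from err
        # Only merge once resolution has fully succeeded.
        resolved_metrics.update(new_resolved)
    return resolved_metrics
```

This "compute into a temporary, then commit" pattern keeps partially-failed work from leaking into the shared results dict.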
@@ -24,7 +24,7 @@ def __init__(
     self._metric_value_kwargs = metric_value_kwargs
     if metric_dependencies is None:
         metric_dependencies = {}
-    self.metric_dependencies = metric_dependencies
+    self._metric_dependencies = metric_dependencies
Since we are not doing anything special in the setter and getter, can we keep this attribute public?
@anthonyburdi We can -- if you think we should. Personally, I like the consistent property setter/getter style and keeping fields private. Thoughts?
Totally a stylistic call and up to you; this won't gate my approval. My rationale is that keeping the attribute public with default setters and getters makes the class less verbose, with the main benefit that setters and getters with custom logic stand out more clearly.
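The two styles being weighed above can be put side by side. This is a stylistic sketch, not code from the PR; both class names are hypothetical, and the behavior of the two is identical when the property does nothing special.

```python
class MetricConfigurationPrivate:
    """Private field exposed through a pass-through property (the PR's style)."""

    def __init__(self, metric_dependencies=None):
        if metric_dependencies is None:
            metric_dependencies = {}
        self._metric_dependencies = metric_dependencies

    @property
    def metric_dependencies(self):
        return self._metric_dependencies

    @metric_dependencies.setter
    def metric_dependencies(self, value):
        self._metric_dependencies = value


class MetricConfigurationPublic:
    """Plain public attribute (the less verbose alternative suggested)."""

    def __init__(self, metric_dependencies=None):
        self.metric_dependencies = (
            metric_dependencies if metric_dependencies is not None else {}
        )
```

Since Python properties can be introduced later without changing the class's public interface, the public-attribute version loses nothing until custom get/set logic is actually needed, which is the reviewer's point; the property version wins on uniformity when most fields already follow that convention.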
LGTM! Thanks for the clarifications.
* develop:
  * Release Prep release-prep-2021-09-16 (#3402)
  * [MAINTENANCE] Clean Up ValidationGraph API Usage, Improve Exception Handling for Metrics, Clean Up Type Hints (#3399)
  * [FEATURE] Configurable multi-threaded checkpoint speedup (#3362)
  * [BUGFIX] fix error getting validation result from DataContext (#3359)
  * Fix incorrect ToC section name (#3395)
  * Bugfix/skip substitute config variables in ge cloud mode (#3393)
  * [BUGFIX] fixed typo and added CLA links (#3347)
  * [MAINTENANCE] Clean up ValidationGraph API and add Type Hints (#3392)
  * Enhancement/update _set methods with kwargs (#3391)
Note
This work incorporates part of #2412 (other parts of it will be the focus of a follow-on PR).