
feat: support model monitoring for batch prediction in Vertex SDK #1570

Merged
merged 8 commits into from Aug 16, 2022
Merged

Conversation

rosiezou
Contributor

@rosiezou rosiezou commented Aug 4, 2022

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:

  • Make sure to open an issue as a bug/issue before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
  • Ensure the tests and linter pass
  • Code coverage does not decrease (if any source code was changed)
  • Appropriate docs were updated (if necessary)

Fixes #<issue_number_goes_here> 🦕

@rosiezou rosiezou requested a review from a team as a code owner August 4, 2022 22:35
@product-auto-label product-auto-label bot added size: l Pull request size is large. api: vertex-ai Issues related to the googleapis/python-aiplatform API. labels Aug 4, 2022
@rosiezou rosiezou self-assigned this Aug 5, 2022
Optional. The objective config for model monitoring. Passing this parameter enables
monitoring on the model associated with this batch prediction job.
model_monitoring_alert_config (aiplatform.model_monitoring.EmailAlertConfig):
Optional. Configures how model monitoring alerts are sent to the user. Right now
Contributor

Are these docstrings copied from the source at https://github.com/googleapis/googleapis/tree/master/google/cloud/aiplatform? Will we be able to remember to update this when/if alerts other than email alert become supported?

Contributor Author

They're not directly copied from GAPIC. But Jing's team also confirmed that there are no plans for additional alert configs.

@@ -702,6 +759,11 @@ def create(
sync=sync,
create_request_timeout=create_request_timeout,
)
# TODO: remove temporary re-import statements once model monitoring for batch prediction is GA
Contributor

Since there are several of these throughout the PR, perhaps add a tracking bug number so the TODO comments look like # TODO(b/.....): remove... with a common bug number.

Contributor Author

Buganizer component created & assigned.

@@ -17,9 +17,16 @@

from typing import Optional, List
from google.cloud.aiplatform_v1.types import (
model_monitoring as gca_model_monitoring,
model_monitoring as gca_model_monitoring_v1,
Contributor

Should we use compat for these imports?

Contributor Author

compat by default imports from the GA version of GAPIC, unless DEFAULT_VERSION is set to v1beta1. I tried to see if the import aliases would index into the correct version by simply setting DEFAULT_VERSION = 'v1beta1' and then switching it back to v1 on an ad-hoc basis, but it doesn't dynamically index the way I was hoping for. I think it's because the symbol table isn't automatically rewritten unless we explicitly re-import. That's why I imported both the v1 and v1beta1 versions explicitly.
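The import-time binding behavior described here can be shown with a small self-contained sketch. The modules below are synthetic stand-ins for the v1/v1beta1 GAPIC packages (names are illustrative, not the actual `compat` implementation): once a name is bound via a `from ... import`-style lookup, flipping the version switch later does not re-resolve it.

```python
import types

# Synthetic stand-ins for the two GAPIC API versions.
v1 = types.ModuleType("v1")
v1.model_monitoring = "v1 proto"
v1beta1 = types.ModuleType("v1beta1")
v1beta1.model_monitoring = "v1beta1 proto"

DEFAULT_VERSION = "v1"
versions = {"v1": v1, "v1beta1": v1beta1}

# Equivalent to `from compat import model_monitoring`: the binding is
# copied into the local symbol table once, at import time.
model_monitoring = versions[DEFAULT_VERSION].model_monitoring

# Switching the default afterwards does NOT rewrite the existing binding.
DEFAULT_VERSION = "v1beta1"
assert model_monitoring == "v1 proto"  # still the v1 symbol

# Hence both versions are imported explicitly under distinct aliases.
model_monitoring_v1 = v1.model_monitoring
model_monitoring_v1beta1 = v1beta1.model_monitoring
```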

def as_proto(self):
"""Returns EmailAlertConfig as a proto message."""
# TODO: remove config_for_bp parameter when model monitoring for batch prediction is GA
def as_proto(self, config_for_bp: Optional[bool] = False):
Contributor

This typing annotation should be just config_for_bp: bool = False.
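For background on this suggestion: `Optional[bool]` means `Union[bool, None]`, i.e. it advertises that `None` is an accepted argument, which is not intended here. A minimal illustration (function names are hypothetical, not from the PR):

```python
from typing import Optional

def as_proto_loose(config_for_bp: Optional[bool] = False) -> str:
    # Optional[bool] tells type checkers that None is a valid value,
    # so as_proto_loose(None) passes type checking even though None
    # was never meant to be accepted.
    return "bp" if config_for_bp else "default"

def as_proto_strict(config_for_bp: bool = False) -> str:
    # bool = False is the accurate annotation: the default is False,
    # and the parameter is never None.
    return "bp" if config_for_bp else "default"
```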

Contributor Author

updated

def as_proto(self):
"""Returns _ObjectiveConfig as a proto message."""
# TODO: remove config_for_bp parameter when model monitoring for batch prediction is feature complete and in GA
def as_proto(self, config_for_bp: Optional[bool] = False):
Contributor

Same comment as above about typing annotation here.

Contributor Author

updated

io as gca_io,
model_monitoring as gca_model_monitoring,
)

# constants used for testing
USER_EMAIL = ""
MODEL_NAME = "churn"
Contributor

Is this used as the model's display_name? If so, it may be better to call it MODEL_DISPLAY_NAME, since "model name" could be understood as its full resource name.

If this is indeed intended as a display name of a resource used only for testing, please prefix it with "temp".

Contributor Author

I updated the variable names to be more specific. The extra prefixing is unnecessary, though, because the actual display names are created by _make_display_name in e2e_base.py, which appends a prefix set by the class (in this case 'temp_e2e_model_monitoring_test_').
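For readers unfamiliar with that test utility, here is a minimal sketch of the prefixing behavior described. This is an illustration of the pattern only, not the actual `_make_display_name` implementation from e2e_base.py:

```python
import uuid

def make_display_name(
    key: str, prefix: str = "temp_e2e_model_monitoring_test_"
) -> str:
    # Prepend the class-level prefix to the short key, and append a
    # unique suffix so concurrent test runs do not collide on names.
    return f"{prefix}{key}_{uuid.uuid4().hex[:8]}"

name = make_display_name("churn")
# e.g. "temp_e2e_model_monitoring_test_churn_3fa85f64"
```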

Contributor

@dizcology dizcology Aug 11, 2022

Thanks. In that case, can we rename this further and call it something like MODEL_DISPLAYNAME_KEY, to prevent the next engineer reading this from misinterpreting it as the actual model display name? More generally, what's the meaning of "churn" in the context of testing?

Contributor Author

I've updated the variable name. Regarding the actual value string, "churn" is just a shorthand used to reference the BQ dataset. The same dataset and pre-trained model were used in the example notebook on the Cloud SDK documentation page and in a separate example for BQML.

@rosiezou rosiezou added kokoro:force-run Add this label to force Kokoro to re-run the tests. and removed kokoro:force-run Add this label to force Kokoro to re-run the tests. labels Aug 15, 2022
@yoshi-kokoro yoshi-kokoro removed kokoro:force-run Add this label to force Kokoro to re-run the tests. labels Aug 16, 2022
Contributor

@SinaChavoshi SinaChavoshi left a comment

Approving from MLMD, as this is a no-op on MLMD side.
