
Fix accuracy score feature name in model validation #6729

Merged — 2 commits merged into master from fix_accuracy on Sep 7, 2022

Conversation

@jerrylian-db (Collaborator) commented on Sep 7, 2022

Signed-off-by: Jerry Liang jerry.liang@databricks.com

Related Issues/PRs

#6593

What changes are proposed in this pull request?

#6593 broke the model validation Python example. This PR fixes that example, adds some usability improvements to model validation, and makes a few documentation fixes.
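For reference, a minimal sketch of the fixed validation call (assuming an MLflow version of that era, ≥ 1.29; `candidate_model_uri`, `eval_data`, and `baseline_model_uri` are hypothetical placeholders). The key point is that the builtin classifier metric is now keyed `accuracy_score` rather than `accuracy`:

```python
import mlflow
from mlflow.models import MetricThreshold

# Hypothetical threshold config using the corrected metric key "accuracy_score".
thresholds = {
    "accuracy_score": MetricThreshold(
        threshold=0.8,             # candidate accuracy must be >= 0.8
        min_absolute_change=0.05,  # and exceed the baseline by >= 0.05
        min_relative_change=0.05,  # and by >= 5% relative to the baseline
        higher_is_better=True,
    ),
}

with mlflow.start_run():
    mlflow.evaluate(
        candidate_model_uri,  # hypothetical URI of a logged candidate model
        eval_data,            # hypothetical evaluation DataFrame
        targets="label",
        model_type="classifier",
        validation_thresholds=thresholds,
        baseline_model=baseline_model_uri,  # hypothetical baseline model URI
    )
```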

How is this patch tested?

  • I have written tests (not required for typo or doc fix) and confirmed the proposed feature/bug-fix/change works.

Does this PR change the documentation?

  • No. You can skip the rest of this section.
  • Yes. Make sure the changed pages / sections render correctly by following the steps below.
  1. Click the Details link on the Preview docs check.
  2. Find the changed pages / sections and make sure they render correctly.

See that accuracy_score has been updated in the mlflow.evaluate() API docs.

[Screenshot, Sep 7, 2022: accuracy_score updated in the mlflow.evaluate() API docs]

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/pipelines: Pipelines, Pipeline APIs, Pipeline configs, Pipeline Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

@jerrylian-db jerrylian-db self-assigned this Sep 7, 2022
```python
)
# If you would like to catch model validation failures, you can add try except clauses around
# the mlflow.evaluate() call and catch the ModelValidationFailedException, imported at the top
# of this file.
```
@jerrylian-db (Collaborator, Author) commented:

Getting rid of the guidance to catch model validation exceptions, so that future breaking changes to model validation surface as failures we can catch instead of being silently handled.
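For context, the removed comment described a pattern like the following sketch. The import path of `ModelValidationFailedException` is an assumption and may differ across MLflow versions; `thresholds`, `candidate_model_uri`, `eval_data`, and `baseline_model_uri` are hypothetical placeholders:

```python
import mlflow
# Assumed import path; may vary across MLflow versions.
from mlflow.models.evaluation.validation import ModelValidationFailedException

try:
    mlflow.evaluate(
        candidate_model_uri,  # hypothetical candidate model URI
        eval_data,            # hypothetical evaluation data
        targets="label",
        model_type="classifier",
        validation_thresholds=thresholds,  # hypothetical MetricThreshold dict
        baseline_model=baseline_model_uri,
    )
except ModelValidationFailedException as err:
    # The validation thresholds were not met; handle or report the failure.
    print(f"Model validation failed: {err}")
```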

```diff
@@ -1092,11 +1092,14 @@ def evaluate(
     if evaluator_config.get("_disable_candidate_model", False):
         evaluation_result = EvaluationResult(metrics=dict(), artifacts=dict())
     else:
         if baseline_model:
             _logger.info("Evaluating candidate model:")
```
@jerrylian-db (Collaborator, Author) commented:

When I was looking at the model evaluation logs, I couldn't tell whether they belonged to the candidate or the baseline model. I'm adding these log lines so that users can tell the two apart more easily.
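A hypothetical sketch of the pattern this hunk introduces: prefix each evaluation pass with a log line so candidate and baseline output are distinguishable when there is a baseline to compare against (`run_evaluations` and `evaluate_fn` are illustrative names, not MLflow APIs):

```python
import logging

_logger = logging.getLogger(__name__)

def run_evaluations(evaluate_fn, candidate_model, baseline_model=None):
    # Only label the passes when a baseline exists; otherwise the logs
    # unambiguously belong to the single model being evaluated.
    if baseline_model is not None:
        _logger.info("Evaluating candidate model:")
    candidate_result = evaluate_fn(candidate_model)

    baseline_result = None
    if baseline_model is not None:
        _logger.info("Evaluating baseline model:")
        baseline_result = evaluate_fn(baseline_model)
    return candidate_result, baseline_result
```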

@github-actions bot added the rn/none (List under Small Changes in Changelogs) label on Sep 7, 2022
@dbczumar (Collaborator) left a comment:

LGTM! Thanks @jerrylian-db !

@jerrylian-db jerrylian-db merged commit 1f7f9dc into master Sep 7, 2022
@jerrylian-db jerrylian-db deleted the fix_accuracy branch September 22, 2022 16:24
nnethery pushed a commit to nnethery/mlflow that referenced this pull request on Feb 1, 2024:

* wip
* wip

Signed-off-by: Jerry Liang <jerry.liang@databricks.com>
Labels
rn/none List under Small Changes in Changelogs.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

2 participants