
Fix & simplify pytorch / statsmodels logging tests #4724

Merged (4 commits, Aug 23, 2021)

Conversation

dbczumar (Collaborator) commented on Aug 23, 2021

What changes are proposed in this pull request?

When #4719 was merged, it broke a couple of test cases that needlessly modify the tracking URI used by MLflow and tear down a temporary tracking directory midway through the test. This PR refactors those test cases to reduce complexity while still covering the necessary functionality.
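
For illustration only, here is a minimal sketch of the simpler shape such a test can take: the tracking URI is pointed once at a pytest-managed temporary directory, and log_model() is exercised inside a single run, with no mid-test URI swapping or directory teardown. The test name, the tmp_path fixture usage, and the sklearn flavor are assumptions for the sketch, not the code changed in this PR.

import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LinearRegression


def test_log_model_uses_temporary_tracking_dir(tmp_path):
    # Point MLflow at a pytest-managed temporary directory once; pytest removes
    # it after the test, so no manual teardown or URI restoration is needed.
    mlflow.set_tracking_uri(tmp_path.joinpath("mlruns").as_uri())

    model = LinearRegression().fit(np.array([[1.0], [2.0]]), np.array([1.0, 2.0]))

    with mlflow.start_run() as run:
        mlflow.sklearn.log_model(model, artifact_path="model")

    # The logged model can be reloaded from the run and produces the same predictions.
    loaded = mlflow.sklearn.load_model("runs:/{}/model".format(run.info.run_id))
    np.testing.assert_allclose(
        loaded.predict(np.array([[3.0]])), model.predict(np.array([[3.0]]))
    )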

How is this patch tested?

Integration tests

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Signed-off-by: dbczumar <corey.zumar@databricks.com>
github-actions bot added the area/build (Build and test infrastructure for MLflow) and rn/none (List under Small Changes in Changelogs) labels on Aug 23, 2021
@@ -215,29 +215,18 @@ def test_signature_and_examples_are_saved_correctly(sequential_model, data):
@pytest.mark.large
@pytest.mark.parametrize("scripted_model", [True, False])
def test_log_model(sequential_model, data, sequential_predicted):
    old_uri = tracking.get_tracking_uri()
    # should_start_run tests whether or not calling log_model() automatically starts a run.
    for should_start_run in [False, True]:
dbczumar (Collaborator, Author) commented:
The behavior tested by this case can be captured by simplifying to should_start_run=False.
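
As a rough sketch of that simplification for the PyTorch test above (the fixture names are taken from the diff; everything else is an assumption, not the PR's exact code), the loop collapses to the should_start_run=False branch while still verifying that log_model() starts a run on its own:

import mlflow
import mlflow.pytorch


def test_log_model(sequential_model, data, sequential_predicted):
    # Keep only the should_start_run=False path: log_model() is expected to
    # start a run automatically when none is active, so the test only has to
    # clean that run up afterwards.
    try:
        artifact_path = "model"
        mlflow.pytorch.log_model(sequential_model, artifact_path=artifact_path)
        model_uri = "runs:/{}/{}".format(mlflow.active_run().info.run_id, artifact_path)

        reloaded = mlflow.pytorch.load_model(model_uri)
        assert reloaded is not None
        # (The full test would presumably also compare reloaded predictions on
        # the `data` fixture against the sequential_predicted fixture.)
    finally:
        mlflow.end_run()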

model = statsmodels_model.model
with TempDir(chdr=True, remove_on_exit=True) as tmp:
    for should_start_run in [False, True]:
dbczumar (Collaborator, Author) commented:
The behavior tested by this case can be captured by simplifying to should_start_run=False.
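
And a similarly rough sketch for the statsmodels case (again with assumed names and an illustrative parameter check, not the PR's exact code), dropping the loop but keeping the temporary working directory:

import mlflow
import mlflow.statsmodels
import numpy as np
from mlflow.utils.file_utils import TempDir


def test_log_model(statsmodels_model):
    model = statsmodels_model.model
    with TempDir(chdr=True, remove_on_exit=True):
        # should_start_run=False path only, mirroring the PyTorch sketch above.
        try:
            artifact_path = "model"
            mlflow.statsmodels.log_model(model, artifact_path=artifact_path)
            model_uri = "runs:/{}/{}".format(mlflow.active_run().info.run_id, artifact_path)

            reloaded = mlflow.statsmodels.load_model(model_uri)
            # Fitted parameters should survive the save/load round trip.
            np.testing.assert_array_almost_equal(model.params, reloaded.params)
        finally:
            mlflow.end_run()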

harupy (Member) left a comment:
LGTM

dbczumar merged commit 17bb0bb into mlflow:master on Aug 23, 2021
Labels

area/build (Build and test infrastructure for MLflow), rn/none (List under Small Changes in Changelogs)

2 participants