Enable system metrics logging for resuming an existing run #10312
Conversation
Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com> Signed-off-by: Chen Qian <chenmoney@google.com>
@danielyxyang Thanks for your help! But please don't start contributing by doing code review. Usually we assign code review tasks to OSS contributors after they have nailed several solid PRs; otherwise the review could be distracting and increase our overhead.
Oops, sorry about that! I'll let you do your work then :)
# Unset the environment variables to avoid affecting other test cases.
mlflow.disable_system_metrics_logging()
mlflow.set_system_metrics_sampling_interval(None)
mlflow.set_system_metrics_samples_before_logging(None)
you might want to add this as an autouse fixture in case an error occurs before we reach these lines.
nice call.
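A minimal sketch of the autouse-fixture idea from this thread. The `mlflow` object below is a recording stand-in (hypothetical, so the sketch stays self-contained); the real test file would decorate the generator with `@pytest.fixture(autouse=True)` and call the actual `mlflow` functions. The point is that yield-style teardown runs even if the test body errors out before reaching its own cleanup lines:

```python
calls = []

class FakeMlflow:
    """Stand-in for the mlflow module; records which cleanup calls ran."""
    def disable_system_metrics_logging(self):
        calls.append("disable")
    def set_system_metrics_sampling_interval(self, value):
        calls.append(("interval", value))
    def set_system_metrics_samples_before_logging(self, value):
        calls.append(("samples", value))

mlflow = FakeMlflow()

def clean_system_metrics_config():
    # In pytest this would carry @pytest.fixture(autouse=True).
    yield
    # Teardown: unset the config so other tests are unaffected,
    # even when the test body raised before its own cleanup lines.
    mlflow.disable_system_metrics_logging()
    mlflow.set_system_metrics_sampling_interval(None)
    mlflow.set_system_metrics_samples_before_logging(None)

# Drive the generator the way pytest would: setup, then teardown.
fixture = clean_system_metrics_config()
next(fixture)            # setup phase (nothing to do before the test)
try:
    next(fixture)        # teardown phase runs here
except StopIteration:
    pass
```

With `autouse=True`, every test in the module gets this cleanup without opting in, which is exactly what protects against an early assertion failure skipping the unset calls.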
expected_metrics_name = [f"system/{name}" for name in expected_metrics_name]
for name in expected_metrics_name:
    assert name in metrics
Suggested change (replacing the loop above):
expected_metric_names = [f"system/{name}" for name in expected_metrics_name]
assert sorted(metrics) == expected_metric_names
it's not guaranteed to be equivalent. The expected set does not contain GPU metrics, which are logged when a GPU is available.
got it
assert metrics.keys() >= set(expected_metrics_name)
might work.
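To illustrate why the superset check works: a dict's `.keys()` view supports set operators, so `>=` tolerates extra keys such as GPU metrics that only appear on GPU hosts. The metric names below are illustrative values, not the full set the test uses:

```python
# Logged metrics may include extras (e.g. GPU metrics on a GPU host).
metrics = {
    "system/cpu_utilization_percentage": 12.5,
    "system/memory_usage_megabytes": 2048.0,
    "system/gpu_utilization_percentage": 33.0,  # present only when a GPU exists
}

# The expected list deliberately omits GPU metrics.
expected_metrics_name = [
    "system/cpu_utilization_percentage",
    "system/memory_usage_megabytes",
]

# dict.keys() is a set-like view: ">=" is a superset test,
# so the extra GPU key does not break the assertion.
assert metrics.keys() >= set(expected_metrics_name)
```

Unlike `sorted(metrics) == expected`, this assertion passes on both CPU-only and GPU machines while still catching any missing expected metric.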
# Pause for a bit to allow the system metrics monitoring to exit.
time.sleep(1)
thread_names = [thread.name for thread in threading.enumerate()]
Can we call join on the threads collected by threading.enumerate?
I kinda prefer to align the testing code with users' code; users won't explicitly join threads returned by threading.enumerate.
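For context on this exchange: `threading.enumerate()` does return live `Thread` objects, so joining them is technically possible; the test just prefers a sleep so it observes the same thing a user would (thread names). A small self-contained sketch, using a fake worker and an assumed thread name (the real monitor thread's name may differ):

```python
import threading
import time

def fake_monitor():
    # Stand-in for the system metrics monitoring loop.
    time.sleep(0.2)

t = threading.Thread(target=fake_monitor, name="SystemMetricsMonitor", daemon=True)
t.start()

# threading.enumerate() returns live Thread objects, so callers *could*
# join them -- but user code typically only inspects names, as the test does.
names_while_running = [th.name for th in threading.enumerate()]

t.join()          # joining works here because we still hold the Thread object
time.sleep(0.1)   # mirror the test's grace period for thread teardown
names_after_exit = [th.name for th in threading.enumerate()]
```

Once the thread has exited, `threading.enumerate()` no longer lists it, which is exactly the condition the test's name check relies on.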
metrics_history = mlflow.tracking.MlflowClient().get_metric_history(
    run.info.run_id, "system/cpu_utilization_percentage"
)
assert metrics_history[-1].step > last_step
Can we make this assertion stricter? For example:
assert [m.step for m in metrics_history] == expected_steps
the behavior here is not deterministic because of Python thread management.
what's not deterministic? The number of steps?
yes, the number of steps is not guaranteed.
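Since the exact step count depends on thread scheduling, the assertion checks only what resuming guarantees: new steps were appended after the last pre-resume step. A sketch with a `namedtuple` standing in (hypothetically) for the Metric entities that `get_metric_history` returns, and illustrative step/value data:

```python
from collections import namedtuple

# Hypothetical stand-in for the Metric entities returned by
# MlflowClient.get_metric_history; only the `step` field matters here.
Metric = namedtuple("Metric", ["step", "value"])

# Illustrative history: steps 0-1 from the original run, step 2 after resuming.
metrics_history = [Metric(0, 11.0), Metric(1, 12.5), Metric(2, 9.8)]
last_step = 1  # last step recorded before the run was resumed

# The sampling thread's timing is not deterministic, so assert only that
# resuming appended new steps past `last_step`, not an exact step list.
assert metrics_history[-1].step > last_step

# A slightly stronger but still scheduling-safe property: steps never go backwards.
steps = [m.step for m in metrics_history]
assert steps == sorted(steps)
```

An exact `== expected_steps` comparison would flake whenever the monitor thread fires more or fewer times than expected.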
import mlflow
from mlflow.system_metrics.system_metrics_monitor import SystemMetricsMonitor


@pytest.fixture(scope="module", autouse=True)
def clean_mlruns_dir():
can we rename this fixture?
should the scope be module or function? should we run this after each test?
sorry my bad.
LGTM
Related Issues/PRs
Resolves #10253
What changes are proposed in this pull request?
Enable system metrics logging for resuming an existing run.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/gateway: AI Gateway service, Gateway client APIs, third-party Gateway integrations
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes