Fix lint issues #5305

Merged
harupy merged 1 commit into mlflow:master on Jan 25, 2022

Conversation

harupy (Member) commented on Jan 25, 2022

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>

What changes are proposed in this pull request?

Fix a few lint issues.

How is this patch tested?

Lint check

Does this PR change the documentation?

  • No. You can skip the rest of this section.
  • Yes. Make sure the changed pages / sections render correctly by following the steps below.
  1. Check the status of the ci/circleci: build_doc check. If it's successful, proceed to the next step; otherwise, fix it.
  2. Click Details on the right to open the job page of CircleCI.
  3. Click the Artifacts tab.
  4. Click docs/build/html/index.html.
  5. Find the changed pages / sections and make sure they render correctly.

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
@github-actions bot added the area/build (Build and test infrastructure for MLflow) and rn/none (List under Small Changes in Changelogs) labels on Jan 25, 2022
@WeichenXu123 (Collaborator) left a comment

LGTM

@harupy harupy merged commit 4a257c2 into mlflow:master Jan 25, 2022
BenWilson2 added a commit that referenced this pull request Feb 1, 2022
* onnxruntime InferenceSession fix
Some distributions of onnxruntime, e.g. onnxruntime-gpu, require the providers argument to be specified when calling InferenceSession. The package import does not differentiate which architecture-specific distribution has been installed, as all are imported as onnxruntime. The onnxruntime documentation says that from v1.9.0 some distributions require the providers list to be passed when creating an InferenceSession. The nested try/except structure therefore first attempts to create an inference session with just the model path, as pre-v1.9.0; if that fails, it retries with the providers list as part of the InferenceSession call. At the moment the list covers just CUDA and CPU, and probably should be expanded.
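
A minimal sketch of that fallback, assuming onnxruntime is installed; the model path is illustrative:

```python
import onnxruntime

# Illustrative path to a serialized ONNX model.
model_path = "model.onnx"

try:
    # Pre-v1.9.0 style: no providers argument.
    session = onnxruntime.InferenceSession(model_path)
except ValueError:
    # v1.9.0+ GPU distributions require an explicit providers list.
    session = onnxruntime.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
```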

* Refactor onnxruntime call and providers list
The onnxruntime call now catches ValueError specifically. The initial attempt loads the model using just the path to the .onnx file; on ValueError, the v1.9.0-style call with providers is used. The providers list argument is now obtained from the MLmodel file, whose metadata is assumed to be in the same directory as the .onnx model file. The save_model() function still needs updating. (See the sketch below.)
Signed-off-by: Ed Morris <ecm200@gmail.com>
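
A rough sketch of this loading scheme; the MLmodel parsing and the "providers" key below are assumptions for illustration, not MLflow's exact schema:

```python
import os

import onnxruntime
import yaml

DEFAULT_PROVIDERS = ["CUDAExecutionProvider", "CPUExecutionProvider"]

def load_session(model_dir):
    # Assumes the MLmodel metadata file sits next to the .onnx file;
    # the "providers" key is illustrative, not MLflow's exact field name.
    with open(os.path.join(model_dir, "MLmodel")) as f:
        meta = yaml.safe_load(f)
    providers = meta.get("flavors", {}).get("onnx", {}).get(
        "providers", DEFAULT_PROVIDERS
    )
    onnx_path = os.path.join(model_dir, "model.onnx")
    try:
        # Pre-v1.9.0 call signature.
        return onnxruntime.InferenceSession(onnx_path)
    except ValueError:
        # v1.9.0+ call with the providers list from the metadata.
        return onnxruntime.InferenceSession(onnx_path, providers=providers)
```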

* onnxruntime providers argument added to log_model
For the ONNX model flavor, added the ability to provide an onnxruntime execution providers list. The onnx_execution_providers argument is passed through the log_model function as a kwarg and on to save_model via kwargs pass-through. The flavor metadata now saves the providers field; if not specified, the default is saved to file. Tested and works for MLmodel files with and without the new field. (See the usage sketch below.)
Signed-off-by: Ed Morris <ecm200@gmail.com>
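
A hedged usage sketch of passing the providers list at logging time; the model file and run context are illustrative, and the argument name follows the description above (spelling normalized to onnx_execution_providers):

```python
import mlflow
import onnx

# Illustrative: load a previously exported ONNX model.
onnx_model = onnx.load("model.onnx")

with mlflow.start_run():
    # The providers list is forwarded via kwargs to save_model and
    # written into the flavor metadata as described above.
    mlflow.onnx.log_model(
        onnx_model,
        "model",
        onnx_execution_providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
```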

* Code comment formatting changes

Code comments formatting corrections

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>

* Code comment formatting changes

Code formatting changes.

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>

* Code comment formatting changes

Code formatting changes

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>

* Black reformat changes to onnx.py
Ran Black to check the formatting of onnx.py
Signed-off-by: Ed Morris <ecm200@gmail.com>

* Fix lint issues (#5305)

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: Ed Morris <ecm200@gmail.com>

* Remove num examples (#5304)

Signed-off-by: dbczumar <corey.zumar@databricks.com>
Signed-off-by: Ed Morris <ecm200@gmail.com>

* Run `apt-get update` before installing `libopenblas-dev` (#5313)

* run sudo update

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>

* fix typo

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>

* fix typo again

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: Ed Morris <ecm200@gmail.com>

* Removed catch-all exception and a linting issue.
Removed the catch-all exception from the InferenceSession call and fixed a linting issue.
Signed-off-by: Ed Morris <ecm200@gmail.com>

* Autoformat: https://github.com/mlflow/mlflow/actions/runs/1779614231

Signed-off-by: mlflow-automation <mlflow-automation@users.noreply.github.com>

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: mlflow-automation <mlflow-automation@users.noreply.github.com>
mehtayogita pushed a commit to mehtayogita/mlflow that referenced this pull request Feb 3, 2022
mehtayogita added a commit that referenced this pull request Feb 4, 2022
* Condense run comparison table (#5306)

* init

* update

* use antd tooltip

* support safari

* fix js lint

* update test

* fix

* update var names

* test1

* adjust col width by state

* use state control visibility

* use react ref

* fix lint

* address comments

* clean

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>

* onnxruntime InferenceSession fix to bug #5308 (#5317)

Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>

* Make it easy to find how to log a model signature.

Add one line in the model signature introduction section and link from the introduction to the detailed section.

Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>

* Fix the link to how to log models with signatures section.

Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>

* Fix the link to how to log models with signatures. Verified by running a Sphinx build locally.

Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>

* Update docs/source/models.rst

Co-authored-by: Ankit Mathur <52183359+ankit-db@users.noreply.github.com>
Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>

Co-authored-by: WeichenXu <weichen.xu@databricks.com>
Co-authored-by: Dr Ed Morris <34489160+ecm200@users.noreply.github.com>
Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: mlflow-automation <mlflow-automation@users.noreply.github.com>
Co-authored-by: Ankit Mathur <52183359+ankit-db@users.noreply.github.com>
Labels
area/build (Build and test infrastructure for MLflow), rn/none (List under Small Changes in Changelogs)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

2 participants