
Migrate Mlflow API request to databricks sdk authentication way and support OAuth #12011

Merged: 43 commits merged into mlflow:master on Jun 1, 2024

Conversation

WeichenXu123 (Collaborator) commented May 15, 2024

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

pip install git+https://github.com/mlflow/mlflow.git@refs/pull/12011/merge

Checkout with GitHub CLI

gh pr checkout 12011

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

  • (1) Use the Databricks SDK API to send REST API requests, so that MLflow can support the various kinds of authentication the SDK provides, including OAuth (a minimal sketch appears below).
  • (2) The Databricks SDK does not support several authentication methods that current MLflow supports; to ensure backward compatibility, I added fallback code and did some refactoring to make the code clearer.
  • (3) The Databricks SDK does not support reading credential token values, but get_databricks_env_vars and MlflowCredentialContext require reading the token value, so I kept the relevant token-reading code and refactored it. Note that get_databricks_env_vars is used in some customer code, such as Ray-on-Spark, to set up MLflow authentication in remote Ray tasks, so we still need to support it.

Because of (2) and (3), at the current stage we still have to keep some of the legacy databricks-cli Python module code.
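As a rough illustration of (1), here is a hedged sketch of routing an MLflow REST call through the SDK's API client; the endpoint and parameters are illustrative, not this PR's exact code:

# Hedged sketch, not the PR's exact implementation: WorkspaceClient resolves
# credentials automatically (PAT, OAuth M2M, env vars, ~/.databrickscfg) and
# its ApiClient sends the authenticated request.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # credential resolution happens here
resp = w.api_client.do(
    "GET",
    "/api/2.0/mlflow/experiments/get-by-name",  # illustrative endpoint
    query={"experiment_name": "/Users/me@example.com/my-exp"},
)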

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Manual test with OAuth

  1. Prepare OAuth credentials, including "client_id" and "client_secret"; see the guide at https://docs.databricks.com/en/dev-tools/auth/oauth-m2m.html
  2. You can tell MLflow to use OAuth in one of the following ways:

a) Set three environment variables: DATABRICKS_HOST, DATABRICKS_CLIENT_ID, and DATABRICKS_CLIENT_SECRET
b) Edit the ~/.databrickscfg file, adding a section for a specific profile, like:

[DEFAULT]
host  = https://e2-demo-field-eng.cloud.databricks.com/
client_id = <your client id>
client_secret = <your client secret>

In your MLflow application (assuming you are running it on your local machine and it needs to log data to a Databricks shard), set the MLflow tracking URI to either "databricks" or "databricks://<your-profile-name>"; if the tracking URI contains no profile name, the DEFAULT profile is used when reading the ~/.databrickscfg file.
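For example, a minimal snippet (assuming the DEFAULT profile or the environment variables above are configured):

import mlflow

# Use the DEFAULT profile from ~/.databrickscfg (or the DATABRICKS_* env vars):
mlflow.set_tracking_uri("databricks")
# Or target a named profile explicitly:
# mlflow.set_tracking_uri("databricks://<your-profile-name>")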

Then you can run MLflow test code like:

result = mlflow.create_experiment("/Users/weichen.xu@databricks.com/my-mlflow-exp-12345")
print(result)

result = mlflow.get_experiment("642681409487128")
print(result)

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

github-actions bot commented May 15, 2024

Documentation preview for 038911c will be available when this CircleCI job completes successfully.

@WeichenXu123 changed the title from "[Draft] Migrate Mlflow API request to databricks sdk authentication way and support OAuth" to "Migrate Mlflow API request to databricks sdk authentication way and support OAuth" on May 16, 2024
@BenWilson2 (Member) commented:

I like the fallback approaches :) Let's make sure that we test the SDK functionality in serverless and Jobs to confirm that everything works in those runtime variants.

raw=True,
**extra_kwargs,
)
return raw_response["contents"]._response
Member:

Accessing _response seems scary. Why do we need it?

Collaborator (Author):

This is for getting the original requests.Response object. The reason is that MLflow deployments support LLM streaming, which calls the requests.Response.iter_lines method, so returning the original requests.Response object avoids modifying the MLflow deployment streaming-prediction code.

I checked the databricks-sdk code; the _response attribute exists in all versions >= 0.20. We can ask the databricks-sdk team to make it a public attribute.

@harupy (Member) commented May 17, 2024:

databricks-sdk code, _response attribute exists in all versions >= 0.20

This doesn't mean we can use it. The _ prefix means it's private. Don't touch it.

Member:

Wait, do we need to use databricks_workspace_client.api_client for OAuth to work?

Collaborator (Author):

databricks_workspace_client.api_client is not the only way; however, it should be the simplest way.

I saw the SQL team set up OAuth by writing the code themselves (https://github.com/databricks/databricks-sql-python/blob/main/src/databricks/sql/auth/authenticators.py), but that way we have to maintain more code, and it is error-prone.

On the other hand, databricks-sdk supports many other authentication methods besides OAuth; if we make MLflow authentication use databricks-sdk, we get those other methods without extra effort.
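For illustration, a hedged sketch of that simplest path; the host and credential values are placeholders:

# Hedged sketch: with client_id/client_secret configured, the SDK selects
# OAuth M2M automatically; no hand-written authenticator code is needed.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(
    host="https://<your-workspace>.cloud.databricks.com",
    client_id="<your client id>",
    client_secret="<your client secret>",
)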

Member:

No. databricks-sdk design goal is to automatically pick optimal values for these configs

Can you elaborate on "automatically"?

Collaborator (Author):

I need to check the databricks-sdk source code for details and will then get back to you.

Collaborator (Author):

databricks-sdk exposes a retry_timeout_seconds config; from the official doc (https://github.com/databricks/databricks-sdk-py):

(Integer) Number of seconds to keep retrying HTTP requests. Default is 300 (5 minutes).

Other HTTP-controlling parameters are not exposed. Internally, the retry-backoff logic is somewhat coarse: retryable errors are defined in ApiClient._is_retriable, and the retry respects the Retry-After header if present in a 429 or 503 response. The backoff is linear, starting at 1 second and increasing by 1 second up to 10 seconds, with 0-1 second of jitter added on each retry (a sketch of this schedule follows).
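A minimal sketch of that backoff schedule as described, for illustration only (not the SDK's actual code):

import random

def backoff_seconds(attempt: int) -> float:
    # Linear backoff: 1 s, 2 s, ..., capped at 10 s, plus 0-1 s of jitter.
    # `attempt` is 1-based.
    return min(attempt, 10) + random.random()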

Collaborator (Author):

Proposal: add an environment variable config, DATABRICKS_ENDPOINT_HTTP_RETRY_TIMEOUT_SECONDS, for Databricks endpoints (whose requests are invoked via databricks-sdk).

Collaborator (Author):

The DATABRICKS_ENDPOINT_HTTP_RETRY_TIMEOUT_SECONDS config was added.
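As a hedged illustration of how such an environment variable could feed the SDK's retry_timeout_seconds config (the exact wiring in this PR may differ; host and token values are placeholders):

import os

from databricks.sdk.core import Config

# Hypothetical wiring: fall back to the SDK's 300-second default when unset.
timeout = int(os.environ.get("DATABRICKS_ENDPOINT_HTTP_RETRY_TIMEOUT_SECONDS", "300"))
config = Config(
    host="https://<your-workspace>.cloud.databricks.com",  # placeholder
    token="<your-pat-token>",                              # placeholder
    retry_timeout_seconds=timeout,
)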

@harupy (Member) commented May 17, 2024:

Note that get_databricks_env_vars has been used in some customer code such as Ray-on-Spark to set up MLflow authentication in remote Ray tasks, so we still need to support it.

Does this Ray code import get_databricks_env_vars from mlflow?

@harupy (Member) commented May 22, 2024:

Databricks-SDK can't support reading credential token values

Can you elaborate on this?

@WeichenXu123 (Collaborator, Author):

Databricks-SDK can't support reading credential token values

Can you elaborate on this?

Oh, it means you can't get the credential token value by invoking a Databricks-SDK API; i.e., the Databricks SDK doesn't provide an API that exposes the credential token, because its design goal is to support various authentication methods and hide the internal details.


insecure = hasattr(config, "insecure") and config.insecure
from databricks.sdk import WorkspaceClient
Member:

Our test requirements specify version 0.20.0, which is only installed in the 15.x runtime, and this SDK was introduced into DBR in 13.x (with version 0.1.6). Older runtimes will exhibit a breaking change here with in-runtime authentication if a user installs MLflow 2.14.0. Can we wrap this import in a try/except and, if the import fails, use the fallback logic that is preserved here?

Please test an install of this branch on both MLR 10 LTS and 13 LTS to ensure that auth functions correctly when:

  1. databricks-sdk is not in the runtime environment
  2. databricks-sdk < version 0.20.0

to verify that the fallback logic for these older runtimes works without any breaking changes. (A sketch of the guarded import follows.)
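A minimal sketch of the guarded import being requested (the flag name is illustrative):

# Hypothetical fallback guard: prefer databricks-sdk when a usable version is
# present; otherwise keep using the preserved legacy databricks-cli auth path.
try:
    from databricks.sdk import WorkspaceClient  # needs databricks-sdk >= 0.20
    _HAS_DATABRICKS_SDK = True
except ImportError:
    _HAS_DATABRICKS_SDK = False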

Member:

(The main condition I'd like to verify is whether databricks-sdk being upgraded to 0.20.0 in these older runtimes causes any issues, since installing MLflow will force-upgrade or simply install the package.) I'm not sure the runtime environment exposes everything needed to make the SDK function correctly.

Collaborator (Author):

Confirmed:

(1) When installing MLflow or MLflow skinny, databricks-sdk >= 0.20 is a forcibly installed dependency, so it won't cause an import error.

(2) I tested on Databricks Runtime 9.3 / 10.3 / 13.3, and databricks-sdk >= 0.20 is compatible with them. The SDK team will ensure compatibility for runtime versions within their lifecycle.

@BenWilson2 (Member) left a comment:

LGTM once #12011 (comment) is addressed / confirmed. Thanks @WeichenXu123 !!

@WeichenXu123 merged commit 315506d into mlflow:master on Jun 1, 2024
77 of 81 checks passed
@@ -29,6 +29,8 @@ dependencies = [
"cachetools<6,>=5.0.0",
"click<9,>=7.0",
"cloudpickle<4",
"databricks-sdk<1,>=0.20.0",
"databricks-sdk<1,>=0.20.0",
Member:

duplicate dependencies

@WeichenXu123 is this intended?

ws_client = WorkspaceClient(profile=host_creds.databricks_auth_profile, config=config)

try:
if method == "GET":
@harupy (Member) commented Jun 1, 2024:

We should remove this if statement; params is valid for POST as well.
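What the suggested fix might look like, as a sketch (variable names follow the surrounding snippets and are otherwise assumptions):

# Pass query params unconditionally instead of branching on the HTTP method;
# the SDK's ApiClient accepts `query` for GET and POST alike.
raw_response = ws_client.api_client.do(
    method,
    endpoint,  # hypothetical variable holding the REST path
    query=params,
    raw=True,
)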

**extra_kwargs,
)
return raw_response["contents"]._response
except DatabricksError as e:
@harupy (Member) commented Jun 1, 2024:

Does DatabricksError contain the raw request and response?

@@ -36,7 +36,7 @@ jobs:
run: |
pip install --no-dependencies tests/resources/mlflow-test-plugin
pip install .[gateway] \
pytest pytest-timeout pytest-asyncio httpx psutil sentence-transformers transformers
pytest pytest-timeout pytest-asyncio httpx psutil sentence-transformers transformers databricks-sdk
Member:

Is this still necessary?

@@ -29,6 +29,8 @@ dependencies = [
"cachetools<6,>=5.0.0",
"click<9,>=7.0",
"cloudpickle<4",
"databricks-sdk<1,>=0.20.0",
"databricks-sdk<1,>=0.20.0",
Member:

Why >= 0.20.0?

Labels

  • area/artifacts: Artifact stores and artifact logging
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/model-registry: Model registry, model registry APIs, and the fluent client calls for model registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/projects: MLproject format, project running backends
  • area/recipes: MLflow Recipes, Recipes APIs, Recipes configs, Recipe Templates
  • area/tracking: Tracking service, tracking client APIs, autologging
  • integrations/databricks: Databricks integrations
  • rn/feature: Mention under Features in Changelogs