Add databricks deployments client skeleton + example #10421

Merged · 3 commits · Nov 16, 2023
15 changes: 15 additions & 0 deletions examples/deployments/databricks.py
@@ -0,0 +1,15 @@
from mlflow.deployments import get_deploy_client


def main():
    client = get_deploy_client("databricks")
    client.create_endpoint(
        name="gpt4-chat",
        config={
            # TODO: doesn't work yet
        },
    )


if __name__ == "__main__":
    main()
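The config above is deliberately left as a TODO in this PR. For a concrete picture of where this is headed, here is a minimal sketch of a filled-in endpoint config plus a predict call; the shape mirrors the Databricks external-model serving endpoint config ("served_entities" / "external_model"), and the provider, secret path, and field values are illustrative assumptions, not part of this PR. Against this skeleton the calls would still raise NotImplementedError.

from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Hypothetical config (not defined in this PR): one served entity backed by
# an external OpenAI chat model. The secret path is a placeholder.
client.create_endpoint(
    name="gpt4-chat",
    config={
        "served_entities": [
            {
                "external_model": {
                    "name": "gpt-4",
                    "provider": "openai",
                    "task": "llm/v1/chat",
                    "openai_config": {
                        "openai_api_key": "{{secrets/my_scope/openai_api_key}}",
                    },
                }
            }
        ],
    },
)

# Once the endpoint exists, predict() routes a chat payload to it, matching
# the predict(deployment_name=None, inputs=None, endpoint=None) signature below.
print(client.predict(
    endpoint="gpt4-chat",
    inputs={"messages": [{"role": "user", "content": "Hello!"}]},
))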
44 changes: 44 additions & 0 deletions mlflow/deployments/databricks/__init__.py
@@ -0,0 +1,44 @@
from mlflow.deployments import BaseDeploymentClient


class DatabricksDeploymentClient(BaseDeploymentClient):
    def create_deployment(self, name, model_uri, flavor=None, config=None, endpoint=None):
Member: Is the model_uri going to be required if we're creating a gateway route, or is gateway route creation purely going to be handled with create_endpoint()?

Member: Do we need the flavor designator here? If we're using model_uri, can it read the configured flavor information from the MLmodel file?

Member Author: > is gateway route creation purely going to be handled with create_endpoint()?

yes

Member: perfect!

        raise NotImplementedError

    def update_deployment(self, name, model_uri=None, flavor=None, config=None, endpoint=None):
        raise NotImplementedError

    def delete_deployment(self, name, config=None, endpoint=None):
Member: Curious what values would be in config or endpoint here? Is there an option to delete an endpoint referenced by name but not the entire named deployment?

Member Author (@harupy, Nov 16, 2023): I don't think we need xxx_deployment methods. They are abstract methods and need to be overridden. Users won't touch them.

        raise NotImplementedError

    def list_deployments(self, endpoint=None):
        raise NotImplementedError

    def get_deployment(self, name, endpoint=None):
        raise NotImplementedError

    def predict(self, deployment_name=None, inputs=None, endpoint=None):
        raise NotImplementedError("TODO")

    def create_endpoint(self, name, config=None):
        raise NotImplementedError("TODO")

    def update_endpoint(self, endpoint, config=None):
        raise NotImplementedError("TODO")

    def delete_endpoint(self, endpoint):
        raise NotImplementedError("TODO")

    def list_endpoints(self):
        raise NotImplementedError("TODO")

    def get_endpoint(self, endpoint):
        raise NotImplementedError("TODO")
Member Author, on lines +23 to +36: I'll implement these later in a follow-up PR.


def run_local(name, model_uri, flavor=None, config=None):
Member: Build a local serving container and validate the capacity to return inference predictions? Is that what this is? (if so, this is awesome)

Member Author (@harupy, Nov 16, 2023): I'm actually not sure what this is for. A deployment plugin must define target_help and run_local. We can update them if necessary.

Member: The target_help implementation as explained in the ABC is definitely out of scope for a Databricks plugin (not entirely sure what that would return even if there were an available endpoint to target?). run_local might be a nice-to-have in the far-off future, but simulating model serving behavior from within OSS would be rather challenging.

    pass


def target_help():
    pass
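For context on why run_local and target_help appear here at all: as the author notes above, MLflow's deployments plugin contract expects the target module to expose a BaseDeploymentClient subclass plus these two module-level functions. A minimal sketch of that contract follows; the target name "mytarget" and the module layout are hypothetical, not part of this PR.

# Hypothetical minimal deployments plugin module (e.g. my_plugin/__init__.py),
# discovered by MLflow via an entry point in the "mlflow.deployments" group.
from mlflow.deployments import BaseDeploymentClient


class MyTargetDeploymentClient(BaseDeploymentClient):
    # Override the abstract create/update/delete/list/get methods here;
    # left unimplemented in this sketch.
    pass


def run_local(name, model_uri, flavor=None, config=None):
    # Serve the model locally for this target; optional for many targets.
    raise NotImplementedError


def target_help():
    # Return a help string describing target-specific configuration.
    return "Help text for the 'mytarget' deployment target."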
1 change: 0 additions & 1 deletion mlflow/gateway/fluent.py
@@ -1,4 +1,3 @@
-import logging
 from typing import Any, Dict, List, Optional
 
 from mlflow.gateway.client import MlflowGatewayClient
3 changes: 3 additions & 0 deletions setup.py
@@ -184,6 +184,9 @@ def run(self):

[mlflow.app.client]
basic-auth=mlflow.server.auth.client:AuthServiceClient

[mlflow.deployments]
databricks=mlflow.deployments.databricks
""",
cmdclass={
"dependencies": ListDependencies,
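The setup.py change is what makes get_deploy_client("databricks") resolve to the new module: MLflow discovers deployment plugins through the mlflow.deployments entry-point group. A small sketch of that discovery mechanism, assuming Python 3.10+ importlib.metadata and an environment with this branch of mlflow installed:

from importlib.metadata import entry_points

# List the deployment targets registered under the "mlflow.deployments"
# entry-point group (the group= keyword requires Python 3.10+).
deploy_plugins = entry_points(group="mlflow.deployments")
for ep in deploy_plugins:
    print(ep.name, "->", ep.value)

# Loading the "databricks" entry imports mlflow.deployments.databricks,
# the module added in this PR.
databricks_ep = next(ep for ep in deploy_plugins if ep.name == "databricks")
module = databricks_ep.load()
print(module.__name__)  # mlflow.deployments.databricks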