Support log_model with code model in langchain #11817
Conversation
Signed-off-by: Ann Zhang <ann.zhang@databricks.com>
@@ -2615,7 +2601,6 @@ def test_save_load_chain_as_code_optional_code_path():
        artifact_path="model_path",
        signature=signature,
        input_example=input_example,
-       code_paths=[],
test code_paths=None
mlflow/langchain/__init__.py
Outdated
f"If the provided model '{lc_model}' is a string, it must be a valid python "
"file path containing the code for defining the chain instance."
For this case, can we log it as a separate piece of model metadata, like model_code_path, instead of reusing code_paths? It differs from existing code_paths files: its code is not a common module, it should contain a set_chain method, and it might come from a Databricks notebook. We are also expanding MLflow's code_paths functionality (e.g. auto-inferring code_paths), and the newly added model code path can't support that expanded functionality, which would make the code messy.
I have some related discussion in this doc:
https://docs.google.com/document/d/144wAwgXsQ40C3dDsoObX0LfXp33aRbvEVCCq-VYgJqw/edit#bookmark=id.d1xwn4gq68ce
CC @BenWilson2
Yes, we should keep this logic separate. This main entry point to a chain definition should be handled distinctly. If it were overloaded into code_paths, which performs directory traversal for dependent relative and absolute import statements with dependency inference, we would need to add error handling logic to that branching decision logic.
Regardless of this fact, what is the mechanism for handling dependent imports within this implementation? If a user has external imports to custom code that rely on absolute imports, will this notebook path preserve its directory structure from the workspace root?
Updated the PR to separate this logic. There isn't any mechanism for handling dependent imports right now, and the notebook path does not preserve its directory structure from the workspace root. Should this be a requirement here?
> There isn't any mechanism for handling dependent imports right now, and the notebook path does not preserve its directory structure from the workspace root. Should this be a requirement here?
No need to handle this.
Signed-off-by: Ann Zhang <ann.zhang@databricks.com>
    )

-    if len(code_paths) > 1:
+    if code_paths and len(code_paths) > 1:
Can we delete this check since now we will use model_config instead?
This will get removed in the next PR: #11843, when we start to actually use model_config
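The check under discussion can be sketched in isolation. This is a hypothetical stand-in, not the actual mlflow code: with chain-as-code logging, `code_paths` may now be `None`, so the added truthiness check prevents `len(None)` from raising a `TypeError`:

```python
def warn_extra_code_paths(code_paths):
    # Hypothetical stand-in for the check in mlflow/langchain/__init__.py.
    # `code_paths and ...` short-circuits, so None (or []) never reaches len().
    if code_paths and len(code_paths) > 1:
        return f"Multiple code paths specified: {code_paths}"
    return None
```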
@@ -256,7 +258,8 @@ def load_retriever(persist_directory):
        f"Current code paths: {code_paths}"
    )

-    code_dir_subpath = _validate_and_copy_code_paths(formatted_code_path, path)
+    code_dir_subpath = _validate_and_copy_code_paths(code_paths, path)
+    model_code_dir_subpath = _validate_and_copy_model_code_path(model_code_path, path)

    if signature is None:
When the signature is None, I am not sure this code would work: _LangChainModelWrapper(lc_model). We need to figure out a way to load the model here and use it as the wrapped model so we can call infer_signature.
This potentially resolves the signature issues: https://github.com/mlflow/mlflow/pull/11817/files#r1583519300
Maybe we can move this code to the top: when lc_model is a str, we load the model and make that the lc_model. A lot of things would be solved by that. What do you think?
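A rough sketch of that suggestion, with hypothetical names (`load_chain_from_code`, the injected `set_chain` hook) rather than the actual mlflow implementation: execute the user's code file, capture the chain it registers, and the loaded object can then be wrapped for infer_signature:

```python
def load_chain_from_code(code_path):
    """Execute a chain-definition file and return the chain it registered.

    Hypothetical sketch: the file is expected to call set_chain(chain),
    mirroring the convention discussed in this PR.
    """
    registry = {}

    def set_chain(chain):
        registry["chain"] = chain

    with open(code_path) as f:
        source = f.read()
    # Inject set_chain into the file's namespace before executing it.
    exec(compile(source, code_path, "exec"), {"set_chain": set_chain})
    if "chain" not in registry:
        raise ValueError(f"{code_path} never called set_chain()")
    return registry["chain"]
```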
        **model_data_kwargs,
    )

    if Version(langchain.__version__) >= Version("0.0.311"):
        checker_model = lc_model
        if isinstance(lc_model, str):
            # TODO: use model_config instead of code_paths[0]
Maybe we can move this code to the top: when lc_model is a str, we load the model and make that the lc_model. A lot of things would be solved by that. What do you think?
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import ExportFormat

w = WorkspaceClient()
Nit: this function is already huge :D Can we extract this out into a helper function?
@@ -162,6 +163,20 @@ def _validate_and_copy_code_paths(code_paths, path, default_subpath="code"):
    return code_dir_subpath


def _validate_and_copy_model_code_path(code_path, path, default_subpath="model_code"):
Suggestion: default_subpath=FLAVOR_CONFIG_MODEL_CODE. Can we update the above so we don't define this value in two places?
This is reworked in the next PR #11843, let's address it there
Approving so we can merge this and address the remaining TODOs in small batches.
Signed-off-by: Ann Zhang <ann.zhang@databricks.com>
Related Issues/PRs
Work to follow:
What changes are proposed in this pull request?
Store model_code as separate from code.