
Conversational LangChain model support #11552

Open
wants to merge 7 commits into base: master

Conversation

ishaan-mehta

@ishaan-mehta ishaan-mehta commented Mar 28, 2024

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Adds support for conversational LangChain models (such as RunnableWithMessageHistory) by passing configurables to model invocation and calling invoke() instead of __call__() for all models.

Adds a check to ensure the signature is not inferred when lc_model is passed as a str.
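Loosely, the invocation change can be sketched as follows. This is a pure-Python stand-in, not MLflow's actual code: FakeRunnable, predict(), and the session_id key are all illustrative, mimicking how a RunnableWithMessageHistory-style model reads per-conversation state from config["configurable"].

```python
class FakeRunnable:
    """Mimics a LangChain Runnable: invoke(input, config=None)."""

    def __init__(self):
        self.history = {}

    def invoke(self, data, config=None):
        # Conversational models look up per-session state via the
        # "configurable" entry of the invocation config.
        session = (config or {}).get("configurable", {}).get("session_id", "default")
        turns = self.history.setdefault(session, [])
        turns.append(data)
        return {"output": f"turn {len(turns)} in session {session!r}"}


def predict(model, data, params=None):
    # Before: model(data) via __call__, with no way to pass configurables.
    # After: always call invoke(), forwarding params as the config.
    config = {"configurable": params} if params else None
    return model.invoke(data, config=config)


model = FakeRunnable()
print(predict(model, "hi", params={"session_id": "abc"}))
print(predict(model, "hi again", params={"session_id": "abc"}))
```

Calling predict() twice with the same session_id yields "turn 1" then "turn 2" for that session, which is the conversational behavior the old __call__ path could not support.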

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

Added support for prediction with any LangChain model loaded as a python_function (pyfunc), rather than only select types. Added support for passing configurable fields to Runnable LangChain models loaded as a python_function (pyfunc).

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes


github-actions bot commented Mar 28, 2024

Documentation preview for a92b8f9 will be available when this CircleCI job
completes successfully.


@ishaan-mehta ishaan-mehta marked this pull request as ready for review March 28, 2024 16:08
@github-actions github-actions bot added area/models MLmodel format, model serialization/deserialization, flavors rn/bug-fix Mention under Bug Fixes in Changelogs. rn/feature Mention under Features in Changelogs. and removed rn/bug-fix Mention under Bug Fixes in Changelogs. labels Mar 28, 2024
@B-Step62
Collaborator

@ishaan-mehta Thank you so much for your contribution!!

Adding the configurable parameter makes sense to me. However, accepting all Runnable types needs to be done a bit carefully, because some types are not picklable and need special saving logic. We have a list of the currently supported types here, so would you mind testing the other types you'd like to support and adding them as test cases in test_langchain_model_export.py?

@ishaan-mehta
Author

ishaan-mehta commented Apr 1, 2024

Hi @B-Step62 — glad to help, and thanks for taking a look here!

A couple thoughts:

  1. Given that this check takes place in the prediction function, where the model has already been saved and loaded back, does it matter which types are picklable at that point? Any check for whether the model is savable should take place in the save_model() function itself, no?
  2. Also, with "Ability for log to langchain as code" #11370, we can now log arbitrary LangChain model types as code (including ones we previously couldn't save, such as RunnableWithMessageHistory), so I'm not sure it makes sense to limit prediction to a select few types.
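A rough sketch of the "model as code" idea behind that second point (hypothetical, not MLflow's implementation): loading means executing the model's defining code, not unpickling a blob, so even unpicklable objects can be reconstructed. MODEL_AS_CODE and MessageHistoryModel below are illustrative stand-ins.

```python
import pickle

# The "saved" model is just source code; the lambda attribute makes
# instances unpicklable, like many stateful LangChain runnables.
MODEL_AS_CODE = """
class MessageHistoryModel:
    def __init__(self):
        self.format = lambda text: text.upper()
        self.history = []

    def invoke(self, data, config=None):
        self.history.append(data)
        return self.format(data)

model = MessageHistoryModel()
"""


def load_model_as_code(source):
    # Loading executes the definition instead of deserializing state.
    ns = {}
    exec(source, ns)
    return ns["model"]


model = load_model_as_code(MODEL_AS_CODE)
print(model.invoke("hello"))  # prints "HELLO"

try:
    pickle.dumps(model)
    print("picklable")
except Exception:
    print("not picklable")
```

The instance works for prediction yet cannot be pickled, which is why a picklability-based allowlist at prediction time would reject models that loaded perfectly well.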

@B-Step62
Collaborator

B-Step62 commented Apr 3, 2024

@ishaan-mehta Thank you so much for the explanation! I'd overlooked that the condition check is in the prediction function, so it makes sense to remove the limitation then🙂

Collaborator

@B-Step62 B-Step62 left a comment


Overall change looks good to me, thank you so much for the contribution!

A few requests:

  1. Could you run the pre-commit hook to format the code? You can run it with pre-commit run --all-files and it will update the files automatically.
  2. Could you add tests for (1) passing configurable as a prediction parameter and (2) running prediction with RunnableWithMessageHistory?

cc: @serena-ruan as SME

mlflow/langchain/__init__.py
mlflow/langchain/api_request_parallel_processor.py
@ishaan-mehta
Author

Hi @B-Step62, I've addressed your comments, added a test that covers prediction with a RunnableWithMessageHistory while passing configurables as params, and restructured my changes to call_api() so that the previous behavior (base types like Retriever, Chain, etc. being invoked with return_only_outputs=True) is preserved. Let me know if this looks good. Thanks!
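The dispatch described above can be sketched roughly like this (hypothetical stand-in classes; not the actual call_api() code in api_request_parallel_processor.py):

```python
class LegacyChain:
    """Stand-in for pre-existing types (Chain, Retriever, etc.)."""

    def __call__(self, data, return_only_outputs=False):
        result = {"input": data, "output": data.upper()}
        return {"output": result["output"]} if return_only_outputs else result


class NewStyleRunnable:
    """Stand-in for any other Runnable."""

    def invoke(self, data, config=None):
        return {"output": data.upper()}


LEGACY_TYPES = (LegacyChain,)


def call_api(model, data, params=None):
    if isinstance(model, LEGACY_TYPES):
        # Previous behavior preserved: __call__ with return_only_outputs=True.
        return model(data, return_only_outputs=True)
    # Everything else goes through invoke(), forwarding params as configurables.
    config = {"configurable": params} if params else None
    return model.invoke(data, config=config)


print(call_api(LegacyChain(), "hi"))       # {'output': 'HI'}
print(call_api(NewStyleRunnable(), "hi"))  # {'output': 'HI'}
```

Both paths return the same shape for the base case, so existing callers of the legacy types see no behavior change while new Runnable types gain the configurable plumbing.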

Signed-off-by: Ishaan Mehta <45380942+ishaan-mehta@users.noreply.github.com>
@BenWilson2
Member

@ishaan-mehta could you do a rebase to master? Let's see if we can get this merged in :)

Labels
area/models MLmodel format, model serialization/deserialization, flavors rn/feature Mention under Features in Changelogs.