Langchain callback fix for pyfunc serving #12023

Open
wants to merge 7 commits into master

Conversation

serena-ruan (Collaborator) commented May 16, 2024

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

pip install git+https://github.com/mlflow/mlflow.git@refs/pull/12023/merge

Checkout with GitHub CLI

gh pr checkout 12023

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Fix the LangChain tracer to use prediction_context, and support passing convert_chat_responses as a param.
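
A rough usage sketch of what this enables (the model URI and input payload below are placeholders, not from this PR): a LangChain pyfunc model can receive convert_chat_responses through params even though it is not part of the logged signature, because pyfunc's predict pops it before schema validation for the mlflow.langchain flavor and re-attaches it afterwards (see the workaround snippet quoted later in this conversation).

import mlflow.pyfunc

# Placeholder model URI; any logged mlflow.langchain model would do.
model = mlflow.pyfunc.load_model("models:/my-langchain-model/1")

result = model.predict(
    {"messages": [{"role": "user", "content": "hello"}]},  # example chat-style input; actual schema depends on the chain
    params={"convert_chat_responses": True},  # forwarded to the langchain flavor, bypassing signature enforcement
)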

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

github-actions bot commented May 16, 2024

Documentation preview for 1393ce4 will be available when this CircleCI job completes successfully.

github-actions bot added the rn/none label (List under Small Changes in Changelogs) on May 16, 2024
@@ -641,7 +641,7 @@ def predict(
         if is_in_databricks_model_serving_environment() and MLFLOW_ENABLE_TRACE_IN_SERVING.get():
             from mlflow.langchain.langchain_tracer import MlflowLangchainTracer

-            callbacks = [MlflowLangchainTracer()]
+            callbacks = [MlflowLangchainTracer(prediction_context=get_prediction_context())]
serena-ruan (Collaborator, Author) commented:

This change is necessary for propagating request_id into the processor in serving, but we also need to set prediction_context in serving.
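
A minimal sketch of that serving-side piece, assuming the Context and set_prediction_context helpers in mlflow.pyfunc.context; the handler function and the source of request_id here are illustrative placeholders, not the actual serving code:

from mlflow.pyfunc.context import Context, set_prediction_context  # assumed import path

def handle_invocation(model, payload, request_id):
    # Expose the request's context via get_prediction_context() so that the
    # MlflowLangchainTracer created in predict() above sees the same request_id
    # and can route the finished trace to the right processor.
    with set_prediction_context(Context(request_id=request_id)):
        return model.predict(payload)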

mlflow/pyfunc/__init__.py: outdated review thread (resolved)
serena-ruan (Collaborator, Author) commented:

This PR is no longer a blocker for the RAG callback migration, but it should be helpful for MLflow serving tracing.

Comment on lines +687 to +695

# Temporary workaround for the RAG model in model serving passing
# convert_chat_responses to the predict method; we shouldn't validate its schema.
if self.loader_module == "mlflow.langchain":
    convert_chat_responses = params.pop("convert_chat_responses", None) if params else None
else:
    convert_chat_responses = None
params = _validate_params(params, self.metadata)
if convert_chat_responses is not None:
    params["convert_chat_responses"] = convert_chat_responses
Collaborator commented:
Instead, I think we should start allowing parameters that aren't in the signature to be passed. Can we make that change instead? I'm concerned about adding special case logic in pyfunc predict for specific flavors. If this misses the 2.13.0 release, that's okay.

serena-ruan (Collaborator, Author) commented:

OK, I think it makes sense to allow extra params to skip params validation; it's just that currently convert_chat_responses must be set to True so the model returns a dictionary as expected. We can revisit this PR later for general MLflow pyfunc tracing support.

Collaborator commented:
Sounds good!

Collaborator commented:

> Instead, I think we should start allowing parameters that aren't in the signature to be passed.

Then could we also pass the callback as one of the parameters?

Collaborator commented:

What about allowing _validate_prediction_input to be an overridable method on model_impl, instead of adding a patch here?
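
A hypothetical illustration of that suggestion (not code from this PR or from MLflow): give model_impl a default validation hook that pyfunc calls, and let flavors such as mlflow.langchain override it. Class names and the hook signature are assumptions; the _validate_params stub below stands in for the existing pyfunc helper used in the snippet above.

def _validate_params(params, metadata):
    # Stand-in for pyfunc's existing helper; real validation would enforce
    # the logged params schema here.
    return dict(params or {})

class _BaseModelWrapper:
    def _validate_prediction_input(self, data, params, metadata):
        # Default behavior: enforce the logged signature, as pyfunc does today.
        return data, _validate_params(params, metadata)

class _LangChainModelWrapper(_BaseModelWrapper):
    def _validate_prediction_input(self, data, params, metadata):
        # Strip the flavor-only param before schema validation and restore it
        # afterwards, instead of special-casing mlflow.langchain inside
        # pyfunc's predict.
        params = dict(params or {})
        convert_chat_responses = params.pop("convert_chat_responses", None)
        data, params = super()._validate_prediction_input(data, params, metadata)
        if convert_chat_responses is not None:
            params["convert_chat_responses"] = convert_chat_responses
        return data, params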

Labels
rn/none List under Small Changes in Changelogs.
4 participants