From 377653401bbb6a3303c6aff459e629e2fdc83eb7 Mon Sep 17 00:00:00 2001
From: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
Date: Wed, 20 Dec 2023 12:38:22 +0900
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Signed-off-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
---
 docs/source/deployment/deploy-model-locally.rst | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/source/deployment/deploy-model-locally.rst b/docs/source/deployment/deploy-model-locally.rst
index b4121a32eabb9..7466b69a36e65 100644
--- a/docs/source/deployment/deploy-model-locally.rst
+++ b/docs/source/deployment/deploy-model-locally.rst
@@ -3,10 +3,10 @@
 Deploy MLflow Model as a Local Inference Server
 ===============================================
 
-MLflow allows you to deploy your model as a local inference using just a single command.
-This approach is ideal for lightweight applications or for testing your model locally before moving it to a production environment.
+MLflow allows you to deploy your model locally using just a single command.
+This approach is ideal for lightweight applications or for testing your model locally before moving it to a staging or production environment.
 
-If you are new to MLflow model deployment, please read `MLflow Deployment `_ first to understand the basic concepts of MLflow models and deployments.
+If you are new to MLflow model deployment, please read the guide on `MLflow Deployment `_ first to understand the basic concepts of MLflow models and deployments.
 
 
 Deploying Inference Server
@@ -113,10 +113,10 @@ a valid :ref:`Model Signature ` with ``params`` must be defined
         "params": {"max_answer_len": 10}
     }'
 
-.. note:: Since JSON loses type information, MLflow will cast the JSON input to the input type specified
+.. note:: Since JSON discards type information, MLflow will cast the JSON input to the input type specified
   in the model's schema if available. If your model is sensitive to input types, it is recommended that
   a schema is provided for the model to ensure that type mismatch errors do not occur at inference time.
-  In particular, DL models are typically strict about input types and will need model schema in order
+  In particular, Deep Learning models are typically strict about input types and will need a model schema in order
   for the model to score correctly. For complex data types, see :ref:`encoding-complex-data` below.
 
 .. _encoding-complex-data:
@@ -130,7 +130,7 @@
 are supported:
 
 * binary: data is expected to be base64 encoded, MLflow will automatically base64 decode.
 
-* datetime: data is expected as string according to
+* datetime: data is expected to be encoded as a string according to
   `ISO 8601 specification `_. MLflow will parse this into the appropriate datetime
   representation on the given platform.
 
@@ -202,7 +202,7 @@
 inference server in Kubernetes-native frameworks like `Seldon Core `_
 in the MLServer documentation.
-Also you can find guides to deploy MLflow model to Kubernetes cluster using MLServer in `Deploying a model to Kubernetes `_.
+You can also find guides to deploy MLflow models to a Kubernetes cluster using MLServer in `Deploying a model to Kubernetes `_.
 
 Running Batch Inference
 -----------------------
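
As context for the hunks above, a minimal sketch of the local-serving flow the page describes; the run ID, port, and payload fields are placeholders and depend on the logged model and its signature:

    # Serve a logged model on a local endpoint (placeholder model URI and port)
    mlflow models serve -m runs:/<run_id>/model -p 5000

    # Or serve through MLServer instead of the default scoring server
    mlflow models serve -m runs:/<run_id>/model -p 5000 --enable-mlserver

    # Query the /invocations endpoint; "params" is honored only when the model
    # signature declares params, as the -113 hunk notes
    curl http://127.0.0.1:5000/invocations \
        -H 'Content-Type: application/json' \
        -d '{"inputs": {"question": ["..."], "context": ["..."]}, "params": {"max_answer_len": 10}}'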
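
A sketch of the complex-type encoding rules touched by the -130 hunk; the column names and values are illustrative only:

    # binary fields are sent base64 encoded; datetime fields as ISO 8601 strings
    curl http://127.0.0.1:5000/invocations \
        -H 'Content-Type: application/json' \
        -d '{
            "dataframe_split": {
                "columns": ["created_at", "payload"],
                "data": [["2023-12-20T12:38:22", "aGVsbG8gd29ybGQ="]]
            }
        }'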
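
The final hunk ends at the lead-in to the Running Batch Inference section; for completeness, a sketch of offline scoring without a server, assuming a CSV input file at a placeholder path:

    # Score a batch of records directly against the model, no server needed
    mlflow models predict -m runs:/<run_id>/model -i input.csv -o output.csv -t csv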