4. Model Deployment

4.1 Three ways of deploying a model

4.2 Web-services: Deploying models with Flask and Docker

See code here
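The web-service pattern wraps the model in an HTTP endpoint. A minimal sketch of such a Flask service, assuming a ride with the NYC taxi fields used in the course; the `predict` stub stands in for the real pickled `DictVectorizer` + model (names here are illustrative, the actual code is in the linked folder):

```python
from flask import Flask, request, jsonify

app = Flask("duration-prediction")


def prepare_features(ride):
    # Hypothetical feature engineering: combine pickup/dropoff location IDs
    return {
        "PU_DO": f"{ride['PULocationID']}_{ride['DOLocationID']}",
        "trip_distance": ride["trip_distance"],
    }


def predict(features):
    # Stub: the real service loads a DictVectorizer and model from a pickle
    # file and returns the predicted trip duration in minutes
    return 10.0


@app.route("/predict", methods=["POST"])
def predict_endpoint():
    ride = request.get_json()
    features = prepare_features(ride)
    pred = predict(features)
    return jsonify({"duration": pred})


if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=9696)
```

Packaging this in Docker then only requires copying the script and the model pickle into an image and exposing the port.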

4.3 Web-services: Getting the models from the model registry (MLflow)

See code here
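Instead of baking a pickle into the image, the service can fetch the model from MLflow by run id using the standard `runs:/<run_id>/<artifact_path>` URI scheme. A sketch, assuming the model was logged under the `model` artifact path and that the tracking/artifact store is reachable (e.g. via `MLFLOW_TRACKING_URI`):

```python
def model_uri(run_id: str, artifact_path: str = "model") -> str:
    # "runs:/<run_id>/<artifact_path>" is MLflow's runs URI scheme
    return f"runs:/{run_id}/{artifact_path}"


def load_model(run_id: str):
    # Deferred import so the sketch can be read without an MLflow setup;
    # load_model resolves the run's artifact location and downloads the model
    import mlflow

    return mlflow.pyfunc.load_model(model_uri(run_id))
```

Loading by run id (rather than a registry stage name) pins the service to one exact model version, which makes deployments reproducible.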

4.4 (Optional) Streaming: Deploying models with Kinesis and Lambda

See code here
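In the streaming setup, a Lambda function is triggered by a Kinesis stream; each record's payload arrives base64-encoded. A sketch of the handler shape, with a prediction stub standing in for the real model and illustrative event field names:

```python
import base64
import json


def predict(ride):
    # Stub: the real handler would apply the model loaded from MLflow/S3
    return 10.0


def lambda_handler(event, context):
    # Kinesis delivers records under event["Records"], with the payload
    # base64-encoded in record["kinesis"]["data"]
    predictions = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        ride_event = json.loads(payload)
        predictions.append(
            {
                "ride_id": ride_event.get("ride_id"),
                "prediction": predict(ride_event.get("ride")),
            }
        )
    # The real function would also put these onto an output stream
    return {"predictions": predictions}
```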

4.5 Batch: Preparing a scoring script

See code here
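A batch scoring script follows a read → predict → write shape: load a month of data into a dataframe, apply the model to every row, and save the results. A minimal sketch of the apply step, with a stub model and an illustrative column layout loosely following the taxi dataset:

```python
import pandas as pd


def predict(features):
    # Stub: the real script applies the trained model to the feature dicts
    return [10.0] * len(features)


def apply_model(df: pd.DataFrame) -> pd.DataFrame:
    # Turn the relevant columns into feature dicts, score them, and attach
    # the predictions as a new column
    features = df[["trip_distance"]].to_dict(orient="records")
    result = df.copy()
    result["predicted_duration"] = predict(features)
    return result
```

The full script would wrap this in a `run()` entry point that takes the taxi type, month, and model run id as parameters and writes the result to a parquet file, which is what makes it schedulable in the next section.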

4.6 Batch: Scheduling batch scoring jobs with Prefect

Note: There are several changes to deployment in Prefect 2.3.1 since 2.0b8:

  • DeploymentSpec in 2.0b8 is now Deployment.
  • work_queue_name is used instead of tags to submit the deployment to a specific work queue.
  • You don't need to create a work queue before using it; a work queue is created automatically if it doesn't exist.
  • flow_location is replaced with flow.
  • flow_runner and flow_storage are no longer supported.
```python
from prefect.deployments import Deployment
from prefect.orion.schemas.schedules import CronSchedule
from score import ride_duration_prediction

deployment = Deployment.build_from_flow(
    flow=ride_duration_prediction,
    name="ride_duration_prediction",
    parameters={
        "taxi_type": "green",
        "run_id": "e1efc53e9bd149078b0c12aeaa6365df",
    },
    schedule=CronSchedule(cron="0 3 2 * *"),  # 03:00 on the 2nd of every month
    work_queue_name="ml",
)

deployment.apply()
```

4.7 Choosing the right way of deployment

COMING SOON

4.8 Homework

More information here: TBD

Notes

Did you take notes? Add them here: