`prefect deployment build` will:

- import the flow module
- set infrastructure:
  - if `--infra-block` is provided, load the block from the server API
  - otherwise use defaults from the type specified by `--infra` (defaulting to the process type if none is specified)
- create a `Deployment` object and load any existing deployment of the same name from the server API. This populates server-side settings, if any (eg: description, version, tags). If there is no description, use the flow docstring.
- generate the flow parameter openapi schema
- update the `Deployment` object with runtime settings
- write a default .prefectignore file if one doesn't already exist
- upload the flow to storage:
  - if `--storage-block` is provided, load the block from the server API and use it to upload the current directory to the storage location
  - otherwise use the default storage, ie: `LocalFileSystem` with a `basepath` set to the current directory (eg: /Users/tekumara/code/prefect-demo) and no upload
- write a $flowname-deployment.yaml file consisting of:
  - editable fields, ie: deployment name, description, tags, parameters, schedule, infrastructure
  - system-generated fields (everything else), eg: flow name, storage, parameter schema
eg:

```shell
$ prefect deployment build flows/param_flow.py:increment -n my-deployment -i kubernetes-job
Found flow 'increment'
Default '.prefectignore' file written to /Users/tekumara/code/prefect-demo/.prefectignore
Deployment YAML created at '/Users/tekumara/code/prefect-demo/increment-deployment.yaml'.
```

It's not obvious from these log lines, but `build` also saves the flow to storage. In this case storage is a `LocalFileSystem` object with basepath = /Users/tekumara/code/prefect-demo/
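For illustration, a sketch of roughly what the generated increment-deployment.yaml contains, split into the two groups of fields described above. Field names are recalled from Prefect 2 and may differ between versions; the values here are illustrative, not the exact output:

```yaml
# editable fields
name: my-deployment
description: null        # falls back to the flow docstring if unset
tags: []
schedule: null
parameters: {}
infrastructure:
  type: kubernetes-job

###
### DO NOT EDIT BELOW THIS LINE
###

# system-generated fields
flow_name: increment
entrypoint: flows/param_flow.py:increment
path: /Users/tekumara/code/prefect-demo
storage: null            # null means the default LocalFileSystem
parameter_openapi_schema:
  title: Parameters
  type: object
  properties:
    i:
      title: i
      type: integer
```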
`prefect deployment apply` will:

- load the $flowname-deployment.yaml file into a `DeploymentYAML` object
- create a flow with the deployment's flow name
- create and save an anonymous infrastructure block from the infrastructure section of the deployment
- create a deployment from the deployment metadata (name, parameters, description, tags, parameter schema) and the storage block (previously specified and saved during `deployment build`)
To load a flow from a deployment, the prefect engine will:
- retrieve the storage block from the server API, or use `LocalFileSystem` storage if none is specified
- call `storage_block.get_directory(from_path=None, local_path=".")`:
  - `LocalFileSystem` copies basepath into the current directory
  - `RemoteFileSystem` downloads basepath into the current directory
- load the flow specified in `$path/$entrypoint`
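That last step amounts to importing a module by file path and pulling the flow function off it. A minimal sketch of that entrypoint-loading logic, assuming a `<relative/path.py>:<name>` entrypoint string (`load_entrypoint` is a hypothetical helper, not Prefect's actual implementation):

```python
import importlib.util
from pathlib import Path


def load_entrypoint(entrypoint: str, base_dir: str = ".") -> object:
    """Hypothetical helper: load a callable from a '<path.py>:<name>' string.

    Mirrors the idea of resolving $path/$entrypoint; not Prefect's real code.
    """
    rel_path, _, name = entrypoint.partition(":")
    file_path = Path(base_dir) / rel_path
    spec = importlib.util.spec_from_file_location(file_path.stem, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the flow module, like an import
    return getattr(module, name)
```

For example, `load_entrypoint("flows/param_flow.py:increment")` run from the project root would return the `increment` function.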
`Deployment.build_from_flow` is the equivalent of `prefect deployment build` in python. Differences from the CLI:

- the `infrastructure` field specifies a new anonymous infra block. To avoid creating a block on apply, use `infra_overrides` instead (recommended).
- you can supply parameters

For example:
```python
deployment = Deployment.build_from_flow(
    flow=flows.param_flow.increment,
    name="s3-deployment",
    # Deployment class args
    work_queue_name="kubernetes",
    # creates a new anonymous infra block with these params on apply
    infrastructure=KubernetesJob(
        image="prefect-registry:5000/flow:latest",
        env={"APP_ENVIRONMENT": "prod"},
    ),
    parameters={"i": 1},
)
```
- The Deployment defines a flow name and the flow code. Unlike Prefect 1, it's possible to have two Deployments with different flow code but the same flow name.