Commit

[Docs] Add simple descriptions of function types: job and serving (#3586)

jillnogold committed May 30, 2023
1 parent 7859866 commit 0123ed7
Showing 7 changed files with 67 additions and 6 deletions.
6 changes: 4 additions & 2 deletions docs/concepts/functions-overview.md
@@ -8,12 +8,12 @@ MLRun supports real-time and batch runtimes.

Real-time runtimes:
* **{ref}`nuclio <nuclio-real-time-functions>`** - real-time serverless functions over Nuclio
- * **{ref}`serving <serving-graph>`** - higher level real-time Graph (DAG) over one or more Nuclio functions
+ * **{ref}`serving <serving-function>`** - deploy models and run higher-level real-time graphs (DAGs) over one or more Nuclio functions

Batch runtimes:
* **handler** - execute python handler (used automatically in notebooks or for debug)
* **local** - execute a Python or shell program
- * **job** - run the code in a Kubernetes Pod
+ * **{ref}`job <job-function>`** - run the code in a Kubernetes Pod
* **{ref}`dask <dask-overview>`** - run the code as a Dask Distributed job (over Kubernetes)
* **{ref}`mpijob <horovod>`** - run distributed jobs and Horovod over the MPI job operator, used mainly for deep learning jobs
* **{ref}`spark <spark-operator>`** - run the job as a Spark job (using Spark Kubernetes Operator)
@@ -52,6 +52,8 @@ The limits methods are different for Spark and Dask:
```{toctree}
:maxdepth: 1
../runtimes/job-function
../runtimes/serving-function
../runtimes/dask-overview
../runtimes/horovod
../runtimes/spark-operator
2 changes: 1 addition & 1 deletion docs/projects/git-best-practices.ipynb
@@ -166,7 +166,7 @@
"project = mlrun.get_or_create_project(name=\"my-super-cool-project\", context=\"./\")\n",
"```\n",
"\n",
- "4. Set the MLRun project source with the desired `pull_at_runtime` behavior (see [Loading the code from container vs. loading the code at runtime](#load-code-from-container-vs-load-code-at-runtime) for more info). Also set `GIT_TOKEN` in MLRun project secrets for working with private repos.\n",
+ "4. Set the MLRun project source with the desired `pull_at_runtime` behavior (see [Loading the code from container vs. loading the code at runtime](#loading-the-code-from-container-vs-loading-the-code-at-runtime) for more info). Also set `GIT_TOKEN` in MLRun project secrets for working with private repos.\n",
"\n",
"```python\n",
"# Notice the prefix has been changed to git://\n",
2 changes: 1 addition & 1 deletion docs/projects/project.md
@@ -9,7 +9,7 @@ MLRun **Project** is a container for all your work on a particular ML applicatio
Projects are stored in a GIT or archive and map to IDE projects (in PyCharm, VSCode, etc.), which enables versioning, collaboration, and [CI/CD](../projects/ci-integration.html).
Projects simplify how you process data, [submit jobs](../concepts/submitting-tasks-jobs-to-functions.html), run [multi-stage workflows](../concepts/workflow-overview.html), and deploy [real-time pipelines](../serving/serving-graph.html) in continuous development or production environments.

- <p align="center"><img src="./_static/images/project-lifecycle.png" alt="project-lifecycle" width="700"/></p><br>
+ <p align="center"><img src="../_static/images/project-lifecycle.png" alt="project-lifecycle" width="700"/></p><br>

**In this section**

38 changes: 38 additions & 0 deletions docs/runtimes/job-function.md
@@ -0,0 +1,38 @@
(job-function)=
# Function of type `job`

You can run batch code, such as data preparation or model training, using a `job` type function, which runs the code in a Kubernetes Pod.

You can create (register) a `job` function with basic attributes such as code, requirements, image, etc. using the
{py:meth}`~mlrun.projects.MlrunProject.set_function` method.
You can also import an existing job function/template from the {ref}`function-hub`.

Functions can be created from a single code file or notebook file, or can be given access to the entire project context directory.
(By adding the `with_repo=True` flag, the project context is cloned into the function runtime environment.)

Examples:


```python
# register a (single) python file as a function
project.set_function('src/data_prep.py', name='data-prep', image='mlrun/mlrun', handler='prep', kind="job")

# register a notebook file as a function, specify custom image and extra requirements
project.set_function('src/mynb.ipynb', name='test-function', image="my-org/my-image",
handler="run_test", requirements=["scikit-learn"], kind="job")

# register a module.handler as a function (requires setting the default source/work dir if it is not the project root)
project.spec.workdir = "src"
project.set_function(name="train", handler="training.train", image="mlrun/mlrun", kind="job", with_repo=True)
```

To run the job:
```python
project.run_function("train")
```
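The handler invoked by `run_function` receives the MLRun run context as its first argument. A minimal sketch of what the `train` handler above might look like (the parameter names and logic here are hypothetical, for illustration only):

```python
# src/training.py -- illustrative sketch of a handler the "train" job could run.
# "context" is the MLRun run context object injected at execution time;
# the other parameter names are hypothetical.
def train(context, learning_rate: float = 0.01, epochs: int = 3):
    # ... actual training logic would go here ...
    accuracy = 0.9  # placeholder result

    # log results so they appear in the MLRun run output
    context.log_result("accuracy", accuracy)
    context.log_result("learning_rate", learning_rate)
```

Parameters would then be passed at run time, e.g. `project.run_function("train", params={"learning_rate": 0.1})`, assuming the standard `params` argument of `run_function`.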

**See also**
- [Create and register functions](../runtimes/create-and-use-functions.html)
- [How to annotate notebooks (to be used as functions)](../runtimes/mlrun_code_annotations.html)
- [How to run, build, or deploy functions](./run-build-deploy.html)
- [Using functions in workflows](./build-run-workflows-pipelines.html)
21 changes: 21 additions & 0 deletions docs/runtimes/serving-function.md
@@ -0,0 +1,21 @@
(serving-function)=
# Function of type `serving`

Deploying models in MLRun uses the function type `serving`. You can create a serving function using the `set_function()` call from a notebook.
You can also import an existing serving function/template from the {ref}`function-hub`.

This example converts a notebook to a serving function, adds a model to it, and deploys it:

```python
serving = project.set_function(name="my-serving", func="my_serving.ipynb", kind="serving", image="mlrun/mlrun", handler="handler")
serving.add_model(key="iris", model_path="https://s3.wasabisys.com/iguazio/models/iris/model.pkl", model_class="ClassifierModel")
project.deploy_function(serving)
```


**See also**
- {ref}`Real-time serving pipelines (graphs) <serving-graph>`: higher level real-time graphs (DAG) over one or more Nuclio functions
- {ref}`Serving graphs demos and tutorials <demos-serving>`
- {ref}`Real-time serving <mlrun-serving-overview>`
- {ref}`Serving pre-trained ML/DL models <serving-ml-dl-models>`

1 change: 1 addition & 0 deletions docs/tutorial/03-model-serving.ipynb
@@ -4,6 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"(serving-ml-dl-models)=\n",
"# Serving pre-trained ML/DL models\n",
"\n",
"This notebook demonstrates how to serve standard ML/DL models using **MLRun Serving**.\n",
3 changes: 1 addition & 2 deletions mlrun/run.py
@@ -733,8 +733,7 @@ def code_to_function(
- spark: run distributed Spark job using Spark Kubernetes Operator
- remote-spark: run distributed Spark job on remote Spark service
-     Learn more about function runtimes here:
-     https://docs.mlrun.org/en/latest/runtimes/functions.html#function-runtimes
+     Learn more about [Kinds of function (runtimes)](../concepts/functions-overview.html).
:param name: function name, typically best to use hyphen-case
:param project: project used to namespace the function, defaults to 'default'
