
Update README.md and quick start notebook (#346)

* Update bentoml-quick-start-guide.ipynb

* update readme

* build docks script typo
parano committed Oct 16, 2019
1 parent c2c66be commit 5f8445f45fb97fb84dcafa177075fbfbbab4da18
Showing with 50 additions and 47 deletions.
  1. +1 −1 DEVELOPMENT.md
  2. +38 −35 README.md
  3. +10 −10 guides/quick-start/bentoml-quick-start-guide.ipynb
  4. +1 −1 scripts/build-docs.sh
@@ -106,7 +106,7 @@ $ pip install -e .[dev]

To build documentation locally:
```bash
$ ./script/build-docs.sh
$ ./scripts/build-docs.sh
```

Modify \*.rst files inside the `docs` folder to update content, and to
@@ -10,31 +10,18 @@

[![BentoML](https://raw.githubusercontent.com/bentoml/BentoML/master/docs/_static/img/bentoml.png)](https://colab.research.google.com/github/bentoml/BentoML/blob/master/guides/quick-start/bentoml-quick-start-guide.ipynb)

[Getting Started](https://github.com/bentoml/BentoML#getting-started) | [Documentation](http://bentoml.readthedocs.io) | [Examples](https://github.com/bentoml/BentoML#examples) | [Contributing](https://github.com/bentoml/BentoML#contributing) | [Releases](https://github.com/bentoml/BentoML#releases) | [License](https://github.com/bentoml/BentoML/blob/master/LICENSE) | [Blog](https://medium.com/bentoml)
[Getting Started](https://github.com/bentoml/BentoML#getting-started) | [Documentation](http://bentoml.readthedocs.io) | [Gallery](https://github.com/bentoml/gallery) | [Contributing](https://github.com/bentoml/BentoML#contributing) | [Releases](https://github.com/bentoml/BentoML#releases) | [License](https://github.com/bentoml/BentoML/blob/master/LICENSE) | [Blog](https://medium.com/bentoml)


BentoML is a platform for __serving and deploying machine learning
models__. It provides three main components:
BentoML is a flexible framework that accelerates the workflow of
__serving and deploying machine learning models__ in the cloud.

* BentoService: High-level APIs for defining a prediction service by packaging a
trained model, preprocessing source code, dependencies, and configurations
into a standard BentoML bundle, which can be used as a containerized REST API
server, PyPI package, CLI tool, or batch/streaming serving job.

* DeploymentOperator: The essential module for deploying and managing your
prediction service workloads on Kubernetes clusters and cloud platforms such
as AWS Lambda, SageMaker, Azure ML, and GCP Cloud Functions.

* Yatai: A stateful server that provides a Web UI and APIs for model management
and model serving deployment management for teams
Check out our 5-mins [Quickstart Notebook](https://colab.research.google.com/github/bentoml/BentoML/blob/master/guides/quick-start/bentoml-quick-start-guide.ipynb)
using BentoML to turn a trained sklearn model into a containerized
REST API server, and then deploy it to AWS Lambda.


Check out our 5-mins [BentoML Quickstart Notebook](https://colab.research.google.com/github/bentoml/BentoML/blob/master/guides/quick-start/bentoml-quick-start-guide.ipynb)
using BentoML to turn a trained sklearn model into a REST API server, and deploy it to AWS Lambda:


If you plan to adopt BentoML for a production use case or want to contribute,
be sure to join our Slack channel and hear our latest development updates!
If you are using BentoML for production workloads or want to contribute,
be sure to join our Slack channel and hear our latest development updates:
[![join BentoML Slack](https://badgen.net/badge/Join/BentoML%20Slack/cyan?icon=slack)](http://bit.ly/2N5IpbB)

---
@@ -47,7 +34,7 @@ Installation with pip:
pip install bentoml
```

Defining a machine learning service with BentoML:
Defining a prediction service with BentoML:

```python
import bentoml
@@ -63,10 +50,8 @@ class IrisClassifier(bentoml.BentoService):
return self.artifacts.model.predict(df)
```

After training your ML model, you can pack it with the prediction service
`IrisClassifier` defined above, and save them as a BentoML Bundle to file
system:

Train a classifier model and pack it with the prediction service
`IrisClassifier` defined above:
```python
from sklearn import svm
from sklearn import datasets
@@ -76,17 +61,18 @@ iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)
# Packaging trained model for serving in production:
# Create an iris classifier service with the newly trained model
iris_classifier_service = IrisClassifier.pack(model=clf)
# Save prediction service to file bundle
# Save the entire prediction service to file bundle
saved_path = iris_classifier_service.save()
```

A BentoML bundle is a versioned file archive, containing the BentoService you
defined, along with trained model artifacts, dependencies and configurations.

Now you can start a REST API server based off the saved BentoML bundle:
Now you can start a REST API server based on the saved BentoML bundle from the
command line:
```bash
bentoml serve {saved_path}
```
@@ -103,15 +89,15 @@ curl -i \
http://localhost:5000/predict
```
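The same request can be sent from Python; a minimal sketch using only the standard library, assuming the server started with `bentoml serve` above is listening on port 5000 and accepts a JSON list of feature rows (the `predict_iris` helper name is illustrative, not part of BentoML):

```python
import json
from urllib.request import Request, urlopen

def predict_iris(features, url="http://localhost:5000/predict"):
    # Serialize the batch of feature rows the same way the curl
    # example does, then POST it as JSON to the running API server.
    payload = json.dumps(features).encode("utf-8")
    req = Request(url, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# predict_iris([[5.1, 3.5, 1.4, 0.2]])  # requires the server to be running
```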

The saved BentoML bundle can also be used directly from the command line for inference:
The saved BentoML bundle can also be loaded directly from the command line for inference:
```bash
bentoml predict {saved_path} --input='[[5.1, 3.5, 1.4, 0.2]]'
# alternatively:
bentoml predict {saved_path} --input='./iris_test_data.csv'
```
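For the CSV variant, the input file needs one row per sample with the four iris feature columns; a minimal sketch that writes a hypothetical `iris_test_data.csv` with the standard library (the header names are illustrative, not mandated by BentoML):

```python
import csv

# One header row plus one row of measurements per sample; the four
# columns follow the feature order used in the JSON example above.
rows = [
    ["sepal_length", "sepal_width", "petal_length", "petal_width"],
    [5.1, 3.5, 1.4, 0.2],
    [6.2, 2.8, 4.8, 1.8],
]
with open("iris_test_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```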

BentoML bundle is also pip-installable and can be used as a Python package:
BentoML bundle is pip-installable and can be directly distributed as a PyPI package:
```bash
pip install {saved_path}
```
@@ -123,10 +109,9 @@ installed_svc = IrisClassifier.load()
installed_svc.predict([[5.1, 3.5, 1.4, 0.2]])
```

BentoML bundle is structured to be a docker build context where you can easily
build a docker image for this API server containing all dependencies and
environment settings:

BentoML bundle is structured to work as a docker build context so you can easily
build a docker image for this API server by using it as the build context
directory:
```bash
docker build -t my_api_server {saved_path}
```
@@ -183,8 +168,26 @@ To learn more, try out the Getting Started with Bentoml notebook: [![Google Cola
- [(Beta) API server deployment on Kubernetes](https://github.com/bentoml/BentoML/tree/master/guides/deployment/deploy-with-kubernetes)


## Project Overview

BentoML has three main components:

* BentoService: High-level API for defining a prediction service by packaging a
trained model, preprocessing source code, dependencies, and configurations
into a BentoML bundle file, which can be deployed as a containerized REST API
server, PyPI package, CLI tool, or batch/streaming serving job

* DeploymentOperator: The essential module for deploying and managing your
prediction service workloads on Kubernetes clusters and cloud platforms such
as AWS Lambda, SageMaker, Azure ML, and GCP Cloud Functions

* YataiServer: Web UI and APIs for model management and model serving
deployment process management for teams


## Feature Highlights


* __Multiple Distribution Format__ - Easily package your Machine Learning models
and preprocessing code into a format that works best with your inference scenario:
* Docker Image - deploy as containers running REST API Server
@@ -6,7 +6,7 @@
"source": [
"# Getting Started with BentoML\n",
"\n",
"[BentoML](http://bentoml.ai) is an open source framework for building, shipping and running machine learning services. It provides high-level APIs for defining an ML service and packaging its artifacts, source code, dependencies, and configurations into a production-system-friendly format that is ready for deployment.\n",
"[BentoML](http://bentoml.ai) is an open source framework for serving and deploying machine learning models. It provides high-level APIs for defining a prediction service and packaging trained models, source code, dependencies, and configurations into a production-system-friendly format that is ready for production deployment.\n",
"\n",
"This is a quick tutorial on how to use BentoML to create a prediction service with a trained sklearn model, serve the model via a REST API server, and deploy it to [AWS Lambda](https://aws.amazon.com/lambda/) as a serverless endpoint.\n",
"\n",
@@ -133,15 +133,15 @@
"# 2) `pack` it with required artifacts\n",
"svc = IrisClassifier.pack(model=clf)\n",
"\n",
"# 3) save BentoSerivce to file archive\n",
"# 3) save BentoService to a BentoML bundle\n",
"saved_path = svc.save()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_That's it._ You've just created your first Bento. It's a versioned file archive, containing the BentoService you defined, including the trained model, dependencies and configurations etc. You can load back in a saved Bento file from other computers or servers using the `bentoml.load` API demonstrated later in this notebook."
"_That's it._ You've just created your first BentoML Bundle. It's a versioned file archive, containing the BentoService you defined, including the trained model, dependencies, and configurations, everything it needs to deploy the exact same service in production."
]
},
{
@@ -150,7 +150,7 @@
"source": [
"## Model Serving via REST API\n",
"\n",
"For exposing your model as an HTTP API endpoint, you can simply use the `bentoml serve` command. This allows application developers to easily integrate with the ML model you are developing.\n",
"Use the `bentoml serve` command to start a REST API server from a saved BentoML bundle. This allows application developers to easily integrate with the ML model you are developing.\n",
"\n",
"Note that REST API serving **does not work in Google Colab**, since Colab's VM cannot be reached from your browser. You may download the notebook and run it locally to try out the BentoML API server."
]
@@ -193,9 +193,7 @@
"## Run REST API server with Docker\n",
"\n",
"BentoML supports building a Docker image for your REST API model server.\n",
"Simply use the archive folder as the docker build context:\n",
"\n",
"Note that `docker` is __not available in Google Colab__, download the notebook, ensure docker is installed and try it locally."
"Simply use the BentoML bundle directory as the docker build context:"
]
},
{
@@ -211,6 +209,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `docker` is __not available in Google Colab__; download the notebook, ensure docker is installed, and try it locally.\n",
"\n",
"Next, you can `docker push` the image to your choice of registry for deployment,\n",
"or run it locally for development and testing:"
]
@@ -253,7 +253,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## \"pip install\" a BentoML archive\n",
"## \"pip install\" a BentoML bundle\n",
"\n",
"BentoML also supports distributing a BentoService as a PyPI package, with the\n",
"generated `setup.py` file. A Bento directory can be installed with `pip`:"
@@ -456,7 +456,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, you can deploy the BentoML service archive you just created to AWS Lambda with one command:"
"Now, you can deploy the BentoML bundle you just created to AWS Lambda with one command:"
]
},
{
@@ -528,7 +528,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, BentoML uses a small local database to track your active deployments. For team settings, you can host a shared Yatai Server for the team to manage all model serving deployments."
"BentoML by default stores deployment metadata on the local machine. For team settings, we recommend hosting a shared BentoML Yatai server for your entire team to track all the BentoML bundles and deployments they've created in a central place."
]
},
{
@@ -2,4 +2,4 @@
set -e

GIT_ROOT=$(git rev-parse --show-toplevel)
sphinx-build -b html $GIT_ROOT/docs $GIT_ROOT/built-docs
sphinx-build $GIT_ROOT/docs $GIT_ROOT/built-docs
