diff --git a/contrib/html_demo/JupyterCode/3_deployment_to_azure_app_service.ipynb b/contrib/html_demo/JupyterCode/3_deployment_to_azure_app_service.ipynb deleted file mode 100644 index f5b1263a4..000000000 --- a/contrib/html_demo/JupyterCode/3_deployment_to_azure_app_service.ipynb +++ /dev/null @@ -1,559 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Licensed under the MIT License.\n", - "\n", - "# Deployment of a classification model to an Azure app service\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Table of contents\n", - "\n", - "1. [Introduction](#intro)\n", - "1. [Model retrieval](#model)\n", - "1. [Model deployment](#deploy)\n", - " 1. [Workspace retrieval](#workspace)\n", - " 1. [Model registration](#register)\n", - " 1. [Scoring script](#scoring)\n", - " 1. [Environment setup](#env)\n", - " 1. [Packaging model for Azure app](#package)\n", - " 1. [Deploying the web app](#app)\n", - "1. [Testing the web app](#test)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1. Introduction <a name=\"intro\"></a>\n", - "\n", - "This notebook is similar to notebook 21 in the classification scenario, but instead of deploying our model to a webservice in the AzureML resource group, we will deploy it to an app service. While an AzureML deployment is satisfactory for sharing your model, it cannot be interfaced with via a web application. Azure app service enables this by allowing us to set up our own security policies, particularly CORS policies that allow requests to be made to the model from other origins.\n", - "\n", - "We will:\n", - "- Register a model\n", - "- Create an environment that contains our model\n", - "- Deploy this environment as a web app on [Azure App Service](https://azure.microsoft.com/en-us/services/app-service/)\n", - "\n", - "This notebook follows the instructions in the article [Deploy a machine learning model to Azure App Service](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-app-service#deploy-image-as-a-web-app)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Pre-requisites \n", - "Like the other deployment notebooks, this notebook requires an Azure workspace. If you don't have one yet, first run through the short 20_azure_workspace_setup.ipynb notebook in the classification scenario to create it.\n", - "\n", - "It is not required that you have run notebook 21_deployment_on_azure_container_instances.ipynb from the classification scenario, as this notebook repeats the necessary work from there. It is still recommended that you have run and understood that notebook, since we only explain the work specific to deploying to an app service here.\n", - "\n", - "Additionally, you will need to have installed the [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest).\n", - "\n", - "### Library import \n", - "Throughout this notebook, we will be using a variety of libraries. We are listing them here for better readability."
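, - "\n", - "If any of these imports fail, the missing packages can usually be installed with pip. The exact versions to use should come from this repository's environment.yml file; the command below is only an illustrative sketch:\n", - "\n", - "```bash\n", - "# Illustrative only - match the versions pinned in environment.yml\n", - "pip install azureml-sdk \"fastai<2\"\n", - "```"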
- ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Azure ML SDK Version: 1.2.0\n" - ] - } - ], - "source": [ - "# For automatic reloading of modified libraries\n", - "%reload_ext autoreload\n", - "%autoreload 2\n", - "\n", - "# Regular python libraries\n", - "import os\n", - "import sys\n", - "\n", - "# fast.ai\n", - "from fastai.vision import models\n", - "\n", - "# Azure\n", - "import azureml.core\n", - "from azureml.core import Experiment, Workspace\n", - "from azureml.core.image import ContainerImage\n", - "from azureml.core.model import Model\n", - "from azureml.core.webservice import AciWebservice, Webservice\n", - "from azureml.exceptions import WebserviceException\n", - "\n", - "# Computer Vision repository\n", - "sys.path.extend([\".\", \"../..\", \"../../..\"])\n", - "# This \"sys.path.extend()\" statement allows us to move up the directory hierarchy \n", - "# and access the utils_cv package\n", - "from utils_cv.common.deployment import generate_yaml\n", - "from utils_cv.common.data import root_path \n", - "from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner\n", - "\n", - "# Check core SDK version number\n", - "print(f\"Azure ML SDK Version: {azureml.core.VERSION}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 2. Model retrieval and export <a name=\"model\"></a>" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "learn = model_to_learner(models.resnet18(pretrained=True), IMAGENET_IM_SIZE)\n", - "\n", - "output_folder = os.path.join(os.getcwd(), 'outputs')\n", - "model_name = 'im_classif_resnet18' # Name we will give our model both locally and on Azure\n", - "pickled_model_name = f'{model_name}.pkl'\n", - "os.makedirs(output_folder, exist_ok=True)\n", - "\n", - "learn.export(os.path.join(output_folder, pickled_model_name))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3. Model deployment on Azure <a name=\"deploy\"></a>\n", - "\n", - "As with other deployment notebooks, you will need to supply your own subscription id, resource group, workspace name, and workspace region for this notebook."
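, - "\n", - "If you are unsure of these values, the Azure CLI can help look some of them up. For example (assuming you have already signed in with `az login`):\n", - "\n", - "```bash\n", - "# Show the id of the currently selected subscription and list your resource groups\n", - "az account show --query id -o tsv\n", - "az group list -o table\n", - "```"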
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "subscription_id = \"YOUR_SUBSCRIPTION_ID_HERE\"\n", - "resource_group = \"YOUR_RESOURCE_GROUP_HERE\" \n", - "workspace_name = \"YOUR_WORKSPACE_NAME_HERE\" \n", - "workspace_region = \"YOUR_WORKSPACE_REGION_HERE\"" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.A Workspace retrieval <a name=\"workspace\"></a>" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from utils_cv.common.azureml import get_or_create_workspace\n", - "\n", - "ws = get_or_create_workspace(\n", - " subscription_id,\n", - " resource_group,\n", - " workspace_name,\n", - " workspace_region)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.B Model registration (without experiment) <a name=\"register\"></a>" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "model = Model.register(\n", - " model_path = os.path.join('outputs', pickled_model_name),\n", - " model_name = model_name,\n", - " tags = {\"Model\": \"Pretrained ResNet18\"},\n", - " description = \"Image classifier\",\n", - " workspace = ws\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.C Scoring script <a name=\"scoring\"></a>" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "scoring_script = \"score.py\"" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "%%writefile $scoring_script\n", - "# Copyright (c) Microsoft. All rights reserved.\n", - "# Licensed under the MIT license.\n", - "\n", - "import os\n", - "import json\n", - "\n", - "from base64 import b64decode\n", - "from io import BytesIO\n", - "\n", - "from azureml.core.model import Model\n", - "from fastai.vision import load_learner, open_image\n", - "\n", - "def init():\n", - " global model\n", - " model_path = Model.get_model_path(model_name='im_classif_resnet18')\n", - " # ! 
We cannot use the *model_name* variable here, otherwise the execution on Azure will fail\n", - "\n", - " model_dir_path, model_filename = os.path.split(model_path)\n", - " model = load_learner(model_dir_path, model_filename)\n", - "\n", - "\n", - "def run(raw_data):\n", - "\n", - " # Expects raw_data to be a JSON string with a 'data' key that holds a list of base64-encoded images\n", - " result = []\n", - " \n", - " for im_string in json.loads(raw_data)['data']:\n", - " im_bytes = b64decode(im_string)\n", - " try:\n", - " im = open_image(BytesIO(im_bytes))\n", - " pred_class, pred_idx, outputs = model.predict(im)\n", - " result.append({\"label\": str(pred_class), \"probability\": str(outputs[pred_idx].item())})\n", - " except Exception as e:\n", - " # If an image cannot be processed, return the error message in place of a prediction\n", - " result.append({\"label\": str(e), \"probability\": \"\"})\n", - " return result" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.D Environment setup <a name=\"env\"></a>\n", - "#### Using `azureml.core.environment` to build the Docker image used for deployment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Create a deployment-specific yaml file from classification/environment.yml\n", - "try:\n", - " generate_yaml(\n", - " directory=os.path.join(root_path()), \n", - " ref_filename='environment.yml',\n", - " needed_libraries=['pytorch', 'spacy', 'fastai', 'dataclasses'],\n", - " conda_filename='myenv.yml'\n", - " )\n", - " # Note: Take a look at the generate_yaml() function for details on how to create your yaml file from scratch\n", - "\n", - "except FileNotFoundError:\n", - " raise FileNotFoundError(\"The *environment.yml* file is missing. Please make sure to retrieve it from the GitHub repository\")\n", - " \n", - "\n", - "from azureml.core import Environment\n", - "from azureml.core.environment import DEFAULT_CPU_IMAGE\n", - "\n", - "cv_test_env = Environment.from_conda_specification(name=\"im_classif_resnet18\", file_path=\"myenv.yml\")\n", - "\n", - "# Required so that the inferencing packages come preinstalled in the resulting Docker image\n", - "cv_test_env.inferencing_stack_version=\"latest\"\n", - "\n", - "# Use the default CPU image and add a few required system packages\n", - "cv_test_env.docker.base_dockerfile=\"\"\"FROM {}\n", - "RUN apt-get update && \\\n", - " apt-get install -y libssl-dev build-essential libgl1-mesa-glx\n", - "\"\"\".format(DEFAULT_CPU_IMAGE)\n", - "\n", - "# Set docker.base_image to None so that base_dockerfile is used to build the image\n", - "cv_test_env.docker.base_image=None\n", - "\n", - "# Register the environment; it will then appear in the workspace's list of environments\n", - "cv_test_env.register(ws)\n", - "\n", - "# Building the Docker image for the first time takes a few minutes, so we start the build now\n", - "# and monitor its progress through the streamed log\n", - "cv_test_env.build(ws).wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.E Packaging the model for use in an Azure app service <a name=\"package\"></a>\n", - "\n", - "Now we break off from `21_deployment_on_azure_container_instances.ipynb`, as we need to create a package for our model that we will use to deploy an Azure app service. 
First, we set up an inference configuration that encapsulates information regarding the model's dependencies and entry script.\n", - "\n", - "Next, using [Model.package](https://docs.microsoft.com//python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#package-workspace--models--inference-config-none--generate-dockerfile-false-), we create, in the workspace we are deploying to, a Docker image that contains the model(s) we are using and any dependencies those models might have.\n", - "\n", - "We print out the package location, as we will need it below in order to create and deploy the app." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.model import InferenceConfig\n", - "\n", - "# A configuration containing our entry script and the dependencies required by the model\n", - "inference_config = InferenceConfig(entry_script='score.py', environment=cv_test_env)\n", - "\n", - "# Create a ModelPackage object containing information about our workspace, model, and configuration\n", - "package = Model.package(ws, [model], inference_config)\n", - "package.wait_for_creation(show_output=True)\n", - "print(\"Package location:\\n\", package.location)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.F Deploying the web app <a name=\"app\"></a>\n", - "\n", - "Finally, we are ready to deploy our model as an Azure app service. This process will be done using the Azure CLI." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Creating a resource group and app service plan\n", - "\n", - "You can skip this part if you have already created a resource group and app service plan for your app. Otherwise, we need to create them here. You should replace `<resource-group-name>` and `<app-service-plan-name>` with the names you wish to use. Also replace `\"LOCATION\"` with the region you would like to use for your app.\n", - "\n", - "```bash\n", - " az group create --name <resource-group-name> --location \"LOCATION\"\n", - " az appservice plan create --name <app-service-plan-name> --resource-group <resource-group-name> --sku B1 --is-linux\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Creating the web app\n", - "\n", - "Now we create the web app. You should replace `<resource-group-name>` and `<app-service-plan-name>` with the same names you used when you created your resource group and app service plan, and `<app-name>` with the name you would like to give the app. Additionally, you should replace `<package-location>` with the package location returned by the Python code above.\n", - "\n", - "```bash\n", - " az webapp create --resource-group <resource-group-name> --plan <app-service-plan-name> --name <app-name> --deployment-container-image-name <package-location>\n", - "```\n", - " \n", - "This will return a JSON response containing information about our app." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Retrieve login credentials for Azure Container Registry\n", - "\n", - "As part of creating the image for our app, it was uploaded to the Azure Container Registry (ACR). We will need the login credentials for this registry in order to activate our app. 
When we printed out the location of the package above, it gave us an output similar to:\n", - "\n", - "```bash\n", - " <acr-name>.azurecr.io/<image-name>\n", - "```\n", - "\n", - "Replace `<acr-name>` below with the registry name from the package location.\n", - "\n", - "```bash\n", - " az acr credential show --name <acr-name>\n", - "```\n", - " \n", - "This should give a JSON response similar to the one below:\n", - "\n", - "```bash\n", - " {\n", - " \"passwords\": [\n", - " {\n", - " \"name\": \"password\",\n", - " \"value\": \"Iv0lRZQ9762LUJrFiffo3P4sWgk4q+nW\"\n", - " },\n", - " {\n", - " \"name\": \"password2\",\n", - " \"value\": \"=pKCxHatX96jeoYBWZLsPR6opszr==mg\"\n", - " }\n", - " ],\n", - " \"username\": \"myml08024f78fd10\"\n", - " }\n", - "```\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Activating the web app\n", - "\n", - "At this point, our web app has been created, but it is not active. In order to activate it, we need to provide the credentials to the ACR that holds the app's image.\n", - "\n", - "Replace `<app-name>`, `<resource-group-name>`, and `<package-location>` as we have above, and `<acr-name>` with the registry name from the package location. Then, replace `<username>` and `<password>` with the credentials we retrieved above.\n", - "\n", - "```bash\n", - " az webapp config container set --name <app-name> --resource-group <resource-group-name> --docker-custom-image-name <package-location> --docker-registry-server-url https://<acr-name>.azurecr.io --docker-registry-server-user <username> --docker-registry-server-password <password>\n", - "```\n", - " \n", - "If the app name you chose is unavailable, you will need to provide a different name.\n", - "\n", - "This command returns a JSON response similar to the one below:\n", - "\n", - "```bash\n", - " [\n", - " {\n", - " \"name\": \"WEBSITES_ENABLE_APP_SERVICE_STORAGE\",\n", - " \"slotSetting\": false,\n", - " \"value\": \"false\"\n", - " },\n", - " {\n", - " \"name\": \"DOCKER_REGISTRY_SERVER_URL\",\n", - " \"slotSetting\": false,\n", - " \"value\": \"https://<acr-name>.azurecr.io\"\n", - " },\n", - " {\n", - " \"name\": \"DOCKER_REGISTRY_SERVER_USERNAME\",\n", - " \"slotSetting\": false,\n", - " \"value\": \"myml08024f78fd10\"\n", - " },\n", - " {\n", - " \"name\": \"DOCKER_REGISTRY_SERVER_PASSWORD\",\n", - " \"slotSetting\": false,\n", - " \"value\": null\n", - " },\n", - " {\n", - " \"name\": \"DOCKER_CUSTOM_IMAGE_NAME\",\n", - " \"value\": \"DOCKER|<package-location>\"\n", - " }\n", - " ]\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now our app will begin loading the image from ACR. This may take a while. We can monitor its progress using the log stream tab in the Azure portal.\n", - "\n", - "Once the app is deployed, you can find the hostname (the `defaultHostName` field in the output) using:\n", - "```bash\n", - " az webapp show --name <app-name> --resource-group <resource-group-name>\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Enabling cross-origin resource sharing (CORS)\n", - "\n", - "Lastly, we need to enable CORS, so our app can provide results to other websites that want to run the model. Replace `<app-name>` and `<resource-group-name>` as before, and `<allowed-origin>` with the origin you would like to allow resource sharing for (to enable CORS from all origins, you can enter `\"*\"`).\n", - "```bash\n", - " az webapp cors add -n <app-name> -g <resource-group-name> --allowed-origins <allowed-origin>\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 4. Test the web app <a name=\"test\"></a>\n", - "\n", - "We can run the below code to test that our app is deployed and functional. Here we select two images from the Microsoft image set, convert them into base64 strings, and send a POST request to our app. 
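\n", - "\n", - "If you prefer to smoke-test the endpoint from the command line first, a `curl` call along the lines below should return the same kind of JSON response. This is only a sketch: the app name is a placeholder, `payload.json` is a file you would create yourself containing `{\"data\": [\"<base64-encoded image string>\", ...]}`, and we assume the packaged image exposes its scoring route at `/score`.\n", - "\n", - "```bash\n", - "# Hypothetical smoke test - replace <app-name> and supply your own payload.json\n", - "curl -X POST \"http://<app-name>.azurewebsites.net/score\" -H \"Content-Type: application/json\" -d @payload.json\n", - "```\n", - "\n", - "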
You should replace the service_uri string with your app's scoring uri." - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "200\n", - "[{\"label\": \"water_bottle\", \"probability\": \"0.800184428691864\"}, {\"label\": \"water_bottle\", \"probability\": \"0.6857761144638062\"}]\n" - ] - } - ], - "source": [ - "from utils_cv.common.image import im2base64, ims2strlist\n", - "from utils_cv.common.data import data_path\n", - "import requests\n", - "import json\n", - "\n", - "# Extract test images paths\n", - "im_url_root = \"https://cvbp-secondary.z19.web.core.windows.net/images/\"\n", - "im_filenames = [\"cvbp_milk_bottle.jpg\", \"cvbp_water_bottle.jpg\"]\n", - "\n", - "for im_filename in im_filenames:\n", - " # Retrieve test images from our storage blob\n", - " r = requests.get(os.path.join(im_url_root, im_filename))\n", - "\n", - " # Copy test images to local data/ folder\n", - " with open(os.path.join(data_path(), im_filename), 'wb') as f:\n", - " f.write(r.content)\n", - "\n", - "# Extract local path to test images\n", - "local_im_paths = [os.path.join(data_path(), im_filename) for im_filename in im_filenames]\n", - "\n", - "# Convert images to json object\n", - "im_string_list = ims2strlist(local_im_paths)\n", - "\n", - "service_uri = \"YOUR_WEBAPP_URI_HERE\"\n", - "\n", - "payload = json.dumps({\"data\": im_string_list})\n", - "headers = {'Content-Type':'application/json'}\n", - "\n", - "resp = requests.post(service_uri, payload, headers=headers)\n", - "\n", - "print(resp.status_code)\n", - "print(resp.text)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -}