---
title: 'Tutorial: AutoML - train object detection model'
titleSuffix: Azure Machine Learning
description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning CLI v2 and Python SDK v2.
services: machine-learning
ms.service: machine-learning
ms.subservice: automl
ms.topic: tutorial
author: ssalgadodev
ms.author: ssalgado
ms.reviewer: swatig
ms.date: 11/07/2023
ms.custom: devx-track-python, automl, devx-track-azurecli, update-code, build-2023
---

Tutorial: Train an object detection model with AutoML and Python

[!INCLUDE dev v2]

In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2. This object detection model identifies whether the image contains objects, such as a can, carton, milk bottle, or water bottle.

Automated ML accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.

You write code using the Python SDK in this tutorial and learn the following tasks:

[!div class="checklist"]

  • Download and transform data
  • Train an automated machine learning object detection model
  • Specify hyperparameter values for your model
  • Perform a hyperparameter sweep
  • Deploy your model
  • Visualize detections

Prerequisites

  • [!INCLUDE prereq-workspace]

  • Python 3.6 or 3.7 is supported for this feature

  • Download and unzip the odFridgeObjects.zip data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located, as well as information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format, as seen in the Convert the downloaded data to JSONL section of the notebook (a minimal conversion sketch also appears after this list).

  • Use a compute instance to follow this tutorial without further installation. (See how to create a compute instance.) Or install the CLI/SDK to use your own local environment.

    [!INCLUDE cli v2]

    This tutorial is also available in the azureml-examples repository on GitHub. If you wish to run it in your own local environment:

    [!INCLUDE sdk v2]

    This tutorial is also available in the azureml-examples repository on GitHub. If you wish to run it in your own local environment:

    • Use the following commands to install Azure Machine Learning Python SDK v2:

      • Uninstall previous preview version:
      pip uninstall azure-ai-ml
      • Install the Azure Machine Learning Python SDK v2:
      pip install azure-ai-ml azure-identity

      [!NOTE] Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks.
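If you want to script the conversion yourself, here's a minimal sketch, assuming the standard Pascal VOC layout of odFridgeObjects (an annotations folder of per-image .xml files next to the images folder) and the normalized-coordinate JSONL schema used later in this tutorial. The voc_to_jsonl helper and its datastore_path argument are illustrative names, not part of the official notebook.

```python
import json
import os
import xml.etree.ElementTree as ET

def voc_to_jsonl(dataset_dir, output_file, datastore_path):
    """Convert Pascal VOC annotations under dataset_dir/annotations to JSONL."""
    annotations_dir = os.path.join(dataset_dir, "annotations")
    with open(output_file, "w") as out:
        for xml_name in sorted(os.listdir(annotations_dir)):
            root = ET.parse(os.path.join(annotations_dir, xml_name)).getroot()
            width = int(root.find("size/width").text)
            height = int(root.find("size/height").text)
            labels = []
            for obj in root.findall("object"):
                box = obj.find("bndbox")
                difficult = obj.find("difficult")
                labels.append({
                    "label": obj.find("name").text,
                    # Normalize pixel coordinates to [0, 1], as expected below.
                    "topX": float(box.find("xmin").text) / width,
                    "topY": float(box.find("ymin").text) / height,
                    "bottomX": float(box.find("xmax").text) / width,
                    "bottomY": float(box.find("ymax").text) / height,
                    "isCrowd": bool(int(difficult.text)) if difficult is not None else False,
                })
            out.write(json.dumps({
                "image_url": datastore_path + root.find("filename").text,
                "image_details": {"format": "jpg", "width": width, "height": height},
                "label": labels,
            }) + "\n")
```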


Compute target setup

Note

To try serverless compute (preview), skip this step and proceed to Experiment setup.

You first need to set up a compute target to use for your automated ML model training. Automated ML models for image tasks require GPU SKUs.

This tutorial uses the NCsv3-series (with V100 GPUs) as this type of compute target uses multiple GPUs to speed up training. Additionally, you can set up multiple nodes to take advantage of parallelism when tuning hyperparameters for your model.

The following code creates a GPU compute of size Standard_NC24s_v3 with four nodes.

[!INCLUDE cli v2]

Create a .yml file with the following configuration.

$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json 
name: gpu-cluster
type: amlcompute
size: Standard_NC24s_v3
min_instances: 0
max_instances: 4
idle_time_before_scale_down: 120

To create the compute, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.

az ml compute create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

from azure.ai.ml.entities import AmlCompute
compute_name = "gpu-cluster"
cluster_basic = AmlCompute(
    name=compute_name,
    type="amlcompute",
    size="Standard_NC24s_v3",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,
)
ml_client.begin_create_or_update(cluster_basic)

This compute target is used later when you create the task-specific AutoML job.


Experiment setup

You can use an Experiment to track your model training jobs.

[!INCLUDE cli v2]

The experiment name can be provided using the experiment_name key as follows:

experiment_name: dpv2-cli-automl-image-object-detection-experiment

[!INCLUDE sdk v2]

The experiment name is used later when you create the task-specific AutoML job.

exp_name = "dpv2-image-object-detection-experiment"

Visualize input data

Once you have the input image data prepared in JSONL (JSON Lines) format, you can visualize the ground truth bounding boxes for an image. To do so, be sure you have matplotlib installed.

%pip install --upgrade matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.patches as patches
from PIL import Image as pil_image
import numpy as np
import json
import os

def plot_ground_truth_boxes(image_file, ground_truth_boxes):
    # Read the image and get its dimensions
    img_np = mpimg.imread(image_file)
    img = pil_image.fromarray(img_np.astype("uint8"), "RGB")
    img_w, img_h = img.size

    # Display the image
    fig, ax = plt.subplots(figsize=(12, 16))
    ax.imshow(img_np)
    ax.axis("off")

    label_to_color_mapping = {}

    for gt in ground_truth_boxes:
        label = gt["label"]

        xmin, ymin, xmax, ymax = gt["topX"], gt["topY"], gt["bottomX"], gt["bottomY"]
        topleft_x, topleft_y = img_w * xmin, img_h * ymin
        width, height = img_w * (xmax - xmin), img_h * (ymax - ymin)

        if label in label_to_color_mapping:
            color = label_to_color_mapping[label]
        else:
            # Generate a random color. If you want to use a specific color, you can use something like "red".
            color = np.random.rand(3)
            label_to_color_mapping[label] = color

        # Display bounding box
        rect = patches.Rectangle((topleft_x, topleft_y), width, height,
                                 linewidth=2, edgecolor=color, facecolor="none")
        ax.add_patch(rect)

        # Display label
        ax.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20)

    plt.show()

def plot_ground_truth_boxes_jsonl(image_file, jsonl_file):
    image_base_name = os.path.basename(image_file)
    ground_truth_data_found = False
    with open(jsonl_file) as fp:
        for line in fp.readlines():
            line_json = json.loads(line)
            filename = line_json["image_url"]
            if image_base_name in filename:
                ground_truth_data_found = True
                plot_ground_truth_boxes(image_file, line_json["label"])
                break
    if not ground_truth_data_found:
        print("Unable to find ground truth information for image: {}".format(image_file))

Using the above helper functions, for any given image, you can run the following code to display the bounding boxes.

image_file = "./odFridgeObjects/images/31.jpg"
jsonl_file = "./odFridgeObjects/train_annotations.jsonl"

plot_ground_truth_boxes_jsonl(image_file, jsonl_file)

Upload data and create MLTable

In order to use the data for training, upload the data to the default Blob Storage of your Azure Machine Learning workspace and register it as an asset. The benefits of registering data are:

  • Easy to share with other members of the team
  • Versioning of the metadata (location, description, etc.)
  • Lineage tracking

[!INCLUDE cli v2]

Create a .yml file with the following configuration.

$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
name: fridge-items-images-object-detection
description: Fridge-items images Object detection
path: ./data/odFridgeObjects
type: uri_folder

To upload the images as a data asset, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.

az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
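If you prefer to see the SDK step inline, here's a minimal sketch of the upload, assuming the local folder layout from the CLI example and the ml_client created for your workspace:

```python
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data

# Register the local image folder as a uri_folder data asset.
my_data = Data(
    path="./data/odFridgeObjects",
    type=AssetTypes.URI_FOLDER,
    name="fridge-items-images-object-detection",
    description="Fridge-items images Object detection",
)
uri_folder_data_asset = ml_client.data.create_or_update(my_data)
```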


The next step is to create an MLTable from your data in JSONL format, as shown below. MLTable packages your data into a consumable object for training.

paths:
  - file: ./train_annotations.jsonl
transformations:
  - read_json_lines:
        encoding: utf8
        invalid_lines: error
        include_path_column: false
  - convert_column_types:
      - columns: image_url
        column_type: stream_info

[!INCLUDE cli v2]

The following configuration creates training and validation data from the MLTable.

target_column_name: label
training_data:
  path: data/training-mltable-folder
  type: mltable
validation_data:
  path: data/validation-mltable-folder
  type: mltable

[!INCLUDE sdk v2]

You can create data inputs from training and validation MLTable with the following code:

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
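As a rough equivalent of that notebook cell, assuming the MLTable folder paths from the CLI example above:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Training and validation MLTable inputs for the AutoML job.
my_training_data_input = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")
my_validation_data_input = Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder")
```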


Configure your object detection experiment

To configure automated ML jobs for image-related tasks, create a task-specific AutoML job.

[!INCLUDE cli v2]

The following configuration sets the task type, primary metric, and compute target:

task: image_object_detection
primary_metric: mean_average_precision
compute: azureml:gpu-cluster

To use serverless compute (preview), replace the line compute: azureml:gpu-cluster with this code:

resources:
  instance_type: Standard_NC24s_v3
  instance_count: 4

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
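As a rough equivalent of that notebook cell, here's a sketch of the job configuration, assuming the compute, experiment name, and data inputs defined earlier:

```python
from azure.ai.ml import automl

# Create the AutoML image object detection job.
image_object_detection_job = automl.image_object_detection(
    compute=compute_name,
    experiment_name=exp_name,
    training_data=my_training_data_input,
    validation_data=my_validation_data_input,
    target_column_name="label",
    primary_metric="mean_average_precision",
)
```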

Note

To use serverless compute (preview), replace the line that sets compute (for example, compute=compute_name) with this code:

image_object_detection_job.resources = ResourceConfiguration(instance_type="Standard_NC24s_v3", instance_count=4)


Automatic hyperparameter sweeping for image tasks (AutoMode)

Important

This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In your AutoML job, you can perform an automatic hyperparameter sweep in order to find the optimal model (we call this functionality AutoMode). You only specify the number of trials; the hyperparameter search space, sampling method and early termination policy aren't needed. The system will automatically determine the region of the hyperparameter space to sweep based on the number of trials. A value between 10 and 20 will likely work well on many datasets.

[!INCLUDE cli v2]

limits:
  max_trials: 10
  max_concurrent_trials: 2

[!INCLUDE sdk v2]

# Trigger AutoMode
image_object_detection_job.set_limits(max_trials=10, max_concurrent_trials=2)

You can then submit the job to train an image model.

[!INCLUDE cli v2]

To submit your AutoML job, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.

az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

When you've configured your AutoML Job to the desired settings, you can submit the job.

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
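As a rough equivalent of that notebook cell, a sketch of the submission, assuming the ml_client and job objects from earlier:

```python
# Submit the configured AutoML job and stream its logs until completion.
returned_job = ml_client.jobs.create_or_update(image_object_detection_job)
ml_client.jobs.stream(returned_job.name)
```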


Manual hyperparameter sweeping for image tasks

In your AutoML job, you can specify the model architectures by using the model_name parameter and configure the settings to perform a hyperparameter sweep over a defined search space to find the optimal model.

In this example, we'll train an object detection model with yolov5 and fasterrcnn_resnet50_fpn, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with over 80 label categories.


Job limits

You can control the resources spent on your AutoML image training job by specifying timeout_minutes, max_trials, and max_concurrent_trials for the job in its limit settings. Refer to the detailed description of the job limit parameters.

[!INCLUDE cli v2]

limits:
  timeout_minutes: 60
  max_trials: 10
  max_concurrent_trials: 2

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
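The SDK equivalent is a sketch like the following, assuming the job object created earlier:

```python
# Cap total runtime, trial count, and concurrency for the sweep.
image_object_detection_job.set_limits(
    timeout_minutes=60,
    max_trials=10,
    max_concurrent_trials=2,
)
```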


The following code defines the search space in preparation for the hyperparameter sweep for each defined architecture, yolov5 and fasterrcnn_resnet50_fpn. In the search space, specify the range of values for learning_rate, optimizer, lr_scheduler, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for each architecture.

For the tuning settings, use random sampling to pick samples from this parameter space by specifying random as the sampling_algorithm. The job limits configured above tell automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up with four nodes. The more parameters the search space has, the more trials you need to find optimal models.

The Bandit early termination policy is also used. This policy terminates poorly performing trials, that is, trials that aren't within 20% slack of the best performing trial, which significantly saves compute resources.

[!INCLUDE cli v2]

sweep:
  sampling_algorithm: random
  early_termination:
    type: bandit
    evaluation_interval: 2
    slack_factor: 0.2
    delay_evaluation: 6
search_space:
  - model_name:
      type: choice
      values: [yolov5]
    learning_rate:
      type: uniform
      min_value: 0.0001
      max_value: 0.01
    model_size:
      type: choice
      values: [small, medium]

  - model_name:
      type: choice
      values: [fasterrcnn_resnet50_fpn]
    learning_rate:
      type: uniform
      min_value: 0.0001
      max_value: 0.001
    optimizer:
      type: choice
      values: [sgd, adam, adamw]
    min_size:
      type: choice
      values: [600, 800]

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
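As a rough equivalent of those notebook cells, a sketch of the sweep and search space configuration, mirroring the CLI YAML above:

```python
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import BanditPolicy, Choice, Uniform

# Random sampling with Bandit early termination.
image_object_detection_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(
        evaluation_interval=2, slack_factor=0.2, delay_evaluation=6
    ),
)

# One search space entry per model architecture.
image_object_detection_job.extend_search_space(
    [
        SearchSpace(
            model_name=Choice(["yolov5"]),
            learning_rate=Uniform(0.0001, 0.01),
            model_size=Choice(["small", "medium"]),
        ),
        SearchSpace(
            model_name=Choice(["fasterrcnn_resnet50_fpn"]),
            learning_rate=Uniform(0.0001, 0.001),
            optimizer=Choice(["sgd", "adam", "adamw"]),
            min_size=Choice([600, 800]),
        ),
    ]
)
```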


Once the search space and sweep settings are defined, you can then submit the job to train an image model using your training dataset.

[!INCLUDE cli v2]

To submit your AutoML job, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.

az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

When you've configured your AutoML Job to the desired settings, you can submit the job.

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]


When doing a hyperparameter sweep, it can be useful to visualize the different trials with the HyperDrive UI. From the UI of the main automl_image_job above, open the 'Child jobs' tab to find the HyperDrive parent job, and then open that job's own 'Child jobs' tab to see the individual trials.

Alternatively, you can retrieve the HyperDrive parent job directly and navigate to its 'Child jobs' tab:

[!INCLUDE cli v2]

CLI example not available; use the Python SDK.

[!INCLUDE sdk v2]

hd_job = ml_client.jobs.get(returned_job.name + '_HD')
hd_job

Register and deploy model

Once the job completes, you can register the model that was created from the best trial (the configuration that resulted in the best primary metric). You can register the model either after downloading it or by specifying the azureml path with the corresponding job ID.

Get the best trial

[!INCLUDE cli v2]

CLI example not available; use the Python SDK.

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
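As a rough equivalent of those notebook cells, a sketch of retrieving the best trial through MLflow, assuming the mlflow and azureml-mlflow packages are installed and returned_job is the submitted job from earlier:

```python
import mlflow
from mlflow.tracking.client import MlflowClient

# Point MLflow at the workspace tracking store.
mlflow.set_tracking_uri(
    ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
)
mlflow_client = MlflowClient()

# The parent run's tags record the ID of the best child run.
mlflow_parent_run = mlflow_client.get_run(returned_job.name)
best_child_run_id = mlflow_parent_run.data.tags["automl_best_child_run_id"]
best_run = mlflow_client.get_run(best_child_run_id)
```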

Register the model

Register the model either using the azureml path or your locally downloaded path.

[!INCLUDE cli v2]

 az ml model create --name od-fridge-items-mlflow-model --version 1 --path azureml://jobs/$best_run/outputs/artifacts/outputs/mlflow-model/ --type mlflow_model --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
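As a rough equivalent of that notebook cell, a sketch of registering the best run's MLflow model by its azureml job path, assuming best_run from the previous step:

```python
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Model

model = Model(
    path=f"azureml://jobs/{best_run.info.run_id}/outputs/artifacts/outputs/mlflow-model/",
    name="od-fridge-items-mlflow-model",
    type=AssetTypes.MLFLOW_MODEL,
)
registered_model = ml_client.models.create_or_update(model)
```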

After you register the model you want to use, you can deploy it by using a managed online endpoint.

Configure online endpoint

[!INCLUDE cli v2]

$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: od-fridge-items-endpoint
auth_mode: key

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
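The SDK equivalent is a sketch like the following, mirroring the CLI YAML above:

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

# Key-authenticated managed online endpoint.
endpoint = ManagedOnlineEndpoint(
    name="od-fridge-items-endpoint",
    auth_mode="key",
)
```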

Create the endpoint

Using the MLClient created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.

[!INCLUDE cli v2]

az ml online-endpoint create --file .\create_endpoint.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
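A minimal SDK sketch, assuming the endpoint object defined above:

```python
# Start endpoint creation; .result() blocks until it completes.
ml_client.begin_create_or_update(endpoint).result()
```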

We can also create a batch endpoint for batch inferencing on large volumes of data over a period of time. Check out the object detection batch scoring notebook for batch inferencing using the batch endpoint.

Configure online deployment

A deployment is a set of resources required for hosting the model that does the actual inferencing. We create a deployment for our endpoint using the ManagedOnlineDeployment class. You can use either GPU or CPU VM SKUs for your deployment cluster.

[!INCLUDE cli v2]

name: od-fridge-items-mlflow-deploy
endpoint_name: od-fridge-items-endpoint
model: azureml:od-fridge-items-mlflow-model@latest
instance_type: Standard_DS3_v2
instance_count: 1
liveness_probe:
    failure_threshold: 30
    success_threshold: 1
    timeout: 2
    period: 10
    initial_delay: 2000
readiness_probe:
    failure_threshold: 10
    success_threshold: 1
    timeout: 10
    period: 10
    initial_delay: 2000 

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
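As a rough equivalent of that notebook cell, a sketch mirroring the CLI YAML above, assuming registered_model from the registration step:

```python
from azure.ai.ml.entities import ManagedOnlineDeployment, ProbeSettings

deployment = ManagedOnlineDeployment(
    name="od-fridge-items-mlflow-deploy",
    endpoint_name="od-fridge-items-endpoint",
    model=registered_model.id,
    instance_type="Standard_DS3_v2",
    instance_count=1,
    liveness_probe=ProbeSettings(
        failure_threshold=30, success_threshold=1, timeout=2, period=10, initial_delay=2000
    ),
    readiness_probe=ProbeSettings(
        failure_threshold=10, success_threshold=1, timeout=10, period=10, initial_delay=2000
    ),
)
```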

Create the deployment

Using the MLClient created earlier, we'll create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.

[!INCLUDE cli v2]

az ml online-deployment create --file .\create_deployment.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
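A minimal SDK sketch, assuming the deployment object defined above:

```python
# Start deployment creation; .result() blocks until it completes.
ml_client.online_deployments.begin_create_or_update(deployment).result()
```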

Update traffic:

By default, the current deployment is set to receive 0% traffic. You can set the traffic percentage that the current deployment should receive. The sum of the traffic percentages of all deployments on one endpoint shouldn't exceed 100%.

[!INCLUDE cli v2]

az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fridge-items-mlflow-deploy=100' --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
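A minimal SDK sketch, assuming the endpoint and deployment objects from above:

```python
# Route all traffic to the new deployment.
endpoint.traffic = {"od-fridge-items-mlflow-deploy": 100}
ml_client.begin_create_or_update(endpoint).result()
```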

Test the deployment

[!INCLUDE cli v2]

CLI example not available; use the Python SDK.

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_inference_request)]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=dump_inference_request)]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=invoke_inference)]
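As a rough equivalent of those notebook cells, here's a hedged sketch of building and sending a scoring request. The request schema (an input_data payload with a base64-encoded image column) is an assumption about how this MLflow model expects to be invoked; check the notebook for the exact format.

```python
import base64
import json

# Build a scoring request from a sample image.
sample_image = "./odFridgeObjects/images/31.jpg"
with open(sample_image, "rb") as f:
    request_json = {
        "input_data": {
            "columns": ["image"],
            "data": [base64.encodebytes(f.read()).decode("utf-8")],
        }
    }

request_file_name = "sample_request_data.json"
with open(request_file_name, "w") as f:
    json.dump(request_json, f)

# Invoke the deployment with the serialized request.
resp = ml_client.online_endpoints.invoke(
    endpoint_name="od-fridge-items-endpoint",
    deployment_name="od-fridge-items-mlflow-deploy",
    request_file=request_file_name,
)
```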

Visualize detections

Now that you have scored a test image, you can visualize the bounding boxes for this image. To do so, be sure you have matplotlib installed.

[!INCLUDE cli v2]

CLI example not available; use the Python SDK.

[!INCLUDE sdk v2]

[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=visualize_detections)]
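As a rough equivalent of that notebook cell, a hedged sketch that parses the endpoint response and reuses the plot_ground_truth_boxes helper from "Visualize input data". The response schema assumed here (one result per image, each with a boxes list of box/label/score entries in normalized coordinates) is an assumption; adjust it to the response you actually receive.

```python
import json

detections = json.loads(resp)[0]
score_threshold = 0.8

# Keep confident detections and reshape them for the plotting helper.
filtered_boxes = [
    {
        "label": d["label"],
        "topX": d["box"]["topX"],
        "topY": d["box"]["topY"],
        "bottomX": d["box"]["bottomX"],
        "bottomY": d["box"]["bottomY"],
    }
    for d in detections["boxes"]
    if d["score"] >= score_threshold
]

plot_ground_truth_boxes(sample_image, filtered_boxes)
```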

Clean up resources

Don't complete this section if you plan on running other Azure Machine Learning tutorials.

If you don't plan to use the resources you created, delete them, so you don't incur any charges.

  1. In the Azure portal, select Resource groups on the far left.
  2. From the list, select the resource group you created.
  3. Select Delete resource group.
  4. Enter the resource group name. Then select Delete.

You can also keep the resource group but delete a single workspace. Display the workspace properties and select Delete.

Next steps

In this automated machine learning tutorial, you did the following tasks:

[!div class="checklist"]

  • Configured a workspace and prepared data for an experiment.
  • Trained an automated object detection model
  • Specified hyperparameter values for your model
  • Performed a hyperparameter sweep
  • Deployed your model
  • Visualized detections

Note

The fridge objects dataset is made available under the MIT License.