---
title: Profile model memory and CPU usage (v1)
titleSuffix: Azure Machine Learning
description: Use CLI (v1) or SDK (v1) to profile your model before deployment. Profiling determines the memory and CPU usage of your model.
services: machine-learning
ms.service: machine-learning
ms.subservice: inferencing
ms.date: 11/04/2022
ms.topic: how-to
zone_pivot_groups: aml-control-methods
ms.reviewer: None
author: Blackmist
ms.author: larryfr
ms.custom: UpdateFrequency5, deploy, cliv1, sdkv1
---

# Profile your model to determine resource utilization

[!INCLUDE dev v1]

This article shows how to profile a machine learning model to determine how much CPU and memory you'll need to allocate for the model when you deploy it as a web service.

> [!IMPORTANT]
> This article applies to CLI v1 and SDK v1. This profiling technique is not available for v2 of either CLI or SDK.

[!INCLUDE cli v1 deprecation]

## Prerequisites

This article assumes that you have trained and registered a model with Azure Machine Learning. See the sample tutorial for an example of training and registering a scikit-learn model with Azure Machine Learning.
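If you don't yet have a registered model, the following is a minimal sketch of registering a locally saved model file with SDK v1. The file path `sklearn_regression_model.pkl` and the model name `my-sklearn-model` are hypothetical placeholders.

```python
from azureml.core import Workspace
from azureml.core.model import Model

# connect to the workspace described by the local config.json
ws = Workspace.from_config()

# register a locally saved model file (path and name are placeholders)
model = Model.register(workspace=ws,
                       model_path='sklearn_regression_model.pkl',
                       model_name='my-sklearn-model')
print(model.name, model.id, model.version)
```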

## Limitations

* Profiling will not work when the Azure Container Registry (ACR) for your workspace is behind a virtual network.

## Run the profiler

Once you have registered your model and prepared the other components necessary for its deployment, you can determine the CPU and memory the deployed service will need. Profiling tests the service that runs your model and returns information such as the CPU usage, memory usage, and response latency. It also provides a recommendation for the CPU and memory based on resource usage.

To profile your model, you need:

* A registered model.
* An inference configuration based on your entry script and inference environment definition.
* A single column tabular dataset, where each row contains a string representing sample request data.

> [!IMPORTANT]
> At this point we only support profiling of services that expect their request data to be a string, for example: string-serialized JSON, text, string-serialized image, and so on. The content of each row of the dataset (string) is put into the body of the HTTP request and sent to the service encapsulating the model for scoring.

> [!IMPORTANT]
> We only support profiling up to 2 CPUs in the ChinaEast2 and USGovArizona regions.

Below is an example of how you can construct an input dataset to profile a service that expects its incoming request data to contain serialized JSON. In this case, we created a dataset based on 100 instances of the same request data content. In real-world scenarios, we suggest that you use larger datasets containing various inputs, especially if your model's resource usage or behavior is input dependent.

::: zone pivot="py-sdk"

[!INCLUDE sdk v1]

```python
import json
from azureml.core import Datastore
from azureml.core.dataset import Dataset
from azureml.data import dataset_type_definitions

# ws is assumed to be an existing Workspace object,
# for example: ws = Workspace.from_config()

input_json = {'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                       [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}
# create a string that can be utf-8 encoded and
# put in the body of the request
serialized_input_json = json.dumps(input_json)
dataset_content = []
for i in range(100):
    dataset_content.append(serialized_input_json)
dataset_content = '\n'.join(dataset_content)

# write the sample requests to a local text file, one request per line
file_name = 'sample_request_data.txt'
with open(file_name, 'w') as f:
    f.write(dataset_content)

# upload the txt file created above to the datastore and create a dataset from it
data_store = Datastore.get_default(ws)
data_store.upload_files(['./' + file_name], target_path='sample_request_data')
datastore_path = [(data_store, 'sample_request_data' + '/' + file_name)]
sample_request_data = Dataset.Tabular.from_delimited_files(
    datastore_path, separator='\n',
    infer_column_types=True,
    header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)
sample_request_data = sample_request_data.register(workspace=ws,
                                                   name='sample_request_data',
                                                   create_new_version=True)
```
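As noted above, profiling results are more representative when the sample dataset covers a variety of inputs. The following is a minimal sketch of how the payloads could be varied before being written to the file and registered as shown in the previous example; the random perturbation here is purely illustrative.

```python
import json
import random

# build 100 sample requests whose payloads differ slightly (illustrative only)
dataset_content = []
for i in range(100):
    varied_input = {'data': [[random.randint(1, 10) for _ in range(10)],
                             [random.randint(1, 10) for _ in range(10)]]}
    dataset_content.append(json.dumps(varied_input))
dataset_content = '\n'.join(dataset_content)
```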

Once the dataset containing sample request data is ready, create an inference configuration. The inference configuration is based on the score.py entry script and the environment definition. The following example demonstrates how to create the inference configuration and run profiling:

```python
from azureml.core.model import InferenceConfig, Model
from azureml.core.dataset import Dataset

# model_id is the ID of the registered model to profile, and
# myenv is the environment definition used for inferencing
model = Model(ws, id=model_id)
inference_config = InferenceConfig(entry_script='path-to-score.py',
                                   environment=myenv)
input_dataset = Dataset.get_by_name(workspace=ws, name='sample_request_data')
profile = Model.profile(ws,
            'unique_name',
            [model],
            inference_config,
            input_dataset=input_dataset)

profile.wait_for_completion(True)

# see the result
details = profile.get_details()
```
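The `details` dictionary returned by `get_details()` contains the profiling results, including the `requestedCpu` and `requestedMemoryInGb` values referenced in the tip later in this article. A minimal sketch of inspecting the result follows; the full set of keys may vary, so this simply prints whatever was returned:

```python
# print every key/value pair returned by the profiling run
for key, value in details.items():
    print(f'{key}: {value}')

# for example, the CPU and memory values used later in this article
print(details['requestedCpu'], details['requestedMemoryInGb'])
```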

::: zone-end

::: zone pivot="cli"

[!INCLUDE cli v1]

The following command demonstrates how to profile a model by using the CLI:

```azurecli
az ml model profile -g <resource-group-name> -w <workspace-name> --inference-config-file <path-to-inf-config.json> -m <model-id> --idi <input-dataset-id> -n <unique-name>
```
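For example, a concrete invocation might look like the following, where the resource group, workspace, file name, model ID, and run name are hypothetical placeholder values; substitute your own dataset ID for `<input-dataset-id>`:

```azurecli
az ml model profile -g my-resource-group -w my-workspace --inference-config-file inferenceconfig.json -m my-sklearn-model:1 --idi <input-dataset-id> -n my-profile-run
```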

> [!TIP]
> To persist the information returned by profiling, use tags or properties for the model. Using tags or properties stores the data with the model in the model registry. The following examples demonstrate adding a new tag containing the `requestedCpu` and `requestedMemoryInGb` information:

```python
model.add_tags({'requestedCpu': details['requestedCpu'],
                'requestedMemoryInGb': details['requestedMemoryInGb']})
```

```azurecli
az ml model profile -g <resource-group-name> -w <workspace-name> --i <model-id> --add-tag requestedCpu=1 --add-tag requestedMemoryInGb=0.5
```
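A minimal sketch of storing the same values as model properties with SDK v1, using the `model` and `details` objects from the earlier example, is shown below; `add_properties` is the properties counterpart of `add_tags`.

```python
# properties, unlike tags, can't be removed after they're added
model.add_properties({'requestedCpu': details['requestedCpu'],
                      'requestedMemoryInGb': details['requestedMemoryInGb']})
```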

::: zone-end

## Next steps