page_type | languages | products | description
---|---|---|---
sample | | | Learn how to deploy an R model as an Azure Machine Learning managed online endpoint
This folder contains the assets called from `deploy-r.sh` to deploy an R model as a managed online endpoint in Azure Machine Learning. This README explains how to modify the assets in this folder to deploy your own R model.

We deploy R models using a feature called custom containers, which lets you bring a Docker container and deploy it as a managed online endpoint. For R, we Dockerize your model using `plumber` and its associated Docker image. See the included `Dockerfile` and `plumber` script for more details.
To deploy your own model, do the following:

1. Save your model as an `.rda` or `.rds` file in the `scripts` folder in this directory. This directory is mounted to your Docker container when the container is deployed as an online endpoint, so you can change its contents without rebuilding the Docker container.
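For example, a model can be serialized to the `scripts` folder with `saveRDS`. This is a minimal sketch; the `lm` model and the file name `model.rds` are placeholders for your own model and naming:

```r
# Hypothetical example: fit a simple model on a built-in dataset and
# save it as an .rds file in the scripts folder.
model <- lm(dist ~ speed, data = cars)
saveRDS(model, file = "scripts/model.rds")

# The saved model can later be loaded back with:
# model <- readRDS("scripts/model.rds")
```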
1. Modify the third function in `plumber.R` to load the saved model and run it every time the endpoint is invoked. If your model takes more or fewer inputs, change the function signature accordingly. For example, for a model that takes three inputs, add the decorator line `@param c The third number to add` and update the signature to `function(a, b, c)`.
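As a sketch, a three-input scoring function in `plumber.R` might look like the following. The route name `/predict`, the model path, and the `predict()` call are assumptions; adapt them to your own `plumber.R` and model:

```r
#* Score the model with three inputs
#* @param a The first number to add
#* @param b The second number to add
#* @param c The third number to add
#* @post /predict
function(a, b, c) {
  # Load the saved model from the mounted scripts folder
  # (path is a placeholder; match it to where you saved your model).
  model <- readRDS("scripts/model.rds")

  # Query parameters arrive as strings, so convert them before scoring.
  inputs <- data.frame(a = as.numeric(a),
                       b = as.numeric(b),
                       c = as.numeric(c))
  predict(model, inputs)
}
```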
1. Follow the steps in our documentation to configure the CLI (v2). Then run `deploy-r.sh` (if running on a Linux machine). Alternatively, create the endpoint with `az ml online-endpoint create --name $ENDPOINT_NAME -f r-endpoint.yml`, then create the deployment with `az ml online-deployment create --name r-deployment --endpoint $ENDPOINT_NAME -f r-deployment.yml --all-traffic`.
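Run together, the two CLI calls from the step above look like this (the endpoint name `my-r-endpoint` is a placeholder; replace it with your own):

```shell
# Name of the endpoint to create (placeholder value).
export ENDPOINT_NAME="my-r-endpoint"

# Create the endpoint from its YAML definition...
az ml online-endpoint create --name $ENDPOINT_NAME -f r-endpoint.yml

# ...then create the deployment and route all traffic to it.
az ml online-deployment create --name r-deployment --endpoint $ENDPOINT_NAME -f r-deployment.yml --all-traffic
```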
1. You can now follow the steps in our documentation to send data to your deployed endpoint.
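For example, the endpoint can be invoked from the CLI. This assumes a request body saved as `sample_request.json`, which is a hypothetical file name:

```shell
# Send a scoring request to the deployed endpoint;
# sample_request.json is a placeholder for your own request file.
az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file sample_request.json
```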