serverless samples (#2260)
* serverless samples

* Update cli-automl-classification-task-bankmarketing-serverless.yml

* Create automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Create pipeline-serverless.yml

* Create pipeline_with_components_from_yaml_serverless.ipynb

* Update pipeline_with_components_from_yaml_serverless.ipynb

* Update pipeline_with_components_from_yaml_serverless.ipynb

* add metadata to notebook cell

* Update azureml-in-a-day.ipynb

* Update azureml-in-a-day.ipynb

* Update azureml-in-a-day.ipynb

* Update e2e-ml-workflow.ipynb

* Update e2e-ml-workflow.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update pipeline.ipynb

* Update pipeline.ipynb

* Update pipeline.ipynb

* Update quickstart.ipynb

* Update pipeline.ipynb

* Update pipeline.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update cli-automl-classification-task-bankmarketing-serverless.yml

* Update pipeline-serverless.yml

* Update hello-code.yml

* Update hello-data-uri-folder.yml

* Update hello-dataset.yml

* Update hello-git.yml

* Update hello-interactive.yml

* Update hello-iris-datastore-file.yml

* Update hello-iris-datastore-folder.yml

* Update hello-iris-file.yml

* Update hello-iris-folder.yml

* Update hello-iris-literal.yml

* Update hello-mlflow.yml

* Update hello-model-as-input.yml

* Update hello-model-as-output.yml

* Update hello-notebook.yml

* Create hello-pipeline-abc-serverless.yml

* Update hello-pipeline-abc-serverless.yml

* Create hello-pipeline-customize-output-file-serverless.yml

* Update hello-pipeline-customize-output-file-serverless.yml

* Create hello-pipeline-customize-output-folder-serverless.yml

* Update hello-pipeline-customize-output-file-serverless.yml

* Update hello-pipeline-customize-output-folder-serverless.yml

* Create hello-pipeline-default-artifacts-serverless.yml

* Update hello-pipeline-default-artifacts-serverless.yml

* Update hello-pipeline-default-artifacts-serverless.yml

* Create hello-pipeline-io-serverless.yml

* Update hello-pipeline-io-serverless.yml

* Create hello-pipeline-settings-serverless.yml

* Update hello-pipeline-settings-serverless.yml

* Create hello-pipeline-serverless.yml

* Update hello-pipeline-serverless.yml

* Update hello-pipeline-serverless.yml

* Update hello-world.yml

* Update hello-world-output.yml

* Update hello-world-output-data.yml

* Update hello-world-org.yml

* Update hello-world-input.yml

* Update hello-world-env-var.yml

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update cli-automl-classification-task-bankmarketing-serverless.yml

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update cli-automl-forecasting-task-github-dau.yml

* Update cli-automl-forecasting-orange-juice-sales.yml

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update hello-sweep.yml

* Update train-model.ipynb

* Update e2e-ml-workflow.ipynb

* Update hello-automl-job-basic.yml

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update pipeline.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

---------

Co-authored-by: Sheri Gilley <sgilley@microsoft.com>
vijetajo and sdgilley committed Jun 21, 2023
1 parent 7c695d9 commit 867d4c2
Showing 42 changed files with 3,285 additions and 1,904 deletions.
@@ -0,0 +1,53 @@
$schema: https://azuremlsdk2.blob.core.windows.net/preview/0.0.1/autoMLJob.schema.json
type: automl
experiment_name: dpv2-cli-automl-classifier-experiment
description: A Classification job using bank marketing
# Serverless compute is used to run this AutoML job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.

task: classification
log_verbosity: debug
primary_metric: accuracy

target_column_name: "y"

#validation_data_size: 0.20
#n_cross_validations: 5
#test_data_size: 0.1

training_data:
  path: "./training-mltable-folder"
  type: mltable
validation_data:
  path: "./validation-mltable-folder"
  type: mltable
test_data:
  path: "./test-mltable-folder"
  type: mltable

limits:
  timeout_minutes: 180
  max_trials: 40
  max_concurrent_trials: 5
  trial_timeout_minutes: 20
  enable_early_termination: true
  exit_score: 0.92

featurization:
  mode: custom
  transformer_params:
    imputer:
      - fields: ["job"]
        parameters:
          strategy: most_frequent
  blocked_transformers:
    - WordEmbedding
training:
  enable_model_explainability: true
  allowed_training_algorithms:
    - gradient_boosting
    - logistic_regression
# Resources to run this serverless job
resources:
  instance_type: Standard_E4s_v3
  instance_count: 5
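
For context, a spec like the new serverless AutoML YAML above is submitted with the Azure ML CLI v2. A minimal sketch, assuming the command runs from the folder that contains the YAML and its MLTable data folders; <RESOURCE_GROUP> and <WORKSPACE_NAME> are placeholders and are not part of the sample:

# Sketch only: submit the serverless AutoML job spec with the CLI v2 (az + ml extension).
# <RESOURCE_GROUP> and <WORKSPACE_NAME> are placeholders for your own workspace.
az ml job create \
  --file cli-automl-classification-task-bankmarketing-serverless.yml \
  --resource-group <RESOURCE_GROUP> \
  --workspace-name <WORKSPACE_NAME>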
@@ -4,7 +4,6 @@ type: automl
experiment_name: dpv2-cli-automl-forecasting-orange-juice-sales
# name: dpv2-sdk-forecasting-train-job-01
description: A Time-Series Forecasting job using orange juice sales dataset
compute: azureml:cpu-cluster
task: forecasting
primary_metric: normalized_root_mean_squared_error
log_verbosity: info
@@ -54,4 +53,4 @@ forecasting:
training:
enable_model_explainability: true
enable_stack_ensemble: false
blocked_training_algorithms: []
blocked_training_algorithms: []
@@ -4,7 +4,6 @@ type: automl
experiment_name: dpv2-cli-automl-forecasting-github-dau-experiment
description: A Time-Series Forecasting job using Github DAU dataset that trains only the TCNForecaster model.

compute: azureml:automl-gpu-cluster

task: forecasting
primary_metric: normalized_root_mean_squared_error
@@ -33,3 +32,6 @@ training:
enable_stack_ensemble: false
allowed_training_algorithms:
- TCNForecaster
resources:
  instance_type: Standard_E4s_v3
  instance_count: 4
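
As a usage note, the instance size and count added above can also be adjusted at submission time instead of editing the YAML. A minimal sketch, assuming the CLI v2's generic --set field override accepts this dotted path; placeholders as before:

# Sketch only: override the serverless instance count when creating the job.
# The dotted --set path is an assumption based on the CLI v2's generic overrides.
az ml job create \
  --file cli-automl-forecasting-task-github-dau.yml \
  --resource-group <RESOURCE_GROUP> \
  --workspace-name <WORKSPACE_NAME> \
  --set resources.instance_count=2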
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-automl/hello-automl-job-basic.yml
@@ -5,7 +5,6 @@ experiment_name: dpv2-cli-automl-classifier-experiment
# name: dpv2-cli-classifier-train-job-basic-01
description: A Classification job using bank marketing

compute: azureml:cpu-cluster

task: classification
primary_metric: accuracy
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-code.yml
@@ -3,4 +3,3 @@ command: ls
code: src
environment:
image: library/python:latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-data-uri-folder.yml
@@ -8,4 +8,3 @@ inputs:
path: azureml:local-folder-example@latest
mode: ro_mount
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-dataset.yml
@@ -8,4 +8,3 @@ inputs:
path: azureml:sampledata@latest
mode: ro_mount
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-git.yml
@@ -4,4 +4,3 @@ command: >-
code: src
environment:
image: library/python:latest
compute: azureml:cpu-cluster
3 changes: 1 addition & 2 deletions cli/jobs/basics/hello-interactive.yml
@@ -2,7 +2,6 @@ $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python hello-interactive.py && sleep 600
code: src
environment: azureml:AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest
compute: azureml:cpu-cluster

services:
my_vscode:
@@ -15,4 +14,4 @@ services:
# my_ssh:
# type: tensor_board
# ssh_public_keys: <paste the entire pub key content>
# nodes: all # Use the `nodes` property for a distributed job to run interactive services on all nodes. If `nodes` are not selected, by default, interactive applications are only enabled on the head node.
# nodes: all # Use the `nodes` property for a distributed job to run interactive services on all nodes. If `nodes` are not selected, by default, interactive applications are only enabled on the head node.
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-iris-datastore-file.yml
@@ -8,4 +8,3 @@ inputs:
type: uri_file
path: azureml://datastores/workspaceblobstore/paths/example-data/iris.csv
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-iris-datastore-folder.yml
@@ -9,4 +9,3 @@ inputs:
type: uri_folder
path: azureml://datastores/workspaceblobstore/paths/example-data/
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-iris-file.yml
@@ -8,4 +8,3 @@ inputs:
type: uri_file
path: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-iris-folder.yml
@@ -9,4 +9,3 @@ inputs:
type: uri_folder
path: wasbs://datasets@azuremlexamples.blob.core.windows.net/
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-iris-literal.yml
@@ -7,4 +7,3 @@ inputs:
type: uri_file
iris_csv: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-mlflow.yml
@@ -2,4 +2,3 @@ $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python hello-mlflow.py
code: src
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-model-as-input.yml
@@ -13,4 +13,3 @@ inputs:
type: mlflow_model # List of all model types here: https://learn.microsoft.com/azure/machine-learning/reference-yaml-model#yaml-syntax
path: ../../assets/model/mlflow-model
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-model-as-output.yml
@@ -20,4 +20,3 @@ outputs:
output_folder:
type: custom_model # mlflow_model,custom_model, triton_model
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-notebook.yml
@@ -5,4 +5,3 @@ command: |
code: src
environment:
image: library/python:latest
compute: azureml:cpu-cluster
27 changes: 27 additions & 0 deletions cli/jobs/basics/hello-pipeline-abc-serverless.yml
@@ -0,0 +1,27 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline_abc
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_compute: azureml:serverless

inputs:
  hello_string_top_level_input: "hello world"
jobs:
  a:
    command: echo hello ${{inputs.hello_string}}
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    inputs:
      hello_string: ${{parent.inputs.hello_string_top_level_input}}
  b:
    command: echo "world" >> ${{outputs.world_output}}/world.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    outputs:
      world_output:
  c:
    command: echo ${{inputs.world_input}}/world.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    inputs:
      world_input: ${{parent.jobs.b.outputs.world_output}}

@@ -0,0 +1,19 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline_customize_output_file
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_compute: azureml:serverless

outputs:
  output:
    type: uri_file
    path: azureml://datastores/workspaceblobstore/paths/${{name}}/hello_world.txt
    mode: rw_mount
jobs:
  hello_world:
    command: echo "hello" && echo "world" > ${{outputs.output}}
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    outputs:
      output: ${{parent.outputs.output}}
@@ -0,0 +1,15 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline_customize_output_folder
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_compute: azureml:serverless
jobs:
  hello_world:
    command: echo "hello" && echo "world" > ${{outputs.output}}/hello_world-folder.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    outputs:
      output:
        type: uri_folder
        path: azureml://datastores/workspaceblobstore/paths/${{name}}/
19 changes: 19 additions & 0 deletions cli/jobs/basics/hello-pipeline-default-artifacts-serverless.yml
@@ -0,0 +1,19 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline_default_artifacts
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_compute: azureml:serverless
jobs:
  hello_job:
    command: echo "hello" && echo "world" > ./outputs/world.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    outputs:
      artifacts:
  world_job:
    command: cat ${{inputs.world_input}}/outputs/world.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    inputs:
      world_input: ${{parent.jobs.hello_job.outputs.artifacts}}

19 changes: 19 additions & 0 deletions cli/jobs/basics/hello-pipeline-io-serverless.yml
@@ -0,0 +1,19 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline_io
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_compute: azureml:serverless
jobs:
  hello_job:
    command: echo "hello" && echo "world" > ${{outputs.world_output}}/world.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    outputs:
      world_output:
  world_job:
    command: cat ${{inputs.world_input}}/world.txt
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1
    inputs:
      world_input: ${{parent.jobs.hello_job.outputs.world_output}}

15 changes: 15 additions & 0 deletions cli/jobs/basics/hello-pipeline-serverless.yml
@@ -0,0 +1,15 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_compute: azureml:serverless
jobs:
  hello_job:
    command: echo "hello"
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
  world_job:
    command: echo "world"
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest

15 changes: 15 additions & 0 deletions cli/jobs/basics/hello-pipeline-settings-serverless.yml
@@ -0,0 +1,15 @@
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline_settings
# Serverless compute is used to run this pipeline job.
# Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you.
settings:
  default_datastore: azureml:workspaceblobstore
  default_compute: azureml:serverless
jobs:
  hello_job:
    command: echo 202204190 & echo "hello"
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1
  world_job:
    command: echo 202204190 & echo "hello"
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-sweep.yml
@@ -10,7 +10,6 @@ trial:
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
inputs:
A: 0.5
compute: azureml:cpu-cluster
sampling_algorithm: random
search_space:
B:
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-world-env-var.yml
@@ -2,6 +2,5 @@ $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: echo $hello_env_var
environment:
image: library/python:latest
compute: azureml:cpu-cluster
environment_variables:
hello_env_var: "hello world"
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-world-input.yml
@@ -7,4 +7,3 @@ environment:
inputs:
hello_string: "hello world"
hello_number: 42
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-world-org.yml
@@ -2,7 +2,6 @@ $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: echo "hello world"
environment:
image: library/python:latest
compute: azureml:cpu-cluster
tags:
hello: world
display_name: hello-world-example
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-world-output-data.yml
@@ -4,4 +4,3 @@ outputs:
hello_output:
environment:
image: python
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-world-output.yml
@@ -2,4 +2,3 @@ $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: echo "hello world" > ./outputs/helloworld.txt
environment:
image: library/python:latest
compute: azureml:cpu-cluster
1 change: 0 additions & 1 deletion cli/jobs/basics/hello-world.yml
@@ -2,4 +2,3 @@ $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: echo "hello world"
environment:
image: library/python:latest
compute: azureml:cpu-cluster
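
With the compute line removed here (and in the other hello-* jobs above), the job is expected to fall back to serverless compute when submitted. Below is a minimal sketch of submitting the updated spec, plus an optional override that pins it back to a named cluster at submission time without editing the YAML; the --set override is an assumption based on the CLI v2's generic field overrides, and the placeholders are not part of the sample.

# Sketch only: with no compute specified, this should run on serverless compute.
az ml job create \
  --file hello-world.yml \
  --resource-group <RESOURCE_GROUP> \
  --workspace-name <WORKSPACE_NAME>

# Sketch only: optionally target a cluster at submission time.
# The --set compute override is assumed; it is not part of this commit.
az ml job create \
  --file hello-world.yml \
  --resource-group <RESOURCE_GROUP> \
  --workspace-name <WORKSPACE_NAME> \
  --set compute=azureml:cpu-cluster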