page_type | languages | products | description
---|---|---|---
sample | | | Top-level directory for official Azure Machine Learning CLI sample code.

Welcome to the Azure Machine Learning examples repository!

## Prerequisites

- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
- A terminal. Install and set up the CLI (v2) before you begin (see the setup sketch below).
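
If the `ml` extension is not yet installed, a minimal setup sketch looks like the following. It assumes the standard Azure CLI is already installed; the angle-bracket values are placeholders for your own subscription, resource group, and workspace.

```bash
# Install the Azure Machine Learning (v2) extension for the Azure CLI.
az extension add -n ml -y

# Sign in and select the subscription to use.
az login
az account set --subscription "<SUBSCRIPTION_ID>"

# Optional: set defaults so --resource-group and --workspace-name can be omitted from later commands.
az configure --defaults group="<RESOURCE_GROUP>" workspace="<WORKSPACE_NAME>"
```

With defaults configured, the `az ml` commands shown in the sections below can be run from the root of this directory.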

## Scripts

**Jobs** (jobs)

path | description
---|---
jobs/pipelines-with-components/nyc_taxi_data_regression/job.yml | no description
jobs/pipelines/cifar-10/job.yml | no description
jobs/pipelines/nyc-taxi/job.yml | no description
jobs/single-step/dask/nyctaxi/job.yml | This sample shows how to run a distributed Dask job on Azure ML. The 24 GB NYC Taxi dataset is read in CSV format by a 4-node Dask cluster, processed, and then written as job output in Parquet format.
jobs/single-step/julia/iris/job.yml | Train a Flux model on the Iris dataset using the Julia programming language.
jobs/single-step/lightgbm/iris/job-sweep.yml | Run a hyperparameter sweep job for LightGBM on the Iris dataset.
jobs/single-step/lightgbm/iris/job.yml | Train a LightGBM model on the Iris dataset.
jobs/single-step/pytorch/cifar-distributed/job.yml | Train a basic convolutional neural network (CNN) with PyTorch on the CIFAR-10 dataset, distributed via PyTorch.
jobs/single-step/pytorch/iris/job.yml | Train a neural network with PyTorch on the Iris dataset.
jobs/single-step/pytorch/word-language-model/job.yml | Train a multi-layer RNN (Elman, GRU, or LSTM) on a language modeling task with PyTorch.
jobs/single-step/r/accidents/job.yml | Train a GLM using R on the accidents dataset.
jobs/single-step/r/iris/job.yml | Train an R model on the Iris dataset.
jobs/single-step/scikit-learn/diabetes/job.yml | Train a scikit-learn LinearRegression model on the Diabetes dataset.
jobs/single-step/scikit-learn/iris-notebook/job.yml | Train a scikit-learn SVM on the Iris dataset using a custom Docker container build, with a notebook run via papermill.
jobs/single-step/scikit-learn/iris/job-docker-context.yml | Train a scikit-learn SVM on the Iris dataset using a custom Docker container build.
jobs/single-step/scikit-learn/iris/job-sweep.yml | Sweep hyperparameters for training a scikit-learn SVM on the Iris dataset.
jobs/single-step/scikit-learn/iris/job.yml | Train a scikit-learn SVM on the Iris dataset.
jobs/single-step/spark/nyctaxi/job.yml | This sample shows how to run a single-node Spark job on Azure ML. The 47 GB NYC Taxi dataset is read in Parquet format by a 1-node Spark cluster, processed, and then written as job output in Parquet format.
jobs/single-step/tensorflow/mnist-distributed-horovod/job.yml | Train a basic neural network with TensorFlow on the MNIST dataset, distributed via Horovod.
jobs/single-step/tensorflow/mnist-distributed/job.yml | Train a basic neural network with TensorFlow on the MNIST dataset, distributed via TensorFlow.
jobs/single-step/tensorflow/mnist/job.yml | Train a basic neural network with TensorFlow on the MNIST dataset.
jobs/basics/hello-code.yml | no description
jobs/basics/hello-iris-datastore-file.yml | no description
jobs/basics/hello-iris-datastore-folder.yml | no description
jobs/basics/hello-iris-file.yml | no description
jobs/basics/hello-iris-folder.yml | no description
jobs/basics/hello-iris-literal.yml | no description
jobs/basics/hello-mlflow.yml | no description
jobs/basics/hello-notebook.yml | no description
jobs/basics/hello-pipeline-abc.yml | no description
jobs/basics/hello-pipeline-io.yml | no description
jobs/basics/hello-pipeline-settings.yml | no description
jobs/basics/hello-pipeline.yml | no description
jobs/basics/hello-sweep.yml | Hello sweep job example.
jobs/basics/hello-world-env-var.yml | no description
jobs/basics/hello-world-input.yml | no description
jobs/basics/hello-world-org.yml | 
jobs/basics/hello-world-output-data.yml | no description
jobs/basics/hello-world-output.yml | no description
jobs/basics/hello-world.yml | no description
jobs/pipelines-with-components/basics/1a_e2e_local_components/pipeline.yml | Dummy train-score-eval pipeline with local components.
jobs/pipelines-with-components/basics/1b_e2e_registered_components/pipeline.yml | E2E dummy train-score-eval pipeline with registered components.
jobs/pipelines-with-components/basics/1c_e2e_inline_components/pipeline.yml | E2E dummy train-score-eval pipeline with components defined inline in the pipeline job.
jobs/pipelines-with-components/basics/2a_basic_component/pipeline.yml | Hello World component example.
jobs/pipelines-with-components/basics/2b_component_with_input_output/pipeline.yml | Component with inputs and outputs.
jobs/pipelines-with-components/basics/3a_basic_pipeline/pipeline.yml | Basic pipeline job with 3 Hello World components.
jobs/pipelines-with-components/basics/3b_pipeline_with_data/pipeline.yml | no description
jobs/pipelines-with-components/basics/4a_local_data_input/pipeline.yml | Example of using data in a local folder as pipeline input.
jobs/pipelines-with-components/basics/4b_datastore_datapath_uri_folder/pipeline.yml | Example of using a data folder from a workspace datastore as pipeline input.
jobs/pipelines-with-components/basics/4c_datastore_datapath_uri_file/pipeline.yml | Example of using a data file from a workspace datastore as pipeline input.
jobs/pipelines-with-components/basics/4d_dataset_input/pipeline.yml | Example of using data from a Dataset as pipeline input.
jobs/pipelines-with-components/basics/4e_web_url_input/pipeline.yml | Example of using a file hosted at a web URL as pipeline input.
jobs/pipelines-with-components/basics/5a_env_public_docker_image/pipeline.yml | no description
jobs/pipelines-with-components/basics/5b_env_registered/pipeline.yml | no description
jobs/pipelines-with-components/basics/5c_env_conda_file/pipeline.yml | no description
jobs/pipelines-with-components/basics/6a_tf_hello_world/pipeline.yml | Prints the environment variable ($TF_CONFIG) useful for scripts running in a TensorFlow training environment.
jobs/pipelines-with-components/basics/6c_pytorch_hello_world/pipeline.yml | Prints the environment variables useful for scripts running in a PyTorch training environment.
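
Any of the job specifications above can be submitted with `az ml job create`. A minimal sketch, using `jobs/basics/hello-world.yml` from the table and assuming the workspace defaults set during CLI setup:

```bash
# Submit the hello-world job specification from the table above.
az ml job create --file jobs/basics/hello-world.yml

# Optionally stream the logs of a submitted job; the job name is printed in the create output.
az ml job stream --name <JOB_NAME>
```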

**Endpoints** (endpoints)

path | description
---|---
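
No endpoint specifications are listed above yet. For reference, endpoint YAML files are deployed with the `az ml online-endpoint` (or `az ml batch-endpoint`) commands; the sketch below assumes a hypothetical `endpoint.yml` specification.

```bash
# Create an online endpoint from a YAML specification (endpoint.yml is a hypothetical file name).
az ml online-endpoint create --file endpoint.yml
```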

**Resources** (resources)

path | description
---|---
resources/compute/cluster-basic.yml | no description
resources/compute/cluster-location.yml | no description
resources/compute/cluster-low-priority.yml | no description
resources/compute/cluster-minimal.yml | no description
resources/compute/cluster-ssh-password.yml | no description
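
Compute defined by these specifications can be created with `az ml compute create`; for example, using `cluster-basic.yml` from the table above:

```bash
# Create a compute cluster from the basic cluster specification.
az ml compute create --file resources/compute/cluster-basic.yml
```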

**Assets** (assets)

path | description
---|---
assets/dataset/cloud-file-https.yml | Dataset created from a file in the cloud using an https URL.
assets/dataset/cloud-file-wasbs.yml | Dataset created from a file in the cloud using a wasbs URL.
assets/dataset/cloud-file.yml | Dataset created from a file in the cloud.
assets/dataset/cloud-folder-https.yml | Dataset created from a folder in the cloud using an https URL.
assets/dataset/cloud-folder-wasbs.yml | Dataset created from a folder in the cloud using a wasbs URL.
assets/dataset/cloud-folder.yml | Dataset created from a folder in the cloud.
assets/dataset/iris-csv-example.yml | no description
assets/dataset/local-file.yml | Dataset created from a local file.
assets/dataset/local-folder.yml | Dataset created from a local folder.
assets/dataset/public-file-https.yml | Dataset created from a publicly available file using an https URL.
assets/environment/docker-context.yml | no description
assets/environment/docker-image-plus-conda.yml | Environment created from a Docker image plus a Conda environment.
assets/environment/docker-image.yml | Environment created from a Docker image.
assets/model/local-file.yml | Model created from a local file.
assets/model/local-mlflow.yml | Model created from a local MLflow model directory.
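
Assets are registered from their YAML specifications with the corresponding `az ml <asset-type> create` command. A sketch using files from the table above; note that the dataset command name depends on the installed `ml` extension version (`az ml data create` in current releases, `az ml dataset create` in earlier v2 previews).

```bash
# Register an environment and a model from the specifications listed above.
az ml environment create --file assets/environment/docker-image.yml
az ml model create --file assets/model/local-file.yml

# Register a dataset asset (command name depends on the installed ml extension version).
az ml data create --file assets/dataset/local-file.yml
```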

## Contents

directory | description
---|---
assets | YAML specifications for Azure Machine Learning assets (datasets, environments, models)
endpoints | YAML specifications for Azure Machine Learning endpoints
jobs | YAML specifications for Azure Machine Learning jobs
resources | YAML specifications for Azure Machine Learning resources (compute)
We welcome contributions and suggestions! Please see the contributing guidelines for details.
This project has adopted the Microsoft Open Source Code of Conduct. Please see the code of conduct for details.