
# MLOps with AML or ADB

This repository compares end-to-end ML pipelines built in Azure Machine Learning (AML) and Azure Databricks (ADB).

We'll use a text-classification task to show how the pipelines can be set up in each service. At the end, we'll deploy an endpoint serving the model trained in both services; a minimal sketch of the shared training step follows below.
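As a rough illustration of that shared training step, the sketch below trains a simple TF-IDF + logistic-regression text classifier and logs it with MLflow, which both AML and ADB support as a tracking backend. The CSV path and the column names (`tweet`, `target`) are assumptions for illustration, not taken from this repository.

```python
# Hedged sketch: train a text classifier and log it with MLflow.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("insults.csv")  # hypothetical file name
X_train, X_test, y_train, y_test = train_test_split(
    df["tweet"], df["target"], test_size=0.2, random_state=42
)

with mlflow.start_run():
    model = Pipeline(
        [
            ("tfidf", TfidfVectorizer(max_features=20_000)),
            ("clf", LogisticRegression(max_iter=1000)),
        ]
    )
    model.fit(X_train, y_train)

    # Log a metric and the fitted pipeline so either service can deploy it later.
    score = f1_score(y_test, model.predict(X_test), average="weighted")
    mlflow.log_metric("f1_weighted", score)
    mlflow.sklearn.log_model(model, artifact_path="model")
```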

## Data

https://www.kaggle.com/ayushggarg/all-trumps-twitter-insults-20152021
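One way to pull the dataset locally is via the Kaggle API, assuming it is configured (`~/.kaggle/kaggle.json`). The exact CSV file name inside the downloaded archive is not assumed here; whatever CSV is extracted gets loaded.

```python
# Hedged sketch: download the Kaggle dataset and load the extracted CSV.
import glob

import kaggle
import pandas as pd

kaggle.api.dataset_download_files(
    "ayushggarg/all-trumps-twitter-insults-20152021", path="data", unzip=True
)
csv_path = glob.glob("data/*.csv")[0]  # pick whichever CSV was extracted
df = pd.read_csv(csv_path)
print(df.head())
```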

## Deploy model

Latency requirements vary by use case. Azure Databricks can not only host but also serve a model via MLflow Model Serving, which spins up a job cluster in the background to host the model. For dev/test workloads, MLflow Model Serving or Azure Container Instances (ACI) is a good fit. For production workloads, deploying to Azure Kubernetes Service (AKS) is recommended, especially when high performance and low latency are required: AKS offers the lowest latency, scales better, can be fine-tuned, and gives better cost control. Find a comparison of the services here.
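The sketch below shows what the AML side of this could look like with the Azure ML SDK v1: the same registered model deployed to ACI for dev/test or to an existing AKS cluster for production. The names used here (`insult-classifier`, `score.py`, `conda.yml`, `aks-prod`) are placeholders, not taken from this repository.

```python
# Hedged sketch: deploy a registered model to ACI (dev/test) or AKS (production)
# with the Azure ML SDK v1.
from azureml.core import Environment, Model, Workspace
from azureml.core.compute import AksCompute
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice, AksWebservice

ws = Workspace.from_config()
model = Model(ws, name="insult-classifier")  # hypothetical registered model name
env = Environment.from_conda_specification("serve-env", "conda.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Dev/test: a small ACI container, no cluster to manage.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
aci_service = Model.deploy(ws, "insults-aci", [model], inference_config, aci_config)
aci_service.wait_for_deployment(show_output=True)

# Production: deploy to an existing AKS cluster for low latency and autoscaling.
aks_target = AksCompute(ws, "aks-prod")  # hypothetical cluster name
aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True)
aks_service = Model.deploy(
    ws, "insults-aks", [model], inference_config, aks_config,
    deployment_target=aks_target,
)
aks_service.wait_for_deployment(show_output=True)
```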

## Latencies
