Deploy and manage your own ML time series model (using LSTM) with CloudFormation, Lambda, S3, and SageMaker
Demonstration of an MLOps workflow for a Human Activity Recognition project with AWS SageMaker
Example Python samples for running your local code as a SageMaker training job using the @remote decorator
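The @remote decorator referenced above comes from the SageMaker Python SDK (`sagemaker.remote_function`) and turns a plain Python function call into a SageMaker training job. As a minimal stdlib-only sketch of the wrapping pattern involved, here is a hypothetical local stand-in (the real decorator serializes the function and submits it to AWS; this mock just runs it in-process and records the instance type it would have requested):

```python
import functools

def remote(func=None, *, instance_type="ml.m5.xlarge"):
    """Toy stand-in for sagemaker.remote_function.remote.
    The real decorator ships the wrapped function to a SageMaker
    training job; this mock runs it locally and notes the instance
    type that would have been requested."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # In the real SDK this is where the job gets submitted.
            wrapper.last_instance_type = instance_type
            return f(*args, **kwargs)
        return wrapper
    if func is not None:          # bare @remote usage
        return decorator(func)
    return decorator              # @remote(instance_type=...) usage

@remote(instance_type="ml.g5.xlarge")
def train(epochs):
    # Placeholder "training" body; real code would fit a model here.
    return {"epochs": epochs, "status": "completed"}
```

Calling `train(3)` then behaves like a normal function call, which is exactly the ergonomics the real decorator preserves while moving the work to a remote instance.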
A repo to train a Custom Deep Learning Model with Amazon SageMaker
Exploring Cloud NoSQL Databases in Content Management Systems
A repo containing an end-to-end SageMaker MLOps pipeline for model building
How to use different AWS frameworks together
A SageMaker end-to-end multi-model pipeline that can tune multiple models on separate datasets and deploy them to a single endpoint.
Deploy various ML models over the cloud or on the edge
Deploy GPT-2 PyTorch model with HuggingFace pretrained weights to AWS SageMaker
AccIo - Enterprise LLM: Unifying intelligence at your command!
Natural Language Processing
Elastic Data Factory
API for the Continue.dev extension to support LLM models hosted via AWS SageMaker
End-to-end MLOps solution for model retraining, training-metric alerts via email, and ML model deployment to Amazon SageMaker. Includes a CloudFormation script for infrastructure setup and a web app for visualizing historical model training metrics and performing inference.
I ran a simple test to see how deploying a machine learning model on AWS SageMaker, thereby turning it into an API, works. Since scikit-learn models require fewer dependencies than, e.g., TensorFlow models, I used one for this test, following a tutorial.
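Deployments like the scikit-learn test above typically supply an inference script that SageMaker's hosting container loads by convention, looking for handler functions such as `model_fn` (load the artifact) and `predict_fn` (run inference). A minimal sketch of those handlers, using a hypothetical JSON file of linear coefficients as the "model" instead of a real pickled scikit-learn estimator:

```python
import json
import os

def model_fn(model_dir):
    """SageMaker calls this once at container startup, passing the
    directory where model.tar.gz was extracted. Here the artifact is
    a JSON file of linear coefficients -- a stand-in for loading a
    real pickled estimator with joblib.load."""
    with open(os.path.join(model_dir, "model.json")) as f:
        return json.load(f)

def predict_fn(input_data, model):
    """SageMaker calls this per request with the deserialized input
    and the object returned by model_fn. Applies w . x + b to each
    feature vector."""
    w, b = model["weights"], model["bias"]
    return [sum(wi * xi for wi, xi in zip(w, x)) + b for x in input_data]
```

The endpoint then serves these handlers over HTTPS, which is what makes the deployed model usable as an API.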