The example notebooks in this folder showcase Amazon SageMaker's capabilities for deploying and monitoring machine learning models.
- Deploy Models with ModelBuilder using IN_PROCESS Mode (a minimal ModelBuilder sketch follows this list)
- Get Started Building and Deploying Models with ModelBuilder
- A/B Testing with Amazon SageMaker
- Faster autoscaling on Amazon SageMaker real-time endpoints (Application Autoscaling)
- Faster autoscaling on Amazon SageMaker real-time endpoints with inference components (Application Autoscaling)
- Faster autoscaling on Amazon SageMaker real-time endpoints (Step Scaling)
- Amazon SageMaker Asynchronous Inference
- Amazon SageMaker Asynchronous Inference using the SageMaker Python SDK
- SageMaker Real-time Dynamic Batching Inference with TorchServe
- Amazon SageMaker Batch Transform
- Use SageMaker Batch Transform for PyTorch Batch Inference
- SageMaker Batch Transform with TorchServe
- Amazon SageMaker Clarify Model Explainability Monitor for Batch Transform - JSON Lines Format
- Amazon SageMaker Clarify Model Bias Monitor for Batch Transform - JSON Format
- Amazon SageMaker Clarify Model Bias Monitor - JSON Lines Format
- Amazon SageMaker Clarify Model Bias Monitor - JSON Format
- Leverage deployment guardrails to update a SageMaker Inference endpoint using linear traffic shifting
- Leverage deployment guardrails to update a SageMaker Inference endpoint using rolling deployment
- Leverage deployment guardrails to update a SageMaker Inference endpoint using canary traffic shifting
- Host a Pretrained Model on SageMaker
- Inference Pipeline with Scikit-learn and Linear Learner
- Amazon SageMaker Cross Account Lineage Queries
- AWS Marketplace Product Usage Demonstration - Model Packages
- Amazon SageMaker Multi-Model Endpoints using TorchServe
- SageMaker Model Monitor with Batch Transform - Data Quality Monitoring On-Schedule
- SageMaker Model Monitor with Batch Transform - Model Quality Monitoring On-Schedule
- Amazon SageMaker Clarify Model Monitors
- BYOC LLM Monitoring: Bring Your Own Container Llama2 Multiple Evaluations Monitoring with SageMaker Model Monitor
- Amazon SageMaker Model Monitor
- Amazon SageMaker Model Quality Monitor
- Running multi-container endpoints on Amazon SageMaker
- Amazon SageMaker Multi-Model Endpoints using your own algorithm container
- SageMaker Serverless Inference
- Shadow Variant Experiments via API
- Triton on SageMaker - Deploying on an Inferentia instance type
- Run Multiple NLP Bert Models on GPU with Amazon SageMaker Multi-Model Endpoints (MME)
- Multiple Ensembles with GPU models using Amazon SageMaker in MME mode
- Triton on SageMaker - NLP Bert
- Serve PyTorch models with the Python Backend on GPU with Amazon SageMaker Hosting
- Triton TensorRT Sentence Transformer
- Amazon SageMaker XGBoost Bring Your Own Model
- SageMaker Inference Recommender
- Implement a SageMaker Real-time Single Model Endpoint (SME) for a TensorFlow Vision model on an NVIDIA Triton Server
- Deploy a TensorFlow Model using NVIDIA Triton on SageMaker
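
For quick orientation, the sketch below illustrates the ModelBuilder flow that the first two notebooks walk through in depth. It is a minimal illustration, not a substitute for the notebooks: it assumes a recent `sagemaker` Python SDK, and the `model`, `sample_input`, and `sample_output` objects are placeholders you would supply.

```python
# Minimal ModelBuilder sketch (assumes a recent `sagemaker` Python SDK).
# `model`, `sample_input`, and `sample_output` are placeholders.
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder
from sagemaker.serve import Mode

# SchemaBuilder infers the request/response (de)serialization contract
# from sample payloads.
schema_builder = SchemaBuilder(sample_input=sample_input, sample_output=sample_output)

builder = ModelBuilder(
    model=model,                    # an in-memory model object (placeholder)
    schema_builder=schema_builder,
    mode=Mode.IN_PROCESS,           # serve inside the local Python process
)

deployable_model = builder.build()     # packages the model for the chosen mode
predictor = deployable_model.deploy()  # IN_PROCESS: no SageMaker endpoint is created
print(predictor.predict(sample_input))
```

Switching `mode` to `Mode.LOCAL_CONTAINER` or `Mode.SAGEMAKER_ENDPOINT` keeps the same code path while moving from fast local iteration to a hosted real-time endpoint.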