Deploying a containerized PyTorch model to AWS Lambda with ECR (Elastic Container Registry)

lambda-pytorch

Tutorial

  1. Run sam build
  2. Run sam deploy --guided --stack-name lambda-pytorch
  3. Choose the same AWS Region in which you created the Amazon ECR repository (us-east-2).
  4. Enter the image repository URI (repositoryUri) of the Amazon ECR repository for the function.
  5. Keep the defaults for Confirm changes before deploy and Allow SAM CLI IAM role creation.
  6. When prompted pytorchEndpoint may not have authorization defined, Is this okay?, answer y.
  7. Keep the defaults for the remaining prompts.
  8. Test the API:
curl --header "Content-Type: application/json" --request POST --data '{"sentence": "Bonjour Pierre."}' <API_GATEWAY_URL>

This project contains source code and supporting files for a serverless application for classifying handwritten digits using a Machine Learning model in PyTorch. It includes the following files and folders:

  • app/app.py - Code for the application's Lambda function, including the ML inference code (a rough sketch of such a handler is shown after this list).
  • app/Dockerfile - The Dockerfile used to build the container image.
  • app/model - A simple PyTorch model for classifying handwritten digits, trained on the MNIST dataset.
  • app/requirements.txt - The pip requirements installed during the container build.
  • events - Invocation events that you can use to invoke the function.
  • template.yaml - A template that defines the application's AWS resources.
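
The sketch below illustrates what such a handler could look like. It is not the repository's exact code: the model filename, the request payload (a base64-encoded image under an "image" key), and the response shape are assumptions made for illustration only.

# Hypothetical sketch of app/app.py, assuming a TorchScript model and a base64 image payload.
import base64
import io
import json

import torch
from PIL import Image
from torchvision import transforms

# Load the model once at import time so warm Lambda invocations reuse it.
model = torch.jit.load("model/mnist_model.pt")  # assumed model path inside the container image
model.eval()

# Standard MNIST preprocessing: grayscale, 28x28, normalized with the usual MNIST statistics.
preprocess = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])

def lambda_handler(event, context):
    # With API Gateway proxy integration, the POST body arrives as a JSON string.
    body = json.loads(event["body"])
    image = Image.open(io.BytesIO(base64.b64decode(body["image"])))
    tensor = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        digit = model(tensor).argmax(dim=1).item()
    return {
        "statusCode": 200,
        "body": json.dumps({"predicted_digit": digit}),
    }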

The application uses several AWS resources, including Lambda functions and an API Gateway API. These resources are defined in the template.yaml file in this project. You can update the template to add AWS resources through the same deployment process that updates your application code.

Deploy the sample application

The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing Lambda applications. It uses Docker to run your functions in an Amazon Linux environment that matches Lambda. It can also emulate your application's build environment and API.

To use the SAM CLI, you need the SAM CLI and Docker installed. Docker is also what the SAM CLI uses to build the container image and to run the function locally for testing.

To build and deploy your application for the first time, run the following in your shell:

sam build
sam deploy --guided

The first command builds a Docker image from the Dockerfile and copies your application's source into that image. The second command packages and deploys your application to AWS, with a series of prompts:

  • Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name.
  • AWS Region: The AWS region you want to deploy your app to.
  • Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.
  • Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn't provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.
  • Save arguments to samconfig.toml: If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

You can find your API Gateway Endpoint URL in the output values displayed after deployment.

Use the SAM CLI to build and test locally

Build your application with the sam build command.

lambda-pytorch$ sam build

The SAM CLI builds a Docker image from the Dockerfile and then installs the dependencies defined in app/requirements.txt inside the Docker image. The processed template file is saved in the .aws-sam/build folder.

Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Test events are included in the events folder in this project.

Run functions locally and invoke them with the sam local invoke command.

lambda-pytorch$ sam local invoke InferenceFunction --event events/event.json
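
For a quick check without going through SAM, you can also import the handler and call it with a hand-built event. The snippet below is a sketch under the same assumptions as the handler sketch above (a handler named lambda_handler and a base64 image payload); the event shown is a simplified stand-in for the full API Gateway proxy event.

# Hypothetical local smoke test for the handler sketched earlier (paths and names are assumptions).
import base64
import json

from app.app import lambda_handler  # assumes this is run from the repository root

with open("digit.png", "rb") as f:  # any small image of a handwritten digit
    payload = {"image": base64.b64encode(f.read()).decode("utf-8")}

# Minimal stand-in for the API Gateway proxy event that SAM would pass to the handler.
event = {
    "httpMethod": "POST",
    "path": "/classify_digit",
    "body": json.dumps(payload),
}

response = lambda_handler(event, None)
print(response["statusCode"], response["body"])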

The SAM CLI can also emulate your application's API. Use the sam local start-api command to run the API locally on port 3000.

lambda-pytorch$ sam local start-api
lambda-pytorch$ curl --request POST http://localhost:3000/classify_digit

The SAM CLI reads the application template to determine the API's routes and the functions that they invoke. The Events property on each function's definition includes the route and method for each path.

      Events:
        Inference:
          Type: Api
          Properties:
            Path: /classify_digit
            Method: post

Add a resource to your application

The application template uses AWS Serverless Application Model (AWS SAM) to define application resources. AWS SAM is an extension of AWS CloudFormation with a simpler syntax for configuring common serverless application resources such as functions, triggers, and APIs. For resources not included in the SAM specification, you can use standard AWS CloudFormation resource types.

Fetch, tail, and filter Lambda function logs

To simplify troubleshooting, SAM CLI has a command called sam logs. sam logs lets you fetch logs generated by your deployed Lambda function from the command line. In addition to printing the logs on the terminal, this command has several nifty features to help you quickly find the bug.

NOTE: This command works for all AWS Lambda functions; not just the ones you deploy using SAM.

lambda-pytorch$ sam logs -n InferenceFunction --stack-name lambda-pytorch --tail

You can find more information and examples about filtering Lambda function logs in the SAM CLI Documentation.

Cleanup

To delete the sample application that you created, use the AWS CLI. Assuming you used your project name for the stack name, you can run the following:

aws cloudformation delete-stack --stack-name lambda-pytorch

Resources

See the AWS SAM developer guide for an introduction to the SAM specification, the SAM CLI, and serverless application concepts.

Next, you can use the AWS Serverless Application Repository to deploy ready-to-use apps that go beyond hello-world samples and learn how their authors developed them: see the AWS Serverless Application Repository main page.
