New pattern for SQS-Lambda-S3 using Terraform and Python #2702
Merged: julianwood merged 10 commits into aws-samples:main from sidd130:sidd130-feature-sqs-lambda-s3 on May 16, 2025.
Commits (10):
* 3c664d5 (sidd130) Added new pattern for SQS-Lambda-S3 using Terraform and Python
* aefc6b5 (sidd130) Merge branch 'aws-samples:main' into sidd130-feature-sqs-lambda-s3
* 65c44aa (sidd130) CHANGES
* 7abd24c (sidd130) CHANGES
* 0570946 (sidd130) Merge branch 'sidd130-feature-sqs-lambda-s3' of https://github.com/si…
* 8adcb49 (sidd130) Updated README to add relevant URLs to AWS documentation
* 70c246e (sidd130) Merge branch 'main' of https://github.com/sidd130/serverless-patterns…
* 9118ecb (sidd130) Merge branch 'main' of https://github.com/sidd130/serverless-patterns…
* 1658c92 (sidd130) Merge branch 'main' of https://github.com/sidd130/serverless-patterns…
* 47a54c9 (marakere) Update sqs-lambda-s3-terraform-python.json
File: README.md
# Amazon SQS to Amazon S3 integration using AWS Lambda

This pattern creates an SQS queue, a Lambda function, and an S3 bucket, along with an event source mapping for the Lambda function and the permissions these resources need to interact with one another.

An example of where this pattern is useful is **handling a large number of deployment requests asynchronously**. Because deployment requests vary in target application and payload, this pattern can serve as an _entry point_ component for deployment systems that receive and process many requests across multiple applications. Requests can be processed in batches, and the outcomes can be saved to the S3 bucket, which can in turn trigger notification workflows.

Learn more about this pattern at Serverless Land Patterns: [SQS to Lambda to S3](https://serverlessland.com/patterns/sqs-lambda-s3)

**Important:** this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.
## Requirements

* **AWS Resources**<br>
  Creating the AWS resources requires the following:
  * [AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) - An AWS account is required for creating the various resources. If you do not already have one, create an account and log in. The IAM user that you use must have sufficient permissions to make the necessary AWS service calls and manage AWS resources.
  * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - Required for cloning this repo.
  * [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started) - Terraform is an IaC (Infrastructure as Code) tool used for creating and managing AWS resources with a declarative configuration language.

* **Test Setup**<br>
  To test this integration, the following are required:
  * [Python](https://wiki.python.org/moin/BeginnersGuide/Download) - Required to run the test script.
  * [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) - Required to configure the credentials used by the boto3 module in the test script.
## Deployment Instructions

1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:
    ```
    git clone https://github.com/aws-samples/serverless-patterns
    ```
1. Change directory to the pattern directory:
    ```
    cd serverless-patterns/sqs-lambda-s3-terraform-python
    ```
1. Pick a unique name for the target S3 bucket, e.g. `my-bucket-20250329`. Replace the bucket name and AWS region in `variables.tf`:

    ```
    variable "aws_region_name" {
      type        = string
      default     = "ap-south-1"
      description = "AWS Region"
    }

    variable "s3_bucket_name" {
      type        = string
      default     = "my-bucket-20250329"
      description = "S3 Bucket name"
    }
    ```
1. Deploy the AWS resources through Terraform:

    ```
    terraform init -upgrade
    terraform fmt
    terraform validate
    terraform apply -auto-approve
    ```
## How it works

The AWS resources created as part of this integration are:

* Amazon SQS queue
* AWS Lambda function
* Amazon S3 bucket
* IAM policies and roles

The SQS queue is configured as a trigger for the Lambda function. Whenever a message is posted to the SQS queue, the Lambda function is invoked synchronously. This is useful in scenarios where the message requires some pre-processing before storage.
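For illustration, the snippet below sketches the SQS event envelope the Lambda function receives and how the handler derives the S3 object key from it. The envelope is simplified; real SQS events carry additional fields such as `receiptHandle`, `messageAttributes` and `eventSourceARN`.

```python
import json

# Simplified sketch of the SQS event envelope delivered to Lambda.
# Real events include more fields (receiptHandle, attributes, etc.).
event = {
    "Records": [
        {
            "eventSource": "aws:sqs",
            "body": json.dumps({"status": 200, "uniqueID": "123e4567-e89b-12d3-a456-426614174000"}),
        }
    ]
}

# The handler reads the raw message body and derives the S3 object key
# from the message's uniqueID field.
body = event["Records"][0]["body"]
object_key = "request_" + json.loads(body)["uniqueID"] + ".json"
print(object_key)  # request_123e4567-e89b-12d3-a456-426614174000.json
```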
## Testing

1. Before the test script can be executed, complete the following pre-steps:

    1. Create an IAM user - [https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)
    2. Grant permissions to the IAM user - [https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html)
    3. Generate an access key pair for the IAM user - [https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
    4. Configure the AWS CLI - [https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration)

1. Update the AWS region in the test script `send_sqs_event.py` to the region in which the SQS queue is created:

    ```
    config = Config(region_name='ap-south-1')
    ```

1. Run the test script:

    ```
    python send_sqs_event.py
    ```
1. Check the S3 bucket to see if a new JSON object has been created:

    ```
    aws s3 ls [bucket-name]
    ```

    Alternatively, the S3 bucket can be inspected in the AWS Console.
## Cleanup

1. Delete the AWS resources through Terraform:

    ```
    terraform apply -destroy -auto-approve
    ```
## Resources

* [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html)
* [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
* [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)

----
Copyright 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: MIT-0
File: handler.py
```python
import json
import os

import boto3
from botocore.config import Config


def lambda_handler(event, context):
    # Log the incoming SQS message body and invocation context.
    print(event['Records'][0]['body'])
    print(context)

    # Derive the S3 object key from the message's uniqueID field.
    request_body = event['Records'][0]['body']
    file_name = 'request_' + json.loads(request_body)["uniqueID"] + '.json'

    # Bucket name and region are injected via Lambda environment variables.
    aws_region = os.getenv('AWS_REGION_NAME')
    s3_bucket_ident = os.getenv('S3_BUCKET_NAME')
    config = Config(region_name=aws_region)
    s3_client = boto3.client('s3', config=config)

    # Store the raw message body as a JSON object in the bucket.
    resp = s3_client.put_object(
        Body=request_body.encode(encoding="utf-8"),
        Bucket=s3_bucket_ident,
        Key=file_name
    )
    print(resp)
```
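The handler's core logic can be exercised locally by injecting a fake S3 client in place of boto3. This is a sketch: `store_record` and `FakeS3` are illustrative names, not part of the pattern, and the fake client only records the arguments that `put_object` would receive.

```python
import json

def store_record(record_body: str, s3_client, bucket: str) -> str:
    """Mirror of the handler's core: derive the key and store the raw body."""
    key = "request_" + json.loads(record_body)["uniqueID"] + ".json"
    s3_client.put_object(Body=record_body.encode("utf-8"), Bucket=bucket, Key=key)
    return key

class FakeS3:
    """Minimal stand-in for boto3's S3 client that records put_object calls."""
    def __init__(self):
        self.calls = []
    def put_object(self, **kwargs):
        self.calls.append(kwargs)
        return {"ResponseMetadata": {"HTTPStatusCode": 200}}

fake = FakeS3()
body = json.dumps({"status": 200, "uniqueID": "abc-123"})
key = store_record(body, fake, "my-bucket-20250329")
print(key)  # request_abc-123.json
```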
File: Terraform configuration
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~>5.41"
    }
  }

  required_version = ">=1.2.0"
}

provider "aws" {
  region = var.aws_region_name
}

# Package the handler source into a zip archive for deployment
data "archive_file" "lambda_handler_zip_file" {
  type        = "zip"
  source_file = "${path.module}/handler.py"
  output_path = "${path.module}/sqs-lambda-s3.zip"
}

# Lambda function
resource "aws_lambda_function" "event-processor" {
  function_name    = "event-processor"
  filename         = data.archive_file.lambda_handler_zip_file.output_path
  source_code_hash = data.archive_file.lambda_handler_zip_file.output_base64sha256
  handler          = "handler.lambda_handler"
  runtime          = "python3.12"
  role             = aws_iam_role.event-processor-exec-role.arn
  environment {
    variables = {
      AWS_REGION_NAME = var.aws_region_name
      S3_BUCKET_NAME  = var.s3_bucket_name
    }
  }
}

# Lambda execution role
resource "aws_iam_role" "event-processor-exec-role" {
  name = "event-processor-exec-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
        Action = [
          "sts:AssumeRole"
        ]
      }
    ]
  })
}

# Lambda exec role policy: read from the queue, write to the bucket
resource "aws_iam_policy" "event-processor-policy" {
  name = "event-processor-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "sqs:ReceiveMessage",
          "sqs:GetQueueAttributes",
          "sqs:DeleteMessage"
        ]
        Resource = aws_sqs_queue.event-collector.arn
      },
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject"
        ]
        Resource = [
          aws_s3_bucket.event-storage.arn,
          "${aws_s3_bucket.event-storage.arn}/*",
        ]
      }
    ]
  })
}

# Attach policy to Lambda execution role for SQS and S3 permissions
resource "aws_iam_role_policy_attachment" "lambda-exec-role-policy" {
  policy_arn = aws_iam_policy.event-processor-policy.arn
  role       = aws_iam_role.event-processor-exec-role.name
}

# Attach managed policy to Lambda exec role for CloudWatch Logs permissions
resource "aws_iam_role_policy_attachment" "lambda-policy" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  role       = aws_iam_role.event-processor-exec-role.name
}

# Event source mapping to create a trigger for Lambda to read from SQS queue
resource "aws_lambda_event_source_mapping" "event-processor-event-src-map" {
  function_name    = aws_lambda_function.event-processor.arn
  event_source_arn = aws_sqs_queue.event-collector.arn
  enabled          = true
  depends_on = [
    aws_lambda_function.event-processor,
    aws_sqs_queue.event-collector,
    aws_sqs_queue_policy.event-collector-policy,
    aws_iam_policy.event-processor-policy
  ]
}

# SQS Queue
resource "aws_sqs_queue" "event-collector" {
  name             = "event-collector-queue"
  max_message_size = 2048
}

# SQS queue policy allowing the Lambda service to consume messages
resource "aws_sqs_queue_policy" "event-collector-policy" {
  queue_url = aws_sqs_queue.event-collector.url
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
        Action = [
          "sqs:ReceiveMessage",
          "sqs:GetQueueAttributes",
          "sqs:DeleteMessage"
        ]
        Resource = aws_sqs_queue.event-collector.arn
        Condition = {
          ArnEquals = {
            "aws:SourceArn" = aws_lambda_function.event-processor.arn
          }
        }
      }
    ]
  })

  depends_on = [
    aws_sqs_queue.event-collector,
    aws_lambda_function.event-processor
  ]
}

# S3 bucket
resource "aws_s3_bucket" "event-storage" {
  bucket        = var.s3_bucket_name
  force_destroy = true
  tags = {
    Name = "event-storage"
  }
}

# Bucket policy document allowing the Lambda service principal to put objects
data "aws_iam_policy_document" "bucket-policy" {
  statement {
    effect  = "Allow"
    actions = ["s3:PutObject"]
    principals {
      type = "Service"
      identifiers = [
        "lambda.amazonaws.com"
      ]
    }
    resources = [
      aws_s3_bucket.event-storage.arn,
      "${aws_s3_bucket.event-storage.arn}/*",
    ]
    condition {
      test     = "ArnEquals"
      variable = "aws:SourceArn"
      values   = [aws_lambda_function.event-processor.arn]
    }
  }
}

# Bucket policy
resource "aws_s3_bucket_policy" "event-storage-bucket-policy" {
  bucket = aws_s3_bucket.event-storage.id
  policy = data.aws_iam_policy_document.bucket-policy.json
}
```
File: send_sqs_event.py
```python
import json
import uuid

import boto3
from botocore.config import Config

# Use the same region as the deployed SQS queue.
config = Config(region_name='ap-south-1')
sqs_client = boto3.client('sqs', config=config)

# Resolve the queue URL from the queue name; send_message expects the full URL.
queue_url = sqs_client.get_queue_url(QueueName='event-collector-queue')['QueueUrl']

# Send a message carrying a unique ID that the Lambda uses to name the S3 object.
uniq_id = str(uuid.uuid4())
response = sqs_client.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"status": 200, "uniqueID": uniq_id})
)
print(response)
```
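As a local sanity check on the message format, the `MD5OfMessageBody` field that SQS returns from `send_message` is the hex MD5 digest of the message body, so it can be recomputed and compared client-side without any AWS call. The body below mirrors the one the test script sends, with a fixed ID for illustration.

```python
import hashlib
import json

# Build the same message body the test script sends (fixed ID for illustration).
message_body = json.dumps({"status": 200, "uniqueID": "abc-123"})

# SQS echoes back MD5OfMessageBody: the hex MD5 digest of the body string.
# Recomputing it locally lets a client verify the message was not corrupted.
expected_md5 = hashlib.md5(message_body.encode("utf-8")).hexdigest()
print(expected_md5)
```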