python: 3.9 AWS: SQS test: integration

Python: Amazon API Gateway, AWS Lambda, Amazon SQS Example

Introduction

This project contains automated test code samples for serverless applications written in Python. It demonstrates several techniques for executing tests in the cloud, specifically when interacting with the Amazon SQS and AWS Lambda services. The main concept is to run tests from a local machine against a target system in the cloud, while the test messages and their results are handled by 2 SQS queues and 2 Lambda functions. The Lambda function process_input_queue.py (details below) can be extended by the user to do additional processing as needed (the system under test - SUT). Based on current tooling, we recommend customers focus on testing in the cloud as much as possible. This example is designed for a testing environment, not a production environment.

The project uses the AWS Serverless Application Model (SAM) CLI for configuration, testing and deployment.


Key Files in the Project

  • process_input_queue.py - Lambda handler code that reads from the input SQS queue, does some processing, and enqueues the processing results into the output SQS queue (see the sketch after this list)
  • write_test_result.py - Lambda handler code that is triggered by the SQS OutputQueue or by the SQS InputQueueDLQ to update the DynamoDB table with the test results
  • template.yaml - SAM script for deployment
  • test_api_gateway.py - Integration test, written in Python, run against a live stack deployed on AWS
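
The handler in process_input_queue.py roughly follows the pattern below. This is a minimal sketch for orientation only, not the project's exact code: the OUTPUT_QUEUE_URL environment variable and the result fields (id, status) are assumptions made for illustration.

import json
import os

import boto3

sqs = boto3.client("sqs")

def lambda_handler(event, context):
    # Each SQS-triggered invocation delivers a batch of records.
    for record in event["Records"]:
        message = json.loads(record["body"])

        # Placeholder for the real processing performed by the system under test (SUT).
        result = {
            "type": "TEST",  # keeps the event filter on the output queue matching
            "id": message.get("id"),
            "status": "COMPLETED",
        }

        # Forward the result to the output queue (assumed OUTPUT_QUEUE_URL env var),
        # where write_test_result.py picks it up.
        sqs.send_message(
            QueueUrl=os.environ["OUTPUT_QUEUE_URL"],
            MessageBody=json.dumps(result),
        )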

Top


Sample project description

The sample project allows a user to call an API endpoint (/inbox) and generate a custom "test/hello" message that triggers a test. Each test result is stored in a DynamoDB table. The following diagram demonstrates the architecture and flow. It was created using AWS Application Composer, which can help you visually design and build serverless applications quickly.

Event Sequence

This project consists of an API Gateway, two AWS Lambda functions, two Amazon SQS standard queues, each with a corresponding dead-letter queue (DLQ) for error handling, and a DynamoDB table.

The sequence is (corresponding to steps 1-7 in the diagram):

  1. The user uses the test client to invoke an API call (POST /inbox) that sends a message/job/test to be processed later in the backend by the ProcessInputQueue Lambda function.
  2. API Gateway sends the message payload to the InputQueue.
  3. The InputQueue triggers the ProcessInputQueue Lambda function to process the message/job/test in the queue. This is where the Lambda can be enhanced to do further processing/testing as needed. It is up to the user of this sample to decide what this Lambda should eventually do, as it can be extended and adapted to the testing needs.
  4. When the ProcessInputQueue Lambda finishes its processing, it sends the test result to the OutputQueue (if needed, adapt the result JSON/message). This triggers the write_test_result.py Lambda function, which writes the result to the DynamoDB table (see the sketch after this list). Note that a Lambda event filter is configured on the SQS queues, so the function is triggered only if the message body contains "type":"TEST". By default, all test messages generated by the test client already contain that field. This is a test harness, and the Lambda exists to instrument the SUT.
  5. The user uses the test client to check the result of the test issued in step 1.
  6. An API query for the test result is issued to DynamoDB.
  7. DynamoDB returns the result stored in the table.
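
A minimal sketch of how write_test_result.py might persist a result from the OutputQueue (or the InputQueueDLQ) is shown below. The RESULTS_TABLE_NAME environment variable and the item attributes are assumptions for illustration, not the project's exact schema.

import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["RESULTS_TABLE_NAME"])  # assumed environment variable

def lambda_handler(event, context):
    for record in event["Records"]:
        body = json.loads(record["body"])
        # Only messages whose body contains "type":"TEST" reach this function,
        # thanks to the event filter described in step 4.
        table.put_item(
            Item={
                "id": body.get("id", record["messageId"]),
                "result": body.get("status", "EXCEPTION"),
            }
        )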

Prerequisites

The SAM CLI is an extension of the AWS CLI that adds functionality for building and testing serverless applications. It contains features for building your application locally, deploying it to AWS, and emulating AWS services locally to support automated unit tests.

To use the SAM CLI, you need the following tools.

The SAM CLI installs dependencies defined in src/requirements.txt, creates a deployment package, and saves it in the .aws-sam/build folder. Read the documentation.

Use the following command to deploy your application package to AWS:

# deploy your application to the AWS cloud from the apigw-sqs-lambda-sqs directory
sam build
sam deploy --guided

After running this command, you'll receive a series of prompts:

  • Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name. Use apigw-sqs-lambda-sqs as the stack name for this project. You'll need the stack name to run the integration test.

  • AWS Region: The AWS region you want to deploy your app to.

  • Parameter EnvType: Defines the environment to be deployed. The default is test, which deploys all services needed by the test framework. If you specify prod, the write_test_result.py Lambda function and DynamoDB table will not be deployed.

  • Confirm changes before deploy: If set to yes, SAM CLI shows you any change sets for manual review before deployment. If set to no, the AWS SAM CLI will automatically deploy application changes.

  • Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, SAM CLI scopes these down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or changes IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If you don't provide permission through this prompt, you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.

  • CheckOutputQueue may not have authorization defined, Is this okay?: Set to "y". Since this is for testing only, we can proceed without client authorization for API Gateway, but for a production environment we do recommend adding it.

  • Save arguments to samconfig.toml: If set to yes, SAM CLI saves your choices to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.


API Gateway Endpoint

You can find your API Gateway endpoint URL in the output values displayed after deployment ("APIGatewayInputEndPoint"). Take note of this URL for use in the testing section below. On subsequent deploys you can run sam deploy without the --guided flag. Read the documentation.
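
As a quick smoke test you can post a message to the /inbox resource yourself. The sketch below assumes the requests library; the placeholder URL and the payload fields are assumptions, so adapt them to your deployed endpoint and to whatever message schema your SUT expects.

import requests

# Replace with the APIGatewayInputEndPoint value from the stack outputs.
api_endpoint = "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/inbox"

# Hypothetical payload; the test client adds "type":"TEST" so the Lambda filter matches.
payload = {"id": "manual-test-1", "type": "TEST", "message": "hello"}

response = requests.post(api_endpoint, json=payload)
print(response.status_code, response.text)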


Run the Integration Test

test_api_gateway.py

For the integration tests, it is assumed that the full stack is already deployed before testing (EnvType was set to test during sam deploy).

The integration test setup determines the API endpoint.
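
Conceptually, the setup step can resolve the endpoint from the stack outputs along the lines below. This is a sketch of the idea rather than the exact test code, assuming boto3 and the APIGatewayInputEndPoint output key mentioned above.

import os

import boto3

# Resolve the deployed API endpoint from the CloudFormation stack outputs.
stack_name = os.environ["AWS_SAM_STACK_NAME"]
cfn = boto3.client("cloudformation")
outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"]
api_endpoint = next(
    o["OutputValue"] for o in outputs if o["OutputKey"] == "APIGatewayInputEndPoint"
)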

The integration test tear-down removes any data injected for the tests and purges each queue in case messages remain in them. It also deletes test results from the DynamoDB table.

To run the integration test, create the environment variable "AWS_SAM_STACK_NAME" with the name of the test stack, and execute the test. It is also important to ensure that the AWS region is set properly in the environment variable "AWS_DEFAULT_REGION".

# Set the environment variables AWS_SAM_STACK_NAME and AWS_DEFAULT_REGION 
# to match the name of the stack and the region where you will test
# pip3 install checks the requirements file and installs the packages needed to run pytest; it is required only once to set up the environment
# run the following commands from the apigw-sqs-lambda-sqs directory

export AWS_SAM_STACK_NAME=<stack-name>
export AWS_DEFAULT_REGION=<region-of-test>
pip3 install virtualenv
python3 -m venv venv
source venv/bin/activate
pip3 install -r tests/requirements.txt 
python3 -m pytest -s tests/integration -v 

# For INFO debug log you can run: python -m pytest -s tests/integration --log-cli-level=20

In the test_api_gateway.py file you can control the polling mechanism for checking the test results, using:

service_level_agreement = 10 # total time to check is 10 seconds

interval_num = 5  # number of times to check if there is a message in the queue.

This may be useful if the testing takes longer than the defined default time (5 retries every 2 seconds = a total of 10 seconds). If your Lambda process_input_queue.py (SUT) takes more than ~5 seconds to process, it is recommended to adapt these parameters accordingly.
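
For reference, the kind of polling loop these parameters drive looks roughly like the sketch below (not the exact test code; fetch_result is a hypothetical callable standing in for the DynamoDB query).

import time

service_level_agreement = 10  # total seconds to wait for a result
interval_num = 5              # number of polling attempts

def wait_for_result(fetch_result):
    # Poll every service_level_agreement / interval_num seconds (2 s by default).
    for _ in range(interval_num):
        result = fetch_result()  # e.g. a DynamoDB lookup for the test id
        if result:
            return result
        time.sleep(service_level_agreement / interval_num)
    raise TimeoutError("no test result within the service level agreement")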

test_api_gateway.py runs 3 tests:

test_positive_scenario - sends a message/test via API Gateway for a positive scenario and checks the result in the DynamoDB table. If all goes well, this test should pass.

test_false_positive_scenario - sends a message/test via API Gateway for a false-positive scenario and checks the result in the DynamoDB table. If all goes well, this test should pass. This test is useful for sending a wrong input to the SUT and expecting an error message to be generated by it (simulated by the process_input_queue Lambda).

test_exception_scenario - sends a malformed message (e.g. an error occurred during the overall test) via API Gateway and checks the result in the DynamoDB table. If all goes well, this test should pass. This test is useful for the case where the SUT had an unexpected exception in its processing. In this case, the SQS InputQueue delivers the message to the SQS InputQueueDLQ dead-letter queue, which triggers the write_test_result.py Lambda to write the exception result to the DynamoDB table.

Top


Clean up

To remove the deployed resources from the cloud, run the following from the apigw-sqs-lambda-sqs directory:

sam delete

Answer "yes" to the questions and it will delete the sam stack.