A sample serverless app for Keboola infrastructure
- Our serverless apps use the Serverless Framework.
- AWS Lambda supports only LTS versions of Node.js (currently v12). We therefore use Babel to compile the source code during deployment, which lets us use newer language features.
- The source code is bundled by Webpack during deployment.
- There is `source-map` support for translating error stack traces back to the original sources.
- Compliance with the enclosed ESLint rules based on `@keboola/eslint-config-node` is expected.
- `src` - source code of the functions
- `test` - app and functional tests
- `.babelrc` - definition for the Babel compiler
- `.env` - definition of env vars
- `.eslintrc.json` - ESLint rules
- `.travis.yml` - definition for Travis CI
- `cf-stack.json` - CloudFormation template for custom AWS resources
- `docker-compose.yml` - Docker Compose services for local development
- `Dockerfile` - Docker image setup
- `package.json` - npm dependencies
- `serverless.yml` - service definition for the Serverless Framework
- `webpack.config.js` - definition for Webpack
- `@keboola/middy-error-logger` - a Middy middleware creating a unified response for error states
- `@babel/core`, `@babel/preset-env`, `babel-core`, `babel-eslint`, `babel-jest` - requirements for ES6 translation
- `lodash` - utility library
- `source-map-support` - a requirement for translating error stacks from Webpack-compiled code back to the original source code
- `aws-sdk` - official AWS SDK (it is in dev dependencies because the Lambda runtime in AWS already includes it)
- `axios` - an HTTP client for functional testing of API Gateway
- `eslint`, `babel-eslint`, `@keboola/eslint-config-node` - requirements for ESLint
- `mocha` - testing framework
- `serverless` - app framework
- `serverless-webpack`, `webpack`, `webpack-node-externals` - requirements for Webpack
The basic structure of the `src/lambda.js` file looks like this:

```js
import middy from 'middy';
import { install } from 'source-map-support';
import errorLogger from '@keboola/middy-error-logger';

install();

const handlerFunction = () => {
  const result = { result: 'ok' };
  return Promise.resolve({ statusCode: 200, body: JSON.stringify(result) });
};

// eslint-disable-next-line
export const handler = middy(handlerFunction)
  .use(errorLogger());
```
- The code is bundled and minified by Webpack, so we need to install source-map support to get the line numbers of the original source files in stack traces.
- We use Middy.js as a middleware engine, with our error logger as its middleware.
- The error logger expects `http-errors` to be used for client errors and formats the output to the client accordingly.
- This file should contain the necessary minimum of code to simplify the testing process. You can add some routing here; see e.g. keboola/gooddata-provisioning.
The app uses three instances, or stages:

- `dev` is for local development; each developer can have their own
- `test` is for continuous integration using Travis CI
- `prod` is for production

`docker-compose.yml` has shortcuts to deploy and test the `dev` stage; the others are configured in `.travis.yml`. Each service uses a different set of env vars (prefixed by `DEV_`, `CI_`, or `PROD_`).
Locally, it is convenient to save the env vars to a `.env` file. Each stage has its own set of env vars under a common prefix, see `.env.template`. The variables are:

- `DEPLOY_AWS_ACCESS_KEY_ID` - IAM credentials of the user used for service deployment
- `DEPLOY_AWS_SECRET_ACCESS_KEY` - IAM credentials of the user used for service deployment
- `KEBOOLA_STACK_TAG` - AWS tag of the created resources; it should be the same for all instances (e.g. `serverless-demo-app`)
- `REGION` - AWS region of the deployed service
- `SERVICE_NAME` - used for the names of AWS resources; it should be unique (e.g. `jakub-serverless-demo-app`)
Variables used for testing:

- `TEST_AWS_ACCESS_KEY_ID` - IAM credentials of the user used for functional testing
- `TEST_AWS_SECRET_ACCESS_KEY` - IAM credentials of the user used for functional testing
- `TEST_API_ENDPOINT` - HTTP endpoint of the created API Gateway (e.g. `https://l217h7oa23.execute-api.eu-west-1.amazonaws.com/dev`)
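Putting the two lists together, a `.env` file for the `dev` stage might look like this (all values are placeholders reusing the examples above; the exact layout is defined by `.env.template`):

```
DEV_DEPLOY_AWS_ACCESS_KEY_ID=AKIA...
DEV_DEPLOY_AWS_SECRET_ACCESS_KEY=...
DEV_KEBOOLA_STACK_TAG=serverless-demo-app
DEV_REGION=eu-west-1
DEV_SERVICE_NAME=jakub-serverless-demo-app
DEV_TEST_AWS_ACCESS_KEY_ID=AKIA...
DEV_TEST_AWS_SECRET_ACCESS_KEY=...
DEV_TEST_API_ENDPOINT=https://l217h7oa23.execute-api.eu-west-1.amazonaws.com/dev
```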
You will want to add other variables if your functions use other resources.

The CloudFormation template `cf-stack.json` already contains some resources:

- `ServerlessDeploymentPolicy` - IAM policy with the set of permissions required by a user performing deployment of the service
- `ServerlessDeploymentGroup` - IAM group which should be attached to a user performing deployment of the service
- `FunctionalTestPolicy` - template of an IAM policy which should be used for functional tests
- `FunctionalTestGroup` - IAM group which should be attached to a user running functional tests
- `ServerlessDeploymentBucket` - S3 bucket for service deployment
Add other resources if your app needs them.
Serverless plugins together with `@keboola/middy-error-logger` handle formatting of CloudWatch logs for Papertrail. The service name is used as the log's `hostname` and the stage as the log's `program`. The AWS request id is added to the log so that you can use it for further debugging in CloudWatch logs if needed.
The logs look like:

```json
{
  "statusCode": 200,
  "event": {
    "resource": "/auth/login",
    "httpMethod": "POST",
    "queryStringParameters": null,
    "body": null
  },
  "context": {
    "sourceIp": "214.178.123.91",
    "userAgent": "Paw/3.1.7 (Macintosh; OS X/10.14.0) GCDHTTPRequest"
  },
  "awsRequestId": "a32d32a5-1228-11e8-91cc-89975b126b44"
}
```
Unhandled exceptions and rejected promises are logged with `"statusCode": 500`, so you can use this phrase to create a Papertrail search with an alarm. Example:
```json
{
  "message": "_this.storage.authx is not a function",
  "statusCode": 500,
  "stack": [
    "TypeError: _this.storage.authx is not a function",
    "    at /var/task/src/lambda/webpack:/src/app/Visualize.js:18:32"
  ],
  "event": {
    "resource": "/",
    "httpMethod": "GET",
    "queryStringParameters": null,
    "body": null
  },
  "context": {
    "sourceIp": "214.178.123.91",
    "userAgent": "Paw/3.1.7 (Macintosh; OS X/10.14.0) GCDHTTPRequest"
  },
  "awsRequestId": "ab022f5a-d3ad-11e8-89f6-89a425b4ca0a"
}
```
App tests can run the whole handler and check its response, see `test/unit/lambda.js`.

Functional tests should invoke the deployed functions externally, either by calling API Gateway using an HTTP client or by invoking the lambda function using the AWS SDK. You will find both examples in `test/func/func.js`.

If your handler uses other AWS resources, you should check their state in your tests. Add permissions for those resources to `FunctionalTestPolicy` in `cf-stack.json`.
- Download the git repository:
  `git clone git@github.com:keboola/serverless-demo-app.git`
- Create a stack from `cf-stack.json` with the IAM policies and user groups for deployment and functional testing. You will need to fill in the parameters:
  - `ServiceName` - should be the same as the `SERVICE_NAME` env var (e.g. `dev-serverless-demo-app`)
  - `KeboolaStack` - should be the same as the `KEBOOLA_STACK_TAG` env var (e.g. `serverless-demo-app`)
  - `Stage` - one of `dev`, `test`, `prod` (again, should be the same as the `STAGE` env var)
- Create an IAM user for deployment (e.g. `serverless-demo-app-deploy`) and assign it to the group created in the previous step. Create AWS credentials for it.
- Create an IAM user for testing (e.g. `serverless-demo-app-testing`) and assign it to the group created in the previous step. Create AWS credentials for it.
- Create a `.env` file from the template `.env.template`.
- Run `docker-compose run --rm dev-deploy`.
CI is configured on Travis, see https://travis-ci.org/keboola/serverless-demo-app. Deployment to production runs automatically after releasing a version on GitHub.

- Create two sets of env variables with the `CI_` and `PROD_` prefixes in the Travis settings.
- Create an IAM user for pushing to the ECR repository (e.g. `serverless-demo-app-ecr`) with the `AmazonEC2ContainerRegistryFullAccess` policy attached. Save its credentials to Travis env vars as `ECR_AWS_ACCESS_KEY_ID` and `ECR_AWS_SECRET_ACCESS_KEY`.
- Create an ECR repository (e.g. `keboola/serverless-demo-app`) and add a lifecycle policy to expire images with tags prefixed by `stage-` when their count exceeds 10.
- You can add custom resources, preferably by adding them to `cf-stack.json`.
- You will have to add permissions to the IAM role used for running the lambda functions; see `appLambdaRole` in the `resources` section of the `serverless.yml` file.
- You should also add permissions to the policy used for functional testing (`FunctionalTestPolicy` in `cf-stack.json`) and check the state of the resources in the tests.
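For example, granting the functional tests read access to a hypothetical DynamoDB table could look like the following statement added to `FunctionalTestPolicy` (the actions and the table name are assumptions for illustration, not part of the shipped template):

```json
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:GetItem",
    "dynamodb:Query"
  ],
  "Resource": {
    "Fn::Sub": "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${ServiceName}"
  }
}
```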
You can mock some AWS services using LocalStack. In that case, add the service to `docker-compose.yml`, link it to the test service, and fill in env vars like:

```yaml
localstack:
  image: localstack/localstack
  ports:
    - "4569:4569"
    - "4572:4572"
  environment:
    - "SERVICES=s3,dynamodb"

dev-test-app:
  ...
  links:
    - localstack
  environment:
    - "AWS_ACCESS_KEY_ID=accessKey"
    - "AWS_SECRET_ACCESS_KEY=secretKey"
    - "DYNAMO_ENDPOINT=http://localstack:4569"
    - "DYNAMO_TABLE=emails"
    - "REGION=us-east-1"
    - "S3_BUCKET=emails"
    - "S3_ENDPOINT=http://localstack:4572"
  command: ...
```
You must be able to switch the instances of AWS services in your lambda handler, e.g.:

```js
import aws from 'aws-sdk';
import middy from 'middy';

// Default clients, used in production
let s3 = new aws.S3({});
let dynamo = new aws.DynamoDB({ region: process.env.REGION });

// Setters let the tests inject clients pointing at mocked endpoints
export function setS3(client) {
  s3 = client;
}

export function setDynamo(client) {
  dynamo = client;
}

export const handler = middy(() => {
  // ...
});
```
And finally, instantiate the AWS services with the mocked endpoints in the tests and switch them for the handler too:

```js
import aws from 'aws-sdk';
import * as lambda from '../lambda';

const s3 = new aws.S3({
  s3ForcePathStyle: true,
  endpoint: new aws.Endpoint(process.env.S3_ENDPOINT),
  sslEnabled: false,
});
lambda.setS3(s3);

const dynamo = new aws.DynamoDB({
  region: process.env.REGION,
  endpoint: process.env.DYNAMO_ENDPOINT,
});
lambda.setDynamo(dynamo);
```