AWS Lambda Serverless Reference

A simple "hello world" reference app for the FormidableLabs/serverless/aws Terraform module, using the serverless framework and targeting an AWS Lambda deployment.

Contents

  • Overview
  • Audience
  • Stack
  • Naming
  • Stages
  • Environment Variables
  • User Roles
  • Installation
  • Development
  • Support Stack Provisioning (Superuser)
  • Serverless Deployment (IAM Roles)
  • terraform-aws-serverless Development
  • Maintenance Status

Overview

Getting a serverless application into the cloud "the right way" can be a challenge. To this end, we start with a super-simple, "hello world" Express app targeting AWS Lambda using serverless. Along the way, this reference project takes care of all of the tough supporting pieces that go into a production-ready, best-practices-following cloud infrastructure like:

  • Local development workflows.
  • Terraform stack controlling IAM permissions and cloud resources to support a vanilla serverless application.
  • Remote state management for Terraform.
  • Serverless application deployment and production lifecycle management.

Using this project as a template, you can hopefully take a new serverless application and set up "everything else" to support it in AWS the right way, from the start.

A great starting point is our introduction blog post that explains existing privilege approaches, motivations for this project, and some quick example infrastructures.

Audience

This reference application is meant for developers / architects who are already familiar with AWS infrastructures (and CloudFormation), Terraform, and Serverless framework applications. This project will hopefully provide some guidance / examples to get the whole shebang all the way to a multi-environment deployment and support a team of administrators and engineers for the application.

For folks (particularly Formidables) interested in learning more about how we construct our production cloud infrastructures, head over to our learning page.

Stack

We use very simple, very common tools to allow a mostly vanilla Express server to run in localdev / Docker like a normal Node.js HTTP server and also as a Lambda function exposed via API Gateway.

Tech stack:

  • Node.js: The server runtime.
  • Express: Web server framework.

Infrastructure stack:

  • serverless: Build / deployment framework for getting code to Lambda.
  • serverless-http: Bridge to make a vanilla Express server run on Lambda (see the sketch below).
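
The glue between these two modes is intentionally thin. Here is a minimal sketch of the bridge, assuming an Express app similar to this project's (the file layout, route, and response are illustrative, not this repo's exact source):

"use strict";

const express = require("express");
const serverless = require("serverless-http");

// A vanilla Express app -- the same code serves localdev and Lambda.
const app = express();
app.all("/hello.json", (req, res) => res.json({ hello: "world" }));

// In localdev, run as a normal Node.js HTTP server.
if (require.main === module) {
  const port = process.env.SERVER_PORT || 3000;
  app.listen(port, () => console.log(`Listening on ${port}`));
}

// On Lambda, export a handler that serverless-http adapts to API Gateway events.
module.exports.handler = serverless(app);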

Infrastructure tools:

  • AWS CloudFormation: Create AWS cloud resources using YAML. The serverless framework creates a CloudFormation stack of Lambda-supporting resources as part of a normal deployment. This project also uses a small CloudFormation stack to bootstrap an S3 bucket and a DynamoDB table to handle Terraform state.
  • HashiCorp Terraform: Create AWS cloud resources using HCL. Typically more flexible and expressive than CloudFormation. We have a simple Terraform stack that uses the FormidableLabs/serverless/aws module to set up a production-ready set of resources (IAM, monitoring, etc.) to support the resources/stack generated by serverless.

Naming

We use a naming convention in cloud resources and yarn tasks to separate various high-level concerns:

  • cf: AWS CloudFormation specific names.
  • tf: Terraform specific names.
  • sls: Serverless framework names.

For example, with SERVICE_NAME=simple-reference and STAGE=sandbox you will see names like cf-simple-reference-sandbox-bootstrap (the CloudFormation bootstrap stack), tf-simple-reference-sandbox-admin (a Terraform-created IAM group), and sls-simple-reference-sandbox (the Serverless-created CloudFormation stack).

Stages

Development happens on a local machine and, when programmatically named, is usually referred to as:

  • localdev: A development-only setup running on a local machine.

We target four different stages/environments of AWS hosted deployments:

  • sandbox: A loose environment where developers can manually push / check things / break things with impunity. Typically deployed from developer laptops.
  • development: Tracks feature development branches. Typically deployed by CI on merges to develop branch if using git flow workflow.
  • staging: A near-production environment to validate changes before committing to actual production. Typically deployed by CI for release candidate branches before merging to master.
  • production: The real deal. Typically deployed by CI after a merge to master.

Note that these are completely arbitrary groups, both in composition and naming. They are a sensible set if you just need a starting point, but the final set of groups (or even a single one, if you want) is totally up to you!

All of our yarn run <task> tasks should be run with a STAGE=<value> prefix. The default is to assume STAGE=localdev, and only commands like yarn run node:localdev or yarn run lambda:localdev can run successfully without it. For commands actually targeting AWS, please prefix like:

$ STAGE=sandbox yarn run <task>
$ STAGE=development yarn run <task>
$ STAGE=staging yarn run <task>
$ STAGE=production yarn run <task>

Note: We separate the STAGE variable from NODE_ENV because often there are build implications of NODE_ENV that are distinct from our notion of deploy target environments.

Environment Variables

Our task runner scheme is a bash + yarn based system crafted around the following environment variables (with defaults):

  • STAGE: localdev
  • SERVICE_NAME: simple-reference (The name of the application/service in the cloud.)
  • AWS_REGION: us-east-1

... and some minor localdev only ones:

  • AWS_XRAY_CONTEXT_MISSING: LOG_ERROR (Prevents AWS X-Ray from erroring in localdev.)
  • SERVER_PORT: 3000
  • SERVER_HOST: 0.0.0.0

... and some implied ones:

  • FUNCTION_NAME: The name of a given Lambda function. In this project, the main one is server.

If your project needs to support Windows, you will want a more general / permissive approach. A sketch of how application code might consume these variables follows.
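
As a concrete illustration, a config module might read these variables with the documented defaults. This is a hedged sketch only; the module name and shape are assumptions, not necessarily how this project structures its configuration:

"use strict";

// config.js (hypothetical): centralize environment-driven settings with the
// defaults listed above.
module.exports = {
  stage: process.env.STAGE || "localdev",
  serviceName: process.env.SERVICE_NAME || "simple-reference",
  awsRegion: process.env.AWS_REGION || "us-east-1",
  serverPort: parseInt(process.env.SERVER_PORT || "3000", 10),
  serverHost: process.env.SERVER_HOST || "0.0.0.0"
};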

User Roles

We rely on IAM roles to limit privileges to the minimum necessary to provision, update, and deploy the service. Typically this involves creating personalized users in the AWS console, and then assigning them groups for varying appropriate degrees of privilege. Here are the relevant ones for this reference project:

  • Superuser - Support Stack: A privileged user that can create the initial bootstrap CloudFormation stack and Terraform service module that will support a Serverless application. It should not be used for Serverless deploys.
  • IAM Groups - Serverless App: The FormidableLabs/serverless/aws module provides IAM groups and support for different types of users to create/update/delete the Serverless application. The IAM groups created are:
    • tf-${SERVICE_NAME}-${STAGE}-admin: Can create/delete/update the Serverless app.
    • tf-${SERVICE_NAME}-${STAGE}-developer: Can deploy the Serverless app.
    • tf-${SERVICE_NAME}-${STAGE}-ci: Can deploy the Serverless app.

ℹ️ Note: Our cloud infrastructure is based on an approach of a single shared AWS account (with many limited IAM users). A more secure and differently complex option is to use separate AWS accounts for different stages/environments for infrastructures/applications. We discuss these approaches more in our introductory blog post for the FormidableLabs/serverless/aws Terraform module.

In practice, many real world projects will segregate at least the ultimate production infrastructure to a separate AWS account and potentially utilize multiple infrastructures within a shared non-production AWS account. There are many ways to implement a robust production privilege approach, and this reference project implements just one of them!

Installation

Node.js (Runtime)

Our application is a Node.js server.

First, make sure you have our version of node (determined by .nvmrc) that matches our Lambda target (you will need to have nvm installed):

$ nvm use

Then, yarn install the Node.js dependencies:

$ yarn install

AWS (Deployment)

AWS Tools

Certain administrative / development work requires the AWS CLI tools to prepare and deploy our staging / production services. To get those, either do:

# Install via Python
$ sudo pip install awscli --ignore-installed six

# Or brew
$ brew install awscli

After this you should be able to type:

$ aws --version

Terraform

Install tfenv from Homebrew: brew install tfenv. Then, in the root of the repo, run tfenv install to download and use the pinned version of Terraform for this project.

Note that tfenv conflicts with a Homebrew-installed terraform, which must be uninstalled first. You can still use tfenv to install and use the latest Terraform version in projects that don't have a .terraform-version file.

AWS Credentials

To work with this reference app, you need AWS credentials for your specific user (aka, FIRST.LAST). To create the bootstrap and service support stacks, that user will need to be a superuser. To deploy serverless applications, the user will need to be attached to given tf-${SERVICE_NAME}-${STAGE}-(admin|developer) IAM groups after the service stack is created.

Once you have a user + access + secret keys, you need to make them available to commands requiring them. There are a couple of options:

In Environment

You can prefix any command with the following two environment variables like:

$ AWS_ACCESS_KEY_ID=INSERT \
  AWS_SECRET_ACCESS_KEY=INSERT \
  STAGE=sandbox \
  yarn run lambda:info

This has the advantage of not storing secrets on disk. The disadvantage is needing to keep the secrets around to paste and/or export into every new terminal.

Saved to Local Disk

Another option is to store the secrets on disk. You can configure your ~/.aws credentials like:

$ mkdir -p ~/.aws
$ touch ~/.aws/credentials

Then add a default entry (if you only anticipate working on this one project) or a named profile entry with your username (aka, FIRST.LAST):

$ vim ~/.aws/credentials
[default|FIRST.LAST]
aws_access_key_id = INSERT
aws_secret_access_key = INSERT

If you are using a named profile, then export it into the environment in any terminal you are working in:

$ export AWS_PROFILE="FIRST.LAST"
$ STAGE=sandbox yarn run lambda:info

Or, you can declare the variable inline:

$ AWS_PROFILE="FIRST.LAST" \
  STAGE=sandbox \
  yarn run lambda:info

AWS Vault

The most secure mix of the two above options is to install and use aws-vault. Once you've followed the installation instructions, you can set up and use a profile like:

# Store AWS credentials for a profile named "FIRST.LAST"
$ aws-vault add FIRST.LAST
Enter Access Key Id: INSERT
Enter Secret Key: INSERT

# Execute a command with temporary creds
$ STAGE=sandbox aws-vault exec FIRST.LAST -- yarn run lambda:info

⚠️ Warning: Certain IAM role creation commands do not work with the default aws-vault setup if you have MFA set up (which you should).

The following commands definitely need this extra handling:

  • yarn tf:service:apply
  • yarn tf:service:_delete
  • yarn lambda:deploy

We have a research ticket to better handle sessions with MFA, but in the meantime you can simply add the --no-session flag to any aws-vault commands that need it. E.g.

$ STAGE=sandbox aws-vault exec FIRST.LAST --no-session -- <ACTUAL_COMMAND>

# E.g.
$ STAGE=sandbox aws-vault exec FIRST.LAST --no-session -- STAGE=sandbox yarn tf:service:apply

In practice, it is probably easier in the meantime to just always add the --no-session flag when using aws-vault exec.

Development

We have several options for developing a service locally, with different advantages. Here's a quick list of application ports / running commands:

  • 3000: Node server via nodemon. (yarn node:localdev)
  • 3001: Lambda offline local simulation. (yarn lambda:localdev)

Node.js

Run the server straight up in your terminal with Node.js via nodemon for instant restarts on changes. Note that because we effectively run the same server at different URL bases, you must specify them separately (see the sketch after the commands below).

# Base server
$ yarn node:localdev

# Different scenarios that reuse base.js
$ BASE_URL=/canary yarn node:localdev
$ BASE_URL=/vpc yarn node:localdev
$ BASE_URL=/layers yarn node:localdev

# Other scenarios that use a different js file.
$ SCENARIO=xray yarn node:localdev
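
Under the hood this is just environment-variable plumbing. A hedged sketch of what a localdev entry point could look like (the ./src/server/${SCENARIO} module path is hypothetical and only illustrates the mechanism):

"use strict";

const express = require("express");

// Pick the scenario module and URL base from the environment.
const scenario = process.env.SCENARIO || "base";
const baseUrl = process.env.BASE_URL || `/${scenario}`;

const app = express();
// Hypothetical layout: each scenario file exports an Express router.
app.use(baseUrl, require(`./src/server/${scenario}`));

const port = process.env.SERVER_PORT || 3000;
app.listen(port, process.env.SERVER_HOST || "0.0.0.0", () => {
  console.log(`Scenario "${scenario}" listening at http://127.0.0.1:${port}${baseUrl}`);
});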

See it in action by loading http://127.0.0.1:3000/base/hello.json in your browser, or from the command line:

$ curl -X POST "http://127.0.0.1:3000/base/hello.json" \
  -H "Content-Type: application/json"

Lambda Offline

Run the server in a Lambda simulation via the serverless-offline plugin. In contrast to node:localdev above, all routes and functions are loaded together.

$ yarn lambda:localdev

See it in action by hitting the same routes on port 3001.

Support Stack Provisioning (Superuser)

This section discusses getting AWS resources provisioned to support Terraform and then Serverless.

The basic overview is:

  1. Bootstrap Stack: Use AWS CloudFormation to provision resources to manage Terraform state.
  2. Service Stack: Use Terraform to provision resources / permissions to accompany a Serverless deploy.

After this, we are ready to deploy a standard serverless application with full support!

Bootstrap Stack

This step creates an S3 bucket and DynamoDB data store to enable Terraform to remotely manage its state. We do this via AWS CloudFormation.

All commands in this section should be run by an AWS superuser. The configuration for all of this section is controlled by: aws/bootstrap.yml. Commands and resources created are all prefixed with cf as a project-specific choice for ease of identification in the AWS console (vs. Terraform vs. Serverless-generated).

Create the CloudFormation stack:

# Provision stack.
$ STAGE=sandbox yarn run cf:bootstrap:create
{
    "StackId": "arn:aws:cloudformation:${AWS_REGION}:${AWS_ACCOUNT}:stack/cf-${SERVICE_NAME}-${STAGE}-bootstrap/HASH"
}

# Check status until it reaches `CREATE_COMPLETE`
$ STAGE=sandbox yarn run cf:bootstrap:status
"CREATE_COMPLETE"

Once this is complete, you can move on to provisioning the service stack section. The remaining commands below are only if you need to update / delete the bootstrap stack, which shouldn't happen that often.

Update the CloudFormation stack:

# Update, then check status.
$ STAGE=sandbox yarn run cf:bootstrap:update
$ STAGE=sandbox yarn run cf:bootstrap:status

Delete the CloudFormation stack:

The bootstrap stack should only be deleted after you have removed all of the -admin|-developer|-ci groups from users and deleted the Serverless and Terraform service stacks.

# **WARNING**: Use with extreme caution!!!
$ STAGE=sandbox yarn run cf:bootstrap:_delete

# Check status. (A status or error with `does not exist` when done).
$ STAGE=sandbox yarn run cf:bootstrap:status
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id cf-SERVICE_NAME-STAGE does not exist

Service Stack

This step provisions a Terraform stack to provide us with IAM groups and other AWS resources to support and enhance a Serverless provision (in the next section).

All commands in this section should be run by an AWS superuser. The configuration for all of this section is controlled by: terraform/main.tf. Commands and resources created are all prefixed with tf as a project-specific choice for ease of identification.

ℹ️ Note: We use the terraform CLI program under the hood directly for all of our Terraform work. This is simple and good for learning, but in a real world infrastructure has several limitations (such as the pain of remembering to re-init environments on switching, etc.). Consequently, if you're looking to maintain multiple environments with Terraform in the real world, consider more flexible meta tools like terragrunt.

Init your local Terraform state.

This needs to be run once to be able to run any other Terraform commands.

$ STAGE=sandbox yarn run tf:service:init --reconfigure

⚠️ Warning: You need to run yarn run tf:service:init every time you change STAGE or other core environmental setup before you can mutate anything with the stack (like yarn run tf:service:apply). Failure to do so will result in bad things like incorrect stage variables applied to an old, stale stage in the underlying Terraform local disk cache.

ℹ️ Note: We suggest using the --reconfigure flag every time you run init when switching environments so that the remote state (in S3) remains the source of truth and accidental stuff you do on local disk doesn't end up corrupting things.

Plan the Terraform stack.

Terraform allows you to see what's going to happen / change in your cloud infrastructure before actually committing to it, so it is always a good idea to run a plan before any Terraform mutating command.

$ STAGE=sandbox yarn run tf:service:plan

Apply the Terraform stack:

This creates / updates as appropriate.

# Type in `yes` to go forward
$ STAGE=sandbox yarn run tf:service:apply

# YOLO: run without checking first
$ STAGE=sandbox yarn run tf:service:apply -auto-approve

# **WARNING**: If using `aws-vault`, remember `--no-session`!
$ STAGE=sandbox aws-vault exec FIRST.LAST --no-session -- STAGE=sandbox yarn tf:service:apply

Delete the Terraform stack:

The service stack should only be deleted after you have removed all of the -admin|-developer|-ci groups from users and deleted the Serverless stack.

# **WARNING**: Use with extreme caution!!!
# Type in `yes` to go forward
$ STAGE=sandbox yarn run tf:service:_delete

# YOLO: run without checking first
$ STAGE=sandbox yarn run tf:service:_delete -auto-approve

# **WARNING**: If using `aws-vault`, remember `--no-session`!
$ STAGE=sandbox aws-vault exec FIRST.LAST --no-session -- STAGE=sandbox yarn tf:service:_delete

Visualize the Terraform stack:

These are Mac-based instructions, but analogous steps are available on other platforms. First, you'll need GraphViz for the dot tool:

$ brew install graphviz

From there, you can visualize with:

# Generate SVG
$ STAGE=sandbox yarn run -s tf:terraform graph | dot -Tsvg > ~/Desktop/infrastructure.svg

Serverless Deployment (IAM Roles)

This section discusses developers getting code and secrets deployed (manually from local machines to an AWS development playground or automated via CI).

All commands in this section should be run by AWS users with attached IAM groups provisioned by our support stack of tf-${SERVICE_NAME}-${STAGE}-(admin|developer|ci). The configuration for this section is controlled by: serverless.yml.

⚠️ Prod/Real World Warning: This reference application deploys from local laptops for ease of instruction. However, our laptops are usually a different operating system than the target Lambda Linux execution environment. This is an issue for binary dependencies in node_modules which are OS-specific and zipped up and shipped with the Lambda application.

Our reference application presently does not have binary dependencies, but as a best practice for a real world Lambda application, you should not package and deploy from a different OS than your target Lambda execution environment. This means using an appropriate Docker setup for packaging if deploying locally, or using a CI/CD system that matches the Lambda OS to package and deploy the application.

Admin Deployment

These actions are reserved for -admin users.

Create the Lambda app. The first time through a deploy, an -admin user is required (to effect the underlying CloudFormation changes).

$ STAGE=sandbox yarn run lambda:deploy

# **WARNING**: If using `aws-vault`, remember `--no-session`!
$ STAGE=sandbox aws-vault exec FIRST.LAST --no-session -- STAGE=sandbox yarn lambda:deploy

# Check on app and endpoints.
$ STAGE=sandbox yarn run lambda:info

Delete the Lambda app.

# **WARNING**: Use with extreme caution!!!
$ STAGE=sandbox yarn run lambda:_delete

# Confirm (with expected error).
$ STAGE=sandbox yarn lambda:info
...

  Serverless Error ---------------------------------------

  Stack with id sls-${SERVICE_NAME}-${STAGE} does not exist

Metrics:

# Show metrics for an application
$ STAGE=sandbox yarn run lambda:metrics

User Deployment

These actions can be performed by any user (-admin|-developer|-ci).

Get server information:

$ STAGE=sandbox yarn run lambda:info
...
endpoints:
  ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/base
  ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/base/{proxy+}
  ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/xray
  ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/xray/{proxy+}
...

See the logs:

$ STAGE=sandbox yarn run lambda:logs -f FUNCTION_NAME

Note: To see the logs in the AWS console, you unfortunately cannot just click on "CloudWatch > Logs" and see the relevant log groups listed, because a wildcard would be needed for the logs:DescribeLogGroups|Streams permissions. However, if you know the generated log group name, and we do here, you can fill in the blanks and navigate to:

https://console.aws.amazon.com/cloudwatch/home?#logStream:group=/aws/lambda/sls-SERVICE_NAME-STAGE-FUNCTION_NAME;streamFilter=typeLogStreamPrefix

Update the Lambda server.

$ STAGE=sandbox yarn run lambda:deploy

# **WARNING**: If using `aws-vault`, remember `--no-session`!
$ STAGE=sandbox aws-vault exec FIRST.LAST --no-session -- STAGE=sandbox yarn lambda:deploy

Rollback to a previous Lambda deployment:

If something has gone wrong, you can see the list of available states to roll back to with:

$ STAGE=sandbox yarn run lambda:rollback

Then choose a datestamp and pass it with the -t flag like:

$ STAGE=sandbox yarn run lambda:rollback -t 2019-02-07T00:35:56.362Z

terraform-aws-serverless Development

For contributors to FormidableLabs/serverless/aws

To test out a local branch of terraform-aws-serverless, first clone it to the relative path of ../terraform-aws-serverless from this project's checkout. Then run the command:

$ yarn _dev:on

This switches to the local modules instead of the published ones. Next, make sure to re-initialize Terraform to pick up the local modules:

$ STAGE=sandbox yarn run tf:service:init --reconfigure

Then, do all your testing of the local module. When you're ready to unwind the changes, you can do:

$ yarn _dev:off

This switches back to the published version. (Typically you'll then do a version bump if we've published a new module.) Then re-initialize again:

$ STAGE=sandbox yarn run tf:service:init --reconfigure

...and you're up and running the published module again!

Maintenance Status

Stable: Formidable is not planning to develop any new features for this project. We are still responding to bug reports and security concerns. We are still welcoming PRs for this project, but PRs that include new features should be small and easy to integrate and should not include breaking changes.
