A. AWS
- General remarks
- Modules
- API Gateway
- AppSync
- Aurora
- Cognito
- ECR
- EC2
- EFS
- Lambda
- A few words about AWS Lambda
- API Gateway with explicit Lambda handlers
- Basic Lambda with an API Gateway
- Configuring Cloudwatch
- Configuring IAM policies to enable Lambda access to other resources
- Letting other AWS services access a Lambda
- Event sourcing
- Lambda with container
- Lambda with EFS
- Lambda with Layers
- Lambda versions and aliases
- Lambda in private subnets
- Policy
- Role
- Route 53
- S3
- Creating a bucket
- Configuring a bucket access policy
- Creating a public bucket for hosting a static website
- Synching local files with a bucket
- Adding a cloudfront distribution and enabling automatic files invalidation when content changes
- Configuring custom domains on CloudFront
- Redirection and routing rules
- Dealing with 404 for SPA or PWA
- Secret
- Security Group
- SNS
- SQS
- SSM
- Step-function
- VPC
- Troubleshooting
- FAQ
- Annexes
- References
Every resource in this package defines an optional dependsOn property which accepts an array of pulumi.Resource or pulumi.CustomResource objects. Pulumi advertises its ability to automatically detect dependencies, but in practice it falls short of its claims. Incorrect dependencies lead to failed deployments (a resource might need another one to exist before being created) or failed destructions (a resource might have to be destroyed before another one).
Example:
const { aws: { SecurityGroup, Lambda } } = require('@cloudlessopenlabs/pulumix')

const defaultSg = new SecurityGroup({
name: `my-sg`,
// ... other settings
})
const lambda = new Lambda({
// ... other settings
dependsOn: [defaultSg]
})
WARNING: When configuring a Lambda with an API Gateway, that Lambda MUST RETURN A RESPONSE WITH A SPECIFIC SCHEMA! For example, it must return an object with a statusCode and a body value:

const { doSomething } = require('./src')

exports.handler = async ev => {
	const message = await doSomething()
	return {
		statusCode: 200,
		body: message
	}
}
const { aws: { apiGateway, Lambda, sns } } = require('@cloudlessopenlabs/pulumix')
// Creates a lambda
const lambda = new Lambda({
// ...
})
// Creates SNS topic
const topic = new sns.Topic({
// ...
})
const restApi = new apiGateway.RestApi({
name: 'my-web-api',
resources: {
// Creates a path that sends data to a Lambda
dogs: {
list: {
GET: { // GET /dogs/list
queryStrings: {
apikey: true // requires that an 'apikey' query string is passed.
},
headers: {
'x-hello': true // requires that an 'x-hello' header is passed.
},
lambda_proxy: {
lambda
}
}
}
},
// Creates a path that sends data to an SNS topic
ingest: {
POST: { // POST /ingest
contentTypes: ['multipart/form-data', 'text/plain'], // supported content-types. Requests that do not use one of these return 415. Default value: 'application/json'
sns: {
topic
}
}
}
},
stages: [{
name: 'v1',
snapshot: {
version: '0.0.8',
description: 'Reset the integration passthrough behavior'
},
cloudwatch: {
level: 'INFO',
metrics: false,
fullRequestResponse: true,
logsRetentionInDays:14
}
}],
domains: [{
name: 'example.com', // custom domain you own.
stages: [{
name: 'v1',
path: 'v1'
}]
}],
tags: {
Project: 'hello'
},
protect: true
})
{
"resource": "/new-contact",
"path": "/new-contact",
"httpMethod": "POST",
"headers":
{
// ...
},
"multiValueHeaders":
{
// ...
},
"queryStringParameters": {
// ... Query string values
},
"multiValueQueryStringParameters": null,
"pathParameters": null,
"stageVariables": null,
"requestContext":
{
// ... Contains Cognito data for example.
},
"body": "{\"hello\":\"world\"}", // Stringified version of the payload.
"isBase64Encoded": false
}
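Since body is a stringified payload, a handler typically parses it before use. A minimal sketch:

exports.handler = async event => {
	// 'body' arrives as a string (see the payload above), so parse it first.
	const payload = JSON.parse(event.body || '{}')
	return {
		statusCode: 200,
		body: JSON.stringify({ received: payload })
	}
}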
// WARNING: This is a global setting inside an AWS Account.
// Do not run this command in any other stack as it may create side effects.
const apiGatewaySettings = apiGateway.enableCloudwatch()
When you first deploy your API Gateway with a custom domain as described in the previous section, this deployment will fail with the following error:
Error creating API Gateway Domain Name: BadRequestException: The specified SSL certificate doesn't exist, isn't in us-east-1 region, isn't valid, or doesn't include a valid certificate chain.
That's because the new SSL certificate provisioned by AWS Certificate Manager requires manual verification. You must first manually verify that you own the domain, then redeploy:
- Log in to the AWS account where you were trying to deploy the API Gateway. A new SSL certificate should be in a Pending validation status in ACM.
- Manually validate the ACM certificate:
  - Browse to AWS Certificate Manager (ACM).
  - Select the us-east-1 region.
  - Select the certificate you just provisioned. Its status should be Pending validation.
  - Copy those values: CNAME name and CNAME value.
  - Browse to your DNS provider, select your domain and create a new CNAME record with the values above.
  - Go back to AWS Certificate Manager (ACM) and wait until the status switches to Success.
- Redeploy your Pulumi stack.
- Manually configure your DNS so that the traffic for your custom domain is redirected to your API Gateway:
  - Get the DNS value:
    - Browse to your API in the API Gateway console.
    - Select the Custom domain names section, then select your custom domain.
    - Under the Configurations tab, copy the API Gateway domain name value (this is a CloudFront URL).
  - Create a new A record in your DNS to redirect traffic from your custom domain to the CloudFront URL from the step above:
    - Record type: A (WARNING: technically, an A record cannot hold a value other than an IP address. AWS Route 53 supports this via the Alias feature; GoDaddy calls it CNAME Flattening.)
    - Record name: your custom domain.
    - Record value: the CloudFront URL.
If the API Gateway uses the edge mode, the certificate is used by a CloudFront distribution, which takes a long time to be deleted. This long deletion process blocks the certificate deletion. Do not freak out, nothing is wrong. It could take up to 30 minutes. You can interrupt the deletion and restart later if you prefer.
The following example:
- Creates a new GraphQL endpoint with the schema defined below. That new endpoint only accepts authenticated requests via API key (default setup).
- Connects a Lambda resolver to the projects field of the Query type. That lambda will receive the following payload:
/**
* Processes the GraphQL request.
*
* @param {Object} event
* @param {Object} .field Allowed values: 'projects', 'create_project'
* @param {Object} .args
* @param {Object} .identity
* @param {String} .sub
* @param {String} .username
* @param {[String]} .groups
* @param {Object} .claims
* @param {String} .iss
* @param {Number} .exp
* @param {Number} .iat
* @param {Object} .request
* @param {Object} .headers
* @param {String} .'x-forwarded-for' e.g., '49.181.221.14, 130.176.212.45'
* @param {String} .origin e.g., 'https://studio.apollographql.com'
* @param {String} .referer e.g., 'https://studio.com...j-dev/explorer?variant=current'
* @param {String} .'user-agent' e.g., 'Mozilla/5...cko) Chrome/96.0.4664.55 Safari/537.36'
* @param {String} .'cloudfront-is-mobile-viewer' e.g., 'false'
* @param {String} .'cloudfront-is-smarttv-viewer' e.g., 'false'
* @param {String} .'cloudfront-is-tablet-viewer' e.g., 'false'
* @param {String} .'cloudfront-viewer-country' e.g., 'AU'
* @param {Object} .info
* @param {String} .fieldName e.g., 'projects'
* @param {String} .parentTypeName e.g., 'Query'
* @param {Object} .variables
* @param {String} .selectionSetGraphQL e.g., '{ count data { id name } }'
* @param {[String]} .selectionSetList e.g., ['count', 'data', 'data/id', 'data/name']
* @param {Object} .source Reserved property. GraphQL response object from a parent.
*
* @return {Object}
*/
exports.handler = async event => {
const { field, hello, ...rest } = event // 'field' and 'hello' are defined in the 'productResolver' in the code below.
const { source, args, identity, request, info, selectionSetGraphQL, selectionSetList } = rest
console.log('FIELD CONTROLLED VIA THE mappingTemplate.payload')
console.log({
field,
hello
})
console.log('RESERVED FIELDS')
console.log({
source, // GraphQL response object from a parent.
args, // Arguments. In the example below { id:1, name:'jeans' }
identity, // Identity object. It depends on the authentication method. It will typically contain claims.
request,
info,
selectionSetGraphQL,
selectionSetList
})
}
To learn more about the identity
object, please refer to the Cognito $context.identity
object example.
const pulumi = require('@pulumi/pulumi')
const { resolve, aws: { appSync } } = require('@cloudlessopenlabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const PRODUCT_STACK = `your-product-stack/${ENV}`
const productStack = new pulumi.StackReference(PRODUCT_STACK)
const productApi = productStack.getOutput('lambda')
const tags = {
Project: PROJ,
Env: ENV
}
const schema = `
type Product {
id: ID!
name: String
}
type User {
id: ID!
}
type Query {
products(id: Int, name: String): [Product]
users: [User]
}
schema {
query: Query
}`
// Create the GraphQL API with its Schema.
const graphql = new appSync.Api({
name: PROJECT,
description: `Lineup ${ENV} GraphQL API`,
schema,
resolver: {
		// All the lambdas that are used as data sources must be listed here
		// in order to configure access from this GraphQL API.
lambdaArns:[productApi.arn]
},
cloudwatch: true,
tags
})
// Create a data source to retrieve and store data.
const dataSource = new appSync.DataSource({
name: PROJECT,
api: {
id: graphql.api.id,
roleArn: graphql.roleArn
},
functionArn: productApi.arn,
tags
})
// Create a VTL resolver that can bridge between a field and data source.
const productResolver = new appSync.Resolver({
name: `${PROJECT}-resolver-product`,
api:{
id: graphql.api.id,
roleArn: graphql.roleArn
},
type: 'Query',
field: 'projects',
mappingTemplate:{
payload: {
field: 'projects',
hello: 'world'
}
},
dataSource,
tags
})
module.exports = {
graphql,
dataSource,
resolvers: {
productResolver
}
}
NOTE: The sample above is similar to:
const graphql = new appSync.Api({
// ...
authConfig: {
apiKey: true
}
})
Because AppSync resolvers that use a Lambda data source can be straightforward (most of the time, they're just a pass-through to the lambda), we've created a createDataSourceResolvers helper method which creates a single data source for that lambda and then uses GraphQL schema inspection to isolate the fields for which resolvers must be created to route HTTP requests to that Lambda data source.
This API works as follows:
- Creates a new DataSource for the AppSync api object using the lambda's ARN functionArn.
- Extracts all the fields out of the GraphQL schema string schema.value for the GraphQL types defined in schema.includes (default: ['Query', 'Mutation', 'Subscription']).
- For each extracted field, creates a new resolver which uses the data source created in step 1.
const pulumi = require('@pulumi/pulumi')
const { resolve, aws: { appSync } } = require('@cloudlessopenlabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const PRODUCT_STACK = `your-product-stack/${ENV}`
const productStack = new pulumi.StackReference(PRODUCT_STACK)
const productApi = productStack.getOutput('lambda')
const tags = {
Project: PROJ,
Env: ENV
}
const schema = `
type Product {
id: ID!
name: String
}
type User {
id: ID!
}
type Query {
products(id: Int, name: String): [Product]
users: [User]
}
schema {
query: Query
}`
// Create the GraphQL API with its Schema.
const graphql = new appSync.Api({
name: PROJECT,
description: `Lineup ${ENV} GraphQL API`,
schema,
resolver: {
		// All the lambdas that are used as data sources must be listed here
		// in order to configure access from this GraphQL API.
lambdaArns:[productApi.arn]
},
cloudwatch: true,
tags
})
// Create a single data source using the 'functionArn' value and then create as many resolvers as
// there are fields in the 'Query' type.
const { dataSource, resolvers } = appSync.createDataSourceResolvers({
name: PROJECT,
api: {
id: graphql.api.id,
roleArn: graphql.roleArn
},
schema: {
value: schema,
includes:['Query'] // This means resolvers for all the `Query` fields will be created.
},
functionArn: productApi.arn,
tags
})
module.exports = {
graphql,
productAPI: {
dataSource,
resolvers
}
}
Use the authConfig
property. For example, Cognito:
const graphql = new appSync.Api({
name: 'my-api',
description: `My GraphQL API`,
schema:`
schema {
query: Query
}
type Product {
id: ID!
name: String
}
type User {
id: ID!
}
type Query {
products: [Product]
users: [User]
}`,
resolver: {
lambdaArns:[productApi.arn]
},
authConfig: {
cognito: {
userPoolId: '1234',
awsRegion: 'ap-southeast-2'
}
},
cloudwatch: true,
tags
})
authConfig:
{
iam: true
}
authConfig:
{
cognito: {
		userPoolId: '1234', // Required
		awsRegion: 'ap-southeast-2', // Required
// appIdClientRegex: '^my-app.*', // Optional
// defaultAction: 'DENY' // Default is 'ALLOW'. Allowed values: 'DENY', 'ALLOW'
}
}
This object is the one that is both accessible in the VTL mapping template and passed to the Lambda under the event.identity
property. It is similar to this sample:
{
claims: {
sub: '3c5b5034-1975-4889-a839-d43a7e0fbc48',
iss: 'https://cognito-idp.ap-southeast-2.amazonaws.com/ap-southeast-2_k63pzVJgQ',
version: 2,
client_id: '7n06fpr1t4ntm1hofbh8bnhp96',
origin_jti: '84c72cd1-eaad-40e5-a98f-9d7cd7a586cd',
event_id: 'c95393c0-bab7-40a8-b9e9-48e17b8d23fd',
token_use: 'access',
scope: 'phone openid profile email',
auth_time: 1634788385,
exp: 1634791985,
iat: 1634788385,
jti: 'ade2fe51-4b56-4a8f-9d9f-a9f3d03fd0aa',
username: '3c5b5034-1975-4889-a839-d43a7e0fbc48'
},
defaultAuthStrategy: 'ALLOW',
groups: null,
issuer: 'https://cognito-idp.ap-southeast-2.amazonaws.com/ap-southeast-2_k63pzVJgQ',
sourceIp: [ '49.179.157.39' ],
sub: '3c5b5034-1975-4889-a839-d43a7e0fbc48',
username: '3c5b5034-1975-4889-a839-d43a7e0fbc48'
}
authConfig:
{
oidc: {
		issuer: 'dewd',
		clientId: '1121321',
authTtl: '60000', // 60,000 ms (1 min)
iatTtl: '60000' // 60,000 ms (1 min)
}
}
WARNING: If both an Aurora cluster and an RDS proxy are provisioned at the same time, the initial
pulumi up
may fail with any of the following errors:Error creating DB Proxy: InvalidParameterValue: RDS is not authorized to assume service-linked role... Check your RDS service-linked role and try again
or
error registering RDS DB Proxy (xxxxxx/default) Target: InvalidDBInstanceState: DB Instance xxxxxxxxxx is in an unsupported state - CONFIGURING_LOG_EXPORTS, needs to be in [AVAILABLE, MODIFYING, BACKING_UP]
This is because the RDS target can only be created with running DB instances. Because the initial setup takes time, the DB instance may not be running by the time the RDS target creation process starts. There is no other option but to wait and run pulumi up again later. This issue seems to have been resolved when all those resources started to use the dependsOn option.
WARNING: Once the masterUsername is set, it cannot be changed. Attempting to change it triggers a delete-and-replace operation, which is obviously not what you may want.
const { aws:{ rds: { Aurora } } } = require('@cloudlessopenlabs/pulumix')
const aurora = new Aurora({
name: 'my-db',
engine: 'mysql', // Valid values: 'mysql' or 'post',
engineVersion: '8.0',
auroraMySqlVersion: '3.02.0',
availabilityZones: ['ap-southeast-2a', 'ap-southeast-2b', 'ap-southeast-2c'],
backupRetentionPeriod: 30, // 30 days
auth: {
// secretId: process.env.DB_SECRET_ID, // AWS Secret Manager variable name that stores the DB creds. To learn more about this, please refer to the "How to create DB credentials in AWS Secret Manager?" section.
masterUsername: process.env.DB_USERNAME,
masterPassword: process.env.DB_PASSWORD,
},
instanceNbr: 1,
instanceSize: 'db.t3.medium', // 'db.t2.small' does not support MySQL 8.0
vpcId: 'vpc-1234',
subnetIds: ['subnet-1234', 'subnet-4567'],
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: ['10.0.1.204/32'], description:`Bastion host access` }
],
protect:false,
publicAccess:false,
allowMajorVersionUpgrade: false, // Optional. Default false.
applyImmediately: true, // Optional. Default true.
tags: {
Project:'my-project',
Env: 'dev'
}
})
NOTES:
- The auth config can use inline credentials or credentials stored in AWS Secrets Manager. To learn more about this, please refer to the How to create DB credentials in AWS Secret Manager? section.
- This example uses an ingress rule that gives access to an EC2 instance. In practice, create a dedicated security group to access the RDS cluster, then add this SG to any system that needs access.
- Not all instanceSize values support every engineVersion. Please refer to this documentation to check which instance size supports which engine version: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Support
For example, '8.0'
for MySQL or '13.6'
for PostgreSQL. For PostgreSQL, simply use the standard PostgreSQL version. You can list them via this command:
aws rds describe-db-engine-versions --engine aurora-postgresql --query '*[].[EngineVersion]' --output text --region your-AWS-Region
For MySQL, as of 2022, only 3 versions are supported: 5.6, 5.7 and 8.0. For example, use 2.10.2 for MySQL 5.6 or 5.7, and 3.02.0 for MySQL 8.0.
Aurora created its own MySQL versions compatible with the community versions. As of 2022, 3 major versions exist: 1, 2, and 3. The exact mappings between those Aurora-specific versions and the community versions are listed here:
- Aurora version 1: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.11Updates.html
- Aurora version 2: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.20Updates.html
- Aurora version 3: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html
Use the EC2 class described in the EC2 with SSM section and the Aurora class described in the RDS Aurora section. The important bit in the next sample is the Aurora ingress, which allows the bastion to access Aurora:
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: [pulumi.interpolate`${bastion.privateIp}/32`], description:`Bastion host ${ec2Name} access` }
]
const { aws:{ EC2, SecurityGroup, rds: { Aurora } } } = require('@cloudlessopenlabs/pulumix')
// Security group for Bastion host
const ec2Name = `${PROJECT}-bastion`
const bastionSg = new SecurityGroup({
name: ec2Name,
description: `Identifies the bastion host ${ec2Name}.`,
vpcId: vpc.id,
tags
})
// Bastion server
const { ami, instanceType } = config.requireObject('bastion')
const bastion = new EC2({
name: ec2Name,
ami,
instanceType,
availabilityZone: vpc.privateSubnets[0].availabilityZone,
vpcSecurityGroupIds: [bastionSg.id],
subnetId: vpc.privateSubnets[0].id,
publicKey,
ssm:{
vpcId: vpc.id,
vpcDefaultSecurityGroupId: vpc.defaultSecurityGroupId
},
tags
})
// Aurora
const { backupRetentionPeriod, instanceSize, instanceNbr } = config.requireObject('aurora')
const aurora = new Aurora({
name: PROJECT,
engine: 'mysql',
engineVersion: '8.0',
auroraMySqlVersion: '3.02.0',
availabilityZones: vpc.availabilityZones,
backupRetentionPeriod,
auth: {
// secretId: process.env.DB_SECRET_ID, // AWS Secret Manager variable name that stores the DB creds. To learn more about this, please refer to the "How to create DB credentials in AWS Secret Manager?" section.
masterUsername: process.env.DB_USERNAME,
masterPassword: process.env.DB_PASSWORD,
},
instanceNbr,
instanceSize,
vpcId:vpc.id,
subnetIds: vpc.isolatedSubnets.apply(subnets => subnets.map(s => s.id)),
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: [pulumi.interpolate`${bastion.privateIp}/32`], description:`Bastion host ${ec2Name} access` }
],
protect:false,
publicAccess:false,
tags
})
The basic setup consists of:
- Adding an RDS proxy on an existing and already running cluster or instance.
- Adding a list of resources that can access it via the ingress rules. You may want to create a dedicated security group that can access the RDS proxy. This way you can simply add this SG to any resource you wish to give access to the proxy, rather than having to add those resources to the ingress list.
- Optionally, but recommended, turning on IAM authentication on the proxy. This prevents any client from using explicit DB credentials and forces access to be configured properly via their IAM role. To learn more about this, please refer to the Setting up a Lambda to be able to access the RDS proxy when IAM is turned on section.
- In your client, replacing the RDS endpoint that you would have used in the hostname with the RDS proxy endpoint. Nothing else changes.
WARNING: If both an Aurora cluster and an RDS proxy are provisioned at the same time, the initial
pulumi up
may fail with any of the following errors:Error creating DB Proxy: InvalidParameterValue: RDS is not authorized to assume service-linked role... Check your RDS service-linked role and try again
or
error registering RDS DB Proxy (xxxxxx/default) Target: InvalidDBInstanceState: DB Instance xxxxxxxxxx is in an unsupported state - CONFIGURING_LOG_EXPORTS, needs to be in [AVAILABLE, MODIFYING, BACKING_UP]
This is because the RDS target can only be created with running DB instances. Because the initial setup takes time, the DB instance may not be running by the time the RDS target creation process starts. There is no other option but to wait and run pulumi up again later. This issue seems to have been resolved when all those resources started to use the dependsOn option.
Use the proxy property. When this feature is enabled, an additional security group is created for the RDS proxy.
const auroraOutput = new Aurora({
name: 'my-db',
engine: 'mysql',
engineVersion: '8.0',
auroraMySqlVersion: '3.02.0',
availabilityZones: ['ap-southeast-2a', 'ap-southeast-2b', 'ap-southeast-2c'],
backupRetentionPeriod: 30, // 30 days
auth: {
// secretId: process.env.DB_SECRET_ID, // AWS Secret Manager variable name that stores the DB creds. To learn more about this, please refer to the "How to create DB credentials in AWS Secret Manager?" section.
masterUsername: process.env.DB_USERNAME,
masterPassword: process.env.DB_PASSWORD,
},
instanceNbr: 1,
instanceSize: 'db.t3.medium', // 'db.t2.small' does not support MySQL 8.0
vpcId: 'vpc-1234',
subnetIds: ['subnet-1234', 'subnet-4567'],
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: ['10.0.1.204/32'], description:`Bastion host access` }
],
proxy: true
})
To configure it in greater detail, use an object instead:
{
proxy: {
enabled: true, // Default true.
subnetIds: null, // Default is the RDS's subnetIds.
logSQLqueries: false, // Default false
idleClientTimeout: 1800, // Default 1800 seconds
requireTls: true, // Default true.
iam: false // Default false. If true, the RDS credentials are disabled and the only way to connect is via IAM.
}
}
By default, all the ingress rules apply identically to both the RDS cluster and the RDS proxy. The first example above is equivalent to this:
{
ingress:[
{
protocol: 'tcp',
fromPort: 3306,
toPort: 3306,
cidrBlocks: ['10.0.1.204/32'],
description:`Bastion host access`,
rds: true,
proxy: true
}
],
}
To create ingress rules that are specific to the RDS cluster or the RDS proxy, use the rds or proxy flag on each rule, as shown in the sketch below.
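For example, the following sketch (with a hypothetical CIDR block) defines a rule that only applies to the proxy:

{
	ingress:[
		{
			protocol: 'tcp',
			fromPort: 3306,
			toPort: 3306,
			cidrBlocks: ['10.0.2.0/24'], // hypothetical app subnet
			description: 'App access to the proxy only',
			rds: false, // do not apply this rule to the RDS cluster
			proxy: true // only apply it to the RDS proxy
		}
	]
}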
When the iam flag is turned on, the client must take these additional steps:
- Generate a password on-the-fly based on the client's IAM role. This is done in your code via the AWS.RDS.Signer API in the AWS SDK.
- Add an extra rds-db:connect policy to your resource's IAM role.
const AWS = require('aws-sdk')
const config = {
region: 'ap-southeast-2',
hostname: 'my-project.proxy-12345.ap-southeast-2.rds.amazonaws.com',
port: 3306,
username: 'admin'
}
const signer = new AWS.RDS.Signer(config)
signer.getAuthToken({ username:config.username }, (err, password) => {
if (err)
console.log(`Something went wrong: ${err.stack}`)
else
console.log(`Great! the password is: ${password}`)
})
To integrate this signer with the mysql2
package:
const mysql = require('mysql2/promise')
const db = mysql.createPool({
host: 'my-project.proxy-12345.ap-southeast-2.rds.amazonaws.com', // can also be an IP
user: 'admin',
ssl: { rejectUnauthorized: false},
database: 'my-db-name',
multipleStatements: true,
waitForConnections: true,
connectionLimit: 2, // connection pool size
queueLimit: 0,
timezone: '+00:00', // UTC
authPlugins: {
mysql_clear_password: () => () => {
return signer.getAuthToken({ username:'admin' })
}
}
})
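For completeness, here is a minimal usage sketch of that pool. The token returned by the signer is used as a clear-text password by the mysql_clear_password plugin configured above (hence the TLS requirement):

const main = async () => {
	// The password is fetched lazily via the 'authPlugins' callback above.
	const [rows] = await db.query('SELECT NOW() AS now')
	console.log(rows[0].now)
}

main()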
const { aws:{ Lambda, rds:{ policy: { createConnectPolicy } } } } = require('@cloudlessopenlabs/pulumix')
const rdsAccessPolicy = createConnectPolicy({ name:`my-project-access-rds`, rdsArn:proxy.arn })
const lambda = new Lambda({
//...
policies:[rdsAccessPolicy],
//...
})
createConnectPolicy accepts the following input:
- rdsArn: Required. Examples: arn:aws:rds:ap-southeast-2:1234:db-proxy:prx-123, arn:aws:rds:ap-southeast-2:1234:cluster:blabla or arn:aws:rds:ap-southeast-2:1234:db:blibli.
- resourceId: Optional. Default: the resource name (1).
- username: Optional. Default *. Other examples: 'mark', 'peter'.

(1) Only the RDS proxy embeds its resource ID in its ARN. This means that the resourceId should not be provided when the rdsArn is an RDS proxy. For all the other RDS resources (clusters and instances), the resourceId is required. For an Aurora cluster, this resource is called clusterResourceId, while for an instance, it is called dbiResourceId.
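For instance, a hedged sketch for an Aurora cluster, where the ARN and clusterResourceId values are hypothetical:

const { aws: { rds: { policy: { createConnectPolicy } } } } = require('@cloudlessopenlabs/pulumix')

// For a cluster, the resourceId (i.e., the 'clusterResourceId') must be provided explicitly.
const clusterConnectPolicy = createConnectPolicy({
	name: 'my-project-access-aurora',
	rdsArn: 'arn:aws:rds:ap-southeast-2:1234:cluster:blabla', // example ARN from the list above
	resourceId: 'cluster-ABCD1234', // hypothetical clusterResourceId
	username: 'admin' // optional; defaults to '*'
})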
For more details around creating this policy, please refer to this article Creating and using an IAM policy for IAM database access
This section is not about the code sample (which is trivial and added below), but about the approach. It is NOT RECOMMENDED to use Pulumi to provision a secret in AWS Secrets Manager and then use it directly in Aurora. The reasons for this are:
- You need to maintain the initial secrets in the Pulumi code. Even if you use environment variables, this could be avoided.
- Each time you run pulumi up, there is a risk of updating the DB credentials, which could break clients relying on your DB.
Instead, you should:
- Prior to provisioning the DB, create a new secret in your account and name it using your stack convention (1).
- Pass that secret's ARN to the Aurora script above.
const auroraOutput = aurora({
...
auth: {
secretId: 'my-db-creds-dev' // This can be the secret's name, id or arn.
},
...
})
(1) For example my-db-creds-<STACKNAME> (e.g., my-db-creds-dev).
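For reference, such a secret can be created with the AWS CLI before running pulumi up (the name and values below are examples):

aws secretsmanager create-secret \
	--name my-db-creds-dev \
	--secret-string '{"username":"admin","password":"change-me"}'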
AWS Cognito is a fully-managed AWS Auth Server as a Service. It requires creating:
- A user pool to store users.
- A domain to enable the auth server feature. This is required to support both the OAuth2 flows and the login/signup hosted UI.
- One or many app clients to access the user pool's unauthenticated APIs (i.e., sign-in, signup, reset password).
- An optional identity pool if the authenticated user must access AWS resources (e.g., S3).
- An optional resource server if you must define custom scopes.
Configuring Cognito is not trivial. The easiest way is to refer to some examples:
- Configuring a direct signup without any user confirmation: This is the easiest, but also the least robust and secure.
- Configuring signup with required email confirmation for account activation: The most standard approach.
The user pool is the container that stores users. To allow users to sign in/sign up, you must also add:
- A domain, i.e., a URL that will host a hosted UI. You can let AWS provision a default domain or provide your own (e.g., mydomain.com).
- At least one App client.
Don't worry if you feel this model is confusing... It freaking is 🤯!!! Remember that for a single user pool:
- only one domain is allowed;
- more than one app client is allowed, but at least one is required if you need to support signin/signup (which you most likely need).

If no domain is provisioned, the OAuth2 Auth Server feature is not enabled, which means that none of the OAuth2 Web APIs described at https://docs.aws.amazon.com/cognito/latest/developerguide/authorization-endpoint.html work. If the domain is provisioned, the base URL for those Web APIs is the userPool.domain.endpoint value shown in the example below.
The following example demonstrates how the UserPool
class provides the optional ability to also provision a domain and a default App.
const { aws: { cognito }, getProject, unwrap } = require('@cloudlessopenlabs/pulumix')
const BACKEND = {} // { backend:'s3' }
const { project:PROJ, stack:ENV } = getProject(BACKEND)
const PROJECT = `${PROJ}-${ENV}`
const preSignup = createSomePreSignUpLamda()
const postConfirm = createSomePostConfirmLamda()
const customMessage = createSomeCustomMessageLamda()
const userPool = new cognito.UserPool({
name: PROJECT,
domain: {
name: PROJECT,
// certArn: 'example.com' // AWS Certificate Manager ARN for 'example.com'
},
username: {
use:['email'], // Allowed values: 'email', 'phone'
// aliases: ['email', 'phone'], // Allowed: 'email', 'phone', 'preferred_username'. When set, those mutable values can be used as username on top of the unique immutable username.
// caseSensitive: true // Default true
},
// attributes: {
// hello_world: {
// type: 'number', // Allowed values: 'string', 'number', 'boolean', 'date'
// required: true, // Default false.
// mutable: true, // Default true.
// range: [0,100] // Default null. Min, max constraints on string or number.
// }
// },
// autoVerifiedAttributes: ['email', 'phone'], // Default null. Supported values: 'email', 'phone'
// recoveryMechanisms: ['email', 'phone'], // Default null. Supported values: 'email', 'phone'
// email: {
// ses: {
// from: 'info@example.com',
// replyTo: 'no-reply@example.com',
// configurationSet: 'ddd',
// arn: 'arn:of:the:ses:service'
// },
// verification: {
// confirmType: 'link', // Valid values: 'code', 'link' (default)
// subject: 'Welcome',
// message: 'Welcome and thanks for joining. Please click on {##this link##} to activate your account.', // WARNING: This message MUST contain certain characters based on the confirmType's value. If confirmType is 'code', this text must contain '{####}'. If it is 'link', this text must contain '{##whatever you want here##}'.
// }
// },
// sms: {
// verification: {
// message: 'Please use this code'
// },
// mfa: {
// message: 'Please use this code'
// }
// },
// mfa : {
// methods: ['sms', 'totp'], // Valid values: 'email', 'sms', 'totp'
// optional: false // Default false. True means only for individual users who have MFA enabled.
// },
hooks: {
preSignUp: {
name: preSignup.name,
arn: preSignup.arn
},
postConfirmation: {
name: postConfirm.name,
arn: postConfirm.arn
},
customMessage: {
name: customMessage.name,
arn: customMessage.arn
}
},
passwordPolicy: {
minimumLength: 6,
requireLowercase: true,
requireNumbers: true,
requireSymbols: true,
requireUppercase: true
},
groups: ['tester'],
defaultApp: {
// name: 'my-app', // Default `default-app-${name}` where 'name' is the user pool's name.
oauth: {
// disable: false, // Default false
grantTypes:['code', 'password', 'refresh_token'],
passwordModes:['srp', 'admin', 'standard'],
scopes: ['phone', 'email', 'openid', 'profile'],
		// secret: false // Default false. When set to true, a secret is generated. Use this for server-side authentication. WARNING: True forces the secret to be passed during the authorization_code flow, which is not suitable for a SPA or PWA.
},
// tokenDuration: {
// idToken: {
// value: 1, // Default 1.
// unit: 'hours' // Default 'hours'. Allowed values: 'seconds', 'minutes', 'hours' (default), 'days'
// },
// accessToken: {
// value: 1, // Default 1.
// unit: 'hours' // Default 'hours'. Allowed values: 'seconds', 'minutes', 'hours' (default), 'days'
// },
// refreshToken: {
// value: 30, // Default 30.
// unit: 'hours' // Default 'days'. Allowed values: 'seconds', 'minutes', 'hours', 'days' (default)
// }
// },
allowedUrls:{
callbacks:['https://fdewcds3423.cloudfront.net'],
// logouts: ['https://fdewcds3423.cloudfront.net/logout']
},
// idps:['facebook', 'google'], // Allowed values: 'facebook', 'google', 'amazon', 'apple', 'oidc', 'saml'
},
protect: false,
tags: {
Project: PROJ,
Env: ENV
}
})
unwrap(userPool).apply(v => console.log(v))
// This prints the following:
// {
// accountRecoverySetting: {
// recoveryMechanisms: [{
// name: 'verified_email',
// priority: 1
// }]
// },
// adminCreateUserConfig: {
// allowAdminCreateUserOnly: false
// },
// arn: 'arn:aws:cognito-idp:ap-southeast-2:123456:userpool/ap-southeast-2_rxfg32d6',
// creationDate: '2022-05-20T13:13:31Z',
// emailConfiguration: {
// emailSendingAccount: 'COGNITO_DEFAULT'
// },
// endpoint: 'cognito-idp.ap-southeast-2.amazonaws.com/ap-southeast-2_rxfg32d6',
// estimatedNumberOfUsers: 0,
// id: "ap-southeast-2_rxfg32d6",
// lambdaConfig: {
// postConfirmation: "arn:aws:lambda:ap-southeast-2:123456:function:authserver-post-confirmation-prod",
// preSignUp: "arn:aws:lambda:ap-southeast-2:123456:function:authserver-pre-signup-prod"
// },
// lastModifiedDate: "2022-05-20T13:13:31Z",
// mfaConfiguration: "OFF",
// name: "authserver-prod",
// passwordPolicy: {
// minimumLength: 6,
// requireLowercase: true,
// requireNumbers: true,
// requireSymbols: true,
// requireUppercase: true,
// temporaryPasswordValidityDays: 7,
// },
// tags: {
// Env: "prod",
// Name: "authserver-prod",
// Project: "authserver"
// },
// tagsAll: {
// Env: "prod",
// Name: "authserver-prod",
// Project: "authserver"
// },
// urn: "urn:pulumi:prod::authserver::aws:cognito/userPool:UserPool::authserver-prod",
// usernameAttributes: [
// 'email'
// ],
// usernameConfiguration: {
// caseSensitive: true
// },
// verificationMessageTemplate : {
// defaultEmailOption: "CONFIRM_WITH_LINK"
// },
// domain: {
// awsAccountId: "123456",
// cloudfrontDistributionArn: "d18k7b2git647n.cloudfront.net",
// domain: "authserver-prod",
// endpoint: "https://authserver-prod.auth.ap-southeast-2.amazoncognito.com",
// id: "authserver-prod",
// s3Bucket: "aws-cognito-prod-syd-assets",
// urn: "urn:pulumi:prod::authserver::aws:cognito/userPoolDomain:UserPoolDomain::authserver-prod-domain",
// userPoolId: "ap-southeast-2_rxfg32d6",
// version: "20220520131333"
// },
// userGroups: [{
// id: "ap-southeast-2_rxfg32d6/tester",
// name: "tester",
// precedence: 0,
// urn: "urn:pulumi:prod::authserver::aws:cognito/userGroup:UserGroup::tester",
// userPoolId: "ap-southeast-2_rxfg32d6",
// }],
// permissions: [{ action:'lambda:InvokeFunction' }] // Permission on each Lambda trigger,
// defaultApp: {
// accessTokenValidity: 1,
// allowedOauthFlows: [
// "code"
// ],
// allowedOauthFlowsUserPoolClient: true,
// allowedOauthScopes: [
// "openid",
// "phone",
// "profile",
// "email"
// ],
// callbackUrls: [
// "https://fdewcds3423.cloudfront.net"
// ],
// clientSecret: "[secret]",
// defaultRedirectUri: "https://fdewcds3423.cloudfront.net",
// enableTokenRevocation: true,
// explicitAuthFlows: [
// "ALLOW_ADMIN_USER_PASSWORD_AUTH",
// "ALLOW_USER_SRP_AUTH",
// "ALLOW_USER_PASSWORD_AUTH",
// "ALLOW_REFRESH_TOKEN_AUTH",
// "ALLOW_CUSTOM_AUTH"
// ],
// generateSecret: false,
// hostedUI: {
// loginUrl: "https://authserver-prod.auth.ap-southeast-2.amazoncognito.com/login?client_id=32e...",
// signupUrl: "https://authserver-prod.auth.ap-southeast-2.amazoncognito.com/signup?client_id=32eu..."
// },
// id: "fewr32432d2r32d2d",
// idTokenValidity: 1,
// name: "authserver-app-prod",
// preventUserExistenceErrors: "ENABLED",
// refreshTokenValidity: 30,
// supportedIdentityProviders: [
// "COGNITO"
// ],
// tokenValidityUnits: {
// accessToken: "hours",
// idToken: "hours",
// refreshToken: "days"
// },
// userPoolId: "ap-southeast-2_rxfg32d6"
// }
// }
NOTES:
- By default, the signup requires a unique immutable username which can be anything. To add support for a mutable 'email', 'phone' or 'preferred_username' username, use the 'username.aliases' property. Use the 'username.use' property to add immutable support for using 'email' or 'phone' as the username.
- By default, the email confirmation method uses a link. To change this to use a code, use the 'email.verification.confirmType' property.
const userPool = new cognito.UserPool({
//... more config
hooks: {
preSignUp: {
name: preSignup.name,
arn: preSignup.arn
}
//... more config
}
//... more config
})
The pre-signup lambda is triggered when a new user signs up, just before the user is added to the pool. If this trigger fails, the entire signup operation is aborted. This trigger is typically used to synchronize the Cognito user details with your own backend (e.g., a user table in your own database).
The payload received by this lambda is similar to this:
{
"version": "1",
"region": "ap-southeast-2",
"userPoolId": "ap-southeast-2_tMBCrsw9Y",
"userName": "3931fffa-85f2-4821-aeac-22ee804bf379",
"callerContext":
{
"awsSdkVersion": "aws-sdk-unknown-unknown",
"clientId": "26tetpqo2sils133ho8eijhh7u"
},
"triggerSource": "PreSignUp_SignUp",
"request":
{
"userAttributes":
{
"email": "nic@example.com"
},
"validationData": null
},
"response":
{
"autoConfirmUser": false,
"autoVerifyEmail": false,
"autoVerifyPhone": false
}
}
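A minimal sketch of such a handler, assuming a hypothetical saveUserToDb helper that syncs the user to your own backend:

// 'saveUserToDb' is a hypothetical helper that writes to your own database.
exports.handler = async event => {
	const { userName, request } = event
	await saveUserToDb({
		id: userName, // Cognito username (a UUID in the payload above)
		email: request.userAttributes.email
	})
	// The (possibly mutated) event must be returned to Cognito.
	return event
}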
const userPool = new cognito.UserPool({
//... more config
hooks: {
postConfirmation: {
name: postConfirm.name,
arn: postConfirm.arn
}
//... more config
}
//... more config
})
The post-confirmation lambda is triggered when the user is confirmed. This can happen automatically during the signup process or manually when a user has to confirm their details (e.g., email, phone number). This trigger is typically used to add the user to a pre-configured Cognito User Group.
The payload received by this lambda is similar to this:
{
"version": "1",
"region": "ap-southeast-2",
"userPoolId": "ap-southeast-2_tMBCrsw9Y",
"userName": "3931fffa-85f2-4821-aeac-22ee804bf379",
"callerContext":
{
"awsSdkVersion": "aws-sdk-unknown-unknown",
"clientId": "26tetpqo2sils133ho8eijhh7u"
},
"triggerSource": "PostConfirmation_ConfirmSignUp",
"request":
{
"userAttributes":
{
"sub": "3931fffa-85f2-4821-aeac-22ee804bf379",
"cognito:email_alias": "nic@example.com",
"email_verified": "false",
"cognito:user_status": "CONFIRMED",
"email": "nic@example.com"
}
},
"response":
{}
}
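A minimal sketch of such a handler, reusing the adminAddUserToGroup API shown further down. The 'tester' group name is an example and must exist in the pool, and the Lambda's role needs the cognito-idp:AdminAddUserToGroup permission:

const AWS = require('aws-sdk')

exports.handler = async event => {
	const cognitoIdp = new AWS.CognitoIdentityServiceProvider()
	// Add the freshly confirmed user to a pre-configured user group.
	await cognitoIdp.adminAddUserToGroup({
		UserPoolId: event.userPoolId,
		GroupName: 'tester', // example group name
		Username: event.userName
	}).promise()
	return event
}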
const userPool = new cognito.UserPool({
//... more config
hooks: {
customMessage: {
name: customMessage.name,
arn: customMessage.arn
}
//... more config
}
//... more config
})
This Lambda allows you to send custom messages in response to Cognito events. The event intercepted by this lambda is similar to this:
{
"version": "1",
"region": "ap-southeast-2",
"userPoolId": "ap-southeast-xxxxxx",
"userName": "xxxxxxxxxxx",
"callerContext":
{
"awsSdkVersion": "aws-sdk-unknown-unknown",
"clientId": "xxxxxxxx"
},
"triggerSource": "CustomMessage_SignUp",
"request":
{
"userAttributes":
{
"sub": "xxxxxxxxx",
"cognito:email_alias": "you@example.com",
"email_verified": "false",
"cognito:user_status": "UNCONFIRMED",
"email": "you@example.com"
},
"codeParameter": "{####}",
"linkParameter": "{##Click Here##}",
"usernameParameter": null
},
"response":
{
"smsMessage": null,
"emailMessage": null,
"emailSubject": null
}
}
Where triggerSource is used to determine the event type:
- CustomMessage_SignUp: To send the confirmation code post sign-up.
- CustomMessage_AdminCreateUser: To send the temporary password to a new user.
- CustomMessage_ResendCode: To resend the confirmation code to an existing user.
- CustomMessage_ForgotPassword: To send the confirmation code for a Forgot Password request.
- CustomMessage_UpdateUserAttribute: When a user's email or phone number is changed, this trigger sends a verification code automatically to the user. Cannot be used for other attributes.
- CustomMessage_VerifyUserAttribute: This trigger sends a verification code to the user when they manually request it for a new email or phone number.
- CustomMessage_Authentication: To send an MFA code during authentication.

For example, the following handler overrides the signup email:
exports.handler = async event => {
event = event || {}
	logger.log({ level:'INFO', message:'Cognito event received', code:'00003002000', data: event }) // 'logger' is assumed to be your own logging utility.
event.response.emailSubject = "Welcome to the service"
event.response.emailMessage = 'Test test 1234 hello ' + event.request.codeParameter
return event
}
IMPORTANT: emailMessage or smsMessage MUST CONTAIN THE CODE OR LINK when the message is a verification message. If they don't, Cognito treats them as invalid and falls back on the default message.
const app = new cognito.App({
name: `${PROJ}-app-${ENV}`,
userPool: {
id: userPool.pool.id,
endpoint: userPool.domain.endpoint
},
oauth: {
// disable: false, // Default false
grantTypes:['code', 'password', 'refresh_token'],
passwordModes:['srp', 'admin', 'standard'],
scopes: ['phone', 'email', 'openid', 'profile'],
		// secret: false // Default false. When set to true, a secret is generated. Use this for server-side authentication. WARNING: True forces the secret to be passed during the authorization_code flow, which is not suitable for a SPA or PWA.
},
// tokenDuration: {
// idToken: {
// value: 1, // Default 1.
// unit: 'hours' // Default 'hours'. Allowed values: 'seconds', 'minutes', 'hours' (default), 'days'
// },
// accessToken: {
// value: 1, // Default 1.
// unit: 'hours' // Default 'hours'. Allowed values: 'seconds', 'minutes', 'hours' (default), 'days'
// },
// refreshToken: {
// value: 30, // Default 30.
// unit: 'hours' // Default 'days'. Allowed values: 'seconds', 'minutes', 'hours', 'days' (default)
// }
// },
allowedUrls:{
callbacks:['https://fdewcds3423.cloudfront.net'],
// logouts: ['https://fdewcds3423.cloudfront.net/logout']
},
// idps:['facebook', 'google'], // Allowed values: 'facebook', 'google', 'amazon', 'apple', 'oidc', 'saml'
})
unwrap(app).apply(v => console.log(v))
// {
// accessTokenValidity: 1,
// allowedOauthFlows: [
// "code"
// ],
// allowedOauthFlowsUserPoolClient: true,
// allowedOauthScopes: [
// "openid",
// "phone",
// "profile",
// "email"
// ],
// callbackUrls: [
// "https://fdewcds3423.cloudfront.net"
// ],
// clientSecret: "[secret]",
// defaultRedirectUri: "https://fdewcds3423.cloudfront.net",
// enableTokenRevocation: true,
// explicitAuthFlows: [
// "ALLOW_ADMIN_USER_PASSWORD_AUTH",
// "ALLOW_USER_SRP_AUTH",
// "ALLOW_USER_PASSWORD_AUTH",
// "ALLOW_REFRESH_TOKEN_AUTH",
// "ALLOW_CUSTOM_AUTH"
// ],
// generateSecret: false,
// hostedUI: {
// loginUrl: "https://authserver-prod.auth.ap-southeast-2.amazoncognito.com/login?client_id=32e...",
// signupUrl: "https://authserver-prod.auth.ap-southeast-2.amazoncognito.com/signup?client_id=32eu..."
// },
// id: "fewr32432d2r32d2d",
// idTokenValidity: 1,
// name: "authserver-app-prod",
// preventUserExistenceErrors: "ENABLED",
// refreshTokenValidity: 30,
// supportedIdentityProviders: [
// "COGNITO"
// ],
// tokenValidityUnits: {
// accessToken: "hours",
// idToken: "hours",
// refreshToken: "days"
// },
// userPoolId: "ap-southeast-2_rxfg32d6"
// }
Using the AWS SDK CognitoIdentityServiceProvider APIs is a typical use case when interacting programmatically with Cognito. For example, the code below adds a user to a group:
const AWS = require('aws-sdk')
const main = async () => {
const cognitoIdp = new AWS.CognitoIdentityServiceProvider()
await cognitoIdp.adminAddUserToGroup({
UserPoolId: '12345',
GroupName: 'admin',
Username: '32231-ded32-32e32s2-23e11'
}).promise()
}

main()
To run this sample, the environment (e.g., Lambda) must use a policy that allows the adminAddUserToGroup
action. In the case of a Lambda, this would be similar to:
const pulumi = require('@pulumi/pulumi')
const { aws: { Lambda, cognito } } = require('@cloudlessopenlabs/pulumix')
const postConfirmation = new Lambda({...})
const userPool = new cognito.UserPool({...})
pulumi.output(userPool.arn).apply(userPoolArn => Lambda.attachPolicy(postConfirmation, {
name: `my-policy`,
path: '/',
description: `IAM policy to let the lambda access Cognito.`,
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'cognito-idp:AdminAddUserToGroup'
],
Resource: userPoolArn,
Effect: 'Allow'
}]
})
}))
The following configuration provisions:
- A user pool to store users.
- A domain provisioned by AWS on its .amazoncognito.com domain to access the Auth Server.
- An App to host a signin/signup hosted UI and interact with the Auth Server.
- A Pre-signup lambda to:
  - Auto-confirm and auto-verify the user.
  - Sync that user to another data store (most likely the user table in your own database).
What is important to understand with the following code snippet is that:
- Upon successful signup, the user will be immediately redirected to the callback URL.
- The pre-signup Lambda must mutate the event it receives and return that object in order to mark the user as verified and confirmed. If the user is not confirmed, they won't be able to log in. If they are not verified, they won't be able to reset their password using their email (notice that the recoveryMechanisms property is set to 'email'). The mutation looks like this:
event.response.autoConfirmUser = true
event.response.autoVerifyEmail = true
event.response.autoVerifyPhone = true
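Wrapped in a complete handler, this gives the following minimal sketch:

exports.handler = async event => {
	// Mark the user as confirmed and verified so they can log in
	// and reset their password right away.
	event.response.autoConfirmUser = true
	event.response.autoVerifyEmail = true
	event.response.autoVerifyPhone = true
	return event
}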
NOTES:
- The recoveryMechanisms won't work if the associated autoVerify<MECHANISM> property is not true.
const { aws: { cognito }, getProject, unwrap } = require('@cloudlessopenlabs/pulumix')
const BACKEND = {} // { backend:'s3' }
const { project:PROJ, stack:ENV } = getProject(BACKEND)
const PROJECT = `${PROJ}-${ENV}`
const preSignup = createSomePreSignUpLamda()
const userPool = new cognito.UserPool({
name: PROJECT,
domain: {
name: PROJECT,
},
username: {
use:['email']
},
recoveryMechanisms: ['email'], // Default null. Supported values: 'email', 'phone'
hooks: {
preSignUp
},
passwordPolicy: {
minimumLength: 6,
requireLowercase: true,
requireNumbers: true,
requireSymbols: true,
requireUppercase: true
},
defaultApp: {
oauth: {
grantTypes:['code', 'password', 'refresh_token'],
passwordModes:['srp', 'admin', 'standard'],
scopes: ['phone', 'email', 'openid', 'profile'],
},
allowedUrls:{
callbacks:['https://fdewcds3423.cloudfront.net'],
}
},
protect: false,
tags: {
Project: PROJ,
Env: ENV
}
})
The following configuration provisions:
- A user pool to store users.
- A domain provisioned by AWS on its .amazoncognito.com domain to access the Auth Server.
- An App to host a signin/signup hosted UI and interact with the Auth Server.
- A Pre-signup lambda to:
  - Sync that user to another data store (most likely the user table in your own database).
As opposed to the previous example, upon successful signup, the user is redirected to an intermediate page prompting them to click continue after the activation step has been completed. This activation step is toggled via the autoVerifiedAttributes property. When that property is set, an activation message is sent to the user via the channels defined in autoVerifiedAttributes (1).
Optionally, the activation message can be configured as follows:
- email.ses property (default null). This is the email service. By default, Cognito is used (which uses SES behind the scenes). This is not the recommended option for production use, as the 'from' email is no-reply@verificationemail.com, which is flagged as spam by most email providers.
- email.verification property (default null). This object allows configuring the message that is sent. The email.verification.confirmType can be blank (default link).
IMPORTANT: If the email.verification.message is set, it MUST contain one of those two tokens:
- {####}: This token is required in the text message if the confirmType is 'code' (e.g., 'Welcome. Use this code to activate your account: {####}').
- {##whatever text you need here##}: This token is required in the text message if the confirmType is empty or set to 'link' (e.g., 'Welcome. Click on {##this link##} to activate your account.').
Finally, the pre-signup Lambda does not have to mutate the event payload to set the autoConfirmUser
, autoVerifyEmail
, or autoVerifyPhone
flags.
(1) The values in the autoVerifiedAttributes must match the required attributes provided by the user during the signup. Configuring those attributes can be done in a couple of ways. Either the attribute (e.g., email) is set via the username, or it is set via the attributes property. More about the attributes property in the Configuring signup attributes example.
const { aws: { cognito }, getProject, unwrap } = require('@cloudlessopenlabs/pulumix')
const BACKEND = {} // { backend:'s3' }
const { project:PROJ, stack:ENV } = getProject(BACKEND)
const PROJECT = `${PROJ}-${ENV}`
const preSignup = createSomePreSignUpLamda()
const userPool = new cognito.UserPool({
name: PROJECT,
domain: {
name: PROJECT,
},
username: {
use:['email']
},
autoVerifiedAttributes: ['email'], // Default null. Supported values: 'email', 'phone'
recoveryMechanisms: ['email'], // Default null. Supported values: 'email', 'phone'
email: {
verification: {
confirmType: 'link', // Valid values: 'code', 'link' (default)
subject: 'Welcome',
message: 'Welcome and thanks for joining. Please click on {##this link##} to activate your account.', // WARNING: This message MUST contain certain characters based on the confirmType's value. If confirmType is 'code', this text must contain '{####}'. If it is 'link', this text must contain '{##whatever you want here##}'.
}
},
hooks: {
preSignUp
},
passwordPolicy: {
minimumLength: 6,
requireLowercase: true,
requireNumbers: true,
requireSymbols: true,
requireUppercase: true
},
defaultApp: {
oauth: {
grantTypes:['code', 'password', 'refresh_token'],
passwordModes:['srp', 'admin', 'standard'],
scopes: ['phone', 'email', 'openid', 'profile'],
},
allowedUrls:{
callbacks:['https://fdewcds3423.cloudfront.net'],
}
},
protect: false,
tags: {
Project: PROJ,
Env: ENV
}
})
It is not unusual to require more than the email or phone number during a signup process (e.g., first name, last name, ...). In this case, you should use the attributes
property.
There are 2 types of attributes:
- Standard (i.e., defined by OIDC): e.g., given_name, family_name, address, ... (full list at https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html). Those attributes are automatically recognized by the out-of-the-box hosted UI and will appear in the signup form.
- Custom: Non-standard attributes, for example: hello_world.
WARNING: Non-standard attributes have the following limitations:
- They cannot be set to required.
- They will not automatically appear in the hosted UI signup form.
// List of all supported standard attributes: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html
const userPool = new cognito.UserPool({
// ... other props
attributes: {
given_name: {
type: 'string',
required: true
},
family_name: {
type: 'string',
required: true
},
phone_number: {
type: 'string',
required: true
},
hello: {
type: 'string',
required: false
}
}
})
The next sample shows how to provision an EC2 bastion host secured via SSM in a private subnet. A private subnet does not need a NAT Gateway to work with SSM, but in this example one is required in order to run the EC2_SHELL script, which needs internet access to install telnet (this is just an example; in theory, you would use SSM to install telnet, which would remove the need for this userData script and therefore also the need for a NAT gateway).
Also, notice that we are passing the RSA public key to this instance. This sets up the RSA key for the ec2-user SSH user. The RSA private key is intended to be shared with any engineer who needs to establish a secured SSH tunnel between their local machine and this bastion host. Private RSA keys are usually not supposed to be shared lightly, but in this case, the security and accesses are managed by SSM, which relaxes the restrictions around sharing the RSA private key. For more details about SSH tunneling with SSM, please refer to this document: https://gist.github.com/nicolasdao/4808f0a1e5e50fdd29ede50d2e56024d#ssh-tunnel-to-private-rds-instances.
const { aws: { EC2 } } = require('@cloudlessopenlabs/pulumix')
const { getPubKeySync } = require('./src/ssh')
const EC2_SHELL = `#!/bin/bash
set -ex
cd /tmp
sudo yum install -y telnet`
// The code for this `getPubKeySync` method is documented in the "NodeJS snippet to get SSH public key" section
const EC2_RSA_PUBLIC_KEY = getPubKeySync()
const ec2 = new EC2({
name: 'my-ec2-machine',
ami: 'ami-02dc2e45afd1dc0db', // That's Amazon Linux 2 for 64-bits ARM which comes pre-installed with the SSM agent.
instanceType: 't4g.nano', // EC2 ARM graviton 2
availabilityZone: 'ap-southeast-2a', // Tip: Use `npx get-regions` to find an AZ.
subnetId: privateSubnetId,
userData: EC2_SHELL,
publicKey: EC2_RSA_PUBLIC_KEY, // The private key is used by the SSH client.
	ssm: { // Toggles SSM. WARNING: SSM must be manually configured in the AWS Console first (see the Annexes below).
vpcId:vpc.id,
vpcDefaultSecurityGroupId: vpc.vpc.defaultSecurityGroupId
},
tags: {
Project: 'my-cool-project',
Env: 'dev'
}
})
console.log(ec2)
NOTE: Refer to the Annexes to learn more about:
- the getPubKeySync method (NodeJS snippet to get SSH public key section);
- how to generate SSH keys (Generating SSH keys section);
- how to manually configure SSM in the AWS Console (Setting up SSM in the AWS Console section).
const awsx = require('@pulumi/awsx')
const path = require('path')
// ECR images. Doc:
// - buildAndPushImage API: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/ecr/#buildAndPushImage
// - 2nd argument is a DockerBuild object: https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild
const image = awsx.ecr.buildAndPushImage('my-image-name', {
context: path.resolve('../app'),
args:{
SOME_ARG: 'hello'
},
tags: {
Name: 'my-image-name'
}
})
Where args
is what is passed to the --build-arg
option of the docker build
command.
The URL for this new image is inside the image.imageValue
property.
const { aws:{ ecr } } = require('@cloudlessopenlabs/pulumix')
const myImage = new ecr.Image({
name: 'my-image',
tag: 'v2',
dir: path.resolve('./app')
})
Where myImage is structured as follows:
- myImage.imageValues: Contains the values you can use in the FROM directive of another Dockerfile (e.g., FROM 12345.dkr.ecr.ap-southeast-2.amazonaws.com/my-image:v2). If the tag property is set, this array contains two values: the first item is tagged with the tag value, and the second is tagged with <tag>-<SHA-digest>. If the tag is not set, this array contains only one item tagged with the SHA digest.
- myImage.repository: Output object with the repository's details.
- myImage.lifecyclePolicy: Output object with the lifecycle policy.
const myImage = new ecr.Image({
name: 'my-image',
tag: 'v3',
dir: path.resolve('./app'),
args: {
DB_USER: '1234',
DB_PASSWORD: '4567'
},
imageTagMutable: false, // the default is true
lifecyclePolicies:[{
description: 'Only keep up to 50 tagged images',
tagPrefixList:['v'],
countNumber: 50
}],
tags: {
Project: 'my-cool-project',
Env: 'prod',
Name: 'my-image'
}
})
NOTICE:
- When imageTagMutable is set to false, each tagged version becomes immutable, which means your deployment will fail if you're pushing a tag that already exists.
By default, repositories are private. To make them public, use:
const myImage = new ecr.Image({
name: 'my-image',
tag: 'v3',
dir: path.resolve('./app'),
args: {
DB_USER: '1234',
DB_PASSWORD: '4567'
},
imageTagMutable: false, // the default is true
lifecyclePolicies:[{
description: 'Only keep up to 50 tagged images',
tagPrefixList:['v'],
countNumber: 50
}],
publicConfig: {
aboutText: 'This is a public repo',
description: 'This is a public repo',
usageText: 'Use it as follow...',
architectures: ['ARM', 'ARM 64', 'x86', 'x86-64'],
operatingSystems: ['Linux']
},
tags: {
Project: 'my-cool-project',
Env: 'prod',
Name: 'my-image'
}
})
const pulumi = require('@pulumi/pulumi')
const { aws:{ securityGroup, vpc, Lambda, efs } } = require('@cloudlessopenlabs/pulumix')
const { resolve } = require('path')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const tags = {
Project: PROJ,
Env: ENV
}
const main = async () => {
// VPC with a public subnet and an isolated subnet (i.e., private with no NAT)
const vpcOutput = await vpc({
name: PROJECT,
subnets: [{ type: 'public' }, { type: 'isolated', name: 'efs' }],
numberOfAvailabilityZones: 3,
protect: true,
tags
})
// Security group that can access EFS
const { securityGroup:accessToEfsSecurityGroup } = await securityGroup.sg({
name: `${PROJECT}-access-efs`,
description: `Access to the EFS filesystem ${PROJECT}.`,
egress: [{
protocol: '-1',
fromPort: 0,
toPort: 65535,
cidrBlocks: ['0.0.0.0/0'],
ipv6CidrBlocks: ['::/0'],
description:'Allows to respond to all traffic'
}],
		vpcId: vpcOutput.id, // 'vpcOutput' is the awaited result of the 'vpc' call above ('vpc' itself is the imported factory).
tags
})
// EFS
const efsOutput = await efs({
name: PROJECT,
accessPointDir: '/projects',
vpcId: vpcOutput.id,
subnetIds: vpcOutput.isolatedSubnetIds,
ingress:[{
// Allows traffic from resources with the 'accessToEfsSecurityGroup' SG.
protocol: 'tcp', fromPort: 2049, toPort: 2049, securityGroups: [accessToEfsSecurityGroup.id], description: 'SG for NFS access to EFS'
}],
protect: true,
tags
})
// Lambda
const lambda = new Lambda({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout: 30,
vpcConfig: {
subnetIds: vpcOutput.isolatedSubnetIds,
securityGroupIds:[
// Use the 'accessToEfsSecurityGroup' so that this lambda can access the EFS filesystem.
accessToEfsSecurityGroup.id
]
},
fileSystemConfig: {
arn: efsOutput.accessPoint.arn,
localMountPath: '/mnt/somefolder'
},
cloudwatch: true,
logsRetentionInDays: 7,
tags
})
return {
vpc: vpcOutput,
accessToEfsSecurityGroup,
efs: efsOutput,
lambda
}
}
module.exports = main()
IMPORTANT: When using Docker, please make sure that your image uses the same architecture (i.e., x86_64 vs arm64) as your Lambda OS. DO NOT USE something like FROM amazon/aws-lambda-nodejs:14 as this is equivalent to the latest digest, and there is no telling which architecture the latest digest uses. Instead, browse the Docker Hub registry and find the tag that explicitly supports your OS architecture. For example, FROM amazon/aws-lambda-nodejs:14.2021.09.29.20 uses linux/arm64 while 14.2021.10.14.13 uses linux/amd64.
It is important to know the key design principles behind AWS Lambdas before using them. Please refer to this document for a quick refresher course: https://gist.github.com/nicolasdao/e72beb55f3550351e777a4a52d18f0be#a-few-words-about-aws-lambda
As of 29 September 2021, ARM-based lambdas are powered by the AWS Graviton2 processor. This results in a significantly better performance/price ratio.
This is why @cloudlessopenlabs/pulumix
uses the arm64
architecture as the default rather than x86_64
(which is the normal AWS SDK and Pulumi default). This configuration can be changed via the architecture
property:
const { resolve } = require('path')
const { aws:{ Lambda } } = require('@cloudlessopenlabs/pulumix')
new Lambda({
name: 'my-lambda',
architecture: 'x86_64', // Default is 'arm64'
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
}
})
IMPORTANT: When using Docker, please make sure that your image uses the same architecture (i.e., x86_64 vs arm64) as your Lambda OS. DO NOT USE something like FROM amazon/aws-lambda-nodejs:14 as this is equivalent to the latest digest, and there is no telling which architecture the latest digest uses. Instead, browse the Docker Hub registry and find the tag that explicitly supports your OS architecture. For example, FROM amazon/aws-lambda-nodejs:14.2021.09.29.20 uses linux/arm64 while 14.2021.10.14.13 uses linux/amd64.
As described in the next section called Lambda in private subnets, Lambdas can be provisioned so that they can access your private subnets. In the background, an ENI is provisioned to connect that Lambda (hosted in AWS's own private cloud) to your private subnet. This configuration is done via the vpcConfig
property:
const mySg = createSomeSecurityGroup()
const lambda = new Lambda({
// ... other settings
vpcConfig: {
subnetIds: vpc.isolatedSubnetIds,
securityGroupIds:[
mySg.id
]
}
})
This configuration can be deployed successfully, but its destruction will get stuck because of the mySg
security group. If you head to the AWS Console and try to manually delete mySg
, an error message will indicate that it cannot be deleted because it is attached to ENIs. You must then manually detach mySg
from those ENIs before being able to re-run the pulumi destroy
command again.
This happens because those resources are destroyed in the wrong order. The Lambda should be destroyed before the security group, but because mySg
was referenced via its ID in the Lambda, the Lambda cannot add it to its dependsOn
property.
To fix this, use this instead:
const mySg = createSomeSecurityGroup()
const lambda = new Lambda({
// ... other settings
vpcConfig: {
subnetIds: vpc.isolatedSubnetIds,
securityGroups:[
mySg
]
}
})
The same remark applies to the
subnetIds
property which could be replaced bysubnets
.
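For example (a sketch based on the remark above; it assumes the subnets property accepts the subnet objects exposed by the VPC output):
const lambda = new Lambda({
	// ... other settings
	vpcConfig: {
		subnets: vpc.isolatedSubnets, // Subnet objects rather than raw IDs, so Pulumi can track the dependency.
		securityGroups: [
			mySg
		]
	}
})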
const { resolve } = require('path')
const { unwrap, aws:{ Lambda } } = require('@cloudlessopenlabs/pulumix')
const l = new Lambda({
name: 'my-lambda',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout: 30, // Optional. Default 3 seconds.
memorySize: 128, // Optional. Default 128MB
cloudwatch: true, // Optional. Default false.
logsRetentionInDays: 7, // Optional. The default is 0 (i.e., never expires).
policies: [somePolicy], // Optional. Default null.
tags: { // Optional.
Project: 'my-project',
Env: 'dev'
}
})
unwrap(l).apply(v => {
console.log(v.id)
console.log(v.name)
console.log(v.arn)
console.log(v.role)
console.log(v.logGroup)
})
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const awsx = require('@pulumi/awsx')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const api = new awsx.apigateway.API(PROJECT, {
routes: [
{
method: 'GET',
path: '/{subFolder}/{subSubFolders+}',
eventHandler: async ev => {
return {
statusCode: 200,
body: JSON.stringify({
subFolder: ev.pathParameters.subFolder,
subSubFolders: ev.pathParameters.subSubFolders
})
}
}
}
],
})
exports.url = api.url
CloudWatch is automatically configured for each Lambda provisioned via each route.
This next sample is more explicit than the previous example. It assumes that the root folder contains an app/
folder which contains the actual NodeJS lambda code:
app/
|__ src/
|__ index.js
|__ index.js
|__ package.json
The
package.json
is not always required. If yourindex.js
is simple and does not contain external NodeJS dependencies, then theindex.js
will suffice.
Where ./index.js
is similar to:
const { doSomething } = require('./src')
exports.handler = async ev => {
const message = await doSomething()
return {
statusCode: 200,
body: message
}
}
// https://www.pulumi.com/docs/reference/pkg/aws/lambda/function/
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const { resolve } = require('path')
const { aws:{ Lambda } } = require('@cloudlessopenlabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const REGION = aws.config.region
const tags = {
Project: PROJ,
Env: ENV,
Region: REGION
}
const lambda = new Lambda({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout:30,
memorySize:128,
tags
})
// API GATEWAY: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/apigateway/
const api = new awsx.apigateway.API(PROJECT, {
routes: [
{
method: 'GET',
path: '/{subFolder}/{subSubFolders+}',
eventHandler: lambda
}
]
})
// api.url
module.exports = api
Cloudwatch could be set up via policies as explained in the next section, but because this setup is common, we've added support for it via the Lambda API:
const { aws:{ Lambda } } = require('@cloudlessopenlabs/pulumix')
const lambda = new Lambda({
// ...
cloudwatch: true,
logsRetentionInDays: 7 // This is optional. The default is 0 (i.e., never expires).
})
Tips:
- Inspect AWS managed policies to see how their statement is structured. You can easily do this with
npx get-policies
.- To find the right action, use this:
npx get-aws-actions
- Please refer to the Annexes in the Policies examples section for common examples.
To illustrate this topic, let's see how we could configure CloudWatch so the Lambda can send its logs to a log group (NOTE: because this is such a common use case, this operation could be simplified by using the cloudwatch: true
property on the Lambda itself). To enable this setup, we need to create a new policy that allows the creation of log groups, log streams, and log events, and associate that policy with the Lambda's role.
There are 2 ways to create a new policy and associate it with a Lambda:
- Pre-lambda policy creation: This is the most common way to add a policy to a Lambda.
- Post-lambda policy creation: Use this strategy when the policy can only be added after the Lambda is created.
This is the most common way to add a policy to a Lambda. If the policy requires information coming from the Lambda after it has been created, then use the Post-lambda policy creation strategy instead.
// IAM: Allow lambda to create log groups, log streams and log events.
// Doc: https://www.pulumi.com/docs/reference/pkg/aws/iam/policy/
const cloudWatchPolicy = new aws.iam.Policy(PROJECT, {
path: '/',
description: 'IAM policy for logging from a lambda',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
const lambda = new Lambda({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout:30,
memorySize:128,
policies: [cloudWatchPolicy],
tags
})
TIPS: Leverage existing AWS Managed policies instead of creating your own each time (use
npx get-policies
to find them). This example could be re-written as follows:

const lambda = new Lambda({
	name: PROJECT,
	fn: {
		runtime: 'nodejs12.x',
		dir: resolve('./app')
	},
	timeout: 30,
	memorySize: 128,
	policies: [{ arn: 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' }],
	tags
})

Because enabling CloudWatch on a Lambda is so common, this policy can be automatically toggled as follows:

const lambda = new Lambda({
	// ...
	cloudwatch: true,
	logsRetentionInDays: 7 // This is optional. The default is 0 (i.e., never expires).
})
Use the Lambda.attachPolicy
API. This API is generally used instead of the Pre-lambda policy creation strategy when, for whatever reason, the policy can only be added after the Lambda is created.
const lambda = new Lambda({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout:30,
memorySize:128,
policies: [cloudWatchPolicy],
tags
})
Lambda.attachPolicy(lambda, {
name: PROJECT,
path: '/',
description: 'IAM policy for logging from a lambda',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
For God knows what reason, not all services can invoke AWS Lambdas via the standard Identity-based policies strategy. That's why it is recommended to use the Resource-based policies strategy instead via the Pulumi aws.lambda.Permission
API. For example, this is how you would allow AWS Cognito to invoke a lambda:
new aws.lambda.Permission(name, {
action: 'lambda:InvokeFunction',
function: lambda.name,
principal: 'cognito-idp.amazonaws.com',
sourceArn: userPool.arn
})
To easily find the principal's name, use the command
npx get-principals
.
const { aws:{ Lambda } } = require('@cloudlessopenlabs/pulumix')
const { resolve } = require('path')
const lambda = new Lambda({
name: 'my-example',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
eventSources: [{
name: 'schedule',
expression: 'rate(1 minute)' // 'cron(30 0 * * ? *)' // Every day at 12:30AM UTC
}]
})
To learn more about the
expression
syntax, please refer to the official AWS doc at https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html.
By default, the event object sent to the Lambda is similar to this:
{
version: '0',
id: 'cee5b84f-57b6-c60b-2c8c-9e1867b7e9ac',
'detail-type': 'Scheduled Event',
source: 'aws.events',
account: '12345677',
time: '2022-01-27T02:18:59Z',
region: 'ap-southeast-2',
resources: [
'arn:aws:events:ap-southeast-2:12345677:rule/some-event-name'
],
detail: {}
}
This object can be fully replaced with your own via the optional eventSources[name='schedule'].payload
property:
const lambda = new Lambda({
name: 'my-example',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
eventSources: [{
name: 'schedule',
expression: 'rate(1 minute)',
payload: {
hello: 'World'
}
}]
})
const lambda = new Lambda({
name: 'my-example',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
eventSources: [{
name: 'sqs',
queue: myQueue, // Also supports { arn:myQueue.arn }
// batchSize: 1, // Default 10. Max 10,000 for a standard queue and 10 for FIFO. That's the max number of messages that a single Lambda can pull at once.
// filterCriteria: ... // Optional. Refer to doc: https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#sqs-with-event-filter
}]
})
NOTE:
- To configure a dead-letter queue that catches the SQS messages that could not be delivered to the Lambda, add a dead-letter queue on
myQueue
. A sketch of such a queue follows this note.- To learn more about the shape of an SQS event received by a Lambda, please refer to the Annexes under the SQS event source mapping payload section.
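Here is that sketch: a minimal myQueue provisioned with its own dead-letter queue, using the sqs.Queue API documented later in this document:
const { aws: { sqs } } = require('@cloudlessopenlabs/pulumix')

const myQueue = new sqs.Queue({
	name: 'my-queue',
	redrivePolicy: {
		deadLetterQueue: true, // Provisions a DLQ on-the-fly for this queue.
		maxReceiveCount: 4 // After 4 failed deliveries, messages are moved to the DLQ.
	}
})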
const lambda = new Lambda({
name: 'my-example',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
eventSources: [{
name: 'sqs',
queue: myQueue, // Also supports { arn:myQueue.arn }
filterCriteria: { // Doc: https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-SQS
filters:[{
// Only react to SQS messages set with `body.type == 'check'`
pattern: JSON.stringify({
body: {
type:['check']
}
})
}]
}
}]
})
WARNING: You must make sure that the Docker image is compatible with the Lambda architecture (i.e., x86_64 vs arm64). For a list of all the AWS lambda images with their associated OS, please refer to https://hub.docker.com/r/amazon/aws-lambda-nodejs/tags?page=1&ordering=last_updated.
- Create a new container for your lambda as follows:
- Create a new
app
folder as follow:
mkdir app && \
cd app && \
touch index.js && \
touch Dockerfile
- Paste the following in the
Dockerfile
:
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"

# Create function directory
RUN mkdir -p ${FUNCTION_DIR}

# Copy handler function and package.json
COPY index.js ${FUNCTION_DIR}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
To see how to deal with
npm install
, please refer to https://gist.github.com/nicolasdao/f440e76b8fd748d84ad3b9ca7cf5fd12#the-instructions-order-in-your-dockerfile-matters-for-performance. More about this AWS image below (1).
- Paste the following in the
index.js
:
// IMPORTANT: IT MUST BE AN ASYNC FUNCTION OR THE CALLBACK VERSION:
// (event, context, callback) => callback(null, { statusCode:200, body: 'Hello' })
exports.handler = async event => {
	return {
		statusCode: 200,
		body: `Hello world!`
	}
}
- Test your lambda locally:
docker build -t my-app .
docker run -p 127.0.0.1:4000:8080 my-app:latest
curl -XPOST "http://localhost:4000/2015-03-31/functions/function/invocations" -d '{}'
More details about these commands below (2).
- Create a new
- Create your
index.js
:
const { resolve } = require('path')
const { getProject, aws:{ Lambda } } = require('@cloudlessopenlabs/pulumix')
const BACKEND = {} //{ backend: 's3' }
const { createResourceName } = getProject(BACKEND)
const lambda = new Lambda({
name: createResourceName(),
fn: {
dir: resolve('./app'),
type: 'image' // If './app' contains a 'Dockerfile', this prop is not needed. 'lambda' is able to automatically infer the type is an 'image'.
},
timeout:30,
memorySize:128
})
module.exports = {
lambda
}
(1) The amazon/aws-lambda-nodejs:14.2021.09.29.20 docker image hosts a node web server listening on port 8080. The CMD expects a string or array following this naming convention: "<file name>.<handler name>" (e.g., "index.handler"). (2) Once the container is running, the only way to test it is to perform a POST to this path:
2015-03-31/functions/function/invocations
. This container won't listen to anything else; no GET, no PUT, no DELETE.
You may also want to add a .dockerignore
. We've added a Dockerfile and a .dockerignore example in the Annexes under the Docker files examples section.
As a quick refresher, the following Dockerfile
:
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
ENV HELLO Mike Davis
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function and package.json
COPY index.js ${FUNCTION_DIR}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
This sets up a HELLO environment variable that can be accessed by the Lambda code as follows:
exports.handler = async event => {
return {
statusCode: 200,
body: `Hello ${process.env.HELLO}!`
}
}
This could also have been set up at docker build time with an ARG in the Dockerfile:
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
ARG MSG
ENV HELLO $MSG
...
docker build --build-arg MSG=buddy -t my-app .
docker run -p 127.0.0.1:4000:8080 my-app:latest
To define one or many --build-arg
via Pulumi, use the following API:
// ECR images. Doc:
// - buildAndPushImage API: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/ecr/#buildAndPushImage
// - 2nd argument is a DockerBuild object: https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild
const image = awsx.ecr.buildAndPushImage(PROJECT, {
context: './app',
args: {
MSG: 'Mr Dao. How do you do?'
}
})
Please refer to the Mounting an EFS access point on a Lambda section.
For a full example of a project that uses Lambda with Docker and Git installed to save files on EFS, please refer to this project: https://github.com/nicolasdao/example-aws-lambda-efs
IMPORTANT: Your layer code must be under
/your-layer/nodejs/
, notyour-layer/
For a refresher on how Lambda Layers work, please refer to this document: https://gist.github.com/nicolasdao/e72beb55f3550351e777a4a52d18f0be#layers
Pulumi file index.js
:
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const { resolve } = require('path')
const { aws:{ Lambda, LambdaLayer } } = require('@cloudlessopenlabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const REGION = aws.config.region
const RUNTIME = 'nodejs12.x'
const tags = {
Project: PROJ,
Env: ENV,
Region: REGION
}
const lambdaLayerOutput1 = new LambdaLayer({
name: `${PROJECT}-layer-01`,
description: 'Includes puffy',
runtime: RUNTIME,
dir: resolve('./layers/layer01'),
tags
})
const lambdaLayerOutput2 = new LambdaLayer({
name: `${PROJECT}-layer-02`,
description: 'Do something else',
runtime: RUNTIME,
dir: resolve('./layers/layer02'),
tags
})
const lambda = new Lambda({
name: PROJECT,
fn: {
runtime: RUNTIME,
dir: resolve('./app')
},
layers:[
lambdaLayerOutput1.arn,
lambdaLayerOutput2.arn
],
timeout:30,
memorySize:128,
tags
})
module.exports = {
lambda,
lambdaLayer: lambdaLayerOutput1
}
Lambda file:
exports.handler = async () => {
console.log('Welcome to lambda test layers!')
try {
require('puffy')
console.log('puffy is ready')
} catch (err) {
console.error('ERROR')
console.log(err)
}
try {
const { sayHi } = require('/opt/nodejs/utils')
sayHi()
} catch (err) {
console.error('ERROR IN LAYER ONE')
console.log(err)
}
try {
const { sayHi } = require('/opt/nodejs')
sayHi()
} catch (err) {
console.error('ERROR IN LAYER TWO')
console.log(err)
}
}
Layer01 code ./layers/layer01/nodejs/utils.js
module.exports = {
sayHi: () => console.log('Hello, I am layer One')
}
Layer02 code ./layers/layer02/nodejs/index.js
module.exports = {
sayHi: () => console.log('Hello, I am layer Two')
}
To learn more about what versions and aliases are and why they are useful, please refer to this document: AWS LAMBDA/Deployment strategies
To publish the latest deployment to a new version, use the publish
property:
const lambda = new Lambda({
name: PROJECT,
fn: {
runtime: RUNTIME,
dir: resolve('./app')
},
publish: true,
timeout:30,
memorySize:128,
tags
})
To create an alias:
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/lambda/alias/
const testLambdaAlias = new aws.lambda.Alias('testLambdaAlias', {
name: 'prod',
description: 'a sample description',
functionName: lambda.arn,
functionVersion: '1',
routingConfig: {
additionalVersionWeights: {
'2': 0.5,
}
}
})
Full API doc at https://www.pulumi.com/registry/packages/aws/api-docs/lambda/alias/.
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/iam/policy/
const cloudWatchPolicy = new aws.iam.Policy('my-custom-policy', {
name: 'my-custom-policy',
description: 'IAM policy for logging from a lambda',
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
To see a concrete example that combines a role and a policy to allow multiple services to invoke a Lambda, please refer to this example under the AWS role section.
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/
const lambdaRole = new aws.iam.Role('lambda-role', {
name: 'lambda-role',
description: 'IAM role for a Lambda',
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [{
Action: 'sts:AssumeRole',
Principal: {
Service: 'lambda.amazonaws.com', // tip: Use the command `npx get-principals` to find any AWS principal
},
Effect: 'Allow',
Sid: ''
}],
}
})
TIPS: The
Service
property supports both the string type and the array string type. TheStatement
for a role with multiple services would look like this:

[{
	Action: 'sts:AssumeRole',
	Principal: {
		Service: [
			'lambda.amazonaws.com',
			'cognito-idp.amazonaws.com'
		]
	},
	Effect: 'Allow',
	Sid: ''
}]
This example assumes we have already acquired a lambda's ARN (string):
const lambdaArnString = getLambdaArn() // Just for demo.
// 1. Create a multi-services IAM role.
const myRole = new aws.iam.Role('my-multi-services-role', {
name: 'my-multi-services-role',
description: 'IAM role for a multi-services role',
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [{
Action: 'sts:AssumeRole',
Principal: {
Service: [// tip: Use the command `npx get-principals` to find any AWS principal
'events.amazonaws.com',
'cognito-idp.amazonaws.com'
]
},
Effect: 'Allow',
Sid: ''
}],
}
})
// 2. Create a policy that can invoke the lambda.
const invokePolicy = new aws.iam.Policy('my-custom-policy', {
name: 'my-custom-policy',
description: 'IAM policy for invoking a lambda',
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'lambda:InvokeFunction'
],
Resource: lambdaArnString,
Effect: 'Allow'
}]
})
})
// 3. Attach the policy to the role
const lambdaRolePolicyAttachment = new aws.iam.RolePolicyAttachment(`attached-policy`, {
role: myRole.name,
policyArn: invokePolicy.arn
})
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const { getProject } = require('@cloudlessopenlabs/pulumix')
const config = new pulumi.Config()
const domains = config.requireObject('domains')
const zoneId = config.require('zoneId')
const BACKEND = {} //{ backend: 's3' }
const { project:PROJ, createResourceName, stack:ENV } = getProject(BACKEND)
const STACK_META = { org:'', stack:ENV, ...BACKEND }
const PROTECT = false
const tags = {
Project: PROJ,
Env: ENV
}
// Creates a new SSL cert using AWS ACM. Doc: https://www.pulumi.com/registry/packages/aws/api-docs/acm/certificate/
const certName = createResourceName('cert')
const cert = new aws.acm.Certificate(certName, {
name: certName,
domainName: domains[0],
subjectAlternativeNames: domains.slice(1),
tags: {
...tags,
Name: certName
},
validationMethod: 'DNS'
}, {
protect: PROTECT,
// If this cert is aimed at configuring a custom domain on CloudFront, then
// it must be provisionned in 'us-east-1'.
provider: new aws.Provider('temp-provider', { region: 'us-east-1' })
})
// Solves the DNS challenge (WARNING: Only works if the DNS is also maintained in Route 53 in the same AWS account.)
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/route53/record/
const challengeName = createResourceName('dns-chal')
const dnsChallengedRecord = new aws.route53.Record(challengeName, {
zoneId: zoneId,
name: cert.domainValidationOptions[0].resourceRecordName,
type: cert.domainValidationOptions[0].resourceRecordType,
ttl: 300,
records: [cert.domainValidationOptions[0].resourceRecordValue] // NOTE: aws.route53.Record does not support tags.
},{
protect: PROTECT
})
module.exports = {
cert,
dnsChallengedRecord
}
For examples of S3 bucket policies, please refer to the annexes under the Read/write access to S3 objects section.
No need to use this library. It is quite straightforward to do it with the standard Pulumi package:
const aws = require('@pulumi/aws')
const name = 'my-universally-unique-name' // S3 Bucket name must be universally unique.
// S3 bucket doc: https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/
const bucket = new aws.s3.Bucket(name, {
bucket: name,
acl: 'private', // Valid values: 'private' (default), 'public-read', 'public-read-write', 'aws-exec-read', 'authenticated-read', and 'log-delivery-write'.
versioning: {
enabled:true
},
tags: {
Name: name
}
})
To grant public access to files stored in S3:
const aws = require('@pulumi/aws')
const name = 'my-universally-unique-name' // S3 Bucket name must be universally unique.
// S3 bucket doc: https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/
const bucket = new aws.s3.Bucket(name, {
bucket: name,
acl: 'public-read', // Valid values: 'private' (default), 'public-read', 'public-read-write', 'aws-exec-read', 'authenticated-read', and 'log-delivery-write'.
versioning: {
enabled:true
},
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [
{
Effect: 'Allow',
Principal: '*',
Action: 's3:GetObject',
Resource: `arn:aws:s3:::${name}/*`
}
]
}),
tags: {
Name: name
}
})
For examples of S3 bucket policies, please refer to the annexes under the Read/write access to S3 objects section.
The following policy grants access to QuickSight hosted in a different AWS Account:
const LOG_BUCKET = 'my-unique-bucket-name'
const QUICKSIGHT_ACCOUNT = '1234567' // AWS Account ID where QuickSight is hosted.
const logBucket = new aws.s3.Bucket(LOG_BUCKET, {
bucket: LOG_BUCKET,
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [
{
Effect: 'Allow',
Principal: {
AWS: [
`arn:aws:iam::${QUICKSIGHT_ACCOUNT}:role/service-role/aws-quicksight-service-role-v0`,
`arn:aws:iam::${QUICKSIGHT_ACCOUNT}:root`
]
},
Action: [
's3:ListBucket',
's3:GetObject',
's3:GetObjectVersion'
],
Resource: [
`arn:aws:s3:::${LOG_BUCKET}`,
`arn:aws:s3:::${LOG_BUCKET}/*`
]
}
]
}),
tags: {
Name: LOG_BUCKET
}
})
const pulumi = require('@pulumi/pulumi')
const { aws:{ s3: { Website } } } = require('@cloudlessopenlabs/pulumix')
const website = new Website({
name: 'my-unique-bucket-name',
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
// errorDocument: 'error.html',
// cors: {...}, // Full doc at https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/#using-cors
// routingRules: [{ ... }], // Full doc at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration-routingrules.html
},
tags: {
Project: 'my-project'
},
// versioning: true, // Default false,
// dependsOn: [x,y,z],
// protect: true, // Default false
})
pulumi.all([
website.bucket.websiteEndpoint,
website.bucket.bucketDomainName,
website.bucket.bucketRegionalDomainName
]).apply(([websiteEndpoint, bucketDomainName, bucketRegionalDomainName]) => {
console.log(`Website URL: ${websiteEndpoint}`)
console.log(`Bucket domain name: ${bucketDomainName}`) // e.g., 'bucketname.s3.amazonaws.com'
console.log(`Bucket regional domain name: ${bucketRegionalDomainName}`) // e.g., 'https://bucketname.s3.ap-southeast-2.amazonaws.com'
})
This feature does not use native Pulumi APIs. Instead, it uses the AWS SDK to sync files via the S3 API after the bucket has been created. When the content
property of the s3.Website
constructor is set, a new files
property is added to the output. The new files
property is an array containing objects similar to this:
[{
key: "favicon.png",
hash: "5efd4dc4c28ef3548aec63ae88865ff9"
},{
key: "global.css",
hash: "8ff861b6a5b09e7d5fa681d8dd31262a"
}]
Because this array is stored in Pulumi, we can use this reference object to determine which file must be updated (based on its hash), which file must be added (based on its key) and which file must be deleted (based on its key). This is demoed in the sample below where you can see that the existingContent
is passed from the stack to the s3.Website
API.
The following example syncs the files stored under the ./app/public
folder and excludes all files under the node_modules
folder.
const { getProject, getStack, aws: { s3: { Website } } } = require('@cloudlessopenlabs/pulumix')
const { join } = require('path')
const BACKEND = {} // { backend: 's3' }
const { project:PROJ, createResourceName, stack:ENV } = getProject(BACKEND)
const STACK_META = { org:'YourPulumiOrg', stack:ENV, ...BACKEND }
const thisStack = getStack({ project:PROJ, ...STACK_META })
const website = new Website({
name: createResourceName(),
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
// errorDocument: 'error.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent: thisStack.getOutput('files'), // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true,
// cors: {...}, // Full doc at https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/#using-cors
},
// routingRules: [{ ... }], // Full doc at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration-routingrules.html
},
tags: {
Project: 'my-project'
},
// versioning: true, // Default false,
// dependsOn: [x,y,z],
// protect: true, // Default false
})
// module.exports = website
module.exports = {
bucket: website.bucket,
files: website.files
}
IMPORTANT: To delete a bucket, its content must be removed first. Re-deploy the stack by uncommenting the
// remove:true
line. This will remove all the content.
Using the exact same sample from above:
const website = new Website({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
// errorDocument: 'error.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent: thisStack.getOutput('files'), // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true,
// cors: {...}, // Full doc at https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/#using-cors
},
// routingRules: [{ ... }], // Full doc at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration-routingrules.html
cloudfront: {
invalidateOnUpdate: true,
// compress: true, // Adds 'gzip' compression. 'brotli' not supported by this `pulumix` version yet.
// cacheTtl: {
// min: 3600, // Default 0. Cache the content on CloudFront for minimum 1 hour. This is used in case the origin server uses a 'cache-control:max-age=30' lower than 1 hour.
// max: 86400, // Default 86400. Cache the content on CloudFront for maximum 1 hour. This is used in case the origin server uses a 'cache-control:max-age=30000000' greater than 1 day.
// default: 86400, // Default 3600. Cache the content on CloudFront for 1 day
// cacheControl:'max-age=86400' // sets the 'cache-control:max-age=86400' header.
// },
// customHeaders: {
// hello:'world'
// },
// allowedMethods: ['GET'], // Default ['GET', 'HEAD', 'OPTIONS']
// customDomains: ['example.com', 'www.example.com'],
// acm: { arn: 'arn:...' }, // ARN of the ACM certificate for the domains defined in 'customDomains'. REQUIRED if 'customDomains' is defined.
// sslSupportMethod: 'vip' // Valid values: 'sni-only' (default), 'static-ip' or 'vip'. WARNING: 'vip' incurs extra costs.
}
},
tags: {
Project: 'my-project'
},
// versioning: true, // Default false,
// dependsOn: [x,y,z],
// protect: true, // Default false
})
// module.exports = website
module.exports = {
bucket: website.bucket,
files: website.files,
cloudfront: website.cloudfront
}
The link to the CloudFront domain is located under cloudfront.domainName.
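For example, to print that domain once the stack is deployed (a sketch assuming cloudfront.domainName is a Pulumi Output):
website.cloudfront.domainName.apply(domainName => {
	console.log(`CloudFront URL: https://${domainName}`)
})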
If you already have an ACM's SSL certificate's ARN, you can use the code below. Otherwise, the Website
object also supports an automatic ACM SSL certificate creation feature as detailed in the second code snippet:
const website = new Website({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
// errorDocument: 'error.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent: thisStack.getOutput('files'), // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true,
// cors: {...}, // Full doc at https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/#using-cors
},
// routingRules: [{ ... }], // Full doc at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration-routingrules.html
cloudfront: {
invalidateOnUpdate: true,
customDomains: ['example.com', 'www.example.com'],
acmCertificateArn: 'arn:...' // ACM's SSL cert's ARN for 'example.com', 'www.example.com'
}
},
tags: {
Project: 'my-project'
},
// versioning: true, // Default false,
// dependsOn: [x,y,z],
// protect: true, // Default false
})
WARNING: The ACM SSL certificate must be hosted as follows:
- Same AWS Account as the CloudFront distribution.
- 'us-east-1' region (this is a CloudFront requirement)
The automatic ACM SSL certificate provisioning code snippet looks like this:
const website = new Website({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
// errorDocument: 'error.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent: thisStack.getOutput('files'), // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true,
// cors: {...}, // Full doc at https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/#using-cors
},
// routingRules: [{ ... }], // Full doc at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration-routingrules.html
cloudfront: {
invalidateOnUpdate: true,
customDomains: ['example.com', 'www.example.com'],
acmCertificateArn: 'auto',
dns: { // Optional. Used to automatically validate the SSL cert DNS challenge and configure the custom domain's DNS.
domainZoneId: 'Z3HENL7...***', // required
validateChallenge: true,
records: [{ // Optional. Used to configure the custom domain's DNS. IMPORTANT. The order matters!!!
name: 'example.com' // Not specifying the 'type' and 'value' default to an Alias record on the CloudFront distribution.
}, {
name: 'www.example.com',
type: 'CNAME',
value: 'example.com' // Also accepts array of strings.
}]
}
}
},
tags: {
Project: 'my-project'
},
// versioning: true, // Default false,
// dependsOn: [x,y,z],
// protect: true, // Default false
})
IMPORTANT:
- The automatic ACM SSL certificate provisioning uses the 'DNS' challenge. The details of that challenge are located under
website.acmCert.domainValidationOptions[0]
.- Once the SSL certificate is created, it MUST be validated. You can manually get the DNS challenge details by browsing to ACM. If the custom domain's DNS is managed by AWS Route 53 in the same AWS account as ACM, then it is possible to automatically validate the DNS challenge by uncommenting the
validateChallenge
anddomainZoneId
(zone ID of the custom domain in AWS Route 53) properties in the code above.- Don't forget to configure your custom domain's DNS records to resolve the custom domain to the CloudFront distribution. To know more about this, please refer to the Configuring AWS Route 53 for CloudFront custom domain annex (the CloudFront distribution value is located under
website.cloudfront.domainName
). If you've set thevalidateChallenge
to true and if the custom domain's DNS is managed by AWS Route 53 in the same AWS account as ACM, then this is all done automatically.
const website = new Website({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
// errorDocument: 'error.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent: thisStack.getOutput('files'), // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true,
// cors: {...}, // Full doc at https://www.pulumi.com/docs/reference/pkg/aws/s3/bucket/#using-cors
},
routingRules: [
// Rule to redirect any 404 to the root.
{
Condition: { // required
HttpErrorCodeReturnedEquals: '404',
// KeyPrefixEquals: 'login' // this is the path. WARNING: This is a strict equal. If you want to also cover 'login/' you need to add another rule.
},
Redirect: { // required
ReplaceKeyPrefixWith: '' // ,
// HostName: 'ec2-11-22-333-44.compute-1.amazonaws.com', // Default is the same hostname as the origin
// HttpRedirectCode: "307", // Default is 301
// Protocol: "https" // Default is the same protocol as the origin
}
}],
cloudfront: {
invalidateOnUpdate: true
}
},
tags: {
Project: 'my-project'
},
// versioning: true, // Default false,
// dependsOn: [x,y,z],
// protect: true, // Default false
})
SPAs (Single Page Applications) or PWAs (Progressive Web Applications) use dynamic routing. This means that the URL's path represents an application's state, but not necessarily a physical resource on a server (e.g., a static page in an S3 bucket). By default, such an application hosted on S3 and cached via CloudFront exposes a single object (most likely the index.html). This resource (e.g., index.html) contains Javascript that updates the URL history in accordance with its state changes (e.g., clicking on a button in the home page /
opens the /blog
page). Because that new path does not physically exist on the origin server, CloudFront returns a 404 error.
The solution is to configure CloudFront to catch all 404 errors and return an S3 resource that exists (in our case the path to the index.html object) along with a 200 status. This can be done via the Error pages
tab in the CloudFront console or the customErrorResponses
option in CloudFormation/Terraform/Pulumi IaC tools. The Website
API exposes this setting as follows:
const website = new Website({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
content: {
dir:join(__dirname, './app/build'),
ignore: '**/node_modules/**',
existingContent: thisStack.getOutput('files'), // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true
},
cloudfront: {
customDomains: domains,
acmCertificateArn: 'auto',
invalidateOnUpdate: true,
customErrorResponses: [{
errorCode:404,
ttl:300,
responseCode: 200,
responsePagePath: '/' // Path to the index.html that physically exists in S3
}]
}
},
tags: {
Project: 'my-project'
}
})
const { aws:{ Secret } } = require('@cloudlessopenlabs/pulumix')
Secret.get('my-secret-name').then(({ version, data }) => {
console.log(version)
console.log(data) // Actual secret object
})
WARNING: Don't forget to also define an egress rule to allow traffic out from your resource. This is a typical mistake that causes systems to not be able to contact any other services. The most common egress rule is:
{ protocol: '-1', fromPort:0, toPort:65535, cidrBlocks: ['0.0.0.0/0'], ipv6CidrBlocks: ['::/0'], description:'Allow all traffic' }
const { aws: { SecurityGroup } } = require('@cloudlessopenlabs/pulumix')
const sg = new SecurityGroup({
name: `my-special-sg`,
description: `Controls something special.`,
vpcId: 'vpc-1234',
egress: [{
protocol: '-1',
fromPort:0, toPort:65535, cidrBlocks: ['0.0.0.0/0'],
ipv6CidrBlocks: ['::/0'],
description:'Allow all traffic'
}],
tags: {
Project: 'demo'
}
})
console.log(sg)
// {
// id: Output<String>,
// arn: Output<String>,
// name: Output<String>,
// description: Output<String>,
// rules: Output<[SecurityGroupRule]>
// }
const { getProject, aws: { sns } } = require('@cloudlessopenlabs/pulumix')
const BACKEND = { backend: 's3' }
const PROTECT = false
const { project:PROJ, createResourceName, stack:ENV } = getProject(BACKEND)
const tags = {
Project: PROJ,
Env: ENV
}
const topic = new sns.Topic({
name: createResourceName(),
description: 'My new topic',
fifo: false, // If set to true, the 'name' is automatically suffixed with '.fifo' (unless the name is already suffixed with '.fifo')
tags,
protect: PROTECT
})
module.exports = {
topic
}
To learn how to configure policies to grant access to an SNS topic, please refer to the Publish to SNS section under the Annexes.
const { aws: { Lambda, sns } } = require('@cloudlessopenlabs/pulumix')
const topic = new sns.Topic({
// ...props
})
const lambda = new Lambda({
// ... props
})
const subscription = sns.Topic.createTopicSubscription(topic, {
name: 'my-lambda-sub',
lambda: lambda, // Needs at least those two props: { name:..., arn:... }.
// deadLetterQueue: true,
// tags,
// protect: PROTECT
})
To learn more about configuring the DLQ, please refer to the Configuring a subscription's dead-letter queue section.
WARNING: With this type of subscription, you must also manually validate the subscription to prove you own the HTTP(S) endpoint. To learn more about this topic, please refer to the Confirming HTTP or HTTPS subscription section.
const { aws: { sns } } = require('@cloudlessopenlabs/pulumix')
const topic = new sns.Topic({
// ...props
})
const subscription = sns.Topic.createTopicSubscription(topic, {
name: 'my-http-sub',
https: 'https://example.com/?hello=world',
// http: 'http://example.com/?hello=world',
// deadLetterQueue: true,
// tags,
// protect: PROTECT
})
To learn more about configuring the DLQ, please refer to the Configuring a subscription's dead-letter queue section.
const { aws: { sns, sqs } } = require('@cloudlessopenlabs/pulumix')
const topic = new sns.Topic({
// ...props
})
const queue = new sqs.Queue({
// ... props
})
const subscription = sns.Topic.createTopicSubscription(topic, {
name: 'my-sqs-sub',
queue // Also supports { arn: queue.arn }
// deadLetterQueue: true,
// tags,
// protect: PROTECT
})
To learn more about configuring the DLQ, please refer to the Configuring a subscription's dead-letter queue section.
Regarding the deadLetterQueue
property, it can be one of the following values:
-
true
: This means a new Queue is created on-the-fly and used as the DLQ. -
Object
: It must contain an 'arn' and 'id' field. For example:{ arn: otherQueue.arn, id:otherQueue.id }
. -
Output<Queue>
: Self-explanatory. A sketch using the Object form follows this list.
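For example, to reuse an existing queue as the DLQ (a sketch; otherwise identical to the SQS subscription above):
const subscription = sns.Topic.createTopicSubscription(topic, {
	name: 'my-sqs-sub',
	queue,
	deadLetterQueue: { arn: otherQueue.arn, id: otherQueue.id } // Reuses 'otherQueue' instead of creating a DLQ on-the-fly.
})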
With http
or https
protocols, a subscription to an SNS topic must be manually confirmed. It works as follows:
- When the subscription is created, it sends an HTTP test payload to the subscribing endpoint. That payload looks like this:
- header:
-
x-amz-sns-topic-arn
: arn:aws:sns:ap-southeast-2:1234567:my-topic-name -
CloudFront-Viewer-Country
: AU -
CloudFront-Forwarded-Proto
: https -
CloudFront-Is-Tablet-Viewer
: false -
CloudFront-Is-Mobile-Viewer
: false -
User-Agent
: Amazon Simple Notification Service Agent -
x-amz-sns-message-type
: SubscriptionConfirmation -
X-Forwarded-Proto
: https -
CloudFront-Is-SmartTV-Viewer
: false -
Host
: goanna.dev.cloudlesslabs.com -
Accept-Encoding
: gzip,deflate -
x-amz-sns-message-id
: 3fae3bd8-ba12-42a1-8e18-8ddba6f86a9b -
X-Forwarded-Port
: 443 -
X-Amzn-Trace-Id: Root
: 1-62ac6eee-1b98f29974cabe767e2ddf6d -
Via
: 1.1 11c9ed08d5e275cd06919cdd978badd6.cloudfront.net (CloudFront) -
X-Amz-Cf-Id
: s_38D12gU31C8KBKHz5zoX_vLzi8yH9wxYXNjjRmwrO4xL-vjGuB3A== -
X-Forwarded-For
: 54.240.194.75 -
CloudFront-Is-Desktop-Viewer
: true -
Content-Type
: text/plain; charset=UTF-8
-
- body
-
Type
: SubscriptionConfirmation, -
MessageId
: 3fae3bd8-ba12-42a1-8e18-8ddba6f86a9b, -
Token
: 123456789876543212345678, -
TopicArn
: arn:aws:sns:ap-southeast-2:1234567:my-topic-name, -
Message
: You have chosen to subscribe to the topic arn:aws:sns:ap-southeast-2:1234567:my-topic-name.\nTo confirm the subscription, visit the SubscribeURL included in this message., -
SubscribeURL
: https://sns.ap-southeast-2.amazonaws.com/?Action=ConfirmSubscription&TopicArn=arn:aws:sns:ap-southeast-2:1234567:my-topic-name&Token=123456789876543212345678
-
- header:
- You must capture that payload on the subscribing endpoint's backend and copy the value of the
body.SubscribeURL
field. - Login to the AWS console and browse to the subscriptions of your SNS topic.
- Select the HTTP subscription and click on the
Confirm subscription
button. There, paste thebody.SubscribeURL
value copied in the previous step.
WARNING: By default, the confirmation must be completed within 30 minutes. Use the
confirmationTimeoutInMinutes
property to change that setting, as shown in the sketch below.
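For example (a sketch; the 60-minute value is illustrative):
const subscription = sns.Topic.createTopicSubscription(topic, {
	name: 'my-http-sub',
	https: 'https://example.com/?hello=world',
	confirmationTimeoutInMinutes: 60 // Extends the default 30-minute confirmation window.
})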
const { aws: { sqs } } = require('@cloudlessopenlabs/pulumix')
// For more config, please refer to the Pulumi doc: https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queue/
const queue = new sqs.Queue({
name: 'my-queue',
description: 'This is my queue',
// fifo: true, // Default false. When true, the 'name' is automatically suffixed with '.fifo', if that suffix is not set yet.
// redrivePolicy: {
// deadLetterQueue: true, // This automatically provisions a DLQ for this queue,
// maxReceiveCount: 4 // Default is 10
// },
// visibilityTimeoutSeconds: 600
})
Regarding the redrivePolicy.deadLetterQueue
, it can be one of the following values:
-
true
: This means a new Queue is created on-the-fly and used as the DLQ. -
Object
: It must contain an 'arn' field. For example:{ arn: otherQueue.arn }
. -
Output<Queue>
: Self-explanatory. A sketch using the Object form follows this list.
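For example, to point the redrive policy to an explicit queue rather than provisioning one on-the-fly (a sketch):
const dlq = new sqs.Queue({
	name: 'my-queue-dlq' // Hypothetical queue used as the DLQ.
})

const queue = new sqs.Queue({
	name: 'my-queue',
	redrivePolicy: {
		deadLetterQueue: { arn: dlq.arn }, // Reuses 'dlq' instead of creating a new queue.
		maxReceiveCount: 4
	}
})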
WARNING: The first time a DLQ is deployed on SQS, you may receive this error message:
waiting for SQS Queue ... attributes to create: timeout while waiting for state to become 'equal' (last state: 'notequal', timeout: 2m0s)
. To fix this, redeploy a second time.
Please refer to the SQS event source section.
const { aws: { ssm } } = require('@cloudlessopenlabs/pulumix')
// Full parameters list at https://www.pulumi.com/registry/packages/aws/api-docs/ssm/parameter/
const foo = new ssm.Parameter({
name: 'foo',
value: { hello:'world' }
})
module.exports = foo
To retrieve a value from Parameter store:
const { aws: { ssm } } = require('@cloudlessopenlabs/pulumix')
const main = async () => {
const { version, value } = await ssm.Parameter.get({ name:'foo', version:2, json:true })
console.log({
version,
value
})
}
NOTICE: This method does not use the Pulumi API as it creates
registered twice
issues when both a get and a create operation that use the same name are put in the same script.
To store or update data in Parameter Store without using Pulumi:
const { aws: { ssm } } = require('@cloudlessopenlabs/pulumix')
const main = async () => {
// Full parameters list at https://www.pulumi.com/registry/packages/aws/api-docs/ssm/parameter/
const data = await ssm.Parameter.createOrUpdate({
name: 'foo',
value: {
hello: 'World'
},
overWrite:true // Default false. True means you can overwrite the value.
})
return data // { version: 1, tier: 'Standard' }
}
main()
The previous example demonstrates how to read the value of a parameter store variable. However, this API does not use Pulumi under the hood. To get a specific version using the native Pulumi API:
const param = aws.ssm.Parameter.get('foo','foo:12')
When the version is not used with the parameter store's ID, the latest version is returned.
By default, this utility creates a policy that allows the step-function to invoke any lambda.
const { aws: { stepFunction } } = require('@cloudlessopenlabs/pulumix')
const main = async () => {
const preProvision = await stepFunction.stateMachine({
name: 'my-step-function',
type: 'standard', // Valid values: 'standard' (default) or 'express'
description: 'Does something.',
states: preProvisionWorkflow,
// policies: [],
cloudWatchLevel: 'all', // Default is 'off'. Valid values: 'all', 'error', 'fatal'
logsRetentionInDays: 7, // Default 0 (i.e., never expires). Only applies when 'cloudWatch' is true.
tags:{
Name: 'my-step-function'
}
})
return {
preProvision
}
}
module.exports = main()
The preProvisionWorkflow
is a JSON object that you can export from the Step Function designer in the AWS console. This object is rather complex, so we recommend using the designer.
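For reference, a trivial states object in the Amazon States Language looks like this (a minimal sketch, not a production workflow):
const preProvisionWorkflow = {
	StartAt: 'SayHello',
	States: {
		SayHello: {
			Type: 'Pass', // A 'Pass' state simply forwards its (optionally transformed) input to the output.
			Result: { message: 'Hello world' },
			End: true
		}
	}
}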
WARNING: Once the VPC's subnets have been created, updating them will produce a replace, which can have dire consequences for your entire infrastructure. Therefore, think twice when setting them up.
The following setup is quite safe:
const { aws: { VPC } } = require('@cloudlessopenlabs/pulumix')
const vpc = new VPC({
name: 'my-project-dev',
subnets: [{ type: 'public' }, { type: 'private' }],
numberOfAvailabilityZones: 3, // Provide the maximum number of AZs based on your region. The default is 2
protect: false,
tags: {
Project: 'my-project',
Env: 'dev'
}
})
console.log(vpc)
// {
// id: Output<String>,
// arn: Output<String>,
// cidrBlock: Output<String>,
// ipv6CidrBlock: Output<String>,
// defaultNetworkAclId: Output<String>,
// defaultRouteTableId: Output<String>,
// defaultSecurityGroupId: Output<String>,
// dhcpOptionsId: Output<String>,
// mainRouteTableId: Output<String>,
// publicSubnets: Output<Subnet>,
// privateSubnets: Output<Subnet>,
// isolatedSubnets: Output<Subnet>,
// availabilityZones: Output<[String]>, // e.g., ['ap-southeast-2a', 'ap-southeast-2b', 'ap-southeast-2c']
// natGateways: Output<NAT>
// }
//
// Where:
// - NAT is similar to: {
// id : 'nat-12345',
// name : 'workloads-network-prod-0',
// privateIp: '10.0.3.47',
// publicIp : '54.55.60.255',
// subnet : {
// availabilityZone: 'ap-southeast-2a',
// id : 'subnet-12345',
// name : 'workloads-network-prod-public-0',
// type : 'public'
// }
// subnetId : 'subnet-12345'
// }
// - Subnet is similar to: {
// availabilityZone: 'ap-southeast-2c',
// id : 'subnet-12345',
// name : 'workloads-network-prod-private-2',
// type : 'private'
// }
This setup will divide the VPC's CIDR block into equal portions based on the total number of subnets created. The above example shows 6 subnets (3 public and 3 private). Because the example above did not specify any CIDR block for the VPC, it is set to 10.0.0.0/16
which represents 65,536 IP addresses. This means each subnet can use up to ~10922
IP addresses.
The last thing to be aware of is that the private subnets will also provision 3 NATs in the public subnets. The temptation would be to use isolated
subnets instead of private ones to save money, but from my experience, this is pointless. You'll always end up needing internet access from your isolated subnets, so don't bother and set up private subnets from the beginning.
This can happen when an update to an API Gateway's integration causes a delete and replace of that integration. If a stage and existing snapshots (aka deployments) already existed, those snapshots will fail with this error message because they assume the integration exists.
To fix this error, refresh the stack so that the deleted integration is synched with the stack and Pulumi figures out that it has to re-provision that integration before updating the snapshot.
This is usually due to insufficient memory on the Lambda. Try increasing it.
SQS - Dead-letter queue - waiting for SQS Queue ... attributes to create: timeout while waiting for state to become 'equal' (last state: 'notequal', timeout: 2m0s)
Redeploy a second time to fix this issue.
- Log in to the
AWS Console
and select the AWS Secrets Manager
service. - Click on the
Store a new secret
button. - There you have 2 options:
- If the RDS database already exists, select the
Credentials for Amazon RDS database
type. - If the RDS database does not exist yet, select the
Other type of secret
type, and then add the following 2 key value pairs:-
username
: ******* -
password
: *******
-
- If the RDS database already exists, select the
There are 2 main ways to grant a service access to a resource:
- Identity-based policies: Attach a policy on a service's IAM role which can access the resource.
- Resource-based policies: Attach a policy on the resource itself which allows the service to access it.
Choosing one strategy over the other depends on your use case. That being said, some scenarios only accept one. For example, when configuring a lambda to be triggered by a scheduled CRON job (i.e., Cloudwatch event), only the resource-based policy via an AWS lambda permission works. Go figure...
The standard way to allow a service to access a resource is to:
- Create a role for the service trying to access the resource. In the example below, the role
lambda-role
can only be assumed by thelambda.amazonaws.com
principal.
Tip: Use
npx get-principals
to find the principal URI.
- Create a policy that allows specific actions on that resource. Alternatively, use one of the existing AWS Managed Policies.
Tip: Use
npx get-policies
to search AWS managed policies and get their ARN.
- Associate the role with the policy.
- Attach the new role to the service.
For example:
// Step 1: Create a role that identifies the resource (mainly the principal).
const lambdaRole = new aws.iam.Role('lambda-role', {
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [{
Action: 'sts:AssumeRole',
Principal: {
Service: 'lambda.amazonaws.com', // tip: Use the command `npx get-principals` to find any AWS principal
},
Effect: 'Allow',
Sid: ''
}],
}
})
// Step 2: Create a policy or use the `npx get-policies` to get a managed AWS policy ARN
const cloudWatchPolicy = new aws.iam.Policy('cw-policy', {
path: '/',
description: 'IAM policy for logging from a lambda',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
// Step 3: Attach the policy to the role. You can attach more than one.
const lambdaLogs = new aws.iam.RolePolicyAttachment(`attached-policy`, {
role: lambdaRole.name,
policyArn: cloudWatchPolicy.arn
})
// Step 4: Reference that role on the resource
const lambda = new aws.lambda.Function('my-lambda', {
	// ... other properties
	role: lambdaRole.arn
}, {
	dependsOn: [lambdaLogs] // NOTE: 'dependsOn' belongs in the resource options (3rd argument), not in the inputs.
})
const s3ObjectPolicyName = `my-project-s3-access`
const s3ObjectPolicy = new aws.iam.Policy(s3ObjectPolicyName, {
name: s3ObjectPolicyName,
description: `Allow to read/write objects in an S3 bucket.`,
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
's3:Get*',
's3:List*',
's3:PutObject'
],
Resource: [
join(logBucketArn,'*'), // Notice that you cannot simply use the bucket's ARN.
logBucketArn // We also need this guy otherwise the listObject API fails with "access denied"
],
Effect: 'Allow'
}]
})
})
const parameterStorePolicyName = `my-project-parameter-store`
const parameterStorePolicy = new aws.iam.Policy(parameterStorePolicyName, {
name: parameterStorePolicyName,
description: `Allow to read Parameter Store.`,
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'ssm:GetParameters',
'ssm:GetParameter'
],
Resource: ['*'],
Effect: 'Allow'
}]
})
})
// IAM: Allow lambda to read Cloudwatch logs.
const cloudwatchLogGroupPolicyName = `my-project-read-log-group`
const cloudwatchLogGroupPolicy = new aws.iam.Policy(cloudwatchLogGroupPolicyName, {
name: cloudwatchLogGroupPolicyName,
description: `Allow to read Cloudwatch log group.`,
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:FilterLogEvents'
],
Resource: ['*'],
Effect: 'Allow'
}]
})
})
// IAM: Allow lambda to publish to SNS
const snsPolicyName = 'my-project-allow-sns-publish'
const snsPolicy = pulumi.output(fanOutTopic.arn).apply(arn => new aws.iam.Policy(snsPolicyName, {
name: snsPolicyName,
description: 'Allow to publish message to SNS',
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'sns:Publish'
],
Resource: [arn],
Effect: 'Allow'
}]
})
}))
const sqsPolicyName = 'my-project-allow-sqs-send-msg'
const sqsPolicy = pulumi.output(queue.arn).apply(queueArn => new aws.iam.Policy(sqsPolicyName, {
name: sqsPolicyName,
description: 'Allows to send message to SQS.',
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'sqs:SendMessage'
],
Resource: queueArn,
Effect: 'Allow'
}]
}),
tags: {
...tags,
Name: sqsPolicyName
}
}, {
protect: PROTECT
}))
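These standalone policies still need to be attached to a role; here is a minimal sketch reusing the `lambdaRole` from the 4-step example above (the attachment names are illustrative):

// Property access on outputs (e.g. `snsPolicy.arn` where `snsPolicy` is an Output<Policy>) is lifted by Pulumi.
const attachments = [s3ObjectPolicy, parameterStorePolicy, cloudwatchLogGroupPolicy, snsPolicy, sqsPolicy]
  .map((policy, idx) => new aws.iam.RolePolicyAttachment(`lambda-policy-${idx}`, {
    role: lambdaRole.name,
    policyArn: policy.arn
  }))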
This example shows how you would set up two environment variables as well as the GitHub auth token needed to install private NPM packages hosted on GitHub:
WARNING: The `amazon/aws-lambda-nodejs:14.2021.09.29.20` image targets the ARM architecture. Therefore, make sure your Lambda uses `arm64`. To find the tag that explicitly supports your OS architecture, browse the official AWS Lambda Docker Hub registry.
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
ARG GITHUB_ACCESS_TOKEN
ARG SOME_ENV_DEMO
ENV SOME_ENV_DEMO $SOME_ENV_DEMO
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Setup access to the private GitHub package
RUN echo "//npm.pkg.github.com/:_authToken=$GITHUB_ACCESS_TOKEN" >> ~/.npmrc
COPY .npmrc ${FUNCTION_DIR}
# Install all dependencies
COPY package*.json ${FUNCTION_DIR}
RUN npm install --only=prod --prefix ${FUNCTION_DIR}
# Copy app files
COPY . ${FUNCTION_DIR}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
Where `index.handler` means the `handler` function in `index.js` (which must have been explicitly exported).
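To pass the build arguments, use `docker build` with `--build-arg`, e.g. `docker build --build-arg GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN --build-arg SOME_ENV_DEMO=hello -t my-lambda .` (the tag and values here are illustrative).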
A typical `.dockerignore`, excluding files that should not end up in the image:

Dockerfile
README.md
LICENSE
node_modules
npm-debug.log
.env
ssh-keygen -t rsa

Where `-t rsa` specifies the `rsa` algorithm.
By default, this creates two new files under `~/.ssh`:
- `id_rsa`: That's the private key.
- `id_rsa.pub`: That's the public key.
To create a private/public keypair with a specific filename, use the `-f` option as follows:
ssh-keygen -t rsa -f ~/.ssh/your-filename
To control the key length (default 3072), use the `-b` option as follows:
ssh-keygen -t rsa -f ~/.ssh/your-filename -b 2048
RSA is quite old, and it is now recommended to replace it with the widely adopted `ecdsa` algorithm, using a 256-, 384-, or 521-bit key size:
ssh-keygen -t ecdsa -b 384 -f ./keys
To generate private and public SSH keys, use the following command:
ssh-keygen -t rsa -f ~/path-to-your-folder/id_rsa
const fs = require('fs')
const { join } = require('path')

// Returns a function that reads './id_rsa.pub', either synchronously or as a promise.
const readFile = mode => {
  const args = [join(__dirname, './id_rsa.pub'), 'utf8']
  return !mode || mode == 'sync'
    ? () => fs.readFileSync(...args)
    : () => new Promise((onSuccess, onFailure) => fs.readFile(...args, (err, data) => err ? onFailure(err) : onSuccess(data)))
}

module.exports = {
  getPubKeySync: readFile(),
  getPubKeyAsync: readFile('async')
}
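A possible usage, assuming the module above is saved as `ssh.js` next to `id_rsa.pub` (the key pair name is illustrative):

const aws = require('@pulumi/aws')
const { getPubKeySync } = require('./ssh')

// Registers the public key as an EC2 key pair so instances can be accessed with the private key.
const keyPair = new aws.ec2.KeyPair('my-key-pair', {
  publicKey: getPubKeySync()
})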
SSM periodically checks the status of all the monitored EC2 instances. Reconfiguring it can take up to 30 minutes until the new setup is active.
The easiest way to configure the IAM roles and make sure that all EC2 instances with an SSM agent can be accessed via SSM is to use the quick setup:
- Log in to the AWS Console and select the `Session Manager` service.
- Click on `Quick setup`, then on the `Create` button.
- Select the `Host Management` option, then click `Next`.
- Use the default settings, then click `Create`.
- You may have to wait up to 30 minutes before the systems are ready.
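Once the setup is active, a shell session can be opened from a terminal with `aws ssm start-session --target <instance-id>` (this requires the Session Manager plugin for the AWS CLI).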
{
"Records": [
{
"messageId": "xxxx-xxxx-xxx-xxxx-xxxx",
"receiptHandle": "AQEBpUhJm26...nTHgLuw7ARjfIlQE=",
"body": "Your message here. If this queues is an SNS topic subscription, the JSON SNS message is stringified here",
"attributes":
{
"ApproximateReceiveCount": "6",
"AWSTraceHeader": "Root=1234",
"SentTimestamp": "1656027806860",
"SenderId": "AIDAIY4XCTD3OFZN5ED42",
"ApproximateFirstReceiveTimestamp": "1656027806860"
},
"messageAttributes":
{},
"md5OfBody": "dewd4de3de3e",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:ap-southeast-2:123455:some-project",
"awsRegion": "ap-southeast-2"
}]
}
If the SQS queue is the target of an SNS subscription, the `Records[].body` is a stringified version of the `Records[].Sns` value described in the next section, SNS event source mapping payload.
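For example, here is a minimal SQS handler sketch that unwraps the SNS envelope when present (purely illustrative):

exports.handler = async event => {
  for (const record of event.Records) {
    let message = record.body
    try {
      const parsed = JSON.parse(message)
      // Queues subscribed to an SNS topic receive the stringified SNS envelope.
      if (parsed && parsed.Type == 'Notification')
        message = parsed.Message
    } catch (err) {
      // The body was not JSON; use it as-is.
    }
    console.log(`Message ${record.messageId}: ${message}`)
  }
}

The SNS event source mapping payload itself looks as follows: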
{
"Records": [
{
"EventSource": "aws:sns....",
"EventVersion": "1.0",
"EventSubscriptionArn": "arn:....",
"Sns": {
"Type": "Notification",
"MessageId": "cebc16d5-590c-536b-990a-4dfba5d2698f",
"TopicArn": "arn:aws:sns:ap-southeast-2:12342:some-project-prod",
"Message": "----some message----rn",
"Timestamp": "2022-06-23T23:43:26.685Z",
"SignatureVersion": "1",
"Signature": "12345",
"SigningCertURL": "https://sns.ap-southeast-2.amazonaws.com/SimpleNotificationService-acd.pem",
"UnsubscribeURL": "https://sns.ap-southeast-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns..."
}
}]
}
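With this shape, a handler reads the message straight from `Records[].Sns` (a minimal sketch):

exports.handler = async event => {
  for (const record of event.Records)
    console.log(`SNS message ${record.Sns.MessageId}: ${record.Sns.Message}`)
}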
Add a new Alias record in Route 53:
- Browse to Route 53 and select your Hosted Zone.
- Create a new `A` record to your custom domain and tick the `Alias` switch. This reveals a new menu to select an AWS service.
- Select `Alias to CloudFront distribution`.
- Enter your CloudFront distribution domain name. If you've used the `S3.Website` API, this value is located under the `website.cloudfront.domainName` property.
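Alternatively, the same alias record can be created with Pulumi; here is a minimal sketch (assuming hypothetical, pre-existing `hostedZone` and `distribution` resources):

const aws = require('@pulumi/aws')

const alias = new aws.route53.Record('cloudfront-alias', {
  zoneId: hostedZone.zoneId,
  name: 'example.com',
  type: 'A',
  aliases: [{
    name: distribution.domainName,
    zoneId: distribution.hostedZoneId, // CloudFront's own hosted zone ID, not yours.
    evaluateTargetHealth: false
  }]
})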