feat(s3-deployment): allow specifying memory limit (#4204)
* feat(s3-deployment): allow specifying memory limit

when deploying large files, users may need to increase the resource handler's memory configuration.

note: since custom resource handlers are singletons, we need to provision a handler for each memory configuration defined in the app. we do this by simply adding a suffix to the uuid of the singleton resource that includes the memory limit.
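The suffixing scheme described above can be sketched in isolation (a hypothetical standalone mirror of the committed `renderSingletonUuid` helper, minus the CDK token check):

```typescript
// Sketch of the singleton-uuid suffixing: a distinct uuid per memory
// configuration yields a distinct singleton Lambda handler per configuration.
function renderSingletonUuid(memoryLimit?: number): string {
  let uuid = '8693BB64-9689-44B6-9AAF-B0CC9EB8756C';
  if (memoryLimit !== undefined) {
    // a custom limit gets its own handler, so suffix the uuid with it
    uuid += `-${memoryLimit}MiB`;
  }
  return uuid;
}

console.log(renderSingletonUuid());    // base uuid: the shared default handler
console.log(renderSingletonUuid(256)); // base uuid + '-256MiB': a dedicated handler
```

Two deployments with the same limit thus share one handler, while differing limits produce separate handlers.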

fixes #4058

* alphabetize imports
Elad Ben-Israel authored and mergify[bot] committed Sep 23, 2019
1 parent d998e46 commit 84e1d4b
Showing 3 changed files with 80 additions and 4 deletions.
9 changes: 9 additions & 0 deletions packages/@aws-cdk/aws-s3-deployment/README.md
@@ -91,6 +91,15 @@ new s3deploy.BucketDeployment(this, 'DeployWithInvalidation', {
});
```

## Memory Limit

The default memory limit for the deployment resource handler is 128MiB. If you
need to copy larger files, use the `memoryLimit` option to specify the memory
size (in MiB) of the AWS Lambda resource handler.

> NOTE: a new AWS Lambda handler will be created in your stack for each memory
> limit configuration.
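
For example, a deployment that raises the handler's memory to 512MiB (the construct id, bucket variable, and asset path below are illustrative):

```ts
new s3deploy.BucketDeployment(this, 'DeployLargeFiles', {
  sources: [s3deploy.Source.asset('./website-dist')],
  destinationBucket: destinationBucket,
  memoryLimit: 512 // MiB; a handler is provisioned per memory configuration
});
```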

## Notes

* This library uses an AWS CloudFormation custom resource which is about 10MiB in
36 changes: 33 additions & 3 deletions packages/@aws-cdk/aws-s3-deployment/lib/bucket-deployment.ts
@@ -4,8 +4,9 @@ import iam = require('@aws-cdk/aws-iam');
 import lambda = require('@aws-cdk/aws-lambda');
 import s3 = require('@aws-cdk/aws-s3');
 import cdk = require('@aws-cdk/core');
+import { Token } from '@aws-cdk/core';
 import path = require('path');
-import {ISource, SourceConfig} from './source';
+import { ISource, SourceConfig } from './source';
 
 const handlerCodeBundle = path.join(__dirname, '..', 'lambda', 'bundle.zip');

@@ -55,6 +56,17 @@ export interface BucketDeploymentProps {
    * @default - All files under the destination bucket key prefix will be invalidated.
    */
   readonly distributionPaths?: string[];
+
+  /**
+   * The amount of memory (in MiB) to allocate to the AWS Lambda function which
+   * replicates the files from the CDK bucket to the destination bucket.
+   *
+   * If you are deploying large files, you will need to increase this number
+   * accordingly.
+   *
+   * @default 128
+   */
+  readonly memoryLimit?: number;
 }
 
 export class BucketDeployment extends cdk.Construct {
@@ -66,12 +78,13 @@ export class BucketDeployment extends cdk.Construct {
     }
 
     const handler = new lambda.SingletonFunction(this, 'CustomResourceHandler', {
-      uuid: '8693BB64-9689-44B6-9AAF-B0CC9EB8756C',
+      uuid: this.renderSingletonUuid(props.memoryLimit),
       code: lambda.Code.fromAsset(handlerCodeBundle),
       runtime: lambda.Runtime.PYTHON_3_6,
       handler: 'index.handler',
       lambdaPurpose: 'Custom::CDKBucketDeployment',
-      timeout: cdk.Duration.minutes(15)
+      timeout: cdk.Duration.minutes(15),
+      memorySize: props.memoryLimit
     });
 
     const sources: SourceConfig[] = props.sources.map((source: ISource) => source.bind(this));
@@ -100,4 +113,21 @@
       }
     });
   }
+
+  private renderSingletonUuid(memoryLimit?: number) {
+    let uuid = '8693BB64-9689-44B6-9AAF-B0CC9EB8756C';
+
+    // if the user specifies a custom memory limit, define another singleton
+    // handler with this configuration. otherwise, it would not be possible to
+    // use multiple configurations, since the handler is a singleton.
+    if (memoryLimit) {
+      if (Token.isUnresolved(memoryLimit)) {
+        throw new Error(`Can't use tokens when specifying "memoryLimit" since we use it to identify the singleton custom resource handler`);
+      }
+
+      uuid += `-${memoryLimit.toString()}MiB`;
+    }
+
+    return uuid;
+  }
 }
@@ -1,4 +1,4 @@
-import { expect, haveResource } from '@aws-cdk/assert';
+import { countResources, expect, haveResource } from '@aws-cdk/assert';
 import cloudfront = require('@aws-cdk/aws-cloudfront');
 import s3 = require('@aws-cdk/aws-s3');
 import cdk = require('@aws-cdk/core');
@@ -394,4 +394,41 @@ export = {
     }));
     test.done();
   },
+
+  'memoryLimit can be used to specify the memory limit for the deployment resource handler'(test: Test) {
+    // GIVEN
+    const stack = new cdk.Stack();
+    const bucket = new s3.Bucket(stack, 'Dest');
+
+    // WHEN
+    // we define 3 deployments with 2 different memory configurations
+    new s3deploy.BucketDeployment(stack, 'Deploy256-1', {
+      sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
+      destinationBucket: bucket,
+      memoryLimit: 256
+    });
+
+    new s3deploy.BucketDeployment(stack, 'Deploy256-2', {
+      sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
+      destinationBucket: bucket,
+      memoryLimit: 256
+    });
+
+    new s3deploy.BucketDeployment(stack, 'Deploy1024', {
+      sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
+      destinationBucket: bucket,
+      memoryLimit: 1024
+    });
+
+    // THEN
+    // we expect to find only two handlers, one for each configuration
+    expect(stack).to(countResources('AWS::Lambda::Function', 2));
+    expect(stack).to(haveResource('AWS::Lambda::Function', { MemorySize: 256 }));
+    expect(stack).to(haveResource('AWS::Lambda::Function', { MemorySize: 1024 }));
+    test.done();
+  }
 };
