Add Upload construct #103

Open · wants to merge 3 commits into master
179 changes: 179 additions & 0 deletions docs/upload.md
@@ -0,0 +1,179 @@
# Upload

The `upload` construct deploys an S3 bucket where you can upload files from the frontend.

It also creates a Lambda function that generates temporary URLs for uploading to that S3 bucket.

## Quick start

```bash
serverless plugin install -n serverless-lift
```

```yaml
service: my-app
provider:
    name: aws

functions:
    myFunction:
        handler: src/index.handler
        events:
            - httpApi: '*'

constructs:
    upload:
        type: upload

plugins:
    - serverless-lift
```

On `serverless deploy`, an S3 bucket will be created and the Lambda function will be attached to your API Gateway.

## How it works

The `upload` construct creates and configures the S3 bucket for the uploads:

- Files stored in the bucket are automatically encrypted (S3 takes care of encrypting and decrypting data on the fly, without any change to your application).
- Files are stored in a `tmp` folder and are automatically deleted after 24 hours.
- Cross-Origin Resource Sharing (CORS) is configured so that the bucket is reachable from a web browser.

It also creates a Lambda function:

- The function is automatically attached to your API Gateway under the path `/upload-url`.
- It must be called via a **POST** request containing a JSON body with the fields `fileName` and `contentType`.
- It generates a pre-signed URL that is valid for 5 minutes.
- It returns a JSON response containing the `uploadUrl` and the `fileName`, i.e. the path in the S3 bucket where the file will be stored (see the sketch below).
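
Here is a sketch of the request and response shapes as illustrative TypeScript types (the type names are not part of the construct):

```typescript
// Body to POST to /upload-url
interface UploadUrlRequest {
    fileName: string;    // original file name, e.g. "avatar.png"
    contentType: string; // MIME type, e.g. "image/png"
}

// JSON returned by /upload-url
interface UploadUrlResponse {
    uploadUrl: string; // pre-signed S3 URL, valid for 5 minutes
    fileName: string;  // key in the bucket, e.g. "tmp/<random hash>-avatar.png"
}
```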

**Warning:** because files are deleted from the bucket after 24 hours, your backend code
should move them if they need to be stored permanently. This avoids keeping uploaded files that are never used,
for example when a user uploads a file but never submits the form.
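
Below is a minimal sketch of such backend code, assuming the AWS SDK v2 for Node.js and a hypothetical `PERMANENT_BUCKET_NAME` environment variable pointing at a long-lived bucket of yours (your function also needs write permissions on that destination bucket):

```typescript
import { S3 } from "aws-sdk";

const s3 = new S3();

// Copies an uploaded file out of the expiring upload bucket into a permanent bucket.
export async function keepUploadedFile(fileName: string): Promise<string> {
    const uploadBucket = process.env.UPLOAD_BUCKET_NAME!;       // e.g. ${construct:upload.bucketName}
    const permanentBucket = process.env.PERMANENT_BUCKET_NAME!; // any bucket without an expiration rule
    const destinationKey = fileName.replace(/^tmp\//, "");

    await s3
        .copyObject({
            Bucket: permanentBucket,
            CopySource: `${uploadBucket}/${fileName}`, // the source is "<bucket>/<key>"
            Key: destinationKey,
        })
        .promise();

    // Optional: the lifecycle rule would delete the temporary object anyway after 24 hours.
    await s3.deleteObject({ Bucket: uploadBucket, Key: fileName }).promise();

    return destinationKey;
}
```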

## How to use it in the browser

Here is an example of how to use this construct with `fetch`:

```html
<input id="fileInput" type="file">
...
<script>
    const fileInput = document.getElementById('fileInput');

    fileInput.addEventListener('change', async function (event) {
        let file = fileInput.files[0];

        // CHANGE THIS URL
        const uploadResponse = await fetch('https://my-api-gateway.com/upload-url', {
            method: 'POST',
            body: JSON.stringify({
                fileName: file.name,
                contentType: file.type,
            })
        });
        const { uploadUrl, fileName } = await uploadResponse.json();

        await fetch(uploadUrl, {
            method: 'PUT',
            headers: {
                'Content-Type': file.type,
            },
            body: file,
        });

        // send 'fileName' to your backend for processing
    });
</script>
```

## Variables

All upload constructs expose the following variables:

- `bucketName`: the name of the deployed S3 bucket
- `bucketArn`: the ARN of the deployed S3 bucket

This can be used to reference the bucket from Lambda functions, for example:

```yaml
constructs:
    upload:
        type: upload

functions:
    myFunction:
        handler: src/index.handler
        environment:
            UPLOAD_BUCKET_NAME: ${construct:upload.bucketName}
```

_How it works: the `${construct:upload.bucketName}` variable will automatically be replaced with a CloudFormation reference to the S3 bucket._

This is useful for processing the uploaded files. Remember that files are automatically deleted after 24 hours.
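
For example, here is a minimal sketch of a Lambda handler that reads an uploaded file using that environment variable (the event shape is an assumption; adapt it to however your backend receives the `fileName`):

```typescript
import { S3 } from "aws-sdk";

const s3 = new S3();

// Reads an uploaded file from the bucket created by the `upload` construct.
export async function processUpload(event: { fileName: string }): Promise<void> {
    const object = await s3
        .getObject({
            Bucket: process.env.UPLOAD_BUCKET_NAME!, // injected via ${construct:upload.bucketName}
            Key: event.fileName,                     // e.g. "tmp/<random hash>-avatar.png"
        })
        .promise();

    // Do something with the file contents before the 24-hour expiration
    console.log(`Read ${object.ContentLength ?? 0} bytes from ${event.fileName}`);
}
```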

## Permissions

By default, all the Lambda functions deployed in the same `serverless.yml` file **will be allowed to read/write into the upload bucket**.

In the example below, there are no IAM permissions to set up: `myFunction` will be allowed to read and write into the `upload` bucket.

```yaml
constructs:
    upload:
        type: upload

functions:
    myFunction:
        handler: src/index.handler
        environment:
            UPLOAD_BUCKET_NAME: ${construct:upload.bucketName}
```

Automatic permissions can be disabled: [read more about IAM permissions](permissions.md).

## Configuration reference

### API Gateway

API Gateway provides two versions of APIs:

- v1: REST API
- v2: HTTP API, which is faster and cheaper

By default, the `upload` construct uses v2 HTTP APIs.

If your Lambda functions use `http` events (v1 REST API) instead of `httpApi` events (v2 HTTP API), use the `apiGateway: "rest"` option:

```yaml
constructs:
    upload:
        type: upload
        apiGateway: 'rest' # either "rest" (v1) or "http" (v2, the default)

functions:
    v1:
        handler: foo.handler
        events:
            - http: 'GET /' # REST API (v1)
    v2:
        handler: bar.handler
        events:
            - httpApi: 'GET /' # HTTP API (v2)
```

### Encryption

By default, files are encrypted using [the default S3 encryption mechanism](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) (free).

Alternatively, for example to comply with certain policies, it is possible to [use KMS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html):

```yaml
constructs:
    upload:
        # ...
        encryption: kms
```

### More options

Looking for more options in the construct configuration? [Open a GitHub issue](https://github.com/getlift/lift/issues/new).
1 change: 1 addition & 0 deletions package.json
@@ -6,6 +6,7 @@
"description": "Lift",
"dependencies": {
"@aws-cdk/aws-apigatewayv2-alpha": "^2.21.1-alpha.0",
"@aws-cdk/aws-apigatewayv2-integrations-alpha": "^2.76.0-alpha.0",
"aws-cdk-lib": "^2.21.1",
"chalk": "^4.1.1",
"change-case": "^4.1.2",
2 changes: 0 additions & 2 deletions src/constructs/aws/Storage.ts
@@ -12,7 +12,6 @@ const STORAGE_DEFINITION = {
type: "object",
properties: {
type: { const: "storage" },
archive: { type: "number", minimum: 30 },
Contributor Author

this was unused in the construct

encryption: {
anyOf: [{ const: "s3" }, { const: "kms" }],
},
@@ -21,7 +20,6 @@
} as const;
const STORAGE_DEFAULTS: Required<FromSchema<typeof STORAGE_DEFINITION>> = {
type: "storage",
archive: 45,
encryption: "s3",
};

194 changes: 194 additions & 0 deletions src/constructs/aws/Upload.ts
@@ -0,0 +1,194 @@
import type { CfnBucket } from "aws-cdk-lib/aws-s3";
import { BlockPublicAccess, Bucket, BucketEncryption, HttpMethods } from "aws-cdk-lib/aws-s3";
import type { Construct as CdkConstruct } from "constructs";
import type { CfnResource } from "aws-cdk-lib/core";
import { CfnOutput, Duration, Fn, Stack } from "aws-cdk-lib/core";
import { Code, Function as LambdaFunction, Runtime } from "aws-cdk-lib/aws-lambda";
import type { FromSchema } from "json-schema-to-ts";
import type { AwsProvider } from "@lift/providers";
import { AwsConstruct } from "@lift/constructs/abstracts";
import type { IHttpApi } from "@aws-cdk/aws-apigatewayv2-alpha";
import { HttpApi, HttpMethod, HttpRoute, HttpRouteKey } from "@aws-cdk/aws-apigatewayv2-alpha";
import type { Resource } from "aws-cdk-lib/aws-apigateway";
import { LambdaIntegration, RestApi } from "aws-cdk-lib/aws-apigateway";
import { HttpLambdaIntegration } from "@aws-cdk/aws-apigatewayv2-integrations-alpha";
import { Role } from "aws-cdk-lib/aws-iam";
import { CfnDistribution } from "aws-cdk-lib/aws-cloudfront";
import { PolicyStatement } from "../../CloudFormation";

const UPLOAD_DEFINITION = {
type: "object",
properties: {
type: { const: "upload" },
apiGateway: { enum: ["http", "rest"] },
encryption: {
anyOf: [{ const: "s3" }, { const: "kms" }],
},
},
additionalProperties: false,
} as const;
const UPLOAD_DEFAULTS: Required<FromSchema<typeof UPLOAD_DEFINITION>> = {
type: "upload",
encryption: "s3",
apiGateway: "http",
};

type Configuration = FromSchema<typeof UPLOAD_DEFINITION>;

export class Upload extends AwsConstruct {
public static type = "upload";
public static schema = UPLOAD_DEFINITION;

private readonly bucket: Bucket;
private readonly bucketNameOutput: CfnOutput;
private function: LambdaFunction;
private httpApi: IHttpApi | undefined;
private route: HttpRoute | undefined;
private restApi: RestApi | undefined;

constructor(scope: CdkConstruct, id: string, configuration: Configuration, private provider: AwsProvider) {
super(scope, id);

const resolvedConfiguration = Object.assign({}, UPLOAD_DEFAULTS, configuration);

const encryptionOptions = {
s3: BucketEncryption.S3_MANAGED,
kms: BucketEncryption.KMS_MANAGED,
};

this.bucket = new Bucket(this, "Bucket", {
encryption: encryptionOptions[resolvedConfiguration.encryption],
blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
enforceSSL: true,
cors: [
{
allowedMethods: [HttpMethods.PUT],
allowedOrigins: ["*"],
Contributor Author

for this first version I chose to set wide CORS permissions. We may add an option to configure it later.

allowedHeaders: ["*"],
},
],
lifecycleRules: [
{
expiration: Duration.days(1),
},
],
});

this.bucketNameOutput = new CfnOutput(this, "BucketName", {
value: this.bucket.bucketName,
});

this.function = new LambdaFunction(this, "Function", {
code: Code.fromInline(this.createFunctionCode()),
handler: "index.handler",
runtime: Runtime.NODEJS_12_X,
environment: {
LIFT_UPLOAD_BUCKET_NAME: this.bucket.bucketName,
},
role: Role.fromRoleArn(
this,
"LambdaRole",
Fn.getAtt(this.provider.naming.getRoleLogicalId(), "Arn").toString()
),
});

if (resolvedConfiguration.apiGateway === "http") {
this.provider.enableHttpApiCors();
this.httpApi = HttpApi.fromHttpApiAttributes(this, "HttpApi", {
httpApiId: Fn.ref(this.provider.naming.getHttpApiLogicalId()),
});

const lambdaProxyIntegration = new HttpLambdaIntegration("LambdaProxyIntegration", this.function);

this.route = new HttpRoute(this, "Route", {
httpApi: this.httpApi,
integration: lambdaProxyIntegration,
routeKey: HttpRouteKey.with("/upload-url", HttpMethod.POST),
});
this.route = new HttpRoute(this, "CORSRoute", {
httpApi: this.httpApi,
integration: lambdaProxyIntegration,
routeKey: HttpRouteKey.with("/upload-url", HttpMethod.OPTIONS),
});
}

if (resolvedConfiguration.apiGateway === "rest") {
this.restApi = RestApi.fromRestApiAttributes(this, "RestApi", {
restApiId: Fn.ref(this.provider.naming.getRestApiLogicalId()),
rootResourceId: Fn.getAtt(this.provider.naming.getRestApiLogicalId(), "RootResourceId").toString(),
}) as RestApi;

const resource: Resource = this.restApi.root.addResource("upload-url");
resource.addCorsPreflight({
allowHeaders: ["*"],
allowMethods: ["POST"],
allowOrigins: ["*"],
});
resource.addMethod("POST", new LambdaIntegration(this.function));
}
}

variables(): Record<string, unknown> {
return {
bucketArn: this.bucket.bucketArn,
bucketName: this.bucket.bucketName,
};
}

permissions(): PolicyStatement[] {
return [
new PolicyStatement(
["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
[this.bucket.bucketArn, Stack.of(this).resolve(Fn.join("/", [this.bucket.bucketArn, "*"]))]
),
];
}

outputs(): Record<string, () => Promise<string | undefined>> {
return {
bucketName: () => this.getBucketName(),
Contributor Author

it would be nice to output the full upload URL but I wasn't able to generate it properly for REST APIs. Another problem was that people may have domains configured on their API Gateway, or they may be using the server-side construct, which would make the outputted URL useless.

};
}

async getBucketName(): Promise<string | undefined> {
return this.provider.getStackOutput(this.bucketNameOutput);
}

private createFunctionCode(): string {
return `
const AWS = require('aws-sdk');
const crypto = require("crypto");
const s3 = new AWS.S3();

exports.handler = async (event) => {
if (event.requestContext?.http?.method === 'OPTIONS') return "";
const body = JSON.parse(event.body);
const fileName = \`tmp/\${crypto.randomBytes(5).toString('hex')}-\${body.fileName}\`;
Contributor Author

this generates a random file name to avoid collisions by adding a random hash before the submitted filename.


const url = s3.getSignedUrl('putObject', {
Bucket: process.env.LIFT_UPLOAD_BUCKET_NAME,
Key: fileName,
ContentType: body.contentType,
Expires: 60 * 5,
});

return {
body: JSON.stringify({
fileName: fileName,
uploadUrl: url,
}),
headers: {
"Access-Control-Allow-Origin": event.headers.origin,
},
statusCode: 200
};
}
`;
}

extend(): Record<string, CfnResource> {
return {
bucket: this.bucket.node.defaultChild as CfnBucket,
};
}
}