diff --git a/docs/custom-remote-caching.md b/docs/custom-remote-caching.md index c6c1ecf4..599065d6 100644 --- a/docs/custom-remote-caching.md +++ b/docs/custom-remote-caching.md @@ -6,7 +6,12 @@ nav_order: 5 # Enable custom remote caching in your Turborepo monorepo -To enable a custom remote caching server in your Turborepo monorepo, you must add a config file by hand. The `turbo login` command works only with the official Vercel server. +To enable a custom remote caching server in your Turborepo monorepo, you must +either add a config file by hand or set local environment variables. + +## Config file + +You must add the config file by hand. The `turbo login` command works only with the official Vercel server. 1. Create a `.turbo` folder at the root of your monorepo 2. Create a `config.json` file inside it and add these properties: @@ -37,9 +42,23 @@ For example: //... ``` -## Enable remote caching in Docker +### Enable remote caching in Docker The `.turbo/config.json` file is not read inside Docker containers. To enable remote caching in Docker, pass the configuration via CLI arguments instead. ```json "build": "turbo run build --team=\"team_awesome\" --token=\"turbotoken\" --api=\"https://your-caching.server.dev\"", ``` + +## Local environment variables + +You can also configure your developer environment by setting the following +environment variables: + +| Variable | Type | Description | | ------------- | ------ | ----------- | | `TURBO_API` | string | The address of a running `turborepo-remote-cache` server | | `TURBO_TEAM` | string | The team id (see *Config file* above) | | `TURBO_TOKEN` | string | Your secret key. 
This must be the same as the `TURBO_TOKEN` variable set on your `turborepo-remote-cache` server | + +**Note: these environment variables are used by the Turborepo CLI, so they +should not be confused with the environment variables used to configure your +server.** diff --git a/docs/deployment-environments.md b/docs/deployment-environments.md index 9935d974..95e7accb 100644 --- a/docs/deployment-environments.md +++ b/docs/deployment-environments.md @@ -2,6 +2,7 @@ layout: default title: Deployment Environments nav_order: 4 +has_children: true --- # Deployment Environments @@ -9,6 +10,7 @@ nav_order: 4 - [Deploy on Vercel](#deploy-on-vercel) - [Deploy on Docker](#deploy-on-docker) - [Deploy on DigitalOcean](#deploy-on-digitalocean) +- [Deploy on AWS Lambda](#deploy-on-aws-lambda) - [Remoteless with npx](#deploy-remoteless-with-npx) ## Deploy on Vercel @@ -44,7 +46,12 @@ The server can be easily deployed on DigitalOcean App Platform. __Note: Local storage isn't supported for this deployment method.__ -[![Deploy to DO](https://www.deploytodo.com/do-btn-blue.svg)](https://cloud.digitalocean.com/apps/new?repo=https://github.com/ducktors/turborepo-remote-cache/tree/main) +[![Deploy to +DO](https://www.deploytodo.com/do-btn-blue.svg)](https://cloud.digitalocean.com/apps/new?repo=https://github.com/ducktors/turborepo-remote-cache/tree/main) + +## Deploy on AWS Lambda +This server can be deployed as an AWS Lambda function. See this +[guide](https://ducktors.github.io/turborepo-remote-cache/running-in-lambda) for deployment steps. 
## Deploy "remoteless" with npx If you have Node.js installed, you can run the server simply by typing diff --git a/docs/running-in-lambda.md b/docs/running-in-lambda.md new file mode 100644 index 00000000..0c36702b --- /dev/null +++ b/docs/running-in-lambda.md @@ -0,0 +1,134 @@ +--- +layout: default +title: Running in an AWS Lambda Function +parent: Deployment Environments +nav_order: 1 +--- + +# Running in an AWS Lambda Function + +The server can be deployed to run in an AWS Lambda function. The following steps +take you through: + +- Creating an S3 bucket to store the artifacts +- Creating an IAM role to grant the Lambda permission to access the bucket +- Creating the Lambda function +- Creating an HTTP API Gateway +- Configuring your repository to use the new API + +## Create an S3 bucket to store artifacts +First, create an S3 bucket with a unique name, such as `turborepo-cache-udaw82`. +Leave **Block all public access** ticked to ensure your artifacts remain +private. + +*Note - to prevent this bucket from growing forever, you may want to create a +**Lifecycle rule** to expire cache objects that are older than a certain number +of days.* + +## Create an IAM role to grant the Lambda permission to access the bucket +Create a new IAM role. Under **Trusted entity type** choose **AWS service**, and +under **Use case** select **Lambda**. On the **Add permissions** screen, click +**Next**. On the **Name, review, and create** screen create a name for your role, +such as `turborepo-cache-lambda-role`, then click on **Create role**. + +View your new role, and under **Permissions policies** click the button **Add +permissions** and choose **Create inline policy**. Click on **JSON** and add the +following policy, replacing `<bucket-name>` with the name of your bucket: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:*" + ], + "Resource": [ + "arn:aws:s3:::<bucket-name>", + "arn:aws:s3:::<bucket-name>/*" + ] + } + ] +} +``` + +This grants the Lambda function access only to the artifacts bucket, and to no +other S3 resources. 
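If you manage your infrastructure from scripts rather than the console, the same inline policy can be generated from the bucket name. A minimal sketch — the `artifactsBucketPolicy` helper is hypothetical and not part of `turborepo-remote-cache`; only the standard `arn:aws:s3:::<bucket>` ARN format is assumed:

```javascript
// Generate the inline IAM policy for a given artifacts bucket.
// The first Resource entry is the bucket itself; the second ("/*")
// covers the objects stored inside it.
function artifactsBucketPolicy(bucketName) {
  const bucketArn = `arn:aws:s3:::${bucketName}`;
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: ['s3:*'],
        Resource: [bucketArn, `${bucketArn}/*`],
      },
    ],
  };
}

console.log(JSON.stringify(artifactsBucketPolicy('turborepo-cache-udaw82'), null, 2));
```

Pasting the printed JSON into the **JSON** policy editor gives the same result as the manual steps above.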
+ +Click on **Review policy** and give your policy a name such as +`turborepo-cache-lambda-policy`, then click on **Create Policy**. + +## Create the Lambda Function + +Create a new Lambda function with a name like `turborepo-cache-lambda` using the +latest Node.js runtime. Under **Permissions** click on **Change default +execution role**, select **Use an existing role** and select the role you just +created. Click on **Create function**. + +### Handler code + +Create a new package for your Lambda handler, and add `turborepo-remote-cache` +as a dependency. Your `index.js` handler code should look like this: + +```js +export { handler } from 'turborepo-remote-cache/build/aws-lambda'; +``` + +*Note - You will need to bundle dependencies and upload the handler code. How +you choose to do this is outside the scope of this guide, but one method to +consider is using `esbuild`:* + +```sh +esbuild src/index.js --bundle --platform=node --outfile=build/index.js +``` + +### Configuration + +Under your Lambda **Configuration**, edit the **General configuration** and +increase the timeout to 10 seconds (as the default value of 3 seconds can +sometimes cause timeouts). + +Go into **Environment variables** and create the following environment +variables, replacing the placeholder values with your own: + +| Variable | Value | |--------------------|--------------------| | `STORAGE_PATH` | `<bucket-name>` | | `STORAGE_PROVIDER` | `s3` | | `TURBO_TOKEN` | `<your-secret-token>` | + +*See [Environment +variables](https://ducktors.github.io/turborepo-remote-cache/environment-variables) +for more information on configuring these.* + +### ARN + +Copy your Lambda's ARN for the next step. + +## Create an HTTP API Gateway + +Go to the API Gateway service, and choose **Create**. Under **HTTP API** click +on **Build**. + +Under **Integrations** click on **Add integration**. Choose **Lambda** and +search for your Lambda's ARN. Enter an API name such as `turborepo-cache-api`. 
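As an aside for terminal users: API Gateway's *quick create* can set up the HTTP API, the Lambda integration, and a `$default` route in a single call, instead of the console steps described here. A sketch assuming AWS CLI v2 — the region, account id, and resource names are placeholders; verify the flags against your installed CLI version:

```shell
# Quick-create an HTTP API that proxies every request to the Lambda.
# Passing a Lambda ARN to --target creates the integration and a
# $default route automatically.
aws apigatewayv2 create-api \
  --name turborepo-cache-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:us-east-1:123456789012:function:turborepo-cache-lambda

# Grant API Gateway permission to invoke the function.
aws lambda add-permission \
  --function-name turborepo-cache-lambda \
  --statement-id apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com
```

Either way, the end state is the same as the console walkthrough below: an HTTP API with a `$default` route pointing at your Lambda.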
+ +Under **Configure routes** leave the **Method** as `ANY` and change the +**Resource path** to `$default`. Click on **Next**. + +On the **Configure stages** screen, leave the stage name as `$default` and click +on **Next**, then on the **Review and create** screen click on **Create**. + +You have now created your API Gateway. Copy the **Invoke URL** and use this to +set up your repository. + +## Configuring your repository to use the new API + +You will need to enable custom remote caching in your Turborepo monorepo. Your +**Invoke URL** is your Turborepo API URL; see [Enable custom remote caching in a +Turborepo +monorepo](https://ducktors.github.io/turborepo-remote-cache/custom-remote-caching) +for more information on how to configure this. + +Your remote `turborepo-remote-cache` API is now ready to use! diff --git a/package.json b/package.json index 6ebce524..e28f96f8 100644 --- a/package.json +++ b/package.json @@ -32,6 +32,7 @@ }, "dependencies": { "@commitlint/lint": "^17.2.0", + "@fastify/aws-lambda": "^3.1.3", "@google-cloud/storage": "6.4.1", "@hapi/boom": "9.1.4", "@sinclair/typebox": "0.23.1", diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index bd581466..d827f150 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -6,6 +6,7 @@ specifiers: '@commitlint/lint': ^17.2.0 '@commitlint/prompt': ^17.1.2 '@cspotcode/source-map-support': ^0.7.0 + '@fastify/aws-lambda': ^3.1.3 '@google-cloud/storage': 6.4.1 '@hapi/boom': 9.1.4 '@semantic-release/changelog': ^6.0.1 @@ -60,6 +61,7 @@ specifiers: dependencies: '@commitlint/lint': 17.2.0 + '@fastify/aws-lambda': 3.1.3 '@google-cloud/storage': 6.4.1 '@hapi/boom': 9.1.4 '@sinclair/typebox': 0.23.1 @@ -540,6 +542,10 @@ packages: ajv: 6.12.6 dev: false + /@fastify/aws-lambda/3.1.3: + resolution: {integrity: sha512-5bE17UqQlzja83XIOEvE0pNoDidbfNVu7R+DGus2uFECIg1m0o77XnRCEd9pzuP7ZOQmiRLrQdcET9ki+hSaVw==} + dev: false + /@google-cloud/paginator/3.0.7: resolution: {integrity: 
sha512-jJNutk0arIQhmpUUQJPJErsojqo834KcyB6X7a1mxuic8i1tKXxde8E69IZxNZawRIlZdIK2QY4WALvlK5MzYQ==} engines: {node: '>=10'} diff --git a/src/aws-lambda.ts b/src/aws-lambda.ts new file mode 100644 index 00000000..35aff84e --- /dev/null +++ b/src/aws-lambda.ts @@ -0,0 +1,9 @@ +import awsLambdaFastify from '@fastify/aws-lambda' +import { createApp } from './app' + +const app = createApp({ + trustProxy: true, +}) + +// eslint-disable-next-line @typescript-eslint/no-unused-vars +export const handler = awsLambdaFastify(app, { enforceBase64: _ => true })
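For a quick end-to-end check of a deployment like the one this change documents, it helps to know roughly what URL the Turborepo CLI will request. A sketch under the assumption that artifacts live under the Vercel-style `/v8/artifacts/:hash` route family that `turborepo-remote-cache` serves; the `artifactUrl` helper itself is hypothetical, for illustration only:

```javascript
// Build the artifact URL the Turborepo CLI requests, from the same
// values passed via --api / --team (or TURBO_API / TURBO_TEAM).
function artifactUrl(apiBase, team, hash) {
  const url = new URL(`/v8/artifacts/${hash}`, apiBase);
  url.searchParams.set('teamId', team);
  return url.toString();
}

console.log(artifactUrl('https://your-caching.server.dev', 'team_awesome', '78af518c3e0a'));
```

Requesting such a URL with an `Authorization: Bearer <TURBO_TOKEN>` header (for example with `curl`) should return either a cached artifact or a 404, which makes a handy smoke test covering the gateway, the Lambda, and the bucket permissions at once.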