---
title: Production Deployment
label: Deployment
order: 10
desc: When your Payload based app is ready, tested, looking great, it is time to deploy. Learn how to deploy your app and what to consider before deployment.
keywords: deployment, production, config, configuration, documentation, Content Management System, cms, headless, javascript, node, react, express
---
So you've developed a Payload app, it's fully tested, and running great locally. Now it's time to launch. Awesome! Great work! Now, what's next?

There are many ways to deploy Payload to a production environment. When evaluating how you will deploy Payload, you need to consider these main aspects:

1. Basics
2. Security
3. Your MongoDB
4. Permanent File Storage
5. Docker

## Basics

In order for Payload to run, it requires both the server code and the built admin panel. These will be the `dist` and `build` directories by default. If you've used `create-payload-app` to create your project, executing the `build` npm script will build both and output these directories.

## Security

Payload features a suite of security features that you can rely on to strengthen your application's security. When deploying to Production, it's a good idea to double-check that you are making proper use of each of them.

### The Secret Key

When you initialize Payload, you provide it with a `secret` property. This property should be impossible to guess and extremely difficult for brute-force attacks to crack. Make sure your production secret is a long, complex string. It's often best practice to store it in a `.env` file which is not checked into your Git repository, using `dotenv` to supply it to your `payload.init` call.
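
A minimal sketch of this pattern, assuming an Express-based server and `PAYLOAD_SECRET` / `MONGODB_URI` as your environment variable names:

```javascript
// server.js — sketch only; adapt to your own project structure
require('dotenv').config() // loads variables from .env into process.env
const express = require('express')
const payload = require('payload')

const app = express()

payload.init({
  secret: process.env.PAYLOAD_SECRET, // never hard-code or commit this value
  mongoURL: process.env.MONGODB_URI,
  express: app,
})

app.listen(process.env.PORT || 3000)
```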

### Double-check and thoroughly test all Access Control

Because you are in complete control of who can do what with your data, you should double and triple-check that you wield that power responsibly before deploying to Production.

By default, all Access Control functions require that a user is successfully logged in to Payload to create, read, update, or delete data. But, if you allow public user registration, for example, you will want to make sure that your access control functions are more strict, permitting only appropriate users to perform appropriate actions.
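
For example, here is a sketch of a Collection config that restricts writes to admins. The `roles` field and the `'admin'` role name are assumptions — adapt them to however your user collection stores permissions.

```javascript
// Access control sketch for a hypothetical "posts" collection.
const loggedIn = ({ req: { user } }) => Boolean(user)

const isAdmin = ({ req: { user } }) =>
  Boolean(user && Array.isArray(user.roles) && user.roles.includes('admin'))

const Posts = {
  slug: 'posts',
  access: {
    read: loggedIn,   // any authenticated user can read
    create: loggedIn, // any authenticated user can create
    update: isAdmin,  // only admins can update
    delete: isAdmin,  // only admins can delete
  },
  fields: [{ name: 'title', type: 'text' }],
}

module.exports = Posts
```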

### Building the Admin panel

Before running in production, you need to have built a production-ready copy of the Payload Admin panel. To do this, Payload provides the `build` npm script. You can use it by adding a script to your `package.json` file like this:

`package.json`:

```json
{
  "name": "project-name-here",
  "scripts": {
    "build": "payload build"
  },
  "dependencies": {
    // your dependencies
  }
}
```

Then, to build Payload, you would run `npm run build` in your project folder. A production-ready Admin bundle will be created in the `build` directory.

### Setting Node to Production

Make sure you set the environment variable `NODE_ENV` to `production`. Many Node packages automatically optimize themselves based on this variable. In production, Payload automatically disables the GraphQL Playground, serves the production-ready version of the Admin panel, and makes other optimizations.

### Secure Cookie Settings

You should be using an SSL certificate for production Payload instances, which means you can enable secure cookies in your Authentication-enabled Collection configs.
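
A sketch of an auth-enabled collection with secure cookie settings — the `sameSite` value is an assumption; choose what fits your domain setup:

```javascript
// Auth-enabled collection config sketch with secure cookies.
const Users = {
  slug: 'users',
  auth: {
    cookies: {
      secure: true,       // only send the auth cookie over HTTPS
      sameSite: 'strict', // assumption — relax if your frontend is cross-site
    },
  },
  fields: [],
}

module.exports = Users
```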

### Preventing API Abuse

Payload comes with a robust set of built-in anti-abuse measures, such as locking out users after a configurable number of failed login attempts, request rate limiting, GraphQL query complexity limits, max depth settings, and more.
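
A config sketch enabling several of these measures — the option names (`rateLimit`, `maxDepth`, `graphQL.maxComplexity`) follow Payload's config, but verify the exact shapes and defaults against your Payload version:

```javascript
// payload.config.js fragment — anti-abuse settings sketch.
const { buildConfig } = require('payload/config')

module.exports = buildConfig({
  rateLimit: {
    window: 15 * 60 * 1000, // 15-minute window
    max: 500,               // max requests per IP within the window
    trustProxy: true,       // honor X-Forwarded-For when behind a proxy
  },
  maxDepth: 10, // cap relationship population depth on API queries
  graphQL: {
    maxComplexity: 1000, // reject overly complex GraphQL queries
  },
  collections: [],
})
```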

## MongoDB

Payload can be used with any MongoDB-compatible database, including AWS DocumentDB or Azure Cosmos DB.

### Managing MongoDB yourself

If you are using a persistent filesystem-based cloud host such as a DigitalOcean Droplet or an Amazon EC2 server, you might opt to install MongoDB directly on that server itself so that Node can communicate with it locally. With this approach, you can benefit from faster response times, but scaling can become more involved as your app's user base grows.

### Letting someone else do it

Alternatively, you can rely on a third-party MongoDB host such as MongoDB Atlas. With Atlas or a similar cloud provider, you can trust them to take care of your database's availability, security, redundancy, and backups.

Note:
If versions are enabled and a collection has many documents, you may need at least an M10 MongoDB Atlas cluster; on smaller tiers, viewing a collection list in the admin UI can fail with a sorting `exceeded memory limit` error. The limitations of the M2 and M5 tier clusters are documented here: [Atlas M0 (Free Cluster), M2, and M5 Limitations](https://www.mongodb.com/docs/atlas/reference/free-shared-limitations/#operational-limitations).

### DocumentDB

When using AWS DocumentDB, you will need to configure connection options for authentication in the `mongoOptions` passed to `payload.init`. You also need to set `mongoOptions.useFacet` to `false` to disable use of the unsupported `$facet` aggregation.
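
A fragment of what that `payload.init` call could look like — the TLS option is an assumption; match the auth and TLS settings to your cluster's requirements:

```javascript
// payload.init fragment for AWS DocumentDB (sketch).
payload.init({
  secret: process.env.PAYLOAD_SECRET,
  mongoURL: process.env.MONGODB_URI, // your DocumentDB connection string
  mongoOptions: {
    useFacet: false, // DocumentDB does not support the $facet aggregation
    ssl: true,       // assumption — most DocumentDB clusters require TLS
  },
  express: app,
})
```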

### CosmosDB

When using Azure Cosmos DB, an index is needed for any field you may want to sort on. To add a sort index for all fields that may be sorted in the admin UI, use the `indexSortableFields` configuration option.
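
For example, a `payload.config.js` fragment enabling this option:

```javascript
// Config fragment — indexSortableFields tells Payload to create indexes
// for every field the admin UI can sort on (needed for Cosmos DB).
const { buildConfig } = require('payload/config')

module.exports = buildConfig({
  indexSortableFields: true,
  collections: [
    // your collections
  ],
})
```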

## File storage

If you are using Payload to manage file uploads, you need to consider where your uploaded files will be permanently stored. If you do not use Payload for file uploads, then this section does not impact your app whatsoever.

### Persistent vs Ephemeral Filesystems

Some cloud app hosts such as Heroku use ephemeral filesystems, which means that any files uploaded to your server only last until the server restarts or shuts down. Heroku and similar providers schedule restarts and shutdowns outside of your control, meaning your uploads will disappear without any way to get them back.

Alternatively, persistent filesystems retain your files across restarts and can be trusted to reliably host uploads long-term.

Popular cloud providers with ephemeral filesystems:

- Heroku
- DigitalOcean Apps

Popular cloud providers with persistent filesystems:

- DigitalOcean Droplets
- Amazon EC2
- GoDaddy
- Many other more traditional web hosts

Warning:
If you rely on Payload's Upload functionality, make sure you either use a host with a persistent filesystem or have an integration with a third-party file host like Amazon S3.

### Using ephemeral filesystem providers like Heroku

If you don't use Payload's upload functionality, you can go ahead and use Heroku or a similar platform easily. Everything will work exactly as you want it to.

But, if you do, and you still want to use an ephemeral filesystem provider, you can write a hook-based solution to copy the files your users upload to a more permanent storage solution like Amazon S3 or DigitalOcean Spaces.

To automatically send uploaded files to S3 or similar, you could:

- Write an asynchronous `beforeChange` hook for all Collections that support Uploads, which takes any uploaded file from the Express `req` and sends it to an S3 bucket
- Write an `afterRead` hook to save an `s3URL` field that automatically takes the stored filename and formats a full S3 URL
- Write an `afterDelete` hook that automatically deletes files from the S3 bucket

With the above configuration, deploying to Heroku or similar becomes no problem.
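
The hook-based approach above could be sketched like this. The bucket URL is an assumption, and the commented-out `sendToS3` / `deleteFromS3` calls are hypothetical placeholders for your actual S3 client code:

```javascript
// Upload-enabled collection sketch that mirrors uploads to S3 via hooks.
const S3_BASE_URL = 'https://my-bucket.s3.amazonaws.com' // assumed bucket URL

const Media = {
  slug: 'media',
  upload: { staticDir: 'media' },
  hooks: {
    beforeChange: [
      async ({ req, data }) => {
        // The uploaded file arrives on the Express request.
        // await sendToS3(req.files.file) — your S3 upload call goes here
        return data
      },
    ],
    afterRead: [
      // Derive a full S3 URL from the stored filename.
      ({ doc }) => ({
        ...doc,
        s3URL: doc.filename ? `${S3_BASE_URL}/${doc.filename}` : undefined,
      }),
    ],
    afterDelete: [
      async ({ doc }) => {
        // await deleteFromS3(doc.filename) — remove the object from the bucket
      },
    ],
  },
  fields: [],
}

module.exports = Media
```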

## DigitalOcean Tutorials

DigitalOcean provides extremely helpful documentation that can walk you through the entire process of creating a production-ready Droplet to host your Payload app:

  1. Create a new Ubuntu 20.04 droplet on DigitalOcean
  2. Initial server setup
  3. Install nginx
  4. Install and secure MongoDB
  5. Create a new MongoDB and user
  6. Set up Node for production

## Docker

This is an example of a multi-stage Docker build of Payload for production. Ensure you are setting your environment variables on deployment, like `PAYLOAD_SECRET`, `PAYLOAD_CONFIG_PATH`, and `MONGODB_URI` if needed.

```dockerfile
FROM node:18-alpine AS base

FROM base AS builder

WORKDIR /home/node
COPY package*.json ./

COPY . .
RUN yarn install
RUN yarn build

FROM base AS runtime

ENV NODE_ENV=production

WORKDIR /home/node
COPY package*.json ./

RUN yarn install --production
COPY --from=builder /home/node/dist ./dist
COPY --from=builder /home/node/build ./build

EXPOSE 3000

CMD ["node", "dist/server.js"]
```

### Docker Compose

Here is an example of a `docker-compose.yml` file that can be used for development:

```yml
version: "3"

services:
  payload:
    image: node:18-alpine
    ports:
      - "3000:3000"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    working_dir: /home/node/app/
    command: sh -c "yarn install && yarn dev"
    depends_on:
      - mongo
    environment:
      MONGODB_URI: mongodb://mongo:27017/payload
      PORT: 3000
      NODE_ENV: development
      PAYLOAD_SECRET: TESTING

  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    command:
      - --storageEngine=wiredTiger
    volumes:
      - data:/data/db
    logging:
      driver: none

volumes:
  data:
  node_modules:
```