This repository was archived by the owner on May 20, 2025. It is now read-only.
Merged
1 change: 1 addition & 0 deletions dictionary.txt
@@ -196,6 +196,7 @@ todo
todos
transpiling
ARN
monorepos

^.+[-:_]\w+$
[a-z]+([A-Z0-9]|[A-Z0-9]\w+)
4 changes: 4 additions & 0 deletions src/pages/faq.mdx
@@ -33,6 +33,10 @@ Add the environment variable to your `~/.zshrc` or `~/.bashrc` as:
PULUMI_ACCESS_TOKEN=access_token
```

## Does Nitric support monorepos?

Yes, Nitric supports monorepos through the custom runtime feature, which allows you to change the build context of your Docker build. For more information, see [custom containers](/reference/custom-containers). Alternatively, you can move your `nitric.yaml` to the root of your repository.
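If you choose the second option, a root-level `nitric.yaml` might look like the following sketch (the package paths and project name are hypothetical, for illustration only):

```yaml
# nitric.yaml at the repository root (illustrative paths)
name: my-monorepo-app
services:
  # Match services inside a workspace package from the repo root
  - match: packages/backend/services/*.ts
    start: npm run dev:services $SERVICE_PATH
```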

## Will I be locked-in to Nitric?

Nitric is designed for flexibility to avoid lock-in, including lock-in to Nitric itself. If the framework no longer serves you, you'll need to choose a new IaC tool and migrate your provisioning code. The Nitric framework and CLI are written in Go and use the Pulumi Go providers, so you may be able to avoid rewriting all of the provisioning code by lifting the code Nitric has already built for you. If relevant, you'll also need to rebuild your CI pipelines to use the new IaC tooling you've chosen. Nitric doesn't have access to your data, so no data migration is needed.
70 changes: 70 additions & 0 deletions src/pages/reference/custom-containers.mdx
@@ -176,3 +176,73 @@ ENTRYPOINT ["/bin/main"]
### Create an ignore file

Custom Dockerfile templates also support co-located dockerignore files. If your custom Dockerfile template is at `./docker/node.dockerfile`, you can create an ignore file at `./docker/node.dockerfile.dockerignore`.
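As an illustration, such an ignore file could exclude common build artifacts from the build context (the entries below are examples, not requirements):

```
# ./docker/node.dockerfile.dockerignore
node_modules
.git
**/dist
```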

## Create a monorepo with custom runtimes

Nitric supports monorepos via the custom runtime feature, which allows you to change the build context of your Docker build. To use a custom runtime in a monorepo, specify the `runtime` key per service definition as shown below.

<Note>Available in Nitric CLI version 1.45.0 and above</Note>

### Example for Turborepo

[Turborepo](https://turbo.build/) is a monorepo tool for JavaScript and TypeScript that allows you to manage multiple packages in a single repository. In this example, we will use a custom runtime with a custom Dockerfile to build a service in a monorepo.

```yaml {{ tag: "root/backends/guestbook-app/nitric.yaml" }}
name: guestbook-app
services:
- match: services/*.ts
runtime: turbo
type: ''
start: npm run dev:services $SERVICE_PATH
runtimes:
turbo:
dockerfile: ./turbo.dockerfile # the custom dockerfile
context: ../../ # the context of the docker build
args:
TURBO_SCOPE: 'guestbook-api'
```
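The `start` command above assumes a `dev:services` script is defined in the app's `package.json`. The source doesn't show that script; a minimal sketch, assuming a TypeScript runner like `tsx`, could simply forward the service path:

```json
{
  "scripts": {
    "dev:services": "tsx"
  }
}
```

With this sketch, `npm run dev:services services/api.ts` would invoke `tsx services/api.ts` to run the service locally.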

```docker {{ tag: "root/backends/guestbook-app/turbo.dockerfile" }}
FROM node:alpine AS builder
ARG TURBO_SCOPE

# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
RUN yarn global add turbo

# Copy from the root of the monorepo
COPY . .
RUN turbo prune --scope=${TURBO_SCOPE} --docker

# Add lockfile and package.json's of isolated subworkspace
FROM node:alpine AS installer
ARG TURBO_SCOPE
ARG HANDLER
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
RUN yarn global add typescript @vercel/ncc turbo

# First install dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/yarn.lock ./yarn.lock
RUN yarn install --frozen-lockfile --production

# Build the project and its dependencies
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json

RUN turbo run build --filter=${TURBO_SCOPE} -- ./${HANDLER} -m --v8-cache -o lib/

FROM node:alpine AS runner
ARG TURBO_SCOPE
WORKDIR /app

COPY --from=installer /app/backends/${TURBO_SCOPE}/lib .

ENTRYPOINT ["node", "index.js"]
```