Add docs on self-hosted ISR persisting across pods. #36520

Merged · 5 commits · Apr 27, 2022
@@ -171,6 +171,29 @@ export async function getStaticProps() {
}
```

## Self-hosting ISR

Incremental Static Regeneration (ISR) works on [self-hosted Next.js sites](/docs/deployment.md#self-hosting) out of the box when you use `next start`.

You can use this approach when deploying to container orchestrators such as [Kubernetes](https://kubernetes.io/) or [HashiCorp Nomad](https://www.nomadproject.io/). By default, generated assets are stored in-memory on each pod, so each pod keeps its own copy of the static files. Because pods do not share this cache, a pod may serve stale data until a request hits that specific pod and triggers regeneration.

To ensure consistency across all pods, you can disable in-memory caching. This tells the Next.js server to only use assets generated by ISR from the file system.

You can use a shared network mount in your Kubernetes pods (or a similar setup) to reuse the same file-system cache between different containers. When the mount is shared, the `.next` folder, which also contains the `next/image` cache, is shared and re-used across pods.
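As a rough sketch (not part of the official docs), the shared mount could be a `ReadWriteMany` volume that every replica mounts over its `.next` folder; the resource names, image, and paths below are assumptions for illustration only:

```yaml
# Hypothetical sketch: names, image, and paths are assumptions.
# A ReadWriteMany volume lets every replica read and write the same .next folder.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: next-cache
spec:
  accessModes:
    - ReadWriteMany # needed so multiple pods can mount the volume at once
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: next-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: next-app
  template:
    metadata:
      labels:
        app: next-app
    spec:
      containers:
        - name: next-app
          image: my-registry/next-app:latest # assumed image; its entrypoint runs `next start`
          ports:
            - containerPort: 3000
          volumeMounts:
            # Mounting over .next shadows the copy baked into the image, so the
            # volume must already hold the `next build` output (for example,
            # written there by a build job or an init container).
            - name: next-cache
              mountPath: /app/.next # assumes the app lives in /app
      volumes:
        - name: next-cache
          persistentVolumeClaim:
            claimName: next-cache
```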

To disable in-memory caching, set `isrMemoryCacheSize` to `0` in your `next.config.js` file:

```js
module.exports = {
  experimental: {
    // Defaults to 50MB
    isrMemoryCacheSize: 0,
  },
}
```

> **Note:** Depending on how your shared mount is configured, multiple pods may try to update the cache at the same time, so you might need to account for this race condition.

## Related

For more information on what to do next, we recommend the following sections: