2 changes: 1 addition & 1 deletion pages/cloudflare/_meta.json
@@ -7,6 +7,6 @@
"examples": "",
"community": "Community projects",
"troubleshooting": "",
"migrate-from-0.5-to-0.6": "Migrate from 0.5 to 0.6",
"migrate-from-0.6-to-1.0.0-beta": "Migrate from 0.6 to 1.0.0-beta",
"former-releases": "Former releases"
}
224 changes: 83 additions & 141 deletions pages/cloudflare/caching.mdx
@@ -21,28 +21,30 @@ There are two storage options to use for the incremental cache.
reflected globally, when using the default TTL of 60 seconds.
</Callout>

<Tabs items={["Workers KV", "R2 Object Storage"]}>
<Tabs items={["R2 Object Storage", "Workers KV"]}>
<Tabs.Tab>
##### 1. Create a KV namespace

##### 1. Create an R2 Bucket

```
npx wrangler@latest kv namespace create <YOUR_NAMESPACE_NAME>
npx wrangler@latest r2 bucket create <YOUR_BUCKET_NAME>
```

##### 2. Add the KV namespace and Service Binding to your Worker
##### 2. Add the R2 Bucket and Service Binding to your Worker

The binding name used in your app's worker is `NEXT_INC_CACHE_KV`.
The `WORKER_SELF_REFERENCE` service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.
The binding name used in your app's worker is `NEXT_INC_CACHE_R2_BUCKET`. The service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.

The prefix used by the R2 bucket can be configured with the `NEXT_INC_CACHE_R2_PREFIX` environment variable, and defaults to `incremental-cache`.
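For example, one way to override the default prefix is to define the variable under `vars` in your wrangler configuration; the value below is purely illustrative.

```jsonc
// wrangler.jsonc
{
  // ...
  "vars": {
    // Any string works here; the cache falls back to "incremental-cache" when unset.
    "NEXT_INC_CACHE_R2_PREFIX": "next-cache",
  },
}
```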

```jsonc
// wrangler.jsonc
{
// ...
"name": "<WORKER_NAME>",
"kv_namespaces": [
"r2_buckets": [
{
"binding": "NEXT_INC_CACHE_KV",
"id": "<BINDING_ID>",
"binding": "NEXT_INC_CACHE_R2_BUCKET",
"bucket_name": "<BUCKET_NAME>",
},
],
"services": [
@@ -56,46 +58,62 @@ The `WORKER_SELF_REFERENCE` service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.

##### 3. Configure the cache

In your project's OpenNext config, enable the KV cache.
In your project's OpenNext config, enable the R2 cache.

You can optionally set up a regional cache to use with the R2 incremental cache. This enables faster retrieval of cache entries and reduces the number of requests sent to object storage.

The regional cache has two modes:

- `short-lived`: Responses are re-used for up to a minute.
- `long-lived`: Fetch responses are re-used until revalidated, and ISR/SSG responses are re-used for up to 30 minutes.

Additionally, lazy updating of the regional cache can be enabled with the `shouldLazilyUpdateOnCacheHit` option. When enabled, a cache hit also triggers a background request to the R2 bucket to fetch the latest entry. This is enabled by default for the `long-lived` mode.

```ts
// open-next.config.ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import kvIncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/kv-incremental-cache";
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";
import { withRegionalCache } from "@opennextjs/cloudflare/overrides/incremental-cache/regional-cache";
// ...

// With regional cache enabled:
export default defineCloudflareConfig({
incrementalCache: kvIncrementalCache,
incrementalCache: withRegionalCache(r2IncrementalCache, {
mode: "long-lived",
shouldLazilyUpdateOnCacheHit: true,
}),
// ...
});

// Without regional cache:
export default defineCloudflareConfig({
incrementalCache: r2IncrementalCache,
// ...
});
```

</Tabs.Tab>

<Tabs.Tab>

##### 1. Create an R2 Bucket
##### 1. Create a KV namespace

```
npx wrangler@latest r2 bucket create <YOUR_BUCKET_NAME>
npx wrangler@latest kv namespace create <YOUR_NAMESPACE_NAME>
```

##### 2. Add the R2 Bucket and Service Binding to your Worker

The binding name used in your app's worker is `NEXT_INC_CACHE_R2_BUCKET`. The service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.
##### 2. Add the KV namespace and Service Binding to your Worker

The prefix used by the R2 bucket can be configured with the `NEXT_INC_CACHE_R2_PREFIX` environment variable, and defaults to `incremental-cache`.
The binding name used in your app's worker is `NEXT_INC_CACHE_KV`.
The `WORKER_SELF_REFERENCE` service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.

```jsonc
// wrangler.jsonc
{
// ...
"name": "<WORKER_NAME>",
"r2_buckets": [
"kv_namespaces": [
{
"binding": "NEXT_INC_CACHE_R2_BUCKET",
"bucket_name": "<BUCKET_NAME>",
"preview_bucket_name": "<PREVIEW_BUCKET_NAME>",
"binding": "NEXT_INC_CACHE_KV",
"id": "<BINDING_ID>",
},
],
"services": [
@@ -109,36 +127,16 @@ The prefix used by the R2 bucket can be configured with the `NEXT_INC_CACHE_R2_PREFIX` environment variable, and defaults to `incremental-cache`.

##### 3. Configure the cache

In your project's OpenNext config, enable the R2 cache.

You can optionally setup a regional cache to use with the R2 incremental cache. This will enable faster retrieval of cache entries and reduce the amount of requests being sent to object storage.

The regional cache has two modes:

- `short-lived`: Responses are re-used for up to a minute.
- `long-lived`: Fetch responses are re-used until revalidated, and ISR/SSG responses are re-used for up to 30 minutes.

Additionally, lazy updating of the regional cache can be enabled with the `shouldLazilyUpdateOnCacheHit` option. When requesting data from the cache, it sends a background request to the R2 bucket to get the latest entry. This is enabled by default for the `long-lived` mode.
In your project's OpenNext config, enable the KV cache.

```ts
// open-next.config.ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";
import { withRegionalCache } from "@opennextjs/cloudflare/overrides/incremental-cache/regional-cache";
import kvIncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/kv-incremental-cache";
// ...

// With regional cache enabled:
export default defineCloudflareConfig({
incrementalCache: withRegionalCache(r2IncrementalCache, {
mode: "long-lived",
shouldLazilyUpdateOnCacheHit: true,
}),
// ...
});

// Without regional cache:
export default defineCloudflareConfig({
incrementalCache: r2IncrementalCache,
incrementalCache: kvIncrementalCache,
// ...
});
```
@@ -172,14 +170,14 @@ You will also need to add some bindings to your `wrangler.jsonc` file.
"bindings": [
{
"name": "NEXT_CACHE_DO_QUEUE",
"class_name": "DurableObjectQueueHandler"
"class_name": "DOQueueHandler"
}
]
},
"migrations": [
{
"tag": "v1",
"new_sqlite_classes": ["DurableObjectQueueHandler"]
"new_sqlite_classes": ["DOQueueHandler"]
}
],
```
@@ -189,7 +187,7 @@ You can customize the behavior of the queue with environment variables (see the sketch after this list):
- The max number of revalidations that can be processed by an instance of the Durable Object at the same time (`NEXT_CACHE_DO_QUEUE_MAX_REVALIDATION`)
- The max time in milliseconds that a revalidation can take before being considered as failed (`NEXT_CACHE_DO_QUEUE_REVALIDATION_TIMEOUT_MS`)
- The amount of time after which a revalidation will be attempted again if it failed. If it fails again, it will back off exponentially until it reaches the max retry interval (`NEXT_CACHE_DO_QUEUE_RETRY_INTERVAL_MS`)
- The maximum number of attempts that can be made to revalidate a path (`NEXT_CACHE_DO_QUEUE_MAX_NUM_REVALIDATIONS`)
- The maximum number of attempts that can be made to revalidate a path (`NEXT_CACHE_DO_QUEUE_MAX_RETRIES`)
- Disable SQLite for this durable object. It should only be used if your incremental cache is not eventually consistent (`NEXT_CACHE_DO_QUEUE_DISABLE_SQLITE`)
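As a sketch of how these could be wired up, the variables above can be defined under `vars` in your wrangler configuration; the values below are illustrative placeholders, not recommendations.

```jsonc
// wrangler.jsonc
{
  // ...
  "vars": {
    // Illustrative values only; tune them for your workload.
    "NEXT_CACHE_DO_QUEUE_MAX_REVALIDATION": "10",
    "NEXT_CACHE_DO_QUEUE_REVALIDATION_TIMEOUT_MS": "10000",
    "NEXT_CACHE_DO_QUEUE_RETRY_INTERVAL_MS": "1000",
    "NEXT_CACHE_DO_QUEUE_MAX_RETRIES": "5",
  },
}
```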

<Callout>
@@ -216,15 +214,15 @@ To use on-demand revalidation, you should also follow the [ISR setup steps](#inc
You can also skip this step if your app doesn't use `revalidateTag` or `revalidatePath`.
</Callout>

There are 3 different options to choose from for the tag cache: `d1NextTagCache`, `doShardedTagCache` and `d1TagCache`.
There are two options to choose from for the tag cache: `d1NextTagCache` and `doShardedTagCache`.
Which one to choose should be based on two key factors:

1. **Expected Load**: Consider the volume of traffic or data you anticipate.
2. **Usage of** `revalidateTag` / `revalidatePath`: Evaluate how frequently these features will be utilized.

If either of these factors is significant, opting for a sharded database is recommended. Additionally, incorporating a regional cache can further enhance performance.

<Tabs items={["D1NextTagCache", "doShardedTagCache", "D1TagCache"]}>
<Tabs items={["D1NextTagCache", "doShardedTagCache"]}>
<Tabs.Tab>
##### 1. Create a D1 database and Service Binding

@@ -256,22 +254,43 @@ The D1 tag cache requires a `revalidations` table that tracks On-Demand revalidations.

##### 3. Configure the cache

In your project's OpenNext config, enable the KV cache and set up a queue (see above). The queue will send a revalidation request to a page when needed, but it will not dedupe requests.
In your project's OpenNext config, enable the R2 cache and set up a queue (see above). The queue will send a revalidation request to a page when needed, but it will not dedupe requests.

```ts
// open-next.config.ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import kvIncrementalCache from "@opennextjs/cloudflare/kv-cache";
import d1NextTagCache from "@opennextjs/cloudflare/d1-next-tag-cache";
import memoryQueue from "@opennextjs/cloudflare/memory-queue";
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";
import d1NextTagCache from "@opennextjs/cloudflare/overrides/tag-cache/d1-next-tag-cache";
import doQueue from "@opennextjs/cloudflare/overrides/queue/do-queue";

export default defineCloudflareConfig({
incrementalCache: kvIncrementalCache,
incrementalCache: r2IncrementalCache,
tagCache: d1NextTagCache,
queue: memoryQueue,
queue: doQueue,
});
```

##### 4. Initialise the cache during deployments

For the cache to be properly initialised with the build-time revalidation data, you need to run a command as part of each deployment so that the cache is populated with that build's data.

To populate remote bindings and deploy your application at the same time, you can use the `deploy` command. Similarly, the `preview` command will populate your local bindings and start a Wrangler dev server.

```sh
# Populate remote and deploy.
opennextjs-cloudflare deploy

# Populate local and start dev server.
opennextjs-cloudflare preview
```

You can also populate the cache on its own, without any other steps, with the `populateCache` command.

```sh
# The target is passed as an option, either `local` or `remote`.
opennextjs-cloudflare populateCache local
```

</Tabs.Tab>

<Tabs.Tab>
@@ -286,18 +305,18 @@ The service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.
"bindings": [
{
"name": "NEXT_CACHE_DO_QUEUE",
"class_name": "DurableObjectQueueHandler",
"class_name": "DOQueueHandler",
},
{
"name": "NEXT_CACHE_DO_SHARDED",
"name": "NEXT_TAG_CACHE_DO_SHARDED",
"class_name": "DOShardedTagCache",
},
],
},
"migrations": [
{
"tag": "v1",
"new_sqlite_classes": ["DurableObjectQueueHandler", "DOShardedTagCache"],
"new_sqlite_classes": ["DOQueueHandler", "DOShardedTagCache"],
},
],
"services": [
@@ -332,87 +351,10 @@ doShardedTagCache takes the following options (a configuration sketch follows the list):
- `baseShardSize` - The number of shards to use for the cache. The more shards you have, the more evenly the cache will be distributed across them. The default is 4. Soft tags (internal Next.js tags used for `revalidatePath`) and hard tags (the ones you define in your app) are split into different shards
- `regionalCache` - Whether to use a regional cache in front of the Durable Objects. The default is `false`. This option is useful when you want to reduce the load on the Durable Objects
- `regionalCacheTtlSec` - The TTL for the regional cache. The default is 5 seconds. Increasing this value will increase the time it takes for the cache to be invalidated across regions
- `enableShardReplication`: Enable replicating the Shard. Shard replication will duplicate each shards into replicas to spread the load even more
- `shardReplicationOptions.numberOfSoftReplicas`: Number of replicas for the soft tag shards
- `shardReplicationOptions.numberOfHardReplicas`: Number of replicas for the hard tag shards
- `shardReplication`: Enable shard replication. Replication duplicates each shard into replicas to spread the load even further
- `shardReplication.numberOfSoftReplicas`: Number of replicas for the soft tag shards
- `shardReplication.numberOfHardReplicas`: Number of replicas for the hard tag shards
- `maxWriteRetries`: The number of retries to perform when writing tags
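A minimal configuration sketch tying these options together is shown below. The import path is assumed to follow the same `overrides/...` layout as the other examples on this page, and the option values are illustrative rather than recommended defaults.

```ts
// open-next.config.ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";
// Path assumed to mirror the other overrides used in this guide.
import doShardedTagCache from "@opennextjs/cloudflare/overrides/tag-cache/do-sharded-tag-cache";
import doQueue from "@opennextjs/cloudflare/overrides/queue/do-queue";

export default defineCloudflareConfig({
  incrementalCache: r2IncrementalCache,
  queue: doQueue,
  // Illustrative values; tune shard sizing and replication for your traffic.
  tagCache: doShardedTagCache({
    baseShardSize: 12,
    regionalCache: true,
    regionalCacheTtlSec: 5,
    shardReplication: {
      numberOfSoftReplicas: 4,
      numberOfHardReplicas: 2,
    },
  }),
});
```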

</Tabs.Tab>

<Tabs.Tab>

<Callout>
The `d1TagCache` is not recommended for production use, as it does not scale well with the number of tags.
</Callout>

##### 1. Create a D1 database and Service Binding

The binding name used in your app's worker is `NEXT_TAG_CACHE_D1`. The service binding should be a self reference to your worker where `<WORKER_NAME>` is the name in your wrangler configuration file.

```jsonc
// wrangler.jsonc
{
// ...
"name": "<WORKER_NAME>",
"d1_databases": [
{
"binding": "NEXT_TAG_CACHE_D1",
"database_id": "<DATABASE_ID>",
"database_name": "<DATABASE_NAME>",
},
],
"services": [
{
"binding": "WORKER_SELF_REFERENCE",
"service": "<WORKER_NAME>",
},
],
}
```

The D1 database uses two tables, created when initialising the cache:

- the "tags" table keeps a record of the tag/path mappings
- the "revalidations" table tracks revalidation times

##### 2. Configure the cache

In your project's OpenNext config, enable the KV cache and set up a queue. The queue will send a revalidation request to a page when needed, but it will not dedupe requests.

```ts
// open-next.config.ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import kvIncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/kv-incremental-cache";
import d1TagCache from "@opennextjs/cloudflare/overrides/tag-cache/d1-tag-cache";
import memoryQueue from "@opennextjs/cloudflare/overrides/queue/memory-queue";

export default defineCloudflareConfig({
incrementalCache: kvIncrementalCache,
tagCache: d1TagCache,
queue: memoryQueue,
});
```

##### 3. Initialise the cache during deployments

In order for the cache to be properly initialised with the build-time revalidation data, you need to run a command as part of your deploy step. This should be run as part of each deployment to ensure that the cache is being populated with each build's data.

To populate remote bindings and deploy your application at the same time, you can use the `deploy` command. Similarly, the `preview` command will populate your local bindings and start a Wrangler dev server.

```sh
# Populate remote and deploy.
opennextjs-cloudflare deploy

# Populate local and start dev server.
opennextjs-cloudflare preview
```

It is possible to only populate the cache without any other steps with the `populateCache` command.

```sh
# The target is passed as an option, either `local` or `remote`.
opennextjs-cloudflare populateCache local
```

</Tabs.Tab>
</Tabs>
7 changes: 7 additions & 0 deletions pages/cloudflare/former-releases/0.6/_meta.json
@@ -0,0 +1,7 @@
{
"index": "Overview",
"get-started": "",
"bindings": "",
"caching": "",
"examples": ""
}