diff --git a/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md
index ecdec6d55..c85b8a807 100644
--- a/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md
+++ b/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md
@@ -380,11 +380,7 @@ kubectl apply -f cluster.yaml
## Three Node Secure Cluster (using LetsEncrypt)
Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster
-is not supported.
-
-The recommended workaround is to combine [self-signed certificates within the cluster](
-#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS
-termination using the LetsEncrypt certificate.
+is not supported in v1.0.0; please upgrade to v1.4.0.
## Viewing Deployments
diff --git a/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md
index f20add6ff..b9a98aced 100644
--- a/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md
+++ b/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md
@@ -445,11 +445,7 @@ kubectl apply -f cluster.yaml
## Three Node Secure Cluster (using LetsEncrypt)
Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster
-is not supported.
-
-The recommended workaround is to combine [self-signed certificates within the cluster](
-#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS
-termination using the LetsEncrypt certificate.
+is not supported in v1.1.0; please upgrade to v1.4.0.
## Deploying With Scheduling Constraints
diff --git a/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md
index f20add6ff..b9a98aced 100644
--- a/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md
+++ b/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md
@@ -445,11 +445,7 @@ kubectl apply -f cluster.yaml
## Three Node Secure Cluster (using LetsEncrypt)
Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster
-is not supported.
-
-The recommended workaround is to combine [self-signed certificates within the cluster](
-#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS
-termination using the LetsEncrypt certificate.
+is not supported in v1.2.0; please upgrade to v1.4.0.
## Deploying With Scheduling Constraints
diff --git a/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md
index 4eea0958e..8002447f5 100644
--- a/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md
+++ b/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md
@@ -441,11 +441,7 @@ kubectl apply -f cluster.yaml
## Three Node Secure Cluster (using LetsEncrypt)
Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster
-is not supported.
-
-The recommended workaround is to combine [self-signed certificates within the cluster](
-#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS
-termination using the LetsEncrypt certificate.
+is not supported in v1.3.1; please upgrade to v1.4.0.
## Deploying With Scheduling Constraints
diff --git a/docs/server/kubernetes-operator/v1.4.0/README.md b/docs/server/kubernetes-operator/v1.4.0/README.md
new file mode 100644
index 000000000..9326c4843
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/README.md
@@ -0,0 +1,5 @@
+---
+# title is for breadcrumb and sidebar nav
+title: Kubernetes Operator v1.4.0
+order: 1
+---
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md b/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md
new file mode 100644
index 000000000..620233e1a
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md
@@ -0,0 +1,68 @@
+---
+order: 1
+dir:
+ text: "Getting started"
+ link: true
+ order: 1
+---
+
+Welcome to the **KurrentDB Kubernetes Operator** guide. In this guide, we’ll refer to the KurrentDB Kubernetes Operator simply as “the Operator.” Use the Operator to simplify backup, scaling, and upgrades of KurrentDB clusters on Kubernetes.
+
+:::important
+The Operator is an Enterprise-only feature; please [contact us](https://www.kurrent.io/contact) for more information.
+:::
+
+## Why run KurrentDB on Kubernetes?
+
+Kubernetes is the modern enterprise standard for deploying containerized applications at scale. The Operator streamlines deployment and management of KurrentDB clusters.
+
+## Features
+
+* Deploy single-node or multi-node clusters
+* Back up and restore clusters
+* Automate backups with a schedule and retention policies
+* Perform rolling upgrades and update configurations
+
+### New in 1.4.0
+
+* Support configurable traffic strategies for server-to-server and client-to-server traffic. This
+ enables, for example, the use of LetsEncrypt certificates without creating Ingresses. See
+ [Traffic Strategies][ts] for details.
+* Support backup scheduling and retention policies. There is a new [KurrentDBBackupSchedule][bs]
+ CRD with a CronJob-like syntax. There are also two mechanisms for configuring retention policies:
+ a `.keep` count on `KurrentDBBackupSchedule`, and a new `.ttl` on `KurrentDBBackup`.
+* Support standalone read-only replicas pointed at a remote cluster. This enables advanced
+ topologies such as having your quorum nodes in one region and a read-only replica in a distant
+ region. See [Deploying Standalone Read-Only Replicas][ror] for an example.
+* Support template strings in some extra metadata for child resources of the `KurrentDB` object.
+ This allows you, for example, to annotate each of the automatically created LoadBalancers with unique
+ external-dns annotations. See [KurrentDBExtraMetadataSpec][em] for details.
+
+[ts]: ../operations/advanced-networking.md#traffic-strategy-options
+[bs]: resource-types.md#kurrentdbbackupschedulespec
+[ror]: ../operations/database-deployment.md#deploying-standalone-read-only-replicas
+[em]: resource-types.md#kurrentdbextrametadataspec
+
+## Supported KurrentDB Versions
+
+The Operator supports running the following major versions of KurrentDB:
+- v25.x
+- v24.x
+- v23.10+
+
+## Supported Hardware Architectures
+
+The Operator is packaged for the following hardware architectures:
+- x86\_64
+- arm64
+
+## Technical Support
+
+For support questions, please [contact us](https://www.kurrent.io/contact).
+
+## First Steps
+
+Ready to install? Head over to the [installation](installation.md) section.
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/deployments-list.png b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/deployments-list.png
new file mode 100644
index 000000000..00a310c8d
Binary files /dev/null and b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/deployments-list.png differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/logs.png b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/logs.png
new file mode 100644
index 000000000..fa207c732
Binary files /dev/null and b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/logs.png differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/namespace-list.png b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/namespace-list.png
new file mode 100644
index 000000000..75b948c65
Binary files /dev/null and b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/namespace-list.png differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/pods-list.png b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/pods-list.png
new file mode 100644
index 000000000..5161e42a5
Binary files /dev/null and b/docs/server/kubernetes-operator/v1.4.0/getting-started/images/install/pods-list.png differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md b/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md
new file mode 100644
index 000000000..75aff8b4b
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md
@@ -0,0 +1,209 @@
+---
+title: Installation
+order: 2
+---
+
+This section covers the various aspects of installing the Operator.
+
+::: important
+The Operator is an Enterprise-only feature; please [contact us](https://www.kurrent.io/contact) for more information.
+:::
+
+## Prerequisites
+
+::: tip
+To get the best out of this guide, a basic understanding of [Kubernetes concepts](https://kubernetes.io/docs/concepts/) is essential.
+:::
+
+* A Kubernetes cluster running any [non-EOL version of Kubernetes](https://kubernetes.io/releases/).
+* Permission to create resources, deploy the Operator and install CRDs in the target cluster.
+* The following CLI tools installed, on your shell’s `$PATH`, with `$KUBECONFIG` pointing to your cluster:
+ * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)
+ * [k9s](https://k9scli.io/topics/install/)
+ * [Helm 3 CLI](https://helm.sh/docs/intro/install/)
+* A valid Operator license. Please [contact us](https://www.kurrent.io/contact) for more information.
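+
+To confirm the CLI tools are available on your shell's `$PATH`, you can run:
+
+```bash
+kubectl version --client
+k9s version
+helm version
+```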
+
+## Configure Helm Repository
+
+Add the Kurrent Helm repository to your local environment:
+
+```bash
+helm repo add kurrent-latest \
+ 'https://packages.kurrent.io/basic/kurrent-latest/helm/charts/'
+```
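+
+To verify the repository was added and to browse the available chart versions, you can run:
+
+```bash
+helm repo update
+helm search repo kurrent-latest/kurrentdb-operator --versions
+```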
+
+## Install Custom Resource Definitions (CRDs)
+
+The Operator uses Custom Resource Definitions (CRDs) to extend Kubernetes. You can install them automatically with Helm or manually.
+
+The following resource types are supported:
+- [KurrentDB](resource-types.md#kurrentdbspec)
+- [KurrentDBBackup](resource-types.md#kurrentdbbackupspec)
+- [KurrentDBBackupSchedule](resource-types.md#kurrentdbbackupschedulespec)
+
+Since CRDs are managed globally by Kubernetes, special care must be taken to install them.
+
+### Automatic Install
+
+It's recommended to install and manage the CRDs using Helm. See [Deployment Modes](#deployment-modes) for more information.
+
+### Manual Install
+
+If you prefer to install CRDs yourself:
+
+```bash
+# Download the kurrentdb-operator Helm chart
+helm pull kurrent-latest/kurrentdb-operator --version 1.4.0 --untar
+# Install the CRDs
+kubectl apply -f kurrentdb-operator/templates/crds
+```
+*Expected Output*:
+```
+customresourcedefinition.apiextensions.k8s.io/kurrentdbbackups.kubernetes.kurrent.io created
+customresourcedefinition.apiextensions.k8s.io/kurrentdbbackupschedules.kubernetes.kurrent.io created
+customresourcedefinition.apiextensions.k8s.io/kurrentdbs.kubernetes.kurrent.io created
+```
+
+After installing CRDs manually, you should include the `--set crds.enabled=false` flag for the `helm
+install` command, and include one of `--set crds.enabled=false`, `--reuse-values`, or
+`--reset-then-reuse-values` for the `helm upgrade` command.
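+
+For example, an install that skips Helm-managed CRDs might look like this (mirroring the
+cluster-wide install command shown below, with `crds.enabled=false`):
+
+```bash
+helm install kurrentdb-operator kurrent-latest/kurrentdb-operator \
+  --version 1.4.0 \
+  --namespace kurrent \
+  --create-namespace \
+  --set crds.enabled=false \
+  --set-file operator.license.key=/path/to/license.key \
+  --set-file operator.license.file=/path/to/license.lic
+```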
+
+::: caution
+If you set the value of `crds.keep` to `false` (the default is `true`), helm upgrades and rollbacks
+can result in data loss. If `crds.keep` is `false` and `crds.enabled` transitions from `true` to
+`false` during an upgrade or rollback, the CRDs will be removed from the cluster, deleting all
+`KurrentDBs` and `KurrentDBBackups` and their associated child resources, including the PVCs and
+VolumeSnapshots containing your data!
+:::
+
+## Deployment Modes
+
+The Operator can be scoped to track Kurrent resources across **all** or **specific** namespaces.
+
+### Cluster-wide
+
+In cluster-wide mode, the Operator tracks Kurrent resources across **all** namespaces and requires a `ClusterRole`, which Helm creates automatically.
+
+To deploy the Operator in this mode, run:
+
+```bash
+helm install kurrentdb-operator kurrent-latest/kurrentdb-operator \
+ --version 1.4.0 \
+ --namespace kurrent \
+ --create-namespace \
+ --set crds.enabled=true \
+ --set-file operator.license.key=/path/to/license.key \
+ --set-file operator.license.file=/path/to/license.lic
+```
+
+This command:
+- Deploys the Operator into the `kurrent` namespace (feel free to modify this namespace).
+- Creates the namespace (if it already exists, leave out the `--create-namespace` flag).
+- Deploys CRDs (this can be skipped by changing to `--set crds.enabled=false`).
+- Applies the Operator license.
+- Deploys a new Helm release called `kurrentdb-operator` in the `kurrent` namespace.
+
+*Expected Output*:
+```
+NAME: kurrentdb-operator
+LAST DEPLOYED: Thu Mar 20 14:51:42 2025
+NAMESPACE: kurrent
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+Once installed, navigate to the [deployment validation](#deployment-validation) section.
+
+### Specific Namespace(s)
+
+In this mode, the Operator will track Kurrent resources across **specific** namespaces. This mode reduces the level of permissions required. The Operator will create a `Role` in each namespace that it is expected to manage.
+
+To deploy the Operator in this mode, the following command can be used:
+
+```bash
+helm install kurrentdb-operator kurrent-latest/kurrentdb-operator \
+ --version 1.4.0 \
+ --namespace kurrent \
+ --create-namespace \
+ --set crds.enabled=true \
+ --set-file operator.license.key=/path/to/license.key \
+ --set-file operator.license.file=/path/to/license.lic \
+ --set operator.namespaces='{kurrent, foo}'
+```
+
+Here's what the command does:
+- Sets the namespace where the Operator will be deployed, i.e. `kurrent` (feel free to change this)
+- Creates the namespace (if it already exists, leave out the `--create-namespace` flag)
+- Deploys CRDs (this can be skipped by changing to `--set crds.enabled=false`)
+- Configures the Operator license
+- Configures the Operator to operate on resources in the namespaces `kurrent` and `foo`
+- Deploys a new Helm release called `kurrentdb-operator` in the `kurrent` namespace
+
+::: important
+Make sure the namespaces listed as part of the `operator.namespaces` parameter already exist before running the command.
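+For example, to create the `foo` namespace ahead of time:
+
+```bash
+kubectl create namespace foo
+```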
+:::
+
+*Expected Output*:
+```
+NAME: kurrentdb-operator
+LAST DEPLOYED: Thu Mar 20 14:51:42 2025
+NAMESPACE: kurrent
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+Once installed, navigate to the [deployment validation](#deployment-validation) section.
+
+#### Augmenting Namespaces
+
+The Operator deployment can be updated to adjust which namespaces are watched. For example, in addition to the `kurrent` and `foo` namespaces (from the example above), a new namespace `bar` may also be watched using the command below:
+
+```bash
+helm upgrade kurrentdb-operator kurrent-latest/kurrentdb-operator \
+ --version 1.4.0 \
+ --namespace kurrent \
+ --reuse-values \
+ --set operator.namespaces='{kurrent,foo,bar}'
+```
+
+This will trigger:
+- a new `Role` to be created in the `bar` namespace
+- a rolling restart of the Operator to pick up the new configuration changes
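+
+You can confirm that the new `Role` exists in the `bar` namespace (the role's name may vary by
+chart version):
+
+```bash
+kubectl get roles --namespace bar
+```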
+
+## Deployment Validation
+
+Using the k9s tool, navigate to the namespace listing with the command `:namespaces`. It should show the namespace where the Operator was deployed:
+
+![Namespace listing](images/install/namespace-list.png)
+
+After stepping into the `kurrent` namespace, type `:deployments` in the k9s console. It should show the following:
+
+![Deployments listing](images/install/deployments-list.png)
+
+Pods may also be viewed using the `:pods` command, for example:
+
+![Pods listing](images/install/pods-list.png)
+
+Pressing the `Return` key on the selected Operator pod lets you drill down through the container hosted in the pod, and finally to the logs:
+
+![Operator logs](images/install/logs.png)
+
+
+## Upgrading an Installation
+
+The Operator can be upgraded using the following `helm` commands:
+
+```bash
+helm repo update
+helm upgrade kurrentdb-operator kurrent-latest/kurrentdb-operator \
+ --namespace kurrent \
+ --version {version} \
+ --reset-then-reuse-values
+```
+
+Here's what these commands do:
+- Refresh the local Helm repository index
+- Locate an existing operator installation in namespace `kurrent`
+- Select the target upgrade version `{version}` e.g. `1.4.0`
+- Perform the upgrade, preserving values that were set during installation
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md b/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md
new file mode 100644
index 000000000..645b61721
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md
@@ -0,0 +1,290 @@
+---
+title: Supported Resource Types
+order: 3
+---
+
+The Operator supports the following resource types (known as `Kind`s):
+- `KurrentDB`
+- `KurrentDBBackup`
+- `KurrentDBBackupSchedule`
+
+## KurrentDB
+
+This resource type is used to define a database deployment.
+
+### API
+
+#### KurrentDBSpec
+
+| Field | Required | Description |
+|---------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------|
+| `replicas` _integer_ | Yes | Number of nodes in a database cluster. May be 1 or 3, or 0 for [standalone read-only replicas][ror]. |
+| `image` _string_ | Yes | KurrentDB container image URL |
+| `resources` _[ResourceRequirements][d1]_ | No | Database container resource limits and requests |
+| `storage` _[PersistentVolumeClaim][d2]_ | Yes | Persistent volume claim settings for the underlying data volume |
+| `network` _[KurrentDBNetwork][d3]_ | Yes | Defines the network configuration to use with the database |
+| `configuration` _yaml_ | No | Additional configuration to use with the database, see [below](#configuring-kurrent-db) |
+| `sourceBackup` _string_ | No | Backup name to restore a cluster from |
+| `security` _[KurrentDBSecurity][d4]_ | No | Security configuration to use for the database. This is optional, if not specified the cluster will be created without security enabled. |
+| `licenseSecret` _[SecretKeySelector][d5]_ | No | A secret that contains the Enterprise license for the database |
+| `constraints` _[KurrentDBConstraints][d6]_ | No | Scheduling constraints for the Kurrent DB pod. |
+| `readOnlyReplicas` _[KurrentDBReadOnlyReplicasSpec][d7]_ | No | Read-only replica configuration for the Kurrent DB cluster. |
+| `extraMetadata` _[KurrentDBExtraMetadataSpec][d8]_ | No | Additional annotations and labels for child resources. |
+| `quorumNodes` _string array_ | No | A list of endpoints (in `host:port` notation) for reaching the quorum nodes when `replicas` is zero; see [standalone read-only replicas][ror]. |
+
+[d1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcerequirements-v1-core
+[d2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaimspec-v1-core
+[d3]: #kurrentdbnetwork
+[d4]: #kurrentdbsecurity
+[d5]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#secretkeyselector-v1-core
+[d6]: #kurrentdbconstraints
+[d7]: #kurrentdbreadonlyreplicasspec
+[d8]: #kurrentdbextrametadataspec
+[ror]: ../operations/database-deployment.md#deploying-standalone-read-only-replicas
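+
+For reference, here is a minimal sketch of a `KurrentDB` resource covering the required fields
+(values are illustrative; see [Example Deployments](../operations/database-deployment.md) for
+complete, working examples):
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+  name: kurrentdb-cluster
+  namespace: kurrent
+spec:
+  replicas: 1
+  image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 512Mi
+  network:
+    domain: kurrent.test
+    loadBalancer:
+      enabled: true
+```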
+
+#### KurrentDBReadOnlyReplicasSpec
+
+Other than `replicas`, each of the fields in `KurrentDBReadOnlyReplicasSpec` defaults to the corresponding value from the main `KurrentDBSpec`.
+
+| Field | Required | Description |
+|----------------------------------------------|----------|------------------------------------------------------------------|
+| `replicas` _integer_ | No | Number of read-only replicas in the cluster. Defaults to zero. |
+| `resources` _[ResourceRequirements][r1]_ | No | Database container resource limits and requests. |
+| `storage` _[PersistentVolumeClaim][r2]_ | No | Persistent volume claim settings for the underlying data volume. |
+| `configuration` _yaml_ | No | Additional configuration to use with the database. |
+| `constraints` _[KurrentDBConstraints][r3]_ | No | Scheduling constraints for the Kurrent DB pod. |
+
+[r1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcerequirements-v1-core
+[r2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaimspec-v1-core
+[r3]: #kurrentdbconstraints
+
+#### KurrentDBConstraints
+
+| Field | Required | Description |
+|----------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------|
+| `nodeSelector` _yaml_ | No | Identifies nodes that may be considered when scheduling the Kurrent DB pod. |
+| `affinity` _[Affinity][c1]_ | No | The node affinity, pod affinity, and pod anti-affinity for scheduling the Kurrent DB pod. |
+| `tolerations` _list of [Toleration][c2]_ | No | The tolerations for scheduling the Kurrent DB pod. |
+| `topologySpreadConstraints` _list of [TopologySpreadConstraint][c3]_ | No | The topology spread constraints for scheduling the Kurrent DB pod. |
+
+[c1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#affinity-v1-core
+[c2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core
+[c3]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#topologyspreadconstraint-v1-core
+
+#### KurrentDBExtraMetadataSpec
+
+| Field | Required | Description |
+|----------------------------------------------------|----------|---------------------------------------------------------------------|
+| `all` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for all child resource types. |
+| `configMaps` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for ConfigMaps. |
+| `statefulSets` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for StatefulSets. |
+| `pods` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for Pods. |
+| `persistentVolumeClaims` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for PersistentVolumeClaims. |
+| `headlessServices` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for the per-cluster headless Services. |
+| `headlessPodServices` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for the per-pod headless Services. |
+| `loadBalancers` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for LoadBalancer-type Services. |
+
+[m1]: #extrametadataspec
+
+Note that select kinds of extra metadata support template expansion to allow multiple instances of
+a child resource to be distinguished from one another. In particular, `ConfigMaps`, `StatefulSets`,
+and `HeadlessServices` support "per-node-kind" template expansions:
+- `{name}` expands to KurrentDB.metadata.name
+- `{namespace}` expands to KurrentDB.metadata.namespace
+- `{domain}` expands to the KurrentDBNetwork.domain
+- `{nodeTypeSuffix}` expands to `""` for a primary node or `"-replica"` for a replica node
+
+Additionally, `HeadlessPodServices` and `LoadBalancers` support "per-pod" template expansions:
+- `{name}` expands to KurrentDB.metadata.name
+- `{namespace}` expands to KurrentDB.metadata.namespace
+- `{domain}` expands to the KurrentDBNetwork.domain
+- `{nodeTypeSuffix}` expands to `""` for a primary node or `"-replica"` for a replica node
+- `{podName}` expands to the name of the pod corresponding to the resource
+- `{podOrdinal}` expands to the ordinal assigned to the pod corresponding to the resource
+
+Notably, `Pods` and `PersistentVolumeClaims` do not support any template expansions, due to how
+`StatefulSets` work.
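+
+As a sketch of the external-dns use case mentioned in the release notes (the annotation key below
+comes from the external-dns project and is an assumption, not something the Operator defines):
+
+```yaml
+spec:
+  extraMetadata:
+    loadBalancers:
+      annotations:
+        # '{podName}' and '{domain}' are expanded per pod, as described above
+        external-dns.alpha.kubernetes.io/hostname: '{podName}.{domain}'
+```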
+
+#### ExtraMetadataSpec
+
+| Field | Required | Description |
+|-------------------------|-----------|-----------------------------------|
+| `labels` _object_ | No | Extra labels for a resource. |
+| `annotations` _object_ | No | Extra annotations for a resource. |
+
+#### KurrentDBNetwork
+
+| Field | Required | Description |
+|----------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------|
+| `domain` _string_ | Yes | Domain used for external DNS e.g. advertised address exposed in the gossip state |
+| `loadBalancer` _[KurrentDBLoadBalancer][n1]_ | Yes | Defines a load balancer to use with the database |
+| `fqdnTemplate` _string_ | No | The template string used to define the external advertised address of a node |
+| `internodeTrafficStrategy` _string_ | No | How servers dial each other. One of `"ServiceName"` (default), `"FQDN"`, or `"SplitDNS"`. See [details][n2]. |
+| `clientTrafficStrategy` _string_ | No | How clients dial servers. One of `"ServiceName"` or `"FQDN"` (default). See [details][n2]. |
+| `splitDNSExtraRules` _list of [DNSRule][n3]_ | No | Advanced configuration for when `internodeTrafficStrategy` is set to `"SplitDNS"`. |
+
+[n1]: #kurrentdbloadbalancer
+[n2]: ../operations/advanced-networking.md#traffic-strategy-options
+[n3]: #dnsrule
+
+Note that `fqdnTemplate` supports the following expansions:
+- `{name}` expands to KurrentDB.metadata.name
+- `{namespace}` expands to KurrentDB.metadata.namespace
+- `{domain}` expands to the KurrentDBNetwork.domain
+- `{nodeTypeSuffix}` expands to `""` for a primary node or `"-replica"` for a replica node
+- `{podName}` expands to the name of the pod
+
+When `fqdnTemplate` is empty, it defaults to `{podName}.{name}{nodeTypeSuffix}.{domain}`.
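+
+For example, a `KurrentDB` named `mydb` with domain `kurrent.test` advertises its first primary
+node as `mydb-0.mydb.kurrent.test` under the default template.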
+
+#### DNSRule
+
+| Field | Required | Description |
+|--------------------|----------|----------------------------------------------------------------------------------------|
+| `host` _string_ | Yes | A host name that should be intercepted. |
+| `result` _string_ | Yes | An IP address to return, or another hostname to look up for the final IP address. |
+| `regex` _boolean_ | No | Whether `host` and `result` should be treated as regex patterns. Defaults to `false`. |
+
+Note that when `regex` is `true`, the regex support is provided by the [go standard regex library](
+https://pkg.go.dev/regexp/syntax), and [referencing captured groups](
+https://pkg.go.dev/regexp#Regexp.Expand) differs from some other regex implementations. For
+example, to redirect lookups matching the pattern `*.my-db.my-namespace.svc.cluster.local` to
+`*.my-domain.com`, you could use the following DNS rule:
+
+```yaml
+host: ([a-z0-9-]*)\.my-db\.my-namespace\.svc\.cluster\.local
+result: ${1}.my-domain.com
+regex: true
+```
+
+#### KurrentDBLoadBalancer
+
+| Field | Required | Description |
+|------------------------------|----------|--------------------------------------------------------------------------------|
+| `enabled` _boolean_ | Yes | Determines if a load balancer should be deployed for each node |
+| `allowedIps` _string array_ | No | List of IP ranges allowed by the load balancer (default will allow all access) |
+
+#### KurrentDBSecurity
+
+| Field | Required | Description |
+|------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------|
+| `certificateReservedNodeCommonName` _string_ | No | Common name for the TLS certificate (this maps directly to the database property `CertificateReservedNodeCommonName`) |
+| `certificateAuthoritySecret` _[CertificateSecret](#certificatesecret)_ | No | Secret containing the CA TLS certificate. |
+| `certificateSecret` _[CertificateSecret](#certificatesecret)_ | Yes | Secret containing the TLS certificate to use. |
+| `certificateSubjectName` _string_ | No | Deprecated field. The value of this field is always ignored. |
+
+#### CertificateSecret
+
+| Field | Required | Description |
+|---------------------------|----------|------------------------------------------------------------------|
+| `name` _string_ | Yes | Name of the secret holding the certificate details |
+| `keyName` _string_ | Yes | Key within the secret containing the TLS certificate |
+| `privateKeyName` _string_ | No | Key within the secret containing the TLS certificate private key |
+
+
+## KurrentDBBackup
+
+This resource type is used to define a backup for an existing database deployment.
+
+:::important
+Resources of this type must be created in the same namespace as the database cluster being backed up.
+:::
+
+### API
+
+#### KurrentDBBackupSpec
+
+| Field | Required | Description |
+|----------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------|
+| `clusterName` _string_ | Yes | Name of the source database cluster |
+| `nodeName` _string_ | No | Name of the specific node within the database cluster to back up. If unspecified, the leader is used. |
+| `volumeSnapshotClassName` _string_ | Yes | The name of the underlying volume snapshot class to use. |
+| `extraMetadata` _[KurrentDBBackupExtraMetadataSpec][b1]_ | No | Additional annotations and labels for child resources. |
+| `ttl` _string_ | No | A time-to-live for this backup. If unspecified, the TTL is treated as infinite. |
+
+[b1]: #kurrentdbbackupextrametadataspec
+
+The `ttl` may be expressed in years (`y`), weeks (`w`), days (`d`), hours (`h`), or seconds
+(`s`), or as a combination like `1d12h`.
+
+#### KurrentDBBackupExtraMetadataSpec
+
+| Field | Required | Description |
+|------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------|
+| `all` _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for all child resource types (currently only VolumeSnapshots). |
+| `volumeSnapshots` _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for VolumeSnapshots. |
+
+## KurrentDBBackupSchedule
+
+This resource type is used to define a schedule for creating database backups, along with retention policies.
+
+#### KurrentDBBackupScheduleSpec
+
+| Field | Required | Description |
+|------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------|
+| `schedule` _string_ | Yes | A CronJob-style schedule. See [Writing a CronJob Spec][s2]. |
+| `timeZone` _string_ | No | A timezone specification. Defaults to `Etc/UTC`. |
+| `template` _[KurrentDBBackup][s1]_ | Yes | A `KurrentDBBackup` template. |
+| `keep` _integer_ | No | The maximum number of complete backups this schedule will accumulate before it prunes the oldest ones. If unset, there is no limit. |
+| `suspend` _boolean_ | No | If `true`, suspends the creation of new backups from this schedule. Defaults to `false`. |
+
+[s1]: #kurrentdbbackupspec
+[s2]: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#writing-a-cronjob-spec
+
+Note that the only metadata allowed in `template.metadata` is `name`, `labels`, and `annotations`.
+If `name` is provided, it will be extended with an index like `my-name-1` when creating backups;
+otherwise, backup names are based on the name of the schedule resource.
+
+## Configuring Kurrent DB
+
+The [`KurrentDB.spec.configuration` yaml field](#kurrentdbspec) may contain any valid configuration values for Kurrent
+DB. However, some values may be unnecessary, as the Operator provides some defaults, while other
+values may be ignored, as the Operator may override them.
+
+The Operator-defined default configuration values, which may be overridden by the user's
+`KurrentDB.spec.configuration` are:
+
+| Default Field | Default Value |
+|------------------------------|---------------|
+| DisableLogFile | true |
+| EnableAtomPubOverHTTP | true |
+| Insecure | false |
+| PrepareTimeoutMs | 3000 |
+| CommitTimeoutMs | 3000 |
+| GossipIntervalMs | 2000 |
+| GossipTimeoutMs | 5000 |
+| LeaderElectionTimeoutMs | 2000 |
+| ReplicationHeartbeatInterval | 1000 |
+| ReplicationHeartbeatTimeout | 2500 |
+| NodeHeartbeatInterval | 1000 |
+| NodeHeartbeatTimeout | 2500 |
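+
+For example, a user could override a couple of these defaults directly (values are illustrative):
+
+```yaml
+spec:
+  configuration:
+    PrepareTimeoutMs: 5000
+    CommitTimeoutMs: 5000
+```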
+
+The Operator-managed configuration values, which take precedence over the user's
+`KurrentDB.spec.configuration`, are:
+
+| Managed Field | Value |
+|------------------------------| -------------------------------------------------------------|
+| Db | hard-coded volume mount point |
+| Index | hard-coded volume mount point |
+| Log | hard-coded volume mount point |
+| Insecure | true if `KurrentDB.spec.security.certificateSecret` is empty |
+| DiscoverViaDns | false (`GossipSeed` is used instead) |
+| AllowAnonymousEndpointAccess | true |
+| AllowUnknownOptions | true |
+| NodeIp | 0.0.0.0 (to accept traffic from outside pod) |
+| ReplicationIp | 0.0.0.0 (to accept traffic from outside pod) |
+| NodeHostAdvertiseAs | Derived from pod name |
+| ReplicationHostAdvertiseAs | Derived from pod name |
+| AdvertiseHostToClientAs | Derived from `KurrentDB.spec.network.fqdnTemplate` |
+| ClusterSize | Derived from `KurrentDB.spec.replicas` |
+| GossipSeed | Derived from pod list |
+| ReadOnlyReplica | Automatically set for ReadOnlyReplica pods |
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/README.md b/docs/server/kubernetes-operator/v1.4.0/operations/README.md
new file mode 100644
index 000000000..90833bd40
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/README.md
@@ -0,0 +1,11 @@
+---
+order: 2
+dir:
+ text: "Operations"
+ link: true
+ order: 1
+---
+
+A number of operations can be performed with the Operator, which are catalogued below:
+
+
\ No newline at end of file
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/advanced-networking.md b/docs/server/kubernetes-operator/v1.4.0/operations/advanced-networking.md
new file mode 100644
index 000000000..b52f843e0
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/advanced-networking.md
@@ -0,0 +1,236 @@
+---
+title: Advanced Networking
+order: 5
+---
+
+KurrentDB is a clustered database, and all official KurrentDB clients are cluster-aware. As a
+result, there are times when a client will find out from one server how to connect to another
+server. To make this work, each server advertises how clients and other servers should contact it.
+
+The Operator lets you customize these advertisements. Such customizations are influenced by your
+cluster topology, where your KurrentDB clients will run, and also your security posture. This page
+will help you select the right networking and security configurations for your needs.
+
+## Configuration Options
+
+This document is intended to help pick appropriate traffic strategies and certificate options for
+your situation. Let us first examine the range of possible settings for each.
+
+### Traffic Strategy Options
+
+Servers advertise how they should be dialed by other servers according to the
+`KurrentDB.spec.network.internodeTrafficStrategy` setting, which is one of:
+
+* `"ServiceName"` (default): servers use each other's Kubernetes service name to contact each other.
+
+* `"FQDN"`: servers use each other's fully-qualified domain name (FQDN) to contact each other.
+
+* `"SplitDNS"`: servers advertise FQDNs to each other, but a tiny sidecar DNS resolver in each
+ server pod intercepts the lookup of FQDNs for local pods and returns their actual pod IP address
+ instead (the same IP address returned by the `"ServiceName"` setting).
+
+Servers advertise how they should be dialed by clients according to the
+`KurrentDB.spec.network.clientTrafficStrategy` setting, which is one of:
+
+* `"ServiceName"`: clients dial servers using the server's Kubernetes service
+ name.
+
+* `"FQDN"` (default): clients dial servers using the server's FQDN.
+
+Note that the `"SplitDNS"` settings is not an option for the `clientTrafficStrategy`, simply because
+the KurrentDB Operator does not deploy your clients and so cannot inject a DNS sidecar container
+into your client pods. However, it is possible to write a [CoreDNS rewrite rule][rr] to
+accomplish an effect similar to `"SplitDNS"` for client-to-server traffic.
+
+[rr]: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
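+
+Both strategies are set under `.spec.network` on the `KurrentDB` resource, for example:
+
+```yaml
+spec:
+  network:
+    internodeTrafficStrategy: SplitDNS
+    clientTrafficStrategy: FQDN
+```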
+
+### Certificate Options
+
+Except for test deployments, you always want to provide TLS certificates to your KurrentDB
+deployments. The reason is that insecure deployments disable not only TLS, but also all
+authentication and authorization features of the database.
+
+There are three basic options for how to obtain certificates:
+
+* Use self-signed certs: you can put any name in your self-signed certs, including Kubernetes
+ service names, which enables `"ServiceName"` traffic strategies. A common technique is to use
+ [cert-manager][cm] to manage the self-signed certificates and to use [trust-manager][tm] to
+ distribute trust of those self-signed certificates to clients.
+
+* Use a publicly-trusted certificate provider: you can only put FQDNs on your certificate, which
+ limits your traffic strategies to FQDN-based connections (`"FQDN"` or `"SplitDNS"`).
+
+* Use both: self-signed certs on the servers, plus an Ingress using certificates from a public
+ certificate provider and configured for TLS termination. Note that at this time, the Operator
+ does not assist with the creation of Ingresses.
+
+[cm]: https://cert-manager.io/
+[tm]: https://cert-manager.io/docs/trust/trust-manager/
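+
+As a minimal sketch of the self-signed route, assuming cert-manager is installed and a CA keypair
+already exists in a secret named `ca-tls`, a namespace-scoped `Issuer` for signing server
+certificates could look like:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: ca-issuer
+  namespace: kurrent
+spec:
+  ca:
+    secretName: ca-tls  # assumed secret holding the CA certificate and key
+```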
+
+## Considerations
+
+Now let us consider a few different aspects of your situation to help guide the selection of
+options.
+
+### What are your security requirements?
+
+The choice of certificate provider has a security aspect to it. The KurrentDB servers use the
+certificate to authenticate each other, so anybody who has read access to the certificate or who can
+produce a matching, trusted certificate, can impersonate another server, and obtain full access to
+the database.
+
+The obvious implication of this is that access to the Kubernetes Secrets which contain server
+certificates should be limited to those who are authorized to administer the database.
+
+It may be less obvious that, if control of your domain's DNS configuration is shared by many
+business units in your organization, self-signed certificates with an `internodeTrafficStrategy`
+of `"ServiceName"` may provide the tightest control over database access.
+
+So your security posture may require that you choose one of:
+
+* self-signed certs and `"ServiceName"` traffic strategies, if all your clients are inside the
+ Kubernetes cluster
+
+* self-signed certs on servers with `internodeTrafficStrategy` of `"ServiceName"` plus Ingresses
+ configured with publicly-trusted certificate providers and `clientTrafficStrategy` of `"FQDN"`
+
+### Where will your KurrentDB servers run?
+
+If any servers are not in the same Kubernetes cluster, for instance, if you are using the
+[standalone read-only-replica feature](
+database-deployment.md#deploying-standalone-read-only-replicas) to run a read-only replica in a
+different Kubernetes cluster from the quorum nodes, then you will need to pick from a few options to
+ensure internode connectivity:
+
+* `internodeTrafficStrategy` of `"SplitDNS"`, so every server connects to others by their FQDN, but
+ when a connection begins to another pod in the same cluster, the SplitDNS feature will direct the
+ traffic along direct pod-to-pod network interfaces. This solution assumes FQDNs on certificates,
+ which enables you to use publicly trusted certificate authorities to generate certificates for
+ each cluster, which can also ease certificate management.
+
+* `internodeTrafficStrategy` of `"ServiceName"`, plus manually-created [ExternalName Services][ens]
+ in each Kubernetes cluster for each server in the other cluster. This solution requires
+ self-signed certificates, and also that the certificates on servers in both clusters are signed by
+ the same self-signed Certificate Authority.
+
+[ens]: https://kubernetes.io/docs/concepts/services-networking/service/#externalname
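+
+A sketch of one such `ExternalName` Service, with purely illustrative names (the Service's name
+must match the service name the remote server advertises, and `externalName` must be a hostname
+that resolves to the remote server, not a bare IP):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: mydb-2  # assumed name the remote server is advertised under
+  namespace: kurrent
+spec:
+  type: ExternalName
+  externalName: mydb-2.peering.example.com  # assumed DNS name for the server's peering IP
+```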
+
+### Where will your KurrentDB clients run?
+
+If any of your KurrentDB clients will run outside of Kubernetes, your `clientTrafficStrategy` must
+be `"FQDN"` to ensure connectivity.
+
+If your KurrentDB clients are all within Kubernetes, but spread through more than one Kubernetes
+cluster, you may use one of:
+
+* `clientTrafficStrategy` of `"FQDN"`.
+
+* `clientTrafficStrategy` of `"ServiceName"` plus manually-created [ExternalName Services][ens] in
+ each Kubernetes cluster for each server in the other cluster(s), as described above.
+
+### How bad are hairpin traffic patterns for your deployment?
+
+Hairpin traffic patterns occur when a pod inside a Kubernetes cluster connects to another pod in the
+same Kubernetes cluster through its public IP address rather than its pod IP address. The traffic
+moves outside of Kubernetes to the public IP, then makes a "hairpin" turn back into the cluster.
+
+For example, with `clientTrafficStrategy` of `"FQDN"`, clients connecting to a server inside the
+same cluster will not automatically connect directly to the server pod, even though they are both
+inside the Kubernetes cluster and that would be the most direct possible connection.
+
+Hairpin traffic patterns are never good, but they're also not always bad. You will need to evaluate
+the impact in your own environment. Consider some of the following possibilities:
+
+* In a cloud environment, sometimes internal traffic is cheaper than traffic through a public IP,
+ so there could be a financial impact.
+
+* If the FQDN connects to, for example, an nginx ingress, then pushing Kubernetes-internal traffic
+ through nginx may either over-burden your nginx instance or it may slow down your traffic
+ unnecessarily.
+
+Between servers, hairpin traffic can always be avoided with an `internodeTrafficStrategy` of
+`"SplitDNS"`.
+
+For clients, one solution is to prefer a `clientTrafficStrategy` of `"ServiceName"`, or you may
+consider adding a [CoreDNS rewrite rule][rr].
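+
+A sketch of such a rewrite in the cluster's Corefile, assuming the `kurrent.test` domain and a
+database named `kurrentdb-cluster` in the `kurrent` namespace:
+
+```
+rewrite stop {
+    name regex ([a-z0-9-]*)\.kurrent\.test {1}.kurrentdb-cluster.kurrent.svc.cluster.local
+    answer auto
+}
+```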
+
+## Common Solutions
+
+With the above considerations in mind, let us consider a few common solutions.
+
+### Everything in One Kubernetes Cluster
+
+When all your KurrentDB servers and clients are within a single Kubernetes cluster, life is
+easy:
+
+* Set `internodeTrafficStrategy` to `"ServiceName"`.
+
+* Set `clientTrafficStrategy` to `"ServiceName"`.
+
+* Use cert-manager to configure a self-signed certificate for the KurrentDB cluster's service names.
+
+* Use trust-manager to configure clients to trust the self-signed certificates.
+
+This solution provides the highest possible security, avoids hairpin traffic patterns, and leverages
+Kubernetes-native tooling to ease the pain of self-signed certificate management.
+
+### Servers Anywhere, Clients Anywhere
+
+If using publicly trusted certificates is acceptable (see
+[above](#what-are-your-security-requirements)), almost every need can be met with one of the
+simplest configurations:
+
+* Set `internodeTrafficStrategy` to `"SplitDNS"`.
+
+* Set `clientTrafficStrategy` to `"FQDN"`.
+
+* Use cert-manager to automatically create certificates through an ACME provider like LetsEncrypt.
+
+* If clients may be outside of Kubernetes or multiple Kubernetes clusters are in play, set
+ `KurrentDB.spec.network.loadBalancer.enabled` to `true`, making your servers publicly accessible.
+
+This solution is still highly secure, provided your domain's DNS management is tightly
+controlled. It also supports virtually every server and client topology. Server hairpin traffic
+never occurs, and client hairpin traffic, if it is a problem, can be addressed with a
+[CoreDNS rewrite rule][rr].
+
+### Multiple Kubernetes Clusters and a VPC Peering
+
+If you want all your KurrentDB resources within private networking for extra security, but also need
+to support multiple Kubernetes clusters in different regions, you can set up a VPC Peering between
+your clusters and configure your inter-cluster traffic to use it.
+
+There could be many variants of this solution; we'll describe one based on ServiceNames and one
+based on FQDNs.
+
+#### ServiceName-based Variant
+
+* Set `internodeTrafficStrategy` to `"ServiceName"`.
+
+* Set `clientTrafficStrategy` to `"ServiceName"`.
+
+* Ensure that each server has an IP address in the VPC Peering.
+
+* In each Kubernetes cluster, manually configure [ExternalName Services][ens] for each server not in
+ that cluster. ExternalName Services can only redirect to hostnames, not bare IP addresses, so you
+ may need to ensure that there is a DNS name to resolve each server's IP address in the VPC
+ Peering.
+
+* Use self-signed certificates, and make sure to use the same certificate authority to sign
+ certificates in each cluster.
+
+#### FQDN-based Variant
+
+* Set `internodeTrafficStrategy` to `"SplitDNS"`.
+
+* Set `clientTrafficStrategy` to `"FQDN"`.
+
+* Ensure that each server has an IP address in the VPC Peering.
+
+* Ensure that each server's FQDN resolves to the IP address of that server in the VPC peering.
+
+* If client-to-server hairpin traffic within each Kubernetes cluster is a problem, add a [CoreDNS
+ rewrite rule][rr] to each cluster to prevent it.
+
+* Use a publicly-trusted certificate authority to create certificates based on the FQDN. They may
+ be generated per-Kubernetes cluster independently, since the certificate trust will be automatic.
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md b/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md
new file mode 100644
index 000000000..03d548526
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md
@@ -0,0 +1,115 @@
+---
+title: Database Backup
+order: 3
+---
+
+The sections below detail how database backups can be performed. Refer to the [KurrentDBBackup API](../getting-started/resource-types.md#kurrentdbbackup) for detailed information.
+
+## Backing up the leader
+
+Assuming there is a cluster called `kurrentdb-cluster` that resides in the `kurrent` namespace, the following `KurrentDBBackup` resource can be defined:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDBBackup
+metadata:
+ name: kurrentdb-cluster
+spec:
+ volumeSnapshotClassName: ebs-vs
+ clusterName: kurrentdb-cluster
+```
+
+In the example above, the backup definition leverages the `ebs-vs` volume snapshot class to perform the underlying volume snapshot. This class name will vary per Kubernetes cluster/cloud provider; please consult your Kubernetes administrator to determine this value.
+
+The `KurrentDBBackup` type takes an optional `nodeName`. If left blank, the leader will be derived based on the gossip state of the database cluster.
+
+## Backing up a specific node
+
+Assuming there is a cluster called `kurrentdb-cluster` that resides in the `kurrent` namespace, the following `KurrentDBBackup` resource can be defined:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDBBackup
+metadata:
+ name: kurrentdb-cluster
+spec:
+ volumeSnapshotClassName: ebs-vs
+ clusterName: kurrentdb-cluster
+ nodeName: kurrentdb-1
+```
+
+In the example above, the backup definition leverages the `ebs-vs` volume snapshot class to perform the underlying volume snapshot. This class name will vary per Kubernetes cluster; please consult your Kubernetes administrator to determine this value.
+
+## Restoring from a backup
+
+A `KurrentDB` cluster can be restored from a backup by specifying an additional field `sourceBackup` as part of the cluster definition.
+
+For example, if an existing `KurrentDBBackup` exists called `kurrentdb-cluster-backup`, the following snippet could be used to restore it:
+
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 1
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ sourceBackup: kurrentdb-cluster-backup
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+```
+
+## Automatically delete backups with a TTL
+
+A TTL can be set on a backup to delete the backup after a certain amount of time has passed since
+its creation. For example, to delete the backup 5 days after it was created:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDBBackup
+metadata:
+ name: kurrentdb-cluster
+spec:
+ volumeSnapshotClassName: ebs-vs
+ clusterName: kurrentdb-cluster
+ ttl: 5d
+```
+
+## Scheduling Backups
+
+A `KurrentDBBackupSchedule` can be created with a CronJob-like schedule.
+
+Schedules also support a `.spec.keep` setting to automatically limit how many backups created by
+that schedule are retained. Using a schedule with `.keep` is slightly safer than using TTLs on the
+individual backups. This is because if, for some reason, you cease to be able to create new
+backups, a TTL will continue to delete backups until you have none left, while in the same
+situation `.keep` would leave all your old snapshots in place until a new one could be created.
+
+For example, to create a new backup every midnight (UTC), and to
+keep the last 7 such backups at any time, you could create a `KurrentDBBackupSchedule` resource like
+this:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDBBackupSchedule
+metadata:
+ name: my-backup-schedule
+spec:
+ schedule: "0 0 * * *"
+ timeZone: Etc/UTC
+ template:
+ metadata:
+ name: my-backup
+ spec:
+ volumeSnapshotClassName: ebs-vs
+ clusterName: kurrentdb-cluster
+ keep: 7
+```
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md
new file mode 100644
index 000000000..c5482489e
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md
@@ -0,0 +1,512 @@
+---
+title: Example Deployments
+order: 1
+---
+
+This page shows various deployment examples of KurrentDB. Each example assumes that the
+Operator has been installed in a way that it can at least control KurrentDB resources in the
+`kurrent` namespace.
+
+Each example is designed to illustrate specific techniques:
+
+* [Single Node Insecure Cluster](#single-node-insecure-cluster) is the "hello world" example
+ that illustrates the most basic features possible. An insecure cluster should not be used in
+ production.
+
+* [Three Node Insecure Cluster with Two Read-Only Replicas](
+ #three-node-insecure-cluster-with-two-read-only-replicas) illustrates how to deploy a clustered
+ KurrentDB instance and how to add read-only replicas to it.
+
+* [Three Node Secure Cluster (using self-signed certificates)](
+ #three-node-secure-cluster-using-self-signed-certificates) illustrates how to secure a cluster with
+ self-signed certificates using cert-manager.
+
+* [Three Node Secure Cluster (using LetsEncrypt)](
+ #three-node-secure-cluster-using-letsencrypt) illustrates how to secure a cluster with LetsEncrypt.
+
+* [Deploying Standalone Read-only Replicas](#deploying-standalone-read-only-replicas) illustrates
+ an advanced topology where a pair of read-only replicas is deployed in a different Kubernetes
+ cluster than where the quorum nodes are deployed.
+
+* [Deploying With Scheduling Constraints](#deploying-with-scheduling-constraints): illustrates how
+ to deploy a cluster with customized scheduling constraints for the KurrentDB pods.
+
+* [Custom Database Configuration](#custom-database-configuration) illustrates how to make direct
+ changes to the KurrentDB configuration file.
+
+## Single Node Insecure Cluster
+
+The following `KurrentDB` resource type defines a single node cluster with the following properties:
+
+- The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster`
+- Security is not enabled
+- KurrentDB version 25.0.0 will be used
+- 1 vCPU will be requested as the minimum (upper bound is unlimited)
+- 1 GB of memory will be used
+- 512 MB of storage will be allocated for the data disk
+- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-0.kurrent.test`
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 1
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}.{domain}'
+```
+
+## Three Node Insecure Cluster with Two Read-Only Replicas
+
+Note that read-only replicas are only supported by KurrentDB in clustered configurations, that is,
+with multiple quorum nodes.
+
+The following `KurrentDB` resource type defines a three node cluster with the following properties:
+- Security is not enabled
+- 1 GB of memory will be used per quorum node, but read-only replicas will have 2 GB of memory
+- The quorum nodes will be exposed as `kurrentdb-{idx}.kurrent.test`
+- The read-only replicas will be exposed as `kurrentdb-replica-{idx}.kurrent.test`
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 3
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}.{domain}'
+ readOnlyReplicas:
+ replicas: 2
+```
+
+## Three Node Secure Cluster (using self-signed certificates)
+
+The following `KurrentDB` resource type defines a three node cluster with the following properties:
+- Security is enabled using self-signed certificates
+- The KurrentDB servers will be exposed as `kurrentdb-{idx}.kurrent.test`
+- Servers will dial each other by Kubernetes service name (`*.kurrent.svc.cluster.local`)
+- Clients will also dial servers by Kubernetes service name
+- The self-signed certificate is valid for the Kubernetes service names
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ secretName: kurrentdb-cluster-tls
+ isCA: false
+ usages:
+ - client auth
+ - server auth
+ - digital signature
+ - key encipherment
+ commonName: kurrentdb-node
+ subject:
+ organizations:
+ - Kurrent
+ organizationalUnits:
+ - Cloud
+ dnsNames:
+ - '*.kurrentdb-cluster.kurrent.svc.cluster.local'
+ - '*.kurrentdb-cluster-replica.kurrent.svc.cluster.local'
+ privateKey:
+ algorithm: RSA
+ encoding: PKCS1
+ size: 2048
+ issuerRef:
+ name: ca-issuer
+ kind: Issuer
+---
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 3
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}.{domain}'
+ internodeTrafficStrategy: ServiceName
+ clientTrafficStrategy: ServiceName
+ security:
+ certificateReservedNodeCommonName: kurrentdb-node
+ certificateAuthoritySecret:
+ name: ca-tls
+ keyName: ca.crt
+ certificateSecret:
+ name: kurrentdb-cluster-tls
+ keyName: tls.crt
+ privateKeyName: tls.key
+```
+
+Before deploying this cluster, be sure to follow the steps in [Using Self-Signed Certificates](
+managing-certificates.md#using-self-signed-certificates).
+
+## Three Node Secure Cluster (using LetsEncrypt)
+
+The following `KurrentDB` resource type defines a three node cluster with the following properties:
+- Security is enabled using certificates from LetsEncrypt
+- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-{idx}.kurrent.test`
+- The LetsEncrypt certificate is only valid for the FQDN (`*.kurrent.test`)
+- Clients will dial servers by FQDN
+- Servers will dial each other by FQDN, but because of the `SplitDNS` feature, they will still connect
+ via direct pod-to-pod networking, as if they had dialed each other by Kubernetes service name.
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ secretName: kurrentdb-cluster-tls
+ isCA: false
+ usages:
+ - client auth
+ - server auth
+ - digital signature
+ - key encipherment
+ commonName: '*.kurrent.test'
+ subject:
+ organizations:
+ - Kurrent
+ organizationalUnits:
+ - Cloud
+ dnsNames:
+ - '*.kurrent.test'
+ privateKey:
+ algorithm: RSA
+ encoding: PKCS1
+ size: 2048
+ issuerRef:
+ name: letsencrypt
+ kind: Issuer
+---
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 3
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}.{domain}'
+ internodeTrafficStrategy: SplitDNS
+ clientTrafficStrategy: FQDN
+ security:
+ certificateReservedNodeCommonName: '*.kurrent.test'
+ certificateSecret:
+ name: kurrentdb-cluster-tls
+ keyName: tls.crt
+ privateKeyName: tls.key
+```
+
+Before deploying this cluster, be sure to follow the steps in [Using LetsEncrypt Certificates](
+managing-certificates.md#using-trusted-certificates-via-letsencrypt).
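+
+LetsEncrypt issuance can take a minute or two. Before deploying the `KurrentDB` resource, you can
+confirm the certificate has been issued (a quick check, assuming cert-manager's standard status
+columns):
+
+```bash
+kubectl -n kurrent get certificate kurrentdb-cluster
+```
+
+The certificate is usable once the `READY` column reports `True`.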
+
+## Deploying Standalone Read-only Replicas
+
+This example illustrates an advanced topology where a pair of read-only replicas is deployed in a
+different Kubernetes cluster from the quorum nodes.
+
+We make the following assumptions:
+- LetsEncrypt certificates are used everywhere, to ease certificate management
+- LoadBalancers are enabled to ensure each node is accessible through its FQDN
+- `internodeTrafficStrategy` is `"SplitDNS"` to avoid hairpin traffic patterns between servers
+- the quorum nodes will have `-qn` suffixes in their FQDN while the read-only replicas will have
+ `-rr` suffixes
+
+This `Certificate` should be deployed in **both** clusters:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+ name: mydb
+ namespace: kurrent
+spec:
+ secretName: mydb-tls
+ isCA: false
+ usages:
+ - client auth
+ - server auth
+ - digital signature
+ - key encipherment
+ commonName: '*.kurrent.test'
+ subject:
+ organizations:
+ - Kurrent
+ organizationalUnits:
+ - Cloud
+ dnsNames:
+ - '*.kurrent.test'
+ privateKey:
+ algorithm: RSA
+ encoding: PKCS1
+ size: 2048
+ issuerRef:
+ name: letsencrypt
+ kind: Issuer
+```
+
+This `KurrentDB` resource defines the quorum nodes in one cluster:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: mydb
+ namespace: kurrent
+spec:
+ replicas: 3
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}-qn.{domain}'
+ internodeTrafficStrategy: SplitDNS
+ clientTrafficStrategy: FQDN
+ security:
+ certificateReservedNodeCommonName: '*.kurrent.test'
+ certificateSecret:
+ name: mydb-tls
+ keyName: tls.crt
+ privateKeyName: tls.key
+```
+
+And this `KurrentDB` resource defines the standalone read-only replicas in the other cluster.
+Notice that:
+
+- `.replicas` is 0, but `.quorumNodes` is set instead
+- `.readOnlyReplicas.replicas` is set
+- `fqdnTemplate` differs slightly from above
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: mydb
+ namespace: kurrent
+spec:
+ replicas: 0
+ quorumNodes:
+ - mydb-0-qn.kurrent.test:2113
+ - mydb-1-qn.kurrent.test:2113
+ - mydb-2-qn.kurrent.test:2113
+ readOnlyReplicas:
+ replicas: 2
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+      fqdnTemplate: '{podName}-rr.{domain}'
+ internodeTrafficStrategy: SplitDNS
+ clientTrafficStrategy: FQDN
+ security:
+ certificateReservedNodeCommonName: '*.kurrent.test'
+ certificateSecret:
+ name: mydb-tls
+ keyName: tls.crt
+ privateKeyName: tls.key
+```
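+
+Once both resources are deployed, each node should be reachable at its FQDN. As a quick sanity
+check, the per-node `LoadBalancer` services and their external addresses can be listed in each
+cluster:
+
+```bash
+kubectl -n kurrent get svc
+```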
+
+## Deploying With Scheduling Constraints
+
+The pods created for a KurrentDB resource can be configured with any of the constraints commonly applied to pods:
+
+- [Node Selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
+- [Affinity and Anti-Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
+- [Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+- [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
+- [Node Name](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename)
+
+For example, in cloud deployments, you may want to maximize uptime by asking each replica of a
+KurrentDB cluster to be deployed in a different availability zone. The following KurrentDB resource
+does that, and also requires KurrentDB to schedule pods onto nodes labeled with
+`machine-size:large`:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: my-kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 3
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}.{domain}'
+ constraints:
+ nodeSelector:
+ machine-size: large
+ topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: zone
+ labelSelector:
+ matchLabels:
+ app.kubernetes.io/part-of: kurrentdb-operator
+ app.kubernetes.io/name: my-kurrentdb-cluster
+ whenUnsatisfiable: DoNotSchedule
+```
+
+If no scheduling constraints are configured, the operator applies a default soft pod anti-affinity
+constraint so that replicas prefer to run on different nodes, improving fault tolerance.
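+
+For reference, a comparable soft anti-affinity rule looks roughly like the following fragment of a
+pod spec; the exact weight and labels here are illustrative, not the operator's literal defaults:
+
+```yaml
+affinity:
+  podAntiAffinity:
+    preferredDuringSchedulingIgnoredDuringExecution:
+      # Soft rule: the scheduler prefers, but does not require, spreading pods
+      - weight: 100
+        podAffinityTerm:
+          topologyKey: kubernetes.io/hostname
+          labelSelector:
+            matchLabels:
+              app.kubernetes.io/name: my-kurrentdb-cluster
+```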
+
+## Custom Database Configuration
+
+If custom parameters are required in the underlying database configuration then these can be
+specified using the `configuration` YAML block within a `KurrentDB`. The parameters which are
+defaulted or overridden by the operator are listed [in the CRD reference](
+../getting-started/resource-types.md#configuring-kurrent-db).
+
+For example, to enable projections, the deployment configuration looks as follows:
+
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+ name: kurrentdb-cluster
+ namespace: kurrent
+spec:
+ replicas: 1
+ image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+ configuration:
+ RunProjections: all
+ StartStandardProjections: true
+ resources:
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ storage:
+ volumeMode: "Filesystem"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 512Mi
+ network:
+ domain: kurrent.test
+ loadBalancer:
+ enabled: true
+ fqdnTemplate: '{podName}.{domain}'
+```
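+
+To confirm the parameters were accepted, the configuration block can be read back from the stored
+resource:
+
+```bash
+kubectl -n kurrent get kurrentdb kurrentdb-cluster -o jsonpath='{.spec.configuration}'
+```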
+
+## Accessing Deployments
+
+### External
+
+The Operator will create one service of type `LoadBalancer` per KurrentDB node when the
+`spec.network.loadBalancer.enabled` flag is set to `true`.
+
+Each service is annotated with `external-dns.alpha.kubernetes.io/hostname: {external cluster endpoint}` to allow the third-party tool [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) to configure external access.
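+
+For example, with the `fqdnTemplate` settings used throughout this page, the service for the first
+node would carry an annotation along these lines (the hostname value is illustrative):
+
+```yaml
+metadata:
+  annotations:
+    external-dns.alpha.kubernetes.io/hostname: kurrentdb-0.kurrent.test
+```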
+
+### Internal
+
+The Operator will create headless services to access a KurrentDB cluster internally. This includes:
+- One for the underlying StatefulSet (selects all pods)
+- One per pod in the StatefulSet, to support `Ingress` rules that require a single target endpoint
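+
+For instance, for the `kurrentdb-cluster` examples above, an in-cluster client could reach node 0
+through its headless service DNS name. This sketch assumes the service naming implied by the
+certificate `dnsNames` earlier in this page, and the database's standard `/health/live` endpoint:
+
+```bash
+# Run from any pod inside the same Kubernetes cluster; -k skips CA verification
+curl -k https://kurrentdb-cluster-0.kurrentdb-cluster.kurrent.svc.cluster.local:2113/health/live
+```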
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/certs/ca-issuer-details.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/certs/ca-issuer-details.png
new file mode 100644
index 000000000..430f66453
Binary files /dev/null and b/docs/server/kubernetes-operator/v1.4.0/operations/images/certs/ca-issuer-details.png differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/certs/ca-issuer.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/certs/ca-issuer.png
new file mode 100644
index 000000000..3a17ef3fa
Binary files /dev/null and b/docs/server/kubernetes-operator/v1.4.0/operations/images/certs/ca-issuer.png differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md b/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md
new file mode 100644
index 000000000..53cf068b1
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md
@@ -0,0 +1,182 @@
+---
+title: Managing Certificates
+order: 6
+---
+
+The Operator expects consumers to use a third-party tool to generate TLS certificates, which can be wired into [KurrentDB](../getting-started/resource-types.md#kurrentdb) deployments using secrets. The sections below describe how certificates can be generated using popular vendors.
+
+## Picking certificate names
+
+Each node in each KurrentDB cluster you create will advertise a fully-qualified domain name (FQDN).
+Clients will expect those advertised names to match the names you configure on your TLS
+certificates. You will need to understand how each node's FQDN is calculated in order to request a
+TLS certificate that is valid for every node of your KurrentDB cluster.
+
+By default, the [network.fqdnTemplate field of your KurrentDB spec](
+../getting-started/resource-types.md#kurrentdbnetwork) is
+`{podName}.{name}{nodeTypeSuffix}.{domain}`, which may require multiple wildcard names on your
+certificate, like both `*.myName.myDomain.com` and `*.myName-replica.myDomain.com`. You may prefer
+to instead configure an `fqdnTemplate` like `{podName}.{domain}`, which could be covered
+by a single wildcard: `*.myDomain.com`.
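+
+As a sketch, with a `KurrentDB` resource named `mydb` and `domain: myDomain.com`, the simplified
+template and a matching certificate entry pair up as follows:
+
+```yaml
+# KurrentDB spec fragment: pods advertise e.g. mydb-0.myDomain.com
+network:
+  domain: myDomain.com
+  fqdnTemplate: '{podName}.{domain}'
+---
+# Certificate spec fragment: one wildcard covers every node
+dnsNames:
+  - '*.myDomain.com'
+```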
+
+## Certificate Manager (cert-manager)
+
+### Prerequisites
+
+Before following the instructions in this section, these requirements should be met:
+
+* [cert-manager](https://cert-manager.io) is installed
+* You have the required permissions to create/manage new resources on the Kubernetes cluster
+* The following CLI tools are installed and configured to interact with your Kubernetes cluster. This means each tool must be accessible from your shell's `$PATH`, and your `$KUBECONFIG` environment variable must point to the correct Kubernetes configuration file:
+ * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)
+ * [k9s](https://k9scli.io/topics/install/)
+
+### Using trusted certificates via LetsEncrypt
+
+To use LetsEncrypt certificates with KurrentDB, follow these steps:
+
+1. Create a [LetsEncrypt Issuer](#letsencrypt-issuer)
+2. Create future certificates using the `letsencrypt` issuer
+
+### LetsEncrypt Issuer
+
+The following example shows how a LetsEncrypt issuer that leverages [AWS Route53](https://cert-manager.io/docs/configuration/acme/dns01/route53/) can be deployed:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+ name: letsencrypt
+spec:
+ acme:
+ privateKeySecretRef:
+ name: letsencrypt-issuer-key
+ email: { email }
+ preferredChain: ""
+ server: https://acme-v02.api.letsencrypt.org/directory
+ solvers:
+ - dns01:
+ route53:
+ region: { region }
+ hostedZoneID: { hostedZoneId }
+ accessKeyID: { accessKeyId }
+ secretAccessKeySecretRef:
+ name: aws-route53-credentials
+ key: secretAccessKey
+ selector:
+ dnsZones:
+ - { domain }
+ - "*.{ domain }"
+```
+
+This can be deployed using the following steps:
+- Replace the variables `{...}` with the appropriate values
+- Copy the YAML snippet above to a file called `issuer.yaml`
+- Run the following command:
+
+```bash
+kubectl -n kurrent apply -f issuer.yaml
+```
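+
+To verify the issuer is ready to sign certificates (assuming cert-manager's standard status
+columns):
+
+```bash
+kubectl get clusterissuer letsencrypt
+```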
+
+### Using Self-Signed Certificates
+
+To use self-signed certificates with KurrentDB, follow these steps:
+
+1. Create a [Self-Signed Issuer](#self-signed-issuer)
+2. Create a [Self-Signed Certificate Authority](#self-signed-certificate-authority)
+3. Create a [Self-Signed Certificate Authority Issuer](#self-signed-certificate-authority-issuer)
+4. Create future certificates using the `ca-issuer` issuer
+
+### Self-Signed Issuer
+
+The following example shows how a self-signed issuer can be deployed:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+ name: selfsigned-issuer
+spec:
+ selfSigned: {}
+```
+
+This can be deployed using the following steps:
+- Copy the YAML snippet above to a file called `issuer.yaml`
+- Run the following command:
+
+```bash
+kubectl -n kurrent apply -f issuer.yaml
+```
+
+### Self-Signed Certificate Authority
+
+The following example shows how a self-signed certificate authority can be generated once a [self-signed issuer](#self-signed-issuer) has been deployed:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+ name: selfsigned-ca
+spec:
+ isCA: true
+ commonName: ca
+ subject:
+ organizations:
+ - Kurrent
+ organizationalUnits:
+ - Cloud
+ secretName: ca-tls
+ privateKey:
+ algorithm: RSA
+ encoding: PKCS1
+ size: 2048
+ issuerRef:
+ name: selfsigned-issuer
+ kind: ClusterIssuer
+ group: cert-manager.io
+```
+
+:::note
+The values for `subject` should be changed to reflect what you require.
+:::
+
+This can be deployed using the following steps:
+- Copy the YAML snippet above to a file called `ca.yaml`
+- Ensure that the `kurrent` namespace has been created
+- Run the following command:
+
+```bash
+kubectl -n kurrent apply -f ca.yaml
+```
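+
+Once the certificate is issued, the CA keypair is stored in the `ca-tls` secret, which can be
+confirmed with:
+
+```bash
+kubectl -n kurrent get certificate selfsigned-ca
+kubectl -n kurrent get secret ca-tls
+```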
+
+### Self-Signed Certificate Authority Issuer
+
+The following example shows how a self-signed certificate authority issuer can be generated once a [CA certificate](#self-signed-certificate-authority) has been created:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: ca-issuer
+spec:
+ ca:
+ secretName: ca-tls
+```
+
+This can be deployed using the following steps:
+- Copy the YAML snippet above to a file called `ca-issuer.yaml`
+- Ensure that the `kurrent` namespace has been created
+- Run the following command:
+
+```bash
+kubectl -n kurrent apply -f ca-issuer.yaml
+```
+
+Once this step is complete, future certificates can be generated using the self-signed certificate authority. Using k9s,
+the following issuers should be visible in the `kurrent` namespace:
+
+![Issuers visible in the kurrent namespace](images/certs/ca-issuer.png)
+
+Describing the issuer should yield:
+
+![CA issuer details](images/certs/ca-issuer-details.png)
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/modify-deployments.md b/docs/server/kubernetes-operator/v1.4.0/operations/modify-deployments.md
new file mode 100644
index 000000000..3a960c673
--- /dev/null
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/modify-deployments.md
@@ -0,0 +1,118 @@
+---
+title: Modify Deployments
+order: 2
+---
+
+Updating KurrentDB deployments through the Operator is done by modifying the KurrentDB Custom
+Resources (CRs) using standard Kubernetes tools. Most updates are processed almost immediately, but
+there is special logic in place around resizing the number of replicas in a cluster.
+
+## Applying Updates
+
+`KurrentDB` instances support updates to:
+
+- Container Image
+- Memory
+- CPU
+- Volume Size (increases only)
+- Replicas (node count)
+- Configuration
+
+To update the specification of a `KurrentDB` instance, issue a patch command via `kubectl`. In the examples below, the cluster name is `kurrentdb-cluster`. Once patched, the Operator will update the underlying resources, which will cause database pods to be recreated.
+
+### Container Image
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"image": "docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0"}}'
+```
+
+### Memory
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"resources": {"requests": {"memory": "2048Mi"}}}}'
+```
+
+### CPU
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"resources": {"requests": {"cpu": "2000m"}}}}'
+```
+
+### Volume Size
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"storage": {"resources": {"requests": {"storage": "2048Mi"}}}}}'
+```
+
+### Replicas
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"replicas": 3}}'
+```
+
+Note that the actual count of replicas in a cluster may take time to update. See [Updating Primary Replica Count](#updating-primary-replica-count), below.
+
+### Configuration
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"configuration": {"RunProjections": "all", "StartStandardProjections": "true"}}}'
+```
+
+## Updating Primary Replica Count
+
+A user configures the number of primary replicas in a KurrentDB cluster by setting the
+`.spec.replicas` field of a KurrentDB resource. The current actual number of replicas can be
+observed as `.status.replicas`. Safely growing or shrinking a cluster requires carefully stepping
+the KurrentDB cluster through a series of consensus states, which the Operator handles
+automatically.
+
+In both cases, if the resizing flow gets stuck for some reason, you can cancel the resize by setting
+`.spec.replicas` back to its original value.
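+
+For example, if a grow from 1 to 3 replicas stalls, it can be rolled back with the same merge-patch
+mechanism shown in [Applying Updates](#applying-updates):
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"replicas": 1}}'
+```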
+
+### Upsizing a KurrentDB Cluster
+
+The steps that the Operator takes to go from 1 to 3 nodes in a KurrentDB cluster are:
+
+- Take a VolumeSnapshot of pod 0 (the initial pod).
+- Reconfigure pod 0 to expect a three-node cluster.
+- Start a new pod 1 from the VolumeSnapshot.
+- Wait for pod 0 and pod 1 to establish quorum.
+- Start a new pod 2 from the VolumeSnapshot.
+
+Note that the database cannot process writes between the time that the Operator reconfigures pod 0
+for a three-node cluster and when pod 0 and pod 1 establish quorum. The purpose of the
+VolumeSnapshot is to greatly reduce the amount of replication pod 1 must do from pod 0 before
+quorum is established, which greatly reduces the downtime during the resize.
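+
+Progress can be observed by watching the resource; the resize is complete once `.status.replicas`
+matches `.spec.replicas`:
+
+```bash
+kubectl -n kurrent get kurrentdb kurrentdb-cluster -w
+```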
+
+### Downsizing a KurrentDB Cluster
+
+The steps that the Operator takes to go from 3 nodes to 1 in a KurrentDB cluster are:
+
+- Make sure pod 0 and pod 1 are caught up with the leader (which may be one of them).
+- Stop pod 2.
+- Wait for quorum to be re-established between pods 0 and 1.
+- Stop pod 1.
+- Reconfigure pod 0 as a one-node cluster.
+
+Note that the database cannot process writes briefly after the Operator stops pod 2, and again
+briefly after the Operator reconfigures pod 0.
+
+:::important
+It is technically possible for data loss to occur when the Operator stops pod 2 if there are
+active writes against the database and either of the other two pods happens to fail at
+approximately the same time pod 2 stops.
+
+Environment failures should be rare enough that this is not a realistic concern. However, to
+eliminate the risk entirely, ensure that there are no writes against the database when you
+downsize your cluster.
+:::
+
+## Updating Read-Only Replica Count
+
+Since read-only replica nodes are not electable as leaders, increasing or decreasing their count is
+simpler than resizing the primary replicas. Still, when adding read-only replicas, the Operator
+uses VolumeSnapshots to expedite their initial catch-up reads.
+
+The steps that the Operator takes to increase the number of read-only replicas are:
+
+- Take a VolumeSnapshot of a primary node.
+- Start new read-only replica node(s) based on that snapshot.
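+
+As with the primary nodes, the read-only replica count is changed by patching the resource. A
+sketch, following the same merge-patch pattern used above and the `readOnlyReplicas.replicas`
+field path shown earlier:
+
+```bash
+kubectl -n kurrent patch kurrentdb kurrentdb-cluster --type=merge -p '{"spec":{"readOnlyReplicas":{"replicas": 2}}}'
+```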
diff --git a/docs/server/kubernetes-operator/versions.json b/docs/server/kubernetes-operator/versions.json
index 0fe30d7a3..b1826ef4f 100644
--- a/docs/server/kubernetes-operator/versions.json
+++ b/docs/server/kubernetes-operator/versions.json
@@ -4,6 +4,11 @@
"basePath": "server",
"group": "Kubernetes Operator",
"versions": [
+ {
+ "path": "kubernetes-operator/v1.4.0",
+ "version": "v1.4.0",
+ "startPage": "getting-started/"
+ },
{
"path": "kubernetes-operator/v1.3.1",
"version": "v1.3.1",