diff --git a/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md index ecdec6d55..c85b8a807 100644 --- a/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md +++ b/docs/server/kubernetes-operator/v1.0.0/operations/database-deployment.md @@ -380,11 +380,7 @@ kubectl apply -f cluster.yaml ## Three Node Secure Cluster (using LetsEncrypt) Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster -is not supported. - -The recommended workaround is to combine [self-signed certificates within the cluster]( -#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS -termination using the LetsEncrypt certificate. +is not supported in this version; please upgrade to v1.4.0. ## Viewing Deployments
diff --git a/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md index f20add6ff..b9a98aced 100644 --- a/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md +++ b/docs/server/kubernetes-operator/v1.1.0/operations/database-deployment.md @@ -445,11 +445,7 @@ kubectl apply -f cluster.yaml ## Three Node Secure Cluster (using LetsEncrypt) Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster -is not supported. - -The recommended workaround is to combine [self-signed certificates within the cluster]( -#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS -termination using the LetsEncrypt certificate. +is not supported in this version; please upgrade to v1.4.0. ## Deploying With Scheduling Constraints
diff --git a/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md index f20add6ff..b9a98aced 100644 --- a/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md +++ b/docs/server/kubernetes-operator/v1.2.0/operations/database-deployment.md @@ -445,11 +445,7 @@ kubectl apply -f cluster.yaml ## Three Node Secure Cluster (using LetsEncrypt) Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster -is not supported. - -The recommended workaround is to combine [self-signed certificates within the cluster]( -#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS -termination using the LetsEncrypt certificate. +is not supported in this version; please upgrade to v1.4.0. ## Deploying With Scheduling Constraints
diff --git a/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md index 4eea0958e..8002447f5 100644 --- a/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md +++ b/docs/server/kubernetes-operator/v1.3.1/operations/database-deployment.md @@ -441,11 +441,7 @@ kubectl apply -f cluster.yaml ## Three Node Secure Cluster (using LetsEncrypt) Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster -is not supported. - -The recommended workaround is to combine [self-signed certificates within the cluster]( -#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS -termination using the LetsEncrypt certificate. +is not supported in this version; please upgrade to v1.4.0.
## Deploying With Scheduling Constraints diff --git a/docs/server/kubernetes-operator/v1.4.0/README.md b/docs/server/kubernetes-operator/v1.4.0/README.md index 813195a5c..9326c4843 100644 --- a/docs/server/kubernetes-operator/v1.4.0/README.md +++ b/docs/server/kubernetes-operator/v1.4.0/README.md @@ -1,5 +1,5 @@ --- # title is for breadcrumb and sidebar nav -title: Kubernetes Operator v1.3.1 +title: Kubernetes Operator v1.4.0 order: 1 --- diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md b/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md index a0ae14981..620233e1a 100644 --- a/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md +++ b/docs/server/kubernetes-operator/v1.4.0/getting-started/README.md @@ -23,41 +23,40 @@ Kubernetes is the modern enterprise standard for deploying containerized applica * Deploy single-node or multi-node clusters * Back up and restore clusters +* Automate backups with a schedule and retention policies * Perform rolling upgrades and update configurations -### New in 1.3.1 - -* Fix/improve support for resizing KurrentDB clusters, including explicitly handling data safety, - minimizing downtime, and allowing the user to cancel a resize operation that is not progressing. - See [Updating Replica Count](../operations/modify-deployments.md#updating-replica-count) for details. -* Support for custom labels and annotations on all child resources (StatefulSets, Pods, - LoadBalancers, etc). -* Allow users to use public certificate authorities like LetsEncrypt without having to manually pass - the publicly trusted cert in a secret. -* Allow manual overrides to the generated ConfigMap that is passed to KurrentDB. Previously, if a - user manually altered the ConfigMap it would get immediately overwritten, whereas now it will - "stick" until the next time the KurrentDB resource is updated. -* Fix a bug affecting the KurrentDBBackup behavior when cluster's fqdnTemplate met certain criteria. -* Fix and clarified the `credentialsSecretName` behavior in the helm chart. It is not normally - required at all, but in previous versions, it was generating warning events with the default - configuration. -* Add a new `crds.keep` value to the helm chart. With the default value of `true`, CRDs installed - by the helm chart will not be deleted by helm, which offers a layer of protection against - accidental data loss. In earlier versions of the helm chart, or with `crds.keep=false`, a - transition from `crds.enabled=true` to `crds.enabled=false` would cause the deletion of the CRDs - and all KurrentDB and KurrentDBBackup objects across the cluster. +### New in 1.4.0 + +* Support configurable traffic strategies for each of server-server and client-server traffic. This + enables the use of LetsEncrypt certificates without creating Ingresses, for example. See + [Traffic Strategies][ts] for details. +* Support backup scheduling and retention policies. There is a new [KurrentDBBackupSchedule][bs] + CRD with a CronJob-like syntax. There are also two mechanisms for configuring retention policies: + a `.keep` count on `KurrentDBBackupSchedule`, and a new `.ttl` on `KurrentDBBackup`. +* Support standalone read-only replicas pointed at a remote cluster. This enables advanced + topologies like a having your quorum nodes in one region and a read-only replica in a distant + region. See [Deploying Standalone Read-Only Replicas][ror] for an example. +* Support template strings in some extra metadata for child resources of the `KurrentDB` object. 
+ This allows you, for example, to annotate each of the automatically created LoadBalancers with unique + external-dns annotations. See [KurrentDBExtraMetadataSpec][em] for details. + +[ts]: ../operations/advanced-networking.md#traffic-strategy-options +[bs]: resource-types.md#kurrentdbbackupschedulespec +[ror]: ../operations/database-deployment.md#deploying-standalone-read-only-replicas +[em]: resource-types.md#kurrentdbextrametadataspec ## Supported KurrentDB Versions The Operator supports running the following major versions of KurrentDB: - v25.x - v24.x -- v23.x +- v23.10+ ## Supported Hardware Architectures The Operator is packaged for the following hardware architectures: -- x86_64 +- x86\_64 - arm64 ## Technical Support
diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md b/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md index 52b9d303e..75aff8b4b 100644 --- a/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md +++ b/docs/server/kubernetes-operator/v1.4.0/getting-started/installation.md @@ -37,8 +37,9 @@ The Operator uses Custom Resource Definitions (CRDs) to extend Kubernetes. You can install them automatically with Helm or manually. The following resource types are supported: -- [KurrentDB](resource-types.md#kurrentdb) -- [KurrentDBBackup](resource-types.md#kurrentdbbackup) +- [KurrentDB](resource-types.md#kurrentdbspec) +- [KurrentDBBackup](resource-types.md#kurrentdbbackupspec) +- [KurrentDBBackupSchedule](resource-types.md#kurrentdbbackupschedulespec) Since CRDs are managed globally by Kubernetes, special care must be taken to install them. @@ -52,7 +53,7 @@ If you prefer to install CRDs yourself: ```bash # Download the kurrentdb-operator Helm chart -helm pull kurrent-latest/kurrentdb-operator --version 1.3.1 --untar +helm pull kurrent-latest/kurrentdb-operator --version 1.4.0 --untar # Install the CRDs kubectl apply -f kurrentdb-operator/templates/crds ``` @@ -86,7 +87,7 @@ To deploy the Operator in this mode, run: ```bash helm install kurrentdb-operator kurrent-latest/kurrentdb-operator \ - --version 1.3.1 \ + --version 1.4.0 \ --namespace kurrent \ --create-namespace \ --set crds.enabled=true \ @@ -121,7 +122,7 @@ To deploy the Operator in this mode, the following command can be used: ```bash helm install kurrentdb-operator kurrent-latest/kurrentdb-operator \ - --version 1.3.1 \ + --version 1.4.0 \ --namespace kurrent \ --create-namespace \ --set crds.enabled=true \ @@ -160,7 +161,7 @@ The Operator deployment can be updated to adjust which namespaces are watched. F ```bash helm upgrade kurrentdb-operator kurrent-latest/kurrentdb-operator \ - --version 1.3.1 \ + --version 1.4.0 \ --namespace kurrent \ --reuse-values \ --set operator.namespaces='{kurrent,foo,bar}' @@ -204,5 +205,5 @@ helm upgrade kurrentdb-operator kurrentdb-operator-repo/kurrentdb-operator \ Here's what these commands do: - Refresh the local Helm repository index - Locate an existing operator installation in namespace `kurrent` -- Select the target upgrade version `{version}` e.g. `1.3.1` +- Select the target upgrade version `{version}` e.g.
`1.4.0` - Perform the upgrade, preserving values that were set during installation diff --git a/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md b/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md index 591457d2d..645b61721 100644 --- a/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md +++ b/docs/server/kubernetes-operator/v1.4.0/getting-started/resource-types.md @@ -6,6 +6,7 @@ order: 3 The Operator supports the following resource types (known as `Kind`'s): - `KurrentDB` - `KurrentDBBackup` +- `KurrentDBBackupSchedule` ## KurrentDB @@ -13,79 +14,155 @@ This resource type is used to define a database deployment. ### API -| Field | Required | Description | -|---------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------| -| `replicas` _integer_ | Yes | Number of nodes in a database cluster (1 or 3) | -| `image` _string_ | Yes | KurrentDB container image URL | -| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcerequirements-v1-core)_ | No | Database container resource limits and requests | -| `storage` _[PersistentVolumeClaim](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaimspec-v1-core)_ | Yes | Persistent volume claim settings for the underlying data volume | -| `network` _[KurrentDBNetwork](#kurrentdbnetwork)_ | Yes | Defines the network configuration to use with the database | -| `configuration` _yaml_ | No | Additional configuration to use with the database, see [below](#configuring-kurrent-db) | -| `sourceBackup` _string_ | No | Backup name to restore a cluster from | -| `security` _[KurrentDBSecurity](#kurrentdbsecurity)_ | No | Security configuration to use for the database. This is optional, if not specified the cluster will be created without security enabled. | -| `licenseSecret` _[SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#secretkeyselector-v1-core)_ | No | A secret that contains the Enterprise license for the database | -| `constraints` _[KurrentDBConstraints](#kurrentdbconstraints)_ | No | Scheduling constraints for the Kurrent DB pod. | -| `readOnlyReplias` _[KurrentDBReadOnlyReplicasSpec](#kurrentdbreadonlyreplicasspec)_ | No | Read-only replica configuration the Kurrent DB Cluster. | -| `extraMetadata` _[KurrentDBExtraMetadataSpec](#kurrentdbextrametadataspec)_ | No | Additional annotations and labels for child resources. | +#### KurrentDBSpec + +| Field | Required | Description | +|---------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------| +| `replicas` _integer_ | Yes | Number of nodes in a database cluster. May be 1, 3, or, for [standalone ReadOnly-Replicas][ror], it may be 0. 
| +| `image` _string_ | Yes | KurrentDB container image URL | +| `resources` _[ResourceRequirements][d1]_ | No | Database container resource limits and requests | +| `storage` _[PersistentVolumeClaim][d2]_ | Yes | Persistent volume claim settings for the underlying data volume | +| `network` _[KurrentDBNetwork][d3]_ | Yes | Defines the network configuration to use with the database | +| `configuration` _yaml_ | No | Additional configuration to use with the database, see [below](#configuring-kurrent-db) | +| `sourceBackup` _string_ | No | Backup name to restore a cluster from | +| `security` _[KurrentDBSecurity][d4]_ | No | Security configuration to use for the database. This is optional, if not specified the cluster will be created without security enabled. | +| `licenseSecret` _[SecretKeySelector][d5]_ | No | A secret that contains the Enterprise license for the database | +| `constraints` _[KurrentDBConstraints][d6]_ | No | Scheduling constraints for the Kurrent DB pod. | +| `readOnlyReplias` _[KurrentDBReadOnlyReplicasSpec][d7]_ | No | Read-only replica configuration the Kurrent DB Cluster. | +| `extraMetadata` _[KurrentDBExtraMetadataSpec][d8]_ | No | Additional annotations and labels for child resources. | +| `quorumNodes` _string array_ | No | A list of endpoints (in host:port notation) to reach the quorum nodes when .Replicas is zero, see [standalone ReadOnlyReplicas][ror] | + +[d1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcerequirements-v1-core +[d2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaimspec-v1-core +[d3]: #kurrentdbnetwork +[d4]: #kurrentdbsecurity +[d5]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#secretkeyselector-v1-core +[d6]: #kurrentdbconstraints +[d7]: #kurrentdbreadonlyreplicasspec +[d8]: #kurrentdbextrametadataspec +[ror]: ../operations/database-deployment.md#deploying-standalone-read-only-replicas #### KurrentDBReadOnlyReplicasSpec Other than `replicas`, each of the fields in `KurrentDBReadOnlyReplicasSpec` default to the corresponding values from the main KurrentDBSpec. -| Field | Required | Description | -|---------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------| -| `replicas` _integer_ | No | Number of read-only replicas in the cluster. Defaults to zero. | -| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcerequirements-v1-core)_ | No | Database container resource limits and requests. | -| `storage` _[PersistentVolumeClaim](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaimspec-v1-core)_ | No | Persistent volume claim settings for the underlying data volume. | -| `configuration` _yaml_ | No | Additional configuration to use with the database. | -| `constraints` _[KurrentDBConstraints](#kurrentdbconstraints)_ | No | Scheduling constraints for the Kurrent DB pod. | +| Field | Required | Description | +|----------------------------------------------|----------|------------------------------------------------------------------| +| `replicas` _integer_ | No | Number of read-only replicas in the cluster. Defaults to zero. | +| `resources` _[ResourceRequirements][r1]_ | No | Database container resource limits and requests. 
| +| `storage` _[PersistentVolumeClaim][r2]_ | No | Persistent volume claim settings for the underlying data volume. | +| `configuration` _yaml_ | No | Additional configuration to use with the database. | +| `constraints` _[KurrentDBConstraints][r3]_ | No | Scheduling constraints for the Kurrent DB pod. | + +[r1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcerequirements-v1-core +[r2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaimspec-v1-core +[r3]: #kurrentdbconstraints #### KurrentDBConstraints -| Field | Required | Description | -|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------| -| `nodeSelector` _yaml_ | No | Identifies nodes that the Kurrent DB may consider during scheduling. | -| `affinity` _[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#affinity-v1-core)_ | No | The node affinity, pod affinity, and pod anti-affinity for scheduling the Kurrent DB pod. | -| `tolerations` _list of [Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core)_ | No | The tolerations for scheduling the Kurrent DB pod. | -| `topologySpreadConstraints` _list of [TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#topologyspreadconstraint-v1-core)_ | No | The topology spread constraints for scheduling the Kurrent DB pod. | +| Field | Required | Description | +|----------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------| +| `nodeSelector` _yaml_ | No | Identifies nodes that the Kurrent DB may consider during scheduling. | +| `affinity` _[Affinity][c1]_ | No | The node affinity, pod affinity, and pod anti-affinity for scheduling the Kurrent DB pod. | +| `tolerations` _list of [Toleration][c2]_ | No | The tolerations for scheduling the Kurrent DB pod. | +| `topologySpreadConstraints` _list of [TopologySpreadConstraint][c3]_ | No | The topology spread constraints for scheduling the Kurrent DB pod. | + +[c1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#affinity-v1-core +[c2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core +[c3]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#topologyspreadconstraint-v1-core #### KurrentDBExtraMetadataSpec -| Field | Required | Description | -|------------------------------------------------------------------|----------|---------------------------------------------------------------------| -| All _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for all child resource types. | -| ConfigMaps _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for ConfigMaps. | -| StatefulSets _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for StatefulSets. | -| Pods _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for Pods. | -| PersistentVolumeClaims _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for PersistentVolumeClaims. 
| -| HeadlessServices _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for the per-cluster headless Services. | -| HeadlessPodServices _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for the per-pod headless Services. | -| LoadBalancers _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for LoadBalancer-type Services. | +| Field | Required | Description | +|----------------------------------------------------|----------|---------------------------------------------------------------------| +| `all` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for all child resource types. | +| `configMaps` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for ConfigMaps. | +| `statefulSets` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for StatefulSets. | +| `pods` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for Pods. | +| `persistentVolumeClaims` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for PersistentVolumeClaims. | +| `headlessServices` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for the per-cluster headless Services. | +| `headlessPodServices` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for the per-pod headless Services. | +| `loadBalancers` _[ExtraMetadataSpec][m1]_ | No | Extra annotations and labels for LoadBalancer-type Services. | + +[m1]: #extrametadataspec + +Note that select kinds of extra metadata support template expansion to allow multiple instances of +a child resource to be distinguished from one another. In particular, `ConfigMaps`, `StatefulSets`, +and `HeadlessServices` support "per-node-kind" template expansions: +- `{name}` expands to KurrentDB.metadata.name +- `{namespace}` expands to KurretnDB.metadata.namespace +- `{domain}` expands to the KurrnetDBNetwork.domain +- `{nodeTypeSuffix}` expands to `""` for a primary node or `"-replica"` for a replica node + +Additionally, `HeadlessPodServices` and `LoadBalancers` support "per-pod" template expansions: +- `{name}` expands to KurrentDB.metadata.name +- `{namespace}` expands to KurretnDB.metadata.namespace +- `{domain}` expands to the KurrnetDBNetwork.domain +- `{nodeTypeSuffix}` expands to `""` for a primary node or `"-replica"` for a replica node +- `{podName}` expands to the name of the pod corresponding to the resource +- `{podOrdinal}` the ordinal assigned to the pod corresponding to the resource + +Notably, `Pods` and `PersistentVolumeClaims` do not support any template expansions, due to how +`StatefulSets` work. #### ExtraMetadataSpec -| Field | Required | Description -|-----------------------|-----------|-----------------------------------| -| Labels _object_ | No | Extra labels for a resource. | -| Annotations _object_ | No | Extra annotations for a resource. | +| Field | Required | Description +|-------------------------|-----------|-----------------------------------| +| `labels` _object_ | No | Extra labels for a resource. | +| `annotations` _object_ | No | Extra annotations for a resource. | #### KurrentDBNetwork -| Field | Required | Description | -|------------------------------------------------------------------|----------|----------------------------------------------------------------------------------| -| `domain` _string_ | Yes | Domain used for external DNS e.g. 
advertised address exposed in the gossip state | -| `loadBalancer` _[KurrentDBLoadBalancer](#kurrentdbloadbalancer)_ | Yes | Defines a load balancer to use with the database | -| `fqdnTemplate` _string_ | No | The template string used to define the external advertised address of a node | +| Field | Required | Description | +|----------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------| +| `domain` _string_ | Yes | Domain used for external DNS e.g. advertised address exposed in the gossip state | +| `loadBalancer` _[KurrentDBLoadBalancer][n1]_ | Yes | Defines a load balancer to use with the database | +| `fqdnTemplate` _string_ | No | The template string used to define the external advertised address of a node | +| `internodeTrafficStrategy` _string_ | No | How servers dial each other. One of `"ServiceName"` (default), `"FQDN"`, or `"SplitDNS"`. See [details][n2]. | +| `clientTrafficStrategy` _string_ | No | How clients dial servers. One of `"ServiceName"` or `"FQDN"` (default). See [details][n2]. | +| `splitDNSExtraRules` _list of [DNSRule][n3]_ | No | Advanced configuration for when `internodeTrafficStrategy` is set to `"SplitDNS"`. | + +[n1]: #kurrentdbloadbalancer +[n2]: ../operations/advanced-networking.md#traffic-strategy-options +[n3]: #dnsrule Note that `fqdnTemplate` supports the following expansions: - `{name}` expands to KurrentDB.metadata.name - `{namespace}` expands to KurretnDB.metadata.namespace - `{domain}` expands to the KurrnetDBNetwork.domain -- `{podName}` expands to the name of the pod - `{nodeTypeSuffix}` expands to `""` for a primary node or `"-replica"` for a replica node +- `{podName}` expands to the name of the pod When `fqdnTemplate` is empty, it defaults to `{podName}.{name}{nodeTypeSuffix}.{domain}`. +#### DNSRule + +| Field | Required | Description | +|--------------------|----------|----------------------------------------------------------------------------------------| +| `host` _string_ | Yes | A host name that should be intercepted. | +| `result` _string_ | Yes | An IP address to return, or another hostname to look up for the final IP address. | +| `regex` _boolean_ | No | Whether `host` and `result` should be treated as regex patterns. Defaults to `false`. | + +Note that when `regex` is `true`, the regex support is provided by the [go standard regex library]( +https://pkg.go.dev/regexp/syntax), and [referencing captured groups]( +https://pkg.go.dev/regexp#Regexp.Expand) differs from some other regex implementations. For +example, to redirect lookups matching the pattern + + .my-db.my-namespace.svc.cluster.local + +to + + .my-domain.com + +you could use the following dns rule: + +```yaml +host: ([a-z0-9-]*)\.my-db\.my-namespace\.svc\.cluster\.local +result: ${1}.my-domain.com +regex: true +``` + #### KurrentDBLoadBalancer | Field | Required | Description | @@ -121,12 +198,20 @@ Resources of this type must be created within the same namespace as the target d ### API -| Field | Required | Description | -|-----------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------| -| `clusterName` _string_ | Yes | Name of the source database cluster | -| `nodeName` _string_ | No | Specific node name within the database cluster to use as the backup. 
If this is not specified, the leader will be picked as the source. | -| `volumeSnapshotClassName` _string_ | Yes | The name of the underlying volume snapshot class to use. | -| `extraMetadata` _[KurrentDBBackupExtraMetadataSpec](#kurrentdbbackupextrametadataspec)_ | No | Additional annotations and labels for child resources. | +#### KurrentDBBackupSpec + +| Field | Required | Description | +|----------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------| +| `clusterName` _string_ | Yes | Name of the source database cluster | +| `nodeName` _string_ | No | Specific node name within the database cluster to use as the backup. If unspecified, the leader is used. | +| `volumeSnapshotClassName` _string_ | Yes | The name of the underlying volume snapshot class to use. | +| `extraMetadata` _[KurrentDBBackupExtraMetadataSpec][b1]_ | No | Additional annotations and labels for child resources. | +| `ttl` _string_ | No | A time-to-live for this backup. If unspecified, the TTL is treated as infinite. | + +[b1]: #kurrentdbbackupextrametadataspec + +The `ttl` may be expressed in years (`y`), weeks (`w`), days (`d`), hours (`h`), or seconds +(`s`), or as a combination such as `1d12h`. #### KurrentDBBackupExtraMetadataSpec @@ -135,9 +220,30 @@ Resources of this type must be created within the same namespace as the target d | All _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for all child resource types (currently only VolumeSnapshots). | | VolumeSnapshots _[ExtraMetadataSpec](#extrametadataspec)_ | No | Extra annotations and labels for VolumeSnapshots. | +## KurrentDBBackupSchedule + +This resource type is used to define a schedule for creating database backups, along with a retention policy. + +#### KurrentDBBackupScheduleSpec + +| Field | Required | Description | +|------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------| +| `schedule` _string_ | Yes | A CronJob-style schedule. See [Writing a CronJob Spec][s2]. | +| `timeZone` _string_ | No | A timezone specification. Defaults to `Etc/UTC`. | +| `template` _[KurrentDBBackup][s1]_ | Yes | A `KurrentDBBackup` template. | +| `keep` _integer_ | No | The maximum number of complete backups this schedule will accumulate before it prunes the oldest ones. If unset, there is no limit. | +| `suspend` _boolean_ | No | Whether to suspend the creation of new backups from this schedule. Defaults to `false`. | + +[s1]: #kurrentdbbackupspec +[s2]: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#writing-a-cronjob-spec + +Note that the only metadata allowed in `template.metadata` is `name`, `labels`, and `annotations`. +If `name` is provided, it will be extended with an index like `my-name-1` when creating backups, +otherwise created backups will be based on the name of the schedule resource. + ## Configuring Kurrent DB -The [`KurrentDB.spec.configuration` yaml field](#kurrentdb) may contain any valid configuration values for Kurrent +The [`KurrentDB.spec.configuration` yaml field](#kurrentdbspec) may contain any valid configuration values for Kurrent DB. However, some values may be unnecessary, as the Operator provides some defaults, while other values may be ignored, as the Operator may override them.
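As a concrete illustration, the following is a minimal sketch of a `KurrentDB` resource that passes extra settings through `spec.configuration`. The keys shown under `configuration` are assumptions for illustration only; consult the KurrentDB server documentation for valid settings, and note that the Operator-managed values listed below still take precedence.

```yaml
# A minimal sketch only. The keys under `configuration` are illustrative and
# assumed; any Operator-managed settings will override what is placed here.
apiVersion: kubernetes.kurrent.io/v1
kind: KurrentDB
metadata:
  name: kurrentdb-cluster
  namespace: kurrent
spec:
  replicas: 1
  image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
  storage:
    volumeMode: "Filesystem"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 512Mi
  network:
    domain: kurrent.test
    loadBalancer:
      enabled: true
  configuration:
    # Hypothetical KurrentDB options, passed through to the node configuration.
    RunProjections: All
    EnableAtomPubOverHttp: true
```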
@@ -162,8 +268,8 @@ The Operator-defined default configuration values, which may be overridden by th The Operator-managed configuration values, which take precedence over the user's `KurrentDB.spec.configuration`, are: - - + + | Managed Field | Value | |------------------------------| -------------------------------------------------------------|
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/advanced-networking.md b/docs/server/kubernetes-operator/v1.4.0/operations/advanced-networking.md new file mode 100644 index 000000000..b52f843e0 --- /dev/null +++ b/docs/server/kubernetes-operator/v1.4.0/operations/advanced-networking.md @@ -0,0 +1,236 @@ +--- +title: Advanced Networking +order: 5 +--- + +KurrentDB is a clustered database, and all official KurrentDB clients are cluster-aware. As a +result, there are times when a client will find out from one server how to connect to another +server. To make this work, each server advertises how clients and other servers should contact it. + +The Operator lets you customize these advertisements. Such customizations are influenced by your +cluster topology, where your KurrentDB clients will run, and also your security posture. This page +will help you select the right networking and security configurations for your needs. + +## Configuration Options + +This document is intended to help you pick appropriate traffic strategies and certificate options for +your situation. Let us first examine the range of possible settings for each. + +### Traffic Strategy Options + +Servers advertise how they should be dialed by other servers according to the +`KurrentDB.spec.network.internodeTrafficStrategy` setting, which is one of: + +* `"ServiceName"` (default): servers use each other's Kubernetes service name to contact each other. + +* `"FQDN"`: servers use each other's fully-qualified domain name (FQDN) to contact each other. + +* `"SplitDNS"`: servers advertise FQDNs to each other, but a tiny sidecar DNS resolver in each + server pod intercepts the lookup of FQDNs for local pods and returns their actual pod IP address + instead (the same IP address returned by the `"ServiceName"` setting). + +Servers advertise how they should be dialed by clients according to the +`KurrentDB.spec.network.clientTrafficStrategy` setting, which is one of: + +* `"ServiceName"`: clients dial servers using the server's Kubernetes service + name. + +* `"FQDN"` (default): clients dial servers using the server's FQDN. + +Note that the `"SplitDNS"` setting is not an option for the `clientTrafficStrategy`, simply because +the KurrentDB Operator does not deploy your clients and so cannot inject a DNS sidecar container +into your client pods. However, it is possible to write a [CoreDNS rewrite rule][rr] to +accomplish a similar effect to `"SplitDNS"` but for client-to-server traffic. + +[rr]: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/ + +### Certificate Options + +Except for test deployments, you always want to provide TLS certificates to your KurrentDB +deployments. The reason is that insecure deployments disable not only TLS, but also all +authentication and authorization features of the database. + +There are three basic options for how to obtain certificates: + +* Use self-signed certs: you can put any name in your self-signed certs, including Kubernetes + service names, which enables `"ServiceName"` traffic strategies.
A common technique is to use + [cert-manager][cm] to manage the self-signed certificates and to use [trust-manager][tm] to + distribute trust of those self-signed certificates to clients. + +* Use a publicly-trusted certificate provider: you can only put FQDNs on your certificate, which + limits your traffic strategies to FQDN-based connections (`"FQDN"` or `"SplitDNS"`). + +* Use both: self-signed certs on the servers, plus an Ingress using certificates from a public + certificate provider and configured for TLS termination. Note that at this time, the Operator + does not assist with the creation of Ingresses. + +[cm]: https://cert-manager.io/ +[tm]: https://cert-manager.io/docs/trust/trust-manager/ + +## Considerations + +Now let us consider a few different aspects of your situation to help guide the selection of +options. + +### What are your security requirements? + +The choice of certificate provider has a security aspect to it. The KurrentDB servers use the +certificate to authenticate each other, so anybody who has read access to the certificate, or who can +produce a matching trusted certificate, can impersonate another server and obtain full access to +the database. + +The obvious implication of this is that access to the Kubernetes Secrets which contain server +certificates should be limited to those who are authorized to administer the database. + +It may be less obvious that, if control of your domain's DNS configuration is shared by many +business units in your organization, self-signed certificates with an +`internodeTrafficStrategy` of `"ServiceName"` may provide the tightest control over database access. + +So your security posture may require that you choose one of: + +* self-signed certs and `"ServiceName"` traffic strategies, if all your clients are inside the + Kubernetes cluster + +* self-signed certs on servers with `internodeTrafficStrategy` of `"ServiceName"` plus Ingresses + configured with publicly-trusted certificate providers and `clientTrafficStrategy` of `"FQDN"` + +### Where will your KurrentDB servers run? + +If any servers are not in the same Kubernetes cluster, for instance, if you are using the +[standalone read-only-replica feature]( +database-deployment.md#deploying-standalone-read-only-replicas) to run a read-only replica in a +different Kubernetes cluster from the quorum nodes, then you will need to pick from a few options to +ensure internode connectivity: + +* `internodeTrafficStrategy` of `"SplitDNS"`, so every server connects to others by their FQDN, but + when a connection begins to another pod in the same cluster, the SplitDNS feature will direct the + traffic along direct pod-to-pod network interfaces. This solution assumes FQDNs on certificates, + which enables you to use publicly trusted certificate authorities to generate certificates for + each cluster, which can also ease certificate management. + +* `internodeTrafficStrategy` of `"ServiceName"`, plus manually-created [ExternalName Services][ens] + in each Kubernetes cluster for each server in the other cluster. This solution requires + self-signed certificates, and also that the certificates on servers in both clusters are signed by + the same self-signed Certificate Authority. + +[ens]: https://kubernetes.io/docs/concepts/services-networking/service/#externalname + +### Where will your KurrentDB clients run? + +If any of your KurrentDB clients will run outside of Kubernetes, your `clientTrafficStrategy` must +be `"FQDN"` to ensure connectivity.
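As a rough sketch of that case, assuming a placeholder domain of `kurrent.test` and the fields documented in [KurrentDBNetwork](../getting-started/resource-types.md#kurrentdbnetwork), such a cluster might be configured as follows; security settings are omitted here for brevity, and complete resources are shown in [Example Deployments](database-deployment.md):

```yaml
# A sketch only: the domain, image, and storage values are placeholders, and
# the security block needed for production use is omitted.
apiVersion: kubernetes.kurrent.io/v1
kind: KurrentDB
metadata:
  name: kurrentdb-cluster
  namespace: kurrent
spec:
  replicas: 3
  image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
  storage:
    volumeMode: "Filesystem"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 512Mi
  network:
    domain: kurrent.test
    loadBalancer:
      enabled: true                      # expose each node so external clients can reach it
    fqdnTemplate: '{podName}.{domain}'
    internodeTrafficStrategy: SplitDNS   # servers still reach local pods directly
    clientTrafficStrategy: FQDN          # clients outside Kubernetes dial the advertised FQDN
```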
+ +If your KurrentDB clients are all within Kubernetes, but spread through more than one Kubernetes +cluster, you may use one of: + +* `clientTrafficStrategy` of `"FQDN"`. + +* `clientTrafficStrategy` of `"ServiceName"` plus manually-created [ExternalName Services][ens] in + each Kubernetes cluster for each server in the other cluster(s), as described above. + +### How bad are hairpin traffic patterns for your deployment? + +Hairpin traffic patterns occur when a pod inside a Kubernetes cluster connects to another pod in the +same Kubernetes cluster through its public IP address rather than its pod IP address. The traffic +moves outside of Kubernetes to the public IP and then makes a "hairpin" turn back into the cluster. + +For example, with `clientTrafficStrategy` of `"FQDN"`, clients connecting to a server inside the +same cluster will not automatically connect directly to the server pod, even though they are both +inside the Kubernetes cluster and that would be the most direct possible connection. + +Hairpin traffic patterns are never desirable, but they are not always harmful. You will need to evaluate +the impact in your own environment. Consider some of the following possibilities: + +* In a cloud environment, sometimes internal traffic is cheaper than traffic through a public IP, + so there could be a financial impact. + +* If the FQDN connects to, for example, an nginx ingress, then pushing Kubernetes-internal traffic + through nginx may either over-burden your nginx instance or it may slow down your traffic + unnecessarily. + +Between servers, hairpin traffic can always be avoided with an `internodeTrafficStrategy` of +`"SplitDNS"`. + +For clients, one solution is to prefer a `clientTrafficStrategy` of `"ServiceName"`, or you may +consider adding a [CoreDNS rewrite rule][rr]. + +## Common Solutions + +With the above considerations in mind, let us consider a few common solutions. + +### Everything in One Kubernetes Cluster + +When all your KurrentDB servers and clients are within a single Kubernetes cluster, life is +easy: + +* Set `internodeTrafficStrategy` to `"ServiceName"`. + +* Set `clientTrafficStrategy` to `"ServiceName"`. + +* Use cert-manager to configure a self-signed certificate for the KurrentDB cluster based on service names. + +* Use trust-manager to configure clients to trust the self-signed certificates. + +This solution provides the highest possible security, avoids hairpin traffic patterns, and leverages +Kubernetes-native tooling to ease the pain of self-signed certificate management. + +### Servers Anywhere, Clients Anywhere + +If using publicly trusted certificates is acceptable (see +[above](#what-are-your-security-requirements)), almost every need can be met with one of the +simplest configurations: + +* Set `internodeTrafficStrategy` to `"SplitDNS"`. + +* Set `clientTrafficStrategy` to `"FQDN"`. + +* Use cert-manager to automatically create certificates through an ACME provider like LetsEncrypt. + +* If clients may be outside of Kubernetes or multiple Kubernetes clusters are in play, set + `KurrentDB.spec.network.loadBalancer.enabled` to `true`, making your servers publicly accessible. + +This solution is still highly secure, provided your domain's DNS management is tightly +controlled. It also supports virtually every server and client topology. Server hairpin traffic +never occurs, and client hairpin traffic, if it is a problem, can be addressed with a +[CoreDNS rewrite rule][rr].
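As a sketch of the kind of [CoreDNS rewrite rule][rr] mentioned above, the fragment below rewrites client lookups of the external FQDNs to the in-cluster Services. It assumes a kubeadm-style CoreDNS ConfigMap named `coredns` in `kube-system`, a placeholder domain of `kurrent.test`, and a cluster named `kurrentdb-cluster` in the `kurrent` namespace; adjust all of these to your environment, and merge the `rewrite` block into your existing Corefile rather than replacing it.

```yaml
# A sketch only: names, namespaces, and domains are placeholders, and the
# surrounding Corefile is heavily abbreviated.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Rewrite queries for the external FQDNs to the in-cluster Services, and
        # rewrite the answer names back so clients see the name they asked for.
        rewrite stop {
            name regex (.*)\.kurrent\.test {1}.kurrentdb-cluster.kurrent.svc.cluster.local
            answer name (.*)\.kurrentdb-cluster\.kurrent\.svc\.cluster\.local {1}.kurrent.test
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```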
+ +### Multiple Kubernetes Clusters and a VPC Peering + +If you want all your KurrentDB resources within private networking for extra security, but also need +to support multiple Kubernetes clusters in different regions, you can set up a VPC Peering between +your clusters and configure your inter-cluster traffic to use it. + +There could be many variants of this solution; we'll describe one based on ServiceNames and one +based on FQDNs. + +#### ServiceName-based Variant + +* Set `internodeTrafficStrategy` to `"ServiceName"`. + +* Set `clientTrafficStrategy` to `"ServiceName"`. + +* Ensure that each server has an IP address in the VPC Peering. + +* In each Kubernetes cluster, manually configure [ExternalName Services][ens] for each server not in + that cluster. ExternalName Services can only redirect to hostnames, not bare IP addresses, so you + may need to ensure that there is a DNS name to resolve each server's IP address in the VPC + Peering. + +* Use self-signed certificates, and make sure to use the same certificate authority to sign + certificates in each cluster. + +#### FQDN-based Variant + +* Set `internodeTrafficStrategy` to `"SplitDNS"`. + +* Set `clientTrafficStrategy` to `"FQDN"`. + +* Ensure that each server has an IP address in the VPC Peering. + +* Ensure that each server's FQDN resolves to the IP address of that server in the VPC peering. + +* If client-to-server hairpin traffic within each Kubernetes cluster is a problem, add a [CoreDNS + rewrite rule][rr] to each cluster to prevent it. + +* Use a publicly-trusted certificate authority to create certificates based on the FQDN. They may + be generated per-Kubernetes cluster independently, since the certificate trust will be automatic. diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md b/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md index 0802e8470..03d548526 100644 --- a/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md +++ b/docs/server/kubernetes-operator/v1.4.0/operations/database-backup.md @@ -23,16 +23,6 @@ In the example above, the backup definition leverages the `ebs-vs` volume snapsh The `KurrentDBBackup` type takes an optional `nodeName`. If left blank, the leader will be derived based on the gossip state of the database cluster. -The example above can be deployed using the following steps: -- Copy the YAML snippet above to a file called `backup.yaml` -- Run the following command: - -```bash -kubectl -n kurrent apply -f backup.yaml -``` - -Once deployed, navigate to the [Viewing Backups](#viewing-backups) section. - ## Backing up a specific node Assuming there is a cluster called `kurrentdb-cluster` that resides in the `kurrent` namespace, the following `KurrentDBBackup` resource can be defined: @@ -50,28 +40,76 @@ spec: In the example above, the backup definition leverages the `ebs-vs` volume snapshot class to perform the underlying volume snapshot. This class name will vary per Kubernetes cluster, please consult with your Kubernetes administrator to determine this value. -The example above can be deployed using the following steps: -- Copy the YAML snippet above to a file called `backup.yaml` -- Run the following command: +## Restoring from a backup + +A `KurrentDB` cluster can be restored from a backup by specifying an additional field `sourceBackup` as part of the cluster definition. 
+ +For example, if an existing `KurrentDBBackup` called `kurrentdb-cluster-backup` exists, the following snippet could be used to restore it: + -```bash -kubectl -n kurrent apply -f backup.yaml +```yaml +apiVersion: kubernetes.kurrent.io/v1 +kind: KurrentDB +metadata: + name: kurrentdb-cluster + namespace: kurrent +spec: + replicas: 1 + image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0 + sourceBackup: kurrentdb-cluster-backup + resources: + requests: + cpu: 1000m + memory: 1Gi + network: + domain: kurrent.test + loadBalancer: + enabled: true ``` -Once deployed, navigate to the [Viewing Backups](#viewing-backups) section. +## Automatically delete backups with a TTL -## Viewing Backups +A TTL can be set on a backup to delete the backup after a certain amount of time has passed since +its creation. For example, to delete the backup 5 days after it was created: -Using the k9s tool, navigate to the namespaces list using the command `:namespaces`, it should show a screen similar to: +```yaml +apiVersion: kubernetes.kurrent.io/v1 +kind: KurrentDBBackup +metadata: + name: kurrentdb-cluster +spec: + volumeSnapshotClassName: ebs-vs + clusterName: kurrentdb-cluster + ttl: 5d ``` -![Namespaces](images/database-backup/namespace-list.png) +## Scheduling Backups -From here, press the `Return` key on the namespace where the `KurrentDBBackup` was created, in the screen above the namespace is `kurrent`. Now enter the k9s command `:kurrentdbbackups` and press the `Return` key. The following screen will show a list of database backups for the selected namespace. +A `KurrentDBBackupSchedule` can be created with a CronJob-like schedule. -![Backup Listing](images/database-backup/backup-list.png) +Schedules also support a `.spec.keep` setting to automatically limit how many backups created by +that schedule are retained. Using a schedule with `.keep` is slightly safer than setting TTLs on the +individual backups: if, for some reason, you can no longer create new +backups, a TTL will keep deleting old backups until none are left, whereas `.keep` leaves your +existing snapshots in place until a new one can be created. -## Periodic Backups +For example, to create a new backup every night at midnight (UTC), and to +keep at most the last 7 such backups at any time, you could create a `KurrentDBBackupSchedule` resource like +this: -You can use [Kubernetes CronJobs]( -https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) for basic periodic backup -functionality. +```yaml +apiVersion: kubernetes.kurrent.io/v1 +kind: KurrentDBBackupSchedule +metadata: + name: my-backup-schedule +spec: + schedule: "0 0 * * *" + timeZone: Etc/UTC + template: + metadata: + name: my-backup + spec: + volumeSnapshotClassName: ebs-vs + clusterName: kurrentdb-cluster + keep: 7 +```
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md b/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md index 4eea0958e..9f2cb889b 100644 --- a/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md +++ b/docs/server/kubernetes-operator/v1.4.0/operations/database-deployment.md @@ -1,81 +1,50 @@ --- -title: Database Deployment +title: Example Deployments order: 1 --- -The sections below detail the different deployment options for KurrentDB. For detailed information on the various properties, visit the [KurrentDB API](../getting-started/resource-types.md#kurrentdb) section. +This page shows various deployment examples of KurrentDB.
Each example assumes that the +Operator has been installed in a way that it can manage KurrentDB resources in at least the +`kurrent` namespace. -## Prerequisites +Each example is designed to illustrate specific techniques: -Before deploying a `KurrentDB` cluster, the following requirements should be met: +* [Single Node Insecure Cluster](#single-node-insecure-cluster) is the "hello world" example + that illustrates the most basic features possible. An insecure cluster should not be used in + production. -* The Operator has been installed as per the [Installation](../getting-started/installation.md) section. -* The following CLI tools are installed and configured to interact with your Kubernetes cluster. This means the tool must be accessible from your shell's `$PATH`, and your `$KUBECONFIG` environment variable must point to the correct Kubernetes configuration file: - * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) - * [k9s](https://k9scli.io/topics/install/) +* [Three Node Insecure Cluster with Two Read-Only Replicas]( + #three-node-insecure-cluster-with-two-read-only-replicas) illustrates how to deploy a clustered + KurrentDB instance and how to add read-only replicas to it. -:::important -With the examples listed in this guide, the Operator is assumed to have been deployed such that it can track the `kurrent` namespace for deployments. -::: +* [Three Node Secure Cluster (using self-signed certificates)]( + #three-node-secure-cluster-using-self-signed-certificates) illustrates how to secure a cluster with + self-signed certificates using cert-manager. -## Single Node Insecure Cluster +* [Three Node Secure Cluster (using LetsEncrypt)]( + #three-node-secure-cluster-using-letsencrypt) illustrates how to secure a cluster with LetsEncrypt. -The following `KurrentDB` resource type defines a single node cluster with the following properties: -- The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster` -- Security is not enabled -- KurrentDB version 25.0.0 will be used -- 1vcpu will be requested as the minimum (upper bound is unlimited) -- 1gb of memory will be used -- 512mb of storage will be allocated for the data disk -- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-0.kurrentdb-cluster.kurrent.test` +* [Deploying Standalone Read-only Replicas](#deploying-standalone-read-only-replicas) illustrates + an advanced topology where a pair of read-only replicas is deployed in a different Kubernetes + cluster than where the quorum nodes are deployed. -```yaml -apiVersion: kubernetes.kurrent.io/v1 -kind: KurrentDB -metadata: - name: kurrentdb-cluster - namespace: kurrent -spec: - replicas: 1 - image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0 - resources: - requests: - cpu: 1000m - memory: 1Gi - storage: - volumeMode: "Filesystem" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 512Mi - network: - domain: kurrentdb-cluster.kurrent.test - loadBalancer: - enabled: true -``` +* [Deploying With Scheduling Constraints](#deploying-with-scheduling-constraints) illustrates how + to deploy a cluster with customized scheduling constraints for the KurrentDB pods.
-This can be deployed using the following steps: -- Copy the YAML snippet above to a file called `cluster.yaml` -- Ensure that the `kurrent` namespace has been created -- Run the following command: - -```bash -kubectl apply -f cluster.yaml -``` +* [Custom Database Configuration](#custom-database-configuration) illustrates how to make direct + changes to the KurrentDB configuration file. -Once deployed, navigate to the [Viewing Deployments](#viewing-deployments) section. +## Single Node Insecure Cluster -## Three Node Insecure Cluster +The following `KurrentDB` resource type defines a single node cluster with the following properties: -The following `KurrentDB` resource type defines a three node cluster with the following properties: - The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster` - Security is not enabled - KurrentDB version 25.0.0 will be used -- 1vcpu will be requested as the minimum (upper bound is unlimited) per node -- 1gb of memory will be used per node -- 512mb of storage will be allocated for the data disk per node -- The KurrentDB instances that are provisioned will be exposed as `kurrentdb-{idx}.kurrentdb-cluster.kurrent.test` +- 1 vCPU will be requested as the minimum (upper bound is unlimited) +- 1 GB of memory will be used +- 512 MB of storage will be allocated for the data disk +- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-0.kurrent.test` ```yaml apiVersion: kubernetes.kurrent.io/v1 @@ -84,7 +53,7 @@ metadata: name: kurrentdb-cluster namespace: kurrent spec: - replicas: 3 + replicas: 1 image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0 resources: requests: @@ -98,36 +67,22 @@ spec: requests: storage: 512Mi network: - domain: kurrentdb-cluster.kurrent.test + domain: kurrent.test loadBalancer: enabled: true + fqdnTemplate: '{podName}.{domain}' ``` -This can be deployed using the following steps: -- Copy the YAML snippet above to a file called `cluster.yaml` -- Ensure that the `kurrent` namespace has been created -- Run the following command: - -```bash -kubectl apply -f cluster.yaml -``` - -Once deployed, navigate to the [Viewing Deployments](#viewing-deployments) section. - ## Three Node Insecure Cluster with Two Read-Only Replicas Note that read-only replicas are only supported by KurrentDB in clustered configurations, that is, -with multiple primary nodes. +with multiple quorum nodes. 
The following `KurrentDB` resource type defines a three node cluster with the following properties: -- The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster` - Security is not enabled -- KurrentDB version 25.0.0 will be used -- 1vcpu will be requested as the minimum (upper bound is unlimited) per node -- 1gb of memory will be used per primary node, but read-only replicas will have 2gb of memory -- 512mb of storage will be allocated for the data disk per node -- The main KurrentDB instances that are provisioned will be exposed as `kurrentdb-{idx}.kurrentdb-cluster.kurrent.test` -- The read-only replicas that are provisioned will be exposed as `kurrentdb-replica-{idx}.kurrentdb-cluster.kurrent.test` +- 1 GB of memory will be used per quorum node, but read-only replicas will have 2 GB of memory +- The quorum nodes will be exposed as `kurrentdb-{idx}.kurrent.test` +- The read-only replicas will be exposed as `kurrentdb-replica-{idx}.kurrent.test` ```yaml apiVersion: kubernetes.kurrent.io/v1 @@ -150,39 +105,22 @@ spec: requests: storage: 512Mi network: - domain: kurrentdb-cluster.kurrent.test + domain: kurrent.test loadBalancer: enabled: true + fqdnTemplate: '{podName}.{domain}' readOnlyReplicas: replicas: 2 - resources: - requests: - cpu: 1000m - memory: 1Gi - -``` - -This can be deployed using the following steps: -- Copy the YAML snippet above to a file called `cluster.yaml` -- Ensure that the `kurrent` namespace has been created -- Run the following command: - -```bash -kubectl apply -f cluster.yaml ``` -Once deployed, navigate to the [Viewing Deployments](#viewing-deployments) section. - -## Single Node Secure Cluster (using self-signed certificates) +## Three Node Secure Cluster (using self-signed certificates) -The following `KurrentDB` resource type defines a single node cluster with the following properties: -- The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster` +The following `KurrentDB` resource type defines a three node cluster with the following properties: - Security is enabled using self-signed certificates -- KurrentDB version 25.0.0 will be used -- 1vcpu will be requested as the minimum (upper bound is unlimited) -- 1gb of memory will be used -- 512mb of storage will be allocated for the data disk -- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-cluster-0.kurrentdb-cluster.kurrent.test` +- The KurrentDB servers will be exposed as `kurrentdb-{idx}.kurrent.test` +- Servers will dial each other by Kubernetes service name (`*.kurrent.svc.cluster.local`) +- Clients will dial servers by the FQDN (`*.kurrent.test`) +- The self-signed certificate is valid for both service name and FQDN. 
```yaml apiVersion: cert-manager.io/v1 @@ -206,9 +144,7 @@ spec: - Cloud dnsNames: - '*.kurrentdb-cluster.kurrent.svc.cluster.local' - - '*.kurrentdb-cluster.kurrent.test' - '*.kurrentdb-cluster-replica.kurrent.svc.cluster.local' - - '*.kurrentdb-cluster-replica.kurrent.test' privateKey: algorithm: RSA encoding: PKCS1 @@ -223,7 +159,7 @@ metadata: name: kurrentdb-cluster namespace: kurrent spec: - replicas: 1 + replicas: 3 image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0 resources: requests: @@ -237,10 +173,14 @@ spec: requests: storage: 512Mi network: - domain: kurrentdb-cluster.kurrent.test + domain: kurrent.test loadBalancer: enabled: true + fqdnTemplate: '{podName}.{domain}' + internodeTrafficStrategy: ServiceName + clientTrafficStrategy: ServiceName security: + certificateReservedNodeCommonName: kurrentdb-node certificateAuthoritySecret: name: ca-tls keyName: ca.crt @@ -250,29 +190,18 @@ spec: privateKeyName: tls.key ``` -Before deploying this cluster, ensure that the steps described in section [Using Self-Signed certificates](managing-certificates.md#using-self-signed-certificates) have been followed. +Before deploying this cluster, be sure to follow the steps in [Using Self-Signed Certificates]( +managing-certificates.md#using-self-signed-certificates). -Follow these steps to deploy the cluster: -- Copy the YAML snippet above to a file called `cluster.yaml` -- Ensure that the `kurrent` namespace has been created -- Run the following command: - -```bash -kubectl apply -f cluster.yaml -``` - -Once deployed, navigate to the [Viewing Deployments](#viewing-deployments) section. - -## Three Node Secure Cluster (using self-signed certificates) +## Three Node Secure Cluster (using LetsEncrypt) The following `KurrentDB` resource type defines a three node cluster with the following properties: -- The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster` -- Security is enabled using self-signed certificates -- KurrentDB version 25.0.0 will be used -- 1vcpu will be requested as the minimum (upper bound is unlimited) per node -- 1gb of memory will be used per node -- 512mb of storage will be allocated for the data disk per node -- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-{idx}.kurrentdb-cluster.kurrent.test` +- Security is enabled using certificates from LetsEncrypt +- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-{idx}.kurrent.test` +- The LetsEncrypt certificate is only valid for the FQDN (`*.kurrent.test`) +- Clients will dial servers by FQDN +- Server will dial each other by FQDN but because of the `SplitDNS` feature, they will still connect + via direct pod-to-pod networking, as if they had dialed each other by Kubernetes service name. 
 ```yaml
 apiVersion: cert-manager.io/v1
@@ -288,23 +217,20 @@ spec:
     - server auth
     - digital signature
     - key encipherment
-  commonName: kurrentdb-node
+  commonName: '*.kurrent.test'
   subject:
     organizations:
       - Kurrent
     organizationalUnits:
       - Cloud
   dnsNames:
-    - '*.kurrentdb-cluster.kurrent.svc.cluster.local'
-    - '*.kurrentdb-cluster.kurrent.test'
-    - '*.kurrentdb-cluster-replica.kurrent.svc.cluster.local'
-    - '*.kurrentdb-cluster-replica.kurrent.test'
+    - '*.kurrent.test'
   privateKey:
     algorithm: RSA
     encoding: PKCS1
     size: 2048
   issuerRef:
-    name: ca-issuer
+    name: letsencrypt
     kind: Issuer
 ---
 apiVersion: kubernetes.kurrent.io/v1
@@ -313,7 +239,7 @@ metadata:
   name: kurrentdb-cluster
   namespace: kurrent
 spec:
-  replicas: 3
+  replicas: 3
   image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
   resources:
     requests:
@@ -327,68 +253,59 @@ spec:
       requests:
         storage: 512Mi
   network:
-    domain: kurrentdb-cluster.kurrent.test
+    domain: kurrent.test
     loadBalancer:
       enabled: true
+      fqdnTemplate: '{podName}.{domain}'
+    internodeTrafficStrategy: SplitDNS
+    clientTrafficStrategy: FQDN
   security:
-    certificateAuthoritySecret:
-      name: ca-tls
-      keyName: ca.crt
+    certificateReservedNodeCommonName: '*.kurrent.test'
     certificateSecret:
       name: kurrentdb-cluster-tls
       keyName: tls.crt
       privateKeyName: tls.key
 ```
-Before deploying this cluster, ensure that the steps described in section [Using Self-Signed certificates](managing-certificates.md#using-self-signed-certificates) have been followed.
+Before deploying this cluster, be sure to follow the steps in [Using LetsEncrypt Certificates](
+managing-certificates.md#using-trusted-certificates-via-letsencrypt).
-Follow these steps to deploy the cluster:
-- Copy the YAML snippet above to a file called `cluster.yaml`
-- Ensure that the `kurrent` namespace has been created
-- Run the following command:
+## Deploying Standalone Read-Only Replicas
-```bash
-kubectl apply -f cluster.yaml
-```
-
-Once deployed, navigate to the [Viewing Deployments](#viewing-deployments) section.
+This example illustrates an advanced topology where a pair of read-only replicas is deployed in a
+different Kubernetes cluster than where the quorum nodes are deployed.
-## Single Node Secure Cluster (using LetsEncrypt)
+We make the following assumptions:
+- LetsEncrypt certificates are used everywhere, to ease certificate management
+- LoadBalancers are enabled to ensure each node is accessible through its FQDN
+- `internodeTrafficStrategy` is `"SplitDNS"` to avoid hairpin traffic patterns between servers
+- the quorum nodes will have `-qn` suffixes in their FQDN while the read-only replicas will have
+  `-rr` suffixes
-The following `KurrentDB` resource type defines a single node cluster with the following properties:
-- The database will be deployed in the `kurrent` namespace with the name `kurrentdb-cluster`
-- Security is enabled using certificates from LetsEncrypt
-- KurrentDB version 25.0.0 will be used
-- 1vcpu will be requested as the minimum (upper bound is unlimited)
-- 1gb of memory will be used
-- 512mb of storage will be allocated for the data disk
-- The KurrentDB instance that is provisioned will be exposed as `kurrentdb-cluster-0.kurrentdb-cluster.kurrent.test`
+This `Certificate` should be deployed in **both** clusters:
 ```yaml
 apiVersion: cert-manager.io/v1
 kind: Certificate
 metadata:
-  name: kurrentdb-cluster
+  name: mydb
   namespace: kurrent
 spec:
-  secretName: kurrentdb-cluster-tls
+  secretName: mydb-tls
   isCA: false
   usages:
     - client auth
     - server auth
     - digital signature
     - key encipherment
-  commonName: kurrentdb-node
+  commonName: '*.kurrent.test'
   subject:
     organizations:
       - Kurrent
     organizationalUnits:
       - Cloud
   dnsNames:
-    - '*.kurrentdb-cluster.kurrent.svc.cluster.local'
-    - '*.kurrentdb-cluster.kurrent.test'
-    - '*.kurrentdb-cluster-replica.kurrent.svc.cluster.local'
-    - '*.kurrentdb-cluster-replica.kurrent.test'
+    - '*.kurrent.test'
   privateKey:
     algorithm: RSA
     encoding: PKCS1
@@ -396,14 +313,18 @@ spec:
   issuerRef:
     name: letsencrypt
     kind: Issuer
----
+```
+
+This `KurrentDB` resource defines the quorum nodes in one cluster:
+
+```yaml
 apiVersion: kubernetes.kurrent.io/v1
 kind: KurrentDB
 metadata:
-  name: kurrentdb-cluster
+  name: mydb
   namespace: kurrent
 spec:
-  replicas: 1
+  replicas: 3
   image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
   resources:
     requests:
@@ -417,36 +338,68 @@ spec:
       requests:
         storage: 512Mi
   network:
-    domain: kurrentdb-cluster.kurrent.test
+    domain: kurrent.test
     loadBalancer:
       enabled: true
+      fqdnTemplate: '{podName}-qn.{domain}'
+    internodeTrafficStrategy: SplitDNS
+    clientTrafficStrategy: FQDN
   security:
+    certificateReservedNodeCommonName: '*.kurrent.test'
     certificateSecret:
-      name: kurrentdb-cluster-tls
+      name: mydb-tls
       keyName: tls.crt
       privateKeyName: tls.key
 ```
-Before deploying this cluster, ensure that the steps described in section [Using LetsEncrypt certificates](managing-certificates.md#using-trusted-certificates-via-letsencrypt) have been followed.
+And this `KurrentDB` resource defines the standalone read-only replica in another cluster. Notice
+that:
-
+- `.replicas` is 0, but `.quorumNodes` is set instead
+- `.readOnlyReplicas.replicas` is set
+- `fqdnTemplate` differs slightly from above
-```bash
-kubectl apply -f cluster.yaml
+```yaml
+apiVersion: kubernetes.kurrent.io/v1
+kind: KurrentDB
+metadata:
+  name: mydb
+  namespace: kurrent
+spec:
+  replicas: 0
+  quorumNodes:
+    - mydb-0-qn.kurrent.test:2113
+    - mydb-1-qn.kurrent.test:2113
+    - mydb-2-qn.kurrent.test:2113
+  readOnlyReplicas:
+    - replicas: 2
+      image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
+      resources:
+        requests:
+          cpu: 1000m
+          memory: 1Gi
+      storage:
+        volumeMode: "Filesystem"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 512Mi
+      network:
+        domain: kurrent.test
+        loadBalancer:
+          enabled: true
+          fqdnTemplate: '{podName}-rr.{domain}'
+        internodeTrafficStrategy: SplitDNS
+        clientTrafficStrategy: FQDN
+      security:
+        certificateReservedNodeCommonName: '*.kurrent.test'
+        certificateSecret:
+          name: mydb-tls
+          keyName: tls.crt
+          privateKeyName: tls.key
 ```
-## Three Node Secure Cluster (using LetsEncrypt)
-
-Using LetsEncrypt, or any publicly trusted certificate, in an operator-managed KurrentDB cluster
-is not supported.
-
-The recommended workaround is to combine [self-signed certificates within the cluster](
-#three-node-secure-cluster-using-self-signed-certificates) with an Ingress that does TLS
-termination using the LetsEncrypt certificate.
-
 ## Deploying With Scheduling Constraints
 The pods created for a KurrentDB resource can be configured with any of the constraints commonly applied to pods:
@@ -483,9 +436,10 @@ spec:
       requests:
         storage: 512Mi
   network:
-    domain: kurrentdb-cluster.kurrent.test
+    domain: kurrent.test
     loadBalancer:
       enabled: true
+      fqdnTemplate: '{podName}.{domain}'
   constraints:
     nodeSelector:
       machine-size: large
@@ -497,51 +451,18 @@ spec:
             app.kubernetes.io/part-of: kurrentdb-operator
             app.kubernetes.io/name: my-kurrentdb-cluster
         whenUnsatisfiable: DoNotSchedule
-
 ```
 If no scheduling constraints are configured, the operator sets a default soft constraint
 configuring pod anti-affinity such that multiple replicas will prefer to run on different nodes,
 for better fault tolerance.
-## Viewing Deployments
-
-Using the k9s tool, navigate to the namespaces list using the command `:namespaces`, it should show a screen similar to:
-![Namespaces](images/database-deployment/namespace-list.png)
-
-From here, press the `Return` key on the namespace where `KurrentDB` was deployed. In the screen above the namespace is `kurrent`. Now enter the k9s command `:kurrentdbs` and press the `Return` key. The following screen will show a list of deployed databases for the selected namespace, as shown below:
-
-![Databases](images/database-deployment/database-list.png)
-
-Summary information is shown on this screen. For more information press the `d` key on the selected database. The following screen will provide additional information about the deployment:
-
-![Database Details](images/database-deployment/db-decribe.png)
-
-Scrolling further will also show the events related to the deployment, such as:
-
-- transitions between states
-- gossip endpoint
-- leader details
-- database version
-
-## Accessing Deployments
-
-### External
-
-The Operator will create services of type `LoadBalancer` to access a KurrentDB cluster (for each node) when the `spec.network.loadBalancer.enabled` flag is set to `true`.
-
-Each service is annotated with `external-dns.alpha.kubernetes.io/hostname: {external cluster endpoint}` to allow the third-party tool [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) to configure external access.
-
-### Internal
-
-The Operator will create headless services to access a KurrentDB cluster internally. This includes:
-- One for the underlying statefulset (selects all pods)
-- One per pod in the statefulset to support `Ingress` rules that require one target endpoint
-
 ## Custom Database Configuration
-If custom parameters are required in the underlying database configuration then these can be specified using the `configuration` YAML block within a `KurrentDB`. Note, these values will be passed through as-is.
+If custom parameters are required in the underlying database configuration, then these can be
+specified using the `configuration` YAML block within a `KurrentDB`. The parameters which are
+defaulted or overridden by the operator are listed [in the CRD reference](
+../getting-started/resource-types.md#configuring-kurrent-db).
 For example, to enable projections, the deployment configuration looks as follows:
@@ -569,7 +490,23 @@ spec:
       requests:
         storage: 512Mi
   network:
-    domain: kurrentdb-cluster.kurrent.test
+    domain: kurrent.test
     loadBalancer:
       enabled: true
+      fqdnTemplate: '{podName}.{domain}'
 ```
+
+## Accessing Deployments
+
+### External
+
+The Operator will create one service of type `LoadBalancer` per KurrentDB node when the
+`spec.network.loadBalancer.enabled` flag is set to `true`.
+
+Each service is annotated with `external-dns.alpha.kubernetes.io/hostname: {external cluster endpoint}` to allow the third-party tool [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) to configure external access.
+
+### Internal
+
+The Operator will create headless services to access a KurrentDB cluster internally. This includes:
+- One for the underlying statefulset (selects all pods)
+- One per pod in the statefulset to support `Ingress` rules that require one target endpoint
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/database-restore.md b/docs/server/kubernetes-operator/v1.4.0/operations/database-restore.md
deleted file mode 100644
index 50c8aaf58..000000000
--- a/docs/server/kubernetes-operator/v1.4.0/operations/database-restore.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Database Restore
-order: 4
----
-The sections below detail how a database restore can be performed. Refer to the [KurrentDB API](../getting-started/resource-types.md#kurrentdb) for detailed information.
-## Restoring from a backup
-A `KurrentDB` cluster can be restored from a backup by specifying an additional field `sourceBackup` as part of the cluster definition.
-
-For example, if an existing `KurrentDBBackup` exists called `kurrentdb-cluster-backup`, the following snippet could be used to restore it:
-
-
-```yaml
-apiVersion: kubernetes.kurrent.io/v1
-kind: KurrentDB
-metadata:
-  name: kurrentdb-cluster
-  namespace: kurrent
-spec:
-  replicas: 1
-  image: docker.kurrent.io/kurrent-latest/kurrentdb:25.0.0
-  sourceBackup: kurrentdb-cluster-backup
-  resources:
-    requests:
-      cpu: 1000m
-      memory: 1Gi
-  network:
-    domain: kurrentdb-cluster.kurrent.test
-    loadBalancer:
-      enabled: true
-```
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-backup/backup-list.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/database-backup/backup-list.png
deleted file mode 100644
index 1d7e706f5..000000000
Binary files a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-backup/backup-list.png and /dev/null differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-backup/namespace-list.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/database-backup/namespace-list.png
deleted file mode 100644
index 75b948c65..000000000
Binary files a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-backup/namespace-list.png and /dev/null differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/database-list.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/database-list.png
deleted file mode 100644
index 6b6ba34a9..000000000
Binary files a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/database-list.png and /dev/null differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/db-decribe.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/db-decribe.png
deleted file mode 100644
index f57b3dfbd..000000000
Binary files a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/db-decribe.png and /dev/null differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/namespace-list.png b/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/namespace-list.png
deleted file mode 100644
index 75b948c65..000000000
Binary files a/docs/server/kubernetes-operator/v1.4.0/operations/images/database-deployment/namespace-list.png and /dev/null differ
diff --git a/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md b/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md
index 3c8dda20d..53cf068b1 100644
--- a/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md
+++ b/docs/server/kubernetes-operator/v1.4.0/operations/managing-certificates.md
@@ -1,6 +1,6 @@
 ---
 title: Managing Certificates
-order: 5
+order: 6
 ---
 
 The Operator expects consumers to leverage a thirdparty tool to generate TLS certificates that can be wired in to [KurrentDB](../getting-started/resource-types.md#kurrentdb) deployments using secrets. The sections below describe how certificates can be generated using popular vendors.
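The examples on this page reference cert-manager issuers such as `ca-issuer` (self-signed) and `letsencrypt`. As one illustration of what that third-party tooling can look like, a CA-backed issuer could be bootstrapped with cert-manager roughly as follows. This is only a minimal sketch assuming cert-manager is installed; the `selfsigned-bootstrap` name is hypothetical, and the sections below remain the authoritative instructions:

```yaml
# Minimal sketch (assumes cert-manager is installed); the "selfsigned-bootstrap"
# name is illustrative. It issues a CA certificate into the ca-tls secret and
# exposes it through the ca-issuer Issuer referenced by the examples above.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-bootstrap
  namespace: kurrent
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kurrentdb-ca
  namespace: kurrent
spec:
  isCA: true
  commonName: kurrentdb-ca
  secretName: ca-tls
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-bootstrap
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: kurrent
spec:
  ca:
    secretName: ca-tls
```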