docs(node pools): updates to node pools content to reflect move to beta #9460

Merged 2 commits on Dec 14, 2023
2 changes: 1 addition & 1 deletion documentation/assemblies/configuring/assembly-config.adoc
@@ -24,7 +24,7 @@ Use custom resources to configure and create instances of the following components:
You can also use custom resource configuration to manage your instances or modify your deployment to introduce additional features.
This might include configuration that supports the following:

* (Preview) Specifying node pools
* Specifying node pools
* Securing client access to Kafka brokers
* Accessing Kafka brokers from outside the cluster
* Creating topics
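The configuration points in the list above map onto properties of the `Kafka` custom resource. As an illustrative sketch only (resource names and listener choices are assumptions, not taken from this PR):

[source,yaml,subs="+attributes"]
----
# Sketch: a Kafka resource touching two of the configuration points above --
# a TLS listener securing client access and a nodeport listener for access
# from outside the Kubernetes cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {} # enables declarative topic management through KafkaTopic resources
----

With the Topic Operator enabled, topics are then created as `KafkaTopic` resources rather than through the Kafka admin API.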
@@ -12,27 +12,24 @@ You can use these files to deploy the Topic Operator and User Operator at the same time.

After you have deployed the Cluster Operator, use a `Kafka` resource to deploy the following components:

* xref:deploying-kafka-cluster-{context}[Kafka cluster] or (preview) xref:deploying-kafka-node-pools-{context}[Kafka cluster with node pools]
* A Kafka cluster that uses KRaft or ZooKeeper:
** xref:deploying-kafka-node-pools-{context}[KRaft-based or ZooKeeper-based Kafka cluster with node pools]
** xref:deploying-kafka-cluster-{context}[ZooKeeper-based Kafka cluster without node pools]
* xref:deploying-the-topic-operator-using-the-cluster-operator-{context}[Topic Operator]
* xref:deploying-the-user-operator-using-the-cluster-operator-{context}[User Operator]

When installing Kafka, Strimzi also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper.

If you are trying the preview of the node pools feature, you can deploy a Kafka cluster with one or more node pools.
Node pools provide configuration for a set of Kafka nodes.
By using node pools, nodes can have different configuration within the same Kafka cluster.

Node pools are not enabled by default, so you must xref:ref-operator-kafka-node-pools-feature-gate-{context}[enable the `KafkaNodePools` feature gate] before using them.
By using node pools, nodes can have different configuration within the same Kafka cluster.
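For versions where the gate still has to be switched on manually, enabling it means adding it to the Cluster Operator's `STRIMZI_FEATURE_GATES` environment variable. A sketch of the relevant Deployment fragment (container name assumed):

[source,yaml,subs="+attributes"]
----
# Sketch: fragment of the Cluster Operator Deployment spec
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_FEATURE_GATES
              value: +KafkaNodePools # enables the node pools feature gate
----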

If you haven't deployed a Kafka cluster as a `Kafka` resource, you can't use the Cluster Operator to manage it.
This applies, for example, to a Kafka cluster running outside of Kubernetes.
However, you can use the Topic Operator and User Operator with a Kafka cluster that is *not managed* by Strimzi, by xref:deploy-standalone-operators_{context}[deploying them as standalone components].
You can also deploy and use other Kafka components with a Kafka cluster not managed by Strimzi.

//Deploy Kafka cluster with storage option
include::../../modules/deploying/proc-deploy-kafka-cluster.adoc[leveloffset=+1]
//Deploy Kafka node pools
//Deploy Kafka w/ node pools
include::../../modules/deploying/proc-deploy-kafka-node-pools.adoc[leveloffset=+1]
//Deploy ZooKeeper-based Kafka cluster
include::../../modules/deploying/proc-deploy-kafka-cluster.adoc[leveloffset=+1]
//Include Topic Operator in deployment
include::../../modules/deploying/proc-deploy-topic-operator-with-cluster-operator.adoc[leveloffset=+1]
//Include User Operator in deployment
2 changes: 1 addition & 1 deletion documentation/modules/configuring/con-config-examples.adoc
@@ -45,7 +45,7 @@ examples
<4> `Kafka` custom resource configuration for a deployment of Mirror Maker. Includes example configuration for replication policy and synchronization frequency.
<5> xref:assembly-metrics-config-files-{context}[Metrics configuration], including Prometheus installation and Grafana dashboard files.
<6> `Kafka` custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment.
<7> (Preview) `KafkaNodePool` configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper.
<7> `KafkaNodePool` configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper.
<8> `Kafka` custom resource with a deployment configuration for Cruise Control. Includes `KafkaRebalance` custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals.
<9> `KafkaConnect` and `KafkaConnector` custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment.
<10> `KafkaBridge` custom resource configuration for a deployment of Kafka Bridge.
35 changes: 16 additions & 19 deletions documentation/modules/configuring/con-config-node-pools.adoc
@@ -3,13 +3,11 @@
// assembly-config.adoc

[id='config-node-pools-{context}']
= (Preview) Configuring node pools
= Configuring node pools

[role="_abstract"]
Update the `spec` properties of the `KafkaNodePool` custom resource to configure a node pool deployment.

NOTE: The node pools feature is available as a preview. Node pools are not enabled by default, so you must xref:ref-operator-kafka-node-pools-feature-gate-{context}[enable the `KafkaNodePools` feature gate] before using them.

A node pool refers to a distinct group of Kafka nodes within a Kafka cluster.
Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation.

@@ -37,21 +35,22 @@ IMPORTANT: **KRaft mode is not ready for production in Apache Kafka or in Strimzi.**

For a deeper understanding of the node pool configuration options, refer to the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].

NOTE: While the `KafkaNodePools` feature gate that enables node pools is in alpha phase, replica and storage configuration properties in the `KafkaNodePool` resource must also be present in the `Kafka` resource. The configuration in the `Kafka` resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the `Kafka` resource when using KRaft mode. These properties are also ignored.
NOTE: Currently, replica and storage configuration properties in the `KafkaNodePool` resource must also be present in the `Kafka` resource. The configuration in the `Kafka` resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the `Kafka` resource when using KRaft mode. These properties are also ignored.

.Example configuration for a node pool in a cluster using ZooKeeper
.Example configuration for a node pool in a cluster using KRaft mode
[source,yaml,subs="+attributes"]
----
apiVersion: {KafkaNodePoolApiVersion}
kind: KafkaNodePool
metadata:
name: pool-a # <1>
name: kraft-dual-role # <1>
labels:
strimzi.io/cluster: my-cluster # <2>
spec:
replicas: 3 # <3>
roles:
- broker # <4>
roles: # <4>
- controller
- broker
storage: # <5>
type: jbod
volumes:
@@ -70,30 +69,31 @@ spec:
<1> Unique name for the node pool.
<2> The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster.
<3> Number of replicas for the nodes.
<4> Roles for the nodes in the node pool, which can only be `broker` when using Kafka with ZooKeeper.
<4> Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.
<5> Storage specification for the nodes.
<6> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.

.Example configuration for a node pool in a cluster using KRaft mode
NOTE: The configuration for the `Kafka` resource must be suitable for KRaft mode. Currently, KRaft mode has xref:ref-operator-use-kraft-feature-gate-str[a number of limitations].

.Example configuration for a node pool in a cluster using ZooKeeper
[source,yaml,subs="+attributes"]
----
apiVersion: {KafkaNodePoolApiVersion}
kind: KafkaNodePool
metadata:
name: kraft-dual-role
name: pool-a
labels:
strimzi.io/cluster: my-cluster
spec:
replicas: 3
roles: # <1>
- controller
- broker
roles:
- broker # <1>
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 20Gi
size: 100Gi
deleteClaim: false
resources:
requests:
@@ -103,7 +103,4 @@ spec:
memory: 64Gi
cpu: "12"
----
<1> Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.

NOTE: The configuration for the `Kafka` resource must be suitable for KRaft mode. Currently, KRaft mode has xref:ref-operator-use-kraft-feature-gate-str[a number of limitations].

<1> Roles for the nodes in the node pool, which can only be `broker` when using Kafka with ZooKeeper.
@@ -3,7 +3,7 @@
// assembly-config.adoc

[id='proc-managing-node-pools-ids-{context}']
= (Preview) Assigning IDs to node pools for scaling operations
= Assigning IDs to node pools for scaling operations

[role="_abstract"]
This procedure describes how to use annotations for advanced node ID handling by the Cluster Operator when performing scaling operations on node pools.
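As a sketch of the kind of annotation-based control this procedure covers (the pool name and ID values are illustrative; the procedure steps define the exact annotations and value syntax), the next IDs to use on scale-up can be suggested with an annotation on the `KafkaNodePool` resource:

[source,shell,subs="+attributes"]
kubectl annotate kafkanodepool pool-a strimzi.io/next-node-ids="[10-20]"

And, correspondingly, the IDs to remove first on scale-down:

[source,shell,subs="+attributes"]
kubectl annotate kafkanodepool pool-a strimzi.io/remove-node-ids="[0]"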
@@ -3,7 +3,7 @@
// assembly-config.adoc

[id='proc-managing-storage-affinity-node-pools-{context}']
= (Preview) Managing storage affinity using node pools
= Managing storage affinity using node pools

[role="_abstract"]
In situations where storage resources, such as local persistent volumes, are constrained to specific worker nodes, or availability zones, configuring storage affinity helps to schedule pods to use the right nodes.
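As an illustrative sketch of the idea (the pool name and zone label value are assumptions; the procedure itself defines the exact approach), affinity for a node pool can be expressed through its pod template:

[source,yaml,subs="+attributes"]
----
apiVersion: {KafkaNodePoolApiVersion}
kind: KafkaNodePool
metadata:
  name: pool-zone-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone # schedule pods only where the storage is
                    operator: In
                    values:
                      - zone-a
----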
@@ -3,7 +3,7 @@
// assembly-config.adoc

[id='proc-managing-storage-node-pools-{context}']
= (Preview) Managing storage using node pools
= Managing storage using node pools

[role="_abstract"]
Storage management in Strimzi is usually straightforward, and requires little change when set up, but there might be situations where you need to modify your storage configurations.
@@ -3,13 +3,13 @@
// assembly-config.adoc

[id='proc-migrating-clusters-node-pools-{context}']
= (Preview) Migrating existing Kafka clusters to use Kafka node pools
= Migrating existing Kafka clusters to use Kafka node pools

[role="_abstract"]
This procedure describes how to migrate existing Kafka clusters to use Kafka node pools.
After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool.

NOTE: While the `KafkaNodePools` feature gate that enables node pools is in alpha phase, replica and storage configuration in the `KafkaNodePool` resource must also be present in the `Kafka` resource. The configuration is ignored when node pools are being used.
NOTE: Currently, replica and storage configuration in the `KafkaNodePool` resource must also be present in the `Kafka` resource. The configuration is ignored when node pools are being used.

.Prerequisites

@@ -63,7 +63,7 @@ By applying this resource, you switch Kafka to using node pools.
+
There is no change or rolling update and resources are identical to how they were before.

. Enable the `KafkaNodePools` in the `Kafka` resource using the `strimzi.io/node-pools: enabled` annotation.
. Enable support for node pools in the `Kafka` resource using the `strimzi.io/node-pools: enabled` annotation.
+
.Example configuration for a node pool in a cluster using ZooKeeper
[source,yaml,subs="+attributes"]
@@ -3,7 +3,7 @@
// assembly-config.adoc

[id='proc-moving-node-pools-{context}']
= (Preview) Moving nodes between node pools
= Moving nodes between node pools

[role="_abstract"]
This procedure describes how to move nodes between source and target Kafka node pools without downtime.
@@ -3,7 +3,7 @@
// assembly-config.adoc

[id='proc-scaling-down-node-pools-{context}']
= (Preview) Removing nodes from a node pool
= Removing nodes from a node pool

[role="_abstract"]
This procedure describes how to scale down a node pool to remove nodes.
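In outline (pool name and replica count are illustrative), scaling down uses the scale subresource of the `KafkaNodePool` custom resource:

[source,shell,subs="+attributes"]
kubectl scale kafkanodepool pool-a --replicas=2

The Cluster Operator then reconciles the pool down to the requested number of nodes; the full procedure covers how partitions must be handled before nodes are removed.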
@@ -3,7 +3,7 @@
// assembly-config.adoc

[id='proc-scaling-up-node-pools-{context}']
= (Preview) Adding nodes to a node pool
= Adding nodes to a node pool

[role="_abstract"]
This procedure describes how to scale up a node pool to add new nodes.
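In outline (names illustrative), scaling up increases the replica count on the `KafkaNodePool` resource, for example through its scale subresource:

[source,shell,subs="+attributes"]
kubectl scale kafkanodepool pool-a --replicas=4

The Cluster Operator then creates the additional nodes in the pool.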
@@ -62,7 +62,7 @@ The following resources are created by the Cluster Operator in the Kubernetes cluster:
`data-<kafka_cluster_name>-kafka-<pod_id>`:: Persistent Volume Claim for the volume used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
`data-<id>-<kafka_cluster_name>-kafka-<pod_id>`:: Persistent Volume Claim for the volume `id` used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.

.(Preview) Kafka node pools
.Kafka node pools

If you are using Kafka node pools, the resources created apply to the nodes managed in the node pools whether they are operating as brokers, controllers, or both.
The naming convention includes the name of the Kafka cluster and the node pool: `<kafka_cluster_name>-<pool_name>`.
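A quick illustration of the naming convention (cluster and pool names assumed):

[source,shell,subs="+attributes"]
kubectl get pods -l strimzi.io/cluster=my-cluster

For a cluster named `my-cluster` with a node pool named `pool-a`, the pods listed follow the pattern `my-cluster-pool-a-<pod_id>`, such as `my-cluster-pool-a-0`.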
10 changes: 5 additions & 5 deletions documentation/modules/deploying/proc-deploy-kafka-cluster.adoc
@@ -3,10 +3,10 @@
// deploying/assembly_deploy-kafka-cluster.adoc

[id='deploying-kafka-cluster-{context}']
= Deploying the Kafka cluster
= Deploying a ZooKeeper-based Kafka cluster without node pools

[role="_abstract"]
This procedure shows how to deploy a Kafka cluster to your Kubernetes cluster using the Cluster Operator.
This procedure shows how to deploy a ZooKeeper-based Kafka cluster to your Kubernetes cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a `Kafka` resource.

@@ -65,15 +65,15 @@

.Procedure

. Create and deploy an ephemeral or persistent cluster.
. Deploy a ZooKeeper-based Kafka cluster.
+
--
* To create and deploy an ephemeral cluster:
* To deploy an ephemeral cluster:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/kafka-ephemeral.yaml

* To create and deploy a persistent cluster:
* To deploy a persistent cluster:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/kafka-persistent.yaml
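Whichever variant you apply, you can wait for the deployment to complete before moving on (the cluster name comes from the example files; the timeout value is arbitrary):

[source,shell,subs="+attributes"]
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s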
40 changes: 21 additions & 19 deletions documentation/modules/deploying/proc-deploy-kafka-node-pools.adoc
@@ -3,34 +3,31 @@
// deploying/assembly_deploy-kafka-cluster.adoc

[id='deploying-kafka-node-pools-{context}']
= (Preview) Deploying Kafka node pools
= Deploying a Kafka cluster with node pools

[role="_abstract"]
This procedure shows how to deploy Kafka node pools to your Kubernetes cluster using the Cluster Operator.
This procedure shows how to deploy Kafka with node pools to your Kubernetes cluster using the Cluster Operator.
Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration.
For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the `Kafka` resource.
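As a sketch of that inheritance (names assumed): a pool that sets only the mandatory properties leaves everything else, such as resource requests and limits, to be picked up from the `Kafka` resource.

[source,yaml,subs="+attributes"]
----
# Sketch: only replicas, roles, and storage are defined here;
# remaining node configuration is inherited from the Kafka resource
apiVersion: {KafkaNodePoolApiVersion}
kind: KafkaNodePool
metadata:
  name: minimal-pool
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: ephemeral
----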

NOTE: The node pools feature is available as a preview. Node pools are not enabled by default, so you must xref:ref-operator-kafka-node-pools-feature-gate-{context}[enable the `KafkaNodePools` feature gate] before using them.

The deployment uses a YAML file to provide the specification to create a `KafkaNodePool` resource.
You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management.
To deploy a Kafka cluster in KRaft mode, you must use `KafkaNodePool` resources.

IMPORTANT: **KRaft mode is not ready for production in Apache Kafka or in Strimzi.**

Strimzi provides the following xref:config-examples-{context}[example files] that you can use to create a Kafka node pool:
Strimzi provides the following xref:config-examples-{context}[example files] that you can use to create a Kafka cluster that uses node pools:

`kafka.yaml`:: Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configuration.
`kafka-with-dual-role-kraft-nodes.yaml`:: Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles.
`kafka-with-kraft.yaml`:: Deploys a Kafka cluster with one pool of controller nodes and one pool of broker nodes.
`kafka-with-kraft.yaml`:: Deploys a persistent Kafka cluster with one pool of controller nodes and one pool of broker nodes.
`kafka-with-kraft-ephemeral.yaml`:: Deploys an ephemeral Kafka cluster with one pool of controller nodes and one pool of broker nodes.
`kafka.yaml`:: Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configuration.

NOTE: You don't need to start using node pools right away. If you decide to use them, you can perform the steps outlined here to deploy a new Kafka cluster with `KafkaNodePool` resources or xref:proc-migrating-clusters-node-pools-{context}[migrate your existing Kafka cluster].
NOTE: You can perform the steps outlined here to deploy a new Kafka cluster with `KafkaNodePool` resources or xref:proc-migrating-clusters-node-pools-{context}[migrate your existing Kafka cluster].

.Prerequisites

* xref:deploying-cluster-operator-str[The Cluster Operator must be deployed.]
* You have xref:deploying-kafka-cluster-{context}[created and deployed a Kafka cluster].

NOTE: If you want to migrate an existing Kafka cluster to use node pools, see the xref:proc-migrating-clusters-node-pools-{context}[steps to migrate existing Kafka clusters].
* xref:deploying-cluster-operator-str[The Cluster Operator must be deployed.]

.Procedure

@@ -52,23 +49,28 @@ env
+
This updates the Cluster Operator.

. Create a node pool.
. Deploy a Kafka cluster with node pools.
+
* To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/nodepools/kafka.yaml

* To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml

* To deploy a Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:
* To deploy a persistent Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/nodepools/kafka-with-kraft.yaml

* To deploy an ephemeral Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/nodepools/kafka-with-kraft-ephemeral.yaml

* To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:
+
[source,shell,subs="attributes+"]
kubectl apply -f examples/kafka/nodepools/kafka.yaml

. Check the status of the deployment:
+
[source,shell,subs="+quotes"]
@@ -102,7 +102,7 @@ The `Kafka` custom resource using KRaft mode must also have the annotation `strimzi.io/kraft: enabled`.
If this annotation is set to `disabled` or any other value, or if it is missing, the operator handles the `Kafka` custom resource as if it is using ZooKeeper for cluster management.

[id='ref-operator-kafka-node-pools-feature-gate-{context}']
== (Preview) KafkaNodePools feature gate
== KafkaNodePools feature gate

The `KafkaNodePools` feature gate has a default state of _enabled_.

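Both opt-in annotations discussed in this section sit on the `Kafka` resource itself; as a sketch (cluster name assumed):

[source,yaml,subs="+attributes"]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled # manage nodes through KafkaNodePool resources
    strimzi.io/kraft: enabled # use KRaft rather than ZooKeeper for cluster management
spec:
  # ...
----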
@@ -3,7 +3,7 @@
// overview/assembly-configuration-points.adoc

[id="configuration-points-node_pools_{context}"]
= (Preview) Kafka node pools configuration
= Kafka node pools configuration

[role="_abstract"]
A node pool refers to a distinct group of Kafka nodes within a Kafka cluster.