[DOCS] Fix various formatting issues #1483

Merged: 4 commits on Mar 29, 2019
4 changes: 2 additions & 2 deletions documentation/book/assembly-healthchecks.adoc
@@ -12,7 +12,7 @@

= Healthchecks

Healthchecks are periodical tests which verify that the application's health.
Healthchecks are periodical tests that verify whether the application is working correctly.
When the Healthcheck fails, {ProductPlatformName} can assume that the application is not healthy and attempt to fix it.
{ProductPlatformName} supports two types of Healthcheck probes:

@@ -22,7 +22,7 @@ When the Healthcheck fails, {ProductPlatformName} can assume that the applicatio
For more details about the probes, see {K8sLivenessReadinessProbes}.
Both types of probes are used in {ProductName} components.

Users can configure selected options for liveness and readiness probes
Users can configure selected options for liveness and readiness probes.
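For illustration, probe options might be configured in the `Kafka` resource along these lines; this is a minimal sketch assuming the standard Kubernetes probe fields `initialDelaySeconds` and `timeoutSeconds` are exposed (the exact property names may differ between versions):

[source,yaml,subs=attributes+]
----
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
----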

include::ref-healthchecks.adoc[leveloffset=+1]

13 changes: 6 additions & 7 deletions documentation/book/con-tls-connections.adoc
@@ -8,22 +8,21 @@
== Zookeeper communication

Zookeeper does not support TLS itself.
By deploying an `stunnel` sidecar within every Zookeeper pod, the Cluster Operator is able to provide data encryption and authentication between Zookeeper nodes in a cluster.
Zookeeper communicates only with the `stunnel` sidecar over the loopback interface.
The `stunnel` sidecar then proxies all Zookeeper traffic, TLS decrypting data upon entry into a Zookeeper pod and TLS encrypting data upon departure from a Zookeeper pod.
By deploying a TLS sidecar within every Zookeeper pod, the Cluster Operator is able to provide data encryption and authentication between Zookeeper nodes in a cluster.
Zookeeper only communicates with the TLS sidecar over the loopback interface.
The TLS sidecar then proxies all Zookeeper traffic, TLS decrypting data upon entry into a Zookeeper pod, and TLS encrypting data upon departure from a Zookeeper pod.

This TLS encrypting `stunnel` proxy is instantiated from the `spec.zookeeper.stunnelImage` specified in the Kafka resource.

== Kafka interbroker communication

Communication between Kafka brokers is done through the `REPLICATION` listener on port 9091, which is encrypted by default.

Communication between Kafka brokers and Zookeeper nodes uses an `stunnel` sidecar, as described above.
Communication between Kafka brokers and Zookeeper nodes uses a TLS sidecar, as described above.

== Topic and User Operators

Like the Cluster Operator, the Topic and User Operators each use an `stunnel` sidecar when communicating with Zookeeper.
The Topic Operator connects to Kafka brokers on port 9091.

Like the Cluster Operator, the Topic and User Operators each use a TLS sidecar when communicating with Zookeeper. The Topic Operator connects to Kafka brokers on port 9091.
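As a sketch of how the TLS sidecar might be tuned per component, resource limits could be set under the Zookeeper section of the `Kafka` resource; the `tlsSidecar` property and its fields shown here are assumptions and may vary between versions:

[source,yaml,subs=attributes+]
----
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  zookeeper:
    # ...
    tlsSidecar:
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
----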

== Kafka Client connections

5 changes: 2 additions & 3 deletions documentation/book/proc-changing-kafka-user.adoc
@@ -56,14 +56,13 @@ ifdef::Kubernetes[]
On {KubernetesName} this can be done using `kubectl apply`:
[source,shell,subs=+quotes]
kubectl apply -f _your-file_
endif::Kubernetes[]
+
endif::Kubernetes[]
On {OpenShiftName} this can be done using `oc apply`:
[source,shell,subs=+quotes]
oc apply -f _your-file_
+
. Use the updated credentials from the `my-user` secret in your application.

. Use the updated credentials from the `my-user` secret in your application.

.Additional resources

2 changes: 1 addition & 1 deletion documentation/book/proc-configuring-kafka-listeners.adoc
@@ -13,7 +13,7 @@
.Procedure

. Edit the `listeners` property in the `Kafka.spec.kafka` resource.

+
An example configuration of the plain (unencrypted) listener without authentication:
+
[source,yaml,subs=attributes+]
6 changes: 3 additions & 3 deletions documentation/book/proc-dedicated-nodes.adoc
@@ -12,9 +12,9 @@

.Procedure

. Select the nodes which should be used as dedicated
. Make sure there are no workloads scheduled on these nodes
. Set the taints on the selected nodes
. Select the nodes which should be used as dedicated.
. Make sure there are no workloads scheduled on these nodes.
. Set the taints on the selected nodes:
+
ifdef::Kubernetes[]
On {KubernetesName} this can be done using `kubectl taint`:
6 changes: 3 additions & 3 deletions documentation/book/proc-deleting-a-topic.adoc
@@ -15,7 +15,7 @@ This procedure describes how to delete a Kafka topic using a `KafkaTopic` {Produ

.Procedure

. Delete the `KafkaTopic` resource in {ProductPlatformName}.
* Delete the `KafkaTopic` resource in {ProductPlatformName}.
+
ifdef::Kubernetes[]
On {KubernetesName} this can be done using `kubectl`:
@@ -28,8 +28,8 @@ On {OpenShiftName} this can be done using `oc`:
+
[source,shell,subs=+quotes]
oc delete kafkatopic _your-topic-name_
+
NOTE: Whether the topic can actually be deleted depends on the value of the `delete.topic.enable` Kafka broker configuration, specified in the `Kafka.spec.kafka.config` property.

NOTE: Whether the topic can actually be deleted depends on the value of the `delete.topic.enable` Kafka broker configuration specified in the `Kafka.spec.kafka.config` property.

.Additional resources
* For more information about deploying a Kafka cluster using the Cluster Operator, see xref:cluster-operator-str[].
3 changes: 1 addition & 2 deletions documentation/book/proc-deleting-kafka-user.adoc
@@ -15,7 +15,7 @@ This procedure describes how to delete a Kafka user created with `KafkaUser` {Pr

.Procedure

. Delete the `KafkaUser` resource in {ProductPlatformName}.
* Delete the `KafkaUser` resource in {ProductPlatformName}.
+
ifdef::Kubernetes[]
On {KubernetesName} this can be done using `kubectl`:
@@ -28,7 +28,6 @@ On {OpenShiftName} this can be done using `oc`:
+
[source,shell,subs=+quotes]
oc delete kafkauser _your-user-name_
+

.Additional resources

@@ -13,7 +13,7 @@ include::frag-cluster-operator-namespace-sed.adoc[]

.Procedure

. Deploy the Cluster Operator
* Deploy the Cluster Operator:
+
[source]
----
@@ -26,7 +26,7 @@ sed -i '' 's/namespace: .\*/namespace: _my-project_/' install/cluster-operator/*

.Procedure

. Deploy the Cluster Operator
* Deploy the Cluster Operator:
+
[source]
----
7 changes: 4 additions & 3 deletions documentation/book/proc-manual-delete-pod-pvc-kafka.adoc
@@ -19,7 +19,7 @@ WARNING: Deleting a `PersistentVolumeClaim` can cause permanent data loss. The f
.Procedure

. Find the name of the `Pod` that you want to delete.

+
For example, if the cluster is named _cluster-name_, the pods are named _cluster-name_-kafka-_index_, where _index_ starts at zero and ends at the total number of replicas.

. Annotate the `Pod` resource in {ProductPlatformName}.
@@ -28,12 +28,13 @@ ifdef::Kubernetes[]
On {KubernetesName} use `kubectl annotate`:
[source,shell,subs=+quotes]
kubectl annotate pod _cluster-name_-kafka-_index_ strimzi.io/delete-pod-and-pvc=true
endif::Kubernetes[]
+
endif::Kubernetes[]
On {OpenShiftName} use `oc annotate`:
+
[source,shell,subs=+quotes]
oc annotate pod _cluster-name_-kafka-_index_ strimzi.io/delete-pod-and-pvc=true
+

. Wait for the next reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated.

.Additional resources
7 changes: 4 additions & 3 deletions documentation/book/proc-manual-delete-pod-pvc-zookeeper.adoc
@@ -19,7 +19,7 @@ WARNING: Deleting a `PersistentVolumeClaim` can cause permanent data loss. The f
.Procedure

. Find the name of the `Pod` that you want to delete.

+
For example, if the cluster is named _cluster-name_, the pods are named _cluster-name_-zookeeper-_index_, where _index_ starts at zero and ends at the total number of replicas.

. Annotate the `Pod` resource in {ProductPlatformName}.
@@ -28,12 +28,13 @@ ifdef::Kubernetes[]
On {KubernetesName} use `kubectl annotate`:
[source,shell,subs=+quotes]
kubectl annotate pod _cluster-name_-zookeeper-_index_ strimzi.io/delete-pod-and-pvc=true
endif::Kubernetes[]
+
endif::Kubernetes[]
On {OpenShiftName} use `oc annotate`:
+
[source,shell,subs=+quotes]
oc annotate pod _cluster-name_-zookeeper-_index_ strimzi.io/delete-pod-and-pvc=true
+

. Wait for the next reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated.

.Additional resources
8 changes: 4 additions & 4 deletions documentation/book/proc-manual-rolling-update-kafka.adoc
@@ -17,23 +17,23 @@ This procedure describes how to manually trigger a rolling update of an existing
. Find the name of the `StatefulSet` that controls the Kafka pods you want to manually update.
+
For example, if your Kafka cluster is named _my-cluster_, the corresponding `StatefulSet` is named _my-cluster-kafka_.
+

. Annotate a `StatefulSet` resource in {ProductPlatformName}.
+
ifdef::Kubernetes[]
On {KubernetesName}, use `kubectl annotate`:
[source,shell,subs=+quotes]
kubectl annotate statefulset _cluster-name_-kafka strimzi.io/manual-rolling-update=true
endif::Kubernetes[]
+
endif::Kubernetes[]
On {OpenShiftName}, use `oc annotate`:
+
[source,shell,subs=+quotes]
oc annotate statefulset _cluster-name_-kafka strimzi.io/manual-rolling-update=true
+

. Wait for the next reconciliation to occur (every two minutes by default).
A rolling update of all pods within the annotated `StatefulSet` is triggered, as long as the annotation was detected by the reconciliation process.
Once the rolling update of all the pods is complete, the annotation is removed from the `StatefulSet`.
When the rolling update of all the pods is complete, the annotation is removed from the `StatefulSet`.

.Additional resources

8 changes: 4 additions & 4 deletions documentation/book/proc-manual-rolling-update-zookeeper.adoc
@@ -17,23 +17,23 @@ This procedure describes how to manually trigger a rolling update of an existing
. Find the name of the `StatefulSet` that controls the Zookeeper pods you want to manually update.
+
For example, if your Kafka cluster is named _my-cluster_, the corresponding `StatefulSet` is named _my-cluster-zookeeper_.
+

. Annotate a `StatefulSet` resource in {ProductPlatformName}.
+
ifdef::Kubernetes[]
On {KubernetesName}, use `kubectl annotate`:
[source,shell,subs=+quotes]
kubectl annotate statefulset _cluster-name_-zookeeper strimzi.io/manual-rolling-update=true
endif::Kubernetes[]
+
endif::Kubernetes[]
On {OpenShiftName}, use `oc annotate`:
+
[source,shell,subs=+quotes]
oc annotate statefulset _cluster-name_-zookeeper strimzi.io/manual-rolling-update=true
+

. Wait for the next reconciliation to occur (every two minutes by default).
A rolling update of all pods within the annotated `StatefulSet` is triggered, as long as the annotation was detected by the reconciliation process.
Once the rolling update of all the pods is complete, the annotation is removed from the `StatefulSet`.
When the rolling update of all the pods is complete, the annotation is removed from the `StatefulSet`.

.Additional resources

2 changes: 1 addition & 1 deletion documentation/book/ref-tolerations.adoc
@@ -5,7 +5,7 @@
[id='tolerations-{context}']
= Tolerations

Tolerations ca be configured using the `tolerations` property in following resources:
Tolerations can be configured using the `tolerations` property in the following resources:

* `Kafka.spec.kafka`
* `Kafka.spec.zookeeper`
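A toleration entry pairs with a taint set on dedicated nodes. A minimal sketch follows; the taint key `dedicated` and value `Kafka` are illustrative assumptions, not fixed names:

[source,yaml,subs=attributes+]
----
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "Kafka"
        effect: "NoSchedule"
----

The matching taint could then be applied with, for example, `kubectl taint nodes _node-name_ dedicated=Kafka:NoSchedule`.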