This repository has been archived by the owner on May 25, 2022. It is now read-only.

[DOCS] Kafka Connect and Mirror Maker upgrades, plus minor style edits #48

Merged
merged 9 commits on May 21, 2019
17 changes: 11 additions & 6 deletions books/assembly-upgrade-1-1-0.adoc
@@ -17,7 +17,7 @@ This chapter describes how to upgrade {ProductName} on Red Hat Enterprise Linux
|1.1.0 |2.1.1
|=======================

Although {ProductName} 1.1.0 and Kafka 2.1.1 are both minor releases, the Kafka protocol has changed since Kafka 2.0.0 was released. In particular, the message format version and inter-broker protocol version are both now at version 2.1. As a result, the upgrade process involves making both configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers). The following table shows the differences between the two Kafka versions:
Although {ProductName} 1.1.0 and Kafka 2.1.1 are both minor releases, the Kafka protocol has changed since Kafka 2.0.0 was released. In particular, the message format version and inter-broker protocol version are both now at version 2.1. As a result, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications. The following table shows the differences between the two Kafka versions:

[options="header"]
|=======================
@@ -26,9 +26,9 @@ Although {ProductName} 1.1.0 and Kafka 2.1.1 are both minor releases, the Kafka
|2.1.1 |2.1 |2.1 |3.4.13
|=======================

Although Kafka 2.0.0 and 2.1.1 use the same version of Zookeeper, Red Hat recommends that you update your Zookeeper cluster to use the newest Zookeeper binaries before proceeding with the main {ProductName} 1.1.0 upgrade.
Although Kafka 2.0.0 and 2.1.1 use the same version of Zookeeper, it is recommended that you update your Zookeeper cluster to use the newest Zookeeper binaries before proceeding with the main {ProductName} 1.1.0 upgrade.

Message format version::
== Message format version

When a producer sends a message to a Kafka broker, the message is encoded using a specific format. This is referred to as the message format version. You can configure a Kafka broker to convert messages from newer format versions to a given older format version, before the broker appends the message to the log.

@@ -40,7 +40,7 @@ In Kafka, there are two different methods for setting the message format version

The default value of `message.format.version` for a topic is defined by the `log.message.format.version` that is set on the Kafka broker. You can manually set the `message.format.version` of a topic by modifying its topic configuration. This chapter refers to the _message format version_ throughout when discussing both Kafka brokers and topics.
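As an illustrative fragment (the values here are assumptions for this sketch, not prescribed by this chapter), the broker-wide default is a single property in `server.properties`:

[source,properties]
----
# Broker-wide default; topics that do not set message.format.version
# explicitly in their topic configuration inherit this value
log.message.format.version=2.0
----

A per-topic override is applied through the topic's own configuration, for example by adding `message.format.version=2.0` to the topic with the `kafka-configs.sh` tool.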

Procedure outline::
== Procedure outline

In summary, upgrading to {ProductName} 1.1.0 is a three-stage process. To upgrade brokers and clients without downtime, you *must* complete the upgrade procedures in the order shown here.

@@ -49,16 +49,19 @@ In summary, upgrading to {ProductName} 1.1.0 is a three-stage process. To upgrad
.. xref:proc-updating-zookeeper-binaries-{context}[Updating the Zookeeper binaries]

. Second, upgrade all Kafka brokers to {ProductName} 1.1.0 and configure them to use the previous protocol versions.

.. xref:proc-upgrading-kafka-brokers-to-amq-streams-1-1-0-{context}[Upgrading Kafka brokers to {ProductName} 1.1.0]

. Third, upgrade all Kafka brokers and client applications to Kafka 2.1.1. To avoid cluster downtime, this stage involves:

.. xref:proc-updating-kafka-brokers-to-new-inter-broker-protocol-version-{context}[Configuring Kafka brokers to use the new inter-broker protocol version]

.. xref:con-strategies-for-upgrading-clients-{context}[Introducing strategies for upgrading clients]
.. xref:con-strategies-for-upgrading-clients-{context}[Strategies for upgrading clients]

.. xref:proc-upgrading-clients-to-new-kafka-version-{context}[Upgrading client applications to the new Kafka version] (based on an adopted strategy)

.. xref:proc-upgrading-kafka-connect-to-amq-streams-1-1-0-{context}[Upgrading Kafka Connect to {ProductName} 1.1.0]

.. xref:proc-updating-kafka-brokers-to-new-message-format-version-{context}[Configuring Kafka brokers to the new message format version]
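
The broker-side pinning in the second stage amounts to holding two properties in `server.properties` at the previous release's values. A minimal, self-contained sketch (the path and version values are illustrative only):

[source,shell]
----
# Write a sample configuration fragment with both properties pinned to
# the previous version (2.0), as required before the binary upgrade
cat > /tmp/server-snippet.properties <<'EOF'
inter.broker.protocol.version=2.0
log.message.format.version=2.0
EOF

# Sanity check: count the lines pinned to 2.0 (a correctly pinned
# fragment has both)
grep -c '\.version=2\.0$' /tmp/server-snippet.properties
----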

== Upgrade prerequisites
@@ -79,4 +82,6 @@ include::con-strategies-for-upgrading-clients.adoc[leveloffset=+1]

include::proc-upgrading-clients-to-new-kafka-version.adoc[leveloffset=+1]

include::proc-updating-kafka-brokers-to-new-message-format-version.adoc[leveloffset=+1]
include::proc-upgrading-kafka-connect-to-amq-streams-1-1-0.adoc[leveloffset=+1]

include::proc-updating-kafka-brokers-to-new-message-format-version.adoc[leveloffset=+1]
2 changes: 1 addition & 1 deletion books/con-strategies-for-upgrading-clients.adoc
@@ -6,7 +6,7 @@

= Strategies for upgrading clients

The best approach to upgrading your client applications (including Kafka Connect connectors) depends on your particular circumstances.
The best approach to upgrading your client applications depends on your particular circumstances. Client applications might include producers, consumers, Kafka Connect, and Kafka MirrorMaker.

Consuming applications need to receive messages in a message format that they understand. You can ensure that this is the case in one of two ways:

@@ -29,15 +29,15 @@ For each Kafka broker in your {ProductName} cluster and one at a time:
inter.broker.protocol.version=2.1
----

. On the command line, stop the Kafka broker that you most recently modified and restarted as part of this procedure. If you are modifying the first Kafka broker in this procedure, go to step four.
. On the command line, stop the Kafka broker that you modified:
+
[source,shell,subs=+quotes]
----
/opt/kafka/bin/kafka-server-stop.sh
jcmd | grep kafka
----

. Restart the Kafka broker whose configuration you modified in step two:
. Restart the Kafka broker that you modified:
+
[source,shell,subs=+quotes]
----
4 changes: 2 additions & 2 deletions books/proc-updating-zookeeper-binaries.adoc
@@ -20,14 +20,14 @@ mkdir /tmp/kafka-1-1-0
unzip amq-streams-1.1.0-bin.zip -d /tmp/kafka-1-1-0
----

. Delete the `libs` `bin` and `docs` directories from your existing installation:
. Delete the `libs`, `bin`, and `docs` directories from your existing installation:
+
[source,shell,subs=+quotes]
----
rm -rf /opt/kafka/libs /opt/kafka/bin /opt/kafka/docs
----

. Copy the `libs` `bin` and `docs` directories from the temporary directory:
. Copy the `libs`, `bin`, and `docs` directories from the temporary directory:
+
[source,shell,subs=+quotes]
----
2 changes: 1 addition & 1 deletion books/proc-upgrading-clients-to-new-kafka-version.adoc
@@ -6,7 +6,7 @@

= Upgrading client applications to the new Kafka version

This procedure describes one possible approach to upgrading your client applications to Kafka 2.1.1, the version used in {ProductName} 1.1.0. It is based on the "Per-topic consumers first, with down conversion" approach outlined in xref:con-strategies-for-upgrading-clients-{context}[Strategies for upgrading clients].
This procedure describes one possible approach to upgrading your client applications to Kafka 2.1.1, the version used in {ProductName} 1.1.0. It is based on the "Per-topic consumers first, with down conversion" approach outlined in xref:con-strategies-for-upgrading-clients-{context}[Strategies for upgrading clients]. Client applications might include producers, consumers, Kafka Connect, and MirrorMaker.

.Prerequisites

4 changes: 2 additions & 2 deletions books/proc-upgrading-kafka-brokers-to-amq-streams-1-1-0.adoc
@@ -29,14 +29,14 @@ mkdir /tmp/kafka-1-1-0
unzip amq-streams-1.1.0-bin.zip -d /tmp/kafka-1-1-0
----

. Delete the `libs` `bin` and `docs` directories from your existing installation:
. Delete the `libs`, `bin`, and `docs` directories from your existing installation:
+
[source,shell,subs=+quotes]
----
rm -rf /opt/kafka/libs /opt/kafka/bin /opt/kafka/docs
----

. Copy the `libs` `bin` and `docs` directories from the temporary directory:
. Copy the `libs`, `bin`, and `docs` directories from the temporary directory:
+
[source,shell,subs=+quotes]
----
94 changes: 94 additions & 0 deletions books/proc-upgrading-kafka-connect-to-amq-streams-1-1-0.adoc
@@ -0,0 +1,94 @@
// Module included in the following assemblies:
//
// assembly-upgrade-1-1-0.adoc

[id='proc-upgrading-kafka-connect-to-amq-streams-1-1-0-{context}']

= Upgrading Kafka Connect to {ProductName} 1.1.0

This procedure describes how to upgrade your Kafka Connect cluster to use the {ProductName} 1.1.0 binaries. Kafka Connect is a client application and should be included in your chosen strategy for upgrading clients. For more information, see xref:con-strategies-for-upgrading-clients-{context}[Strategies for upgrading clients].

.Prerequisites
* You are logged in to Red Hat Enterprise Linux as the `kafka` user.

.Procedure

For each Kafka Connect node in your {ProductName} cluster, one node at a time:

. Download the *Red Hat AMQ Streams 1.1.0* archive from the {ReleaseDownload}.
+
NOTE: If prompted, log in to your Red Hat account.

. On the command line, create a temporary directory and extract the contents of the `amq-streams-1.1.0-bin.zip` file.
+
[source,shell,subs=+quotes]
----
mkdir /tmp/kafka-1-1-0
unzip amq-streams-1.1.0-bin.zip -d /tmp/kafka-1-1-0
----

. Delete the `libs`, `bin`, and `docs` directories from your existing installation:
+
[source,shell,subs=+quotes]
----
rm -rf /opt/kafka/libs /opt/kafka/bin /opt/kafka/docs
----

. Copy the `libs`, `bin`, and `docs` directories from the temporary directory:
+
[source,shell,subs=+quotes]
----
cp -r /tmp/kafka-1-1-0/kafka_y.y-x.x.x/libs /opt/kafka/
cp -r /tmp/kafka-1-1-0/kafka_y.y-x.x.x/bin /opt/kafka/
cp -r /tmp/kafka-1-1-0/kafka_y.y-x.x.x/docs /opt/kafka/
----

. Delete the temporary directory.
+
[source,shell,subs=+quotes]
----
rm -r /tmp/kafka-1-1-0
----

. Start Kafka Connect in either standalone or distributed mode.

** To start in standalone mode, run the `connect-standalone.sh` script. Specify the Kafka Connect standalone configuration file and the configuration files of your Kafka Connect connectors.
+
[source,shell,subs=+quotes]
----
su - kafka
/opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties connector1.properties
[connector2.properties ...]
----

** To start in distributed mode, start the Kafka Connect workers with the `/opt/kafka/config/connect-distributed.properties` configuration file on all Kafka Connect nodes:
+
[source,shell,subs=+quotes]
----
su - kafka
/opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties
----

. Verify that Kafka Connect is running:

** In standalone mode:
+
[source,shell,subs=+quotes]
----
jcmd | grep ConnectStandalone
----

** In distributed mode:
+
[source,shell,subs=+quotes]
----
jcmd | grep ConnectDistributed
----

. Verify that Kafka Connect is producing and consuming data as expected.
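
In distributed mode, one way to check a connector beyond `jcmd` is the Kafka Connect REST API (default port 8083). The connector name and response payload below are hypothetical and inlined as a literal, so the parsing step runs without a live cluster:

[source,shell]
----
# Against a live cluster, the status call would be:
#   curl -s http://localhost:8083/connectors/connector1/status
# Hypothetical response payload, inlined for illustration:
STATUS='{"name":"connector1","connector":{"state":"RUNNING","worker_id":"worker1:8083"}}'

# Extract the connector state; a healthy connector reports RUNNING
echo "$STATUS" | grep -o '"state":"[A-Z]*"' | head -n 1
----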

.Additional resources

* xref:proc-running-kafka-connect-standalone-{context}[Running Kafka Connect in standalone mode]
* xref:proc-running-kafka-connect-distributed-{context}[Running Kafka Connect in distributed mode]
* xref:con-strategies-for-upgrading-clients-{context}[Strategies for upgrading clients]