Upgrade Smallrye Reactive Messaging to 3.8.0, Update configuration documentation

Update health check doc for readiness and startup

Add record key propagation.
ozangunalp committed Jul 26, 2021
1 parent 2de1f0d commit f840d17
Showing 3 changed files with 317 additions and 269 deletions.
297 changes: 28 additions & 269 deletions docs/src/main/asciidoc/kafka.adoc
@@ -955,6 +955,16 @@ public class PriceProcessor {
}
----

=== Propagating Record Key

When processing messages, you can propagate the incoming record key to the outgoing record.

When enabled with the `mp.messaging.outgoing.$channel.propagate-record-key=true` configuration,
record key propagation produces the outgoing record with the same _key_ as the incoming record.

If the outgoing record already contains a _key_, it *won't be overridden* by the incoming record key.
If the incoming record has a _null_ key, the `mp.messaging.outgoing.$channel.key` property is used.
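
For example, the following sketch (the `prices-out` channel name is hypothetical) enables key propagation and defines a fallback key for incoming records with a _null_ key:

[source, properties]
----
# Propagate the incoming record key to the outgoing record
mp.messaging.outgoing.prices-out.propagate-record-key=true
# Key used when the incoming record key is null (hypothetical value)
mp.messaging.outgoing.prices-out.key=fallback-key
----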

[[kafka-bare-clients]]
== Accessing Kafka clients directly

@@ -1274,15 +1284,19 @@ This is described in a dedicated guide: link:kafka-schema-registry-avro[Using Ap
Quarkus provides several health checks for Kafka.
These checks are used in combination with the `quarkus-smallrye-health` extension.

=== Kafka Broker Readiness Check
When using the `quarkus-kafka-client` extension, you can enable the _readiness_ health check by setting the `quarkus.kafka.health.enabled` property to `true` in your `application.properties`.
This check reports the status of the interaction with a _default_ Kafka broker (configured using `kafka.bootstrap.servers`).
It requires an _admin connection_ with the Kafka broker, and it is disabled by default.
If enabled, when you access the `/q/health/ready` endpoint of your application, you will have information about the connection validation status.
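
For example, a minimal sketch of enabling this check in `application.properties` (the broker address is an assumption):

[source, properties]
----
# Enable the Kafka broker readiness check (disabled by default)
quarkus.kafka.health.enabled=true
# Default broker the check connects to (assumed address)
kafka.bootstrap.servers=localhost:9092
----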

=== Kafka Reactive Messaging Health Checks
When using Reactive Messaging and the Kafka connector, each configured channel (incoming or outgoing) provides _startup_, _liveness_ and _readiness_ checks.

- The _startup_ check verifies that the communication with the Kafka cluster is established.
- The _liveness_ check captures any unrecoverable failure happening during the communication with Kafka.
- The _readiness_ check verifies that the Kafka connector is ready to consume messages from and produce messages to the configured Kafka topics.

For each channel, you can disable the checks using:

[source, properties]
@@ -1303,12 +1317,16 @@ mp.messaging.outgoing.your-channel.health-readiness-enabled=false
NOTE: You can configure the `bootstrap.servers` for each channel using the `mp.messaging.incoming|outgoing.$channel.bootstrap.servers` property.
Default is `kafka.bootstrap.servers`.
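
For instance, a hedged sketch (the `prices` channel name and broker addresses are assumptions) pointing a single channel at a different broker:

[source, properties]
----
# Broker used only by the `prices` channel
mp.messaging.incoming.prices.bootstrap.servers=kafka-2:9092
# Broker used by every other channel
kafka.bootstrap.servers=kafka-1:9092
----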

Reactive Messaging _startup_ and _readiness_ checks offer two strategies.
The default strategy verifies that an active connection is established with the broker.
This approach is not intrusive as it's based on built-in Kafka client metrics.

Using the `health-topic-verification-enabled=true` attribute, the _startup_ probe uses an _admin client_ to check for the list of topics,
whereas the _readiness_ probe for an incoming channel checks that at least one partition is assigned for consumption,
and for an outgoing channel checks that the topic used by the producer exists in the broker.
Note that to achieve this, an _admin connection_ is required.
You can adjust the timeout for topic verification calls to the broker using the `health-topic-verification-timeout` configuration.
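
A hedged example (the `prices` channel name is hypothetical) enabling topic verification with a custom timeout:

[source, properties]
----
# Check that the topics used by the channel exist in the broker (requires an admin connection)
mp.messaging.incoming.prices.health-topic-verification-enabled=true
# Maximum duration (in ms) of topic verification calls to the broker
mp.messaging.incoming.prices.health-topic-verification-timeout=3000
----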

== Kafka Streams

@@ -1509,143 +1527,7 @@ Some properties have aliases which can be configured globally:
kafka.bootstrap.servers=...
----

.Incoming Attributes of the 'smallrye-kafka' connector
[cols="25, 30, 15, 20",options="header"]
|===
|Attribute (_alias_) | Description | Mandatory | Default

| *bootstrap.servers*

_(kafka.bootstrap.servers)_ | A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster.

Type: _string_ | false | `localhost:9092`

| *topic* | The consumed / populated Kafka topic. If neither this property nor the `topics` property is set, the channel name is used

Type: _string_ | false |

| *health-enabled* | Whether health reporting is enabled (default) or disabled

Type: _boolean_ | false | `true`

| *health-readiness-enabled* | Whether readiness health reporting is enabled (default) or disabled

Type: _boolean_ | false | `true`

| *health-readiness-topic-verification* | Whether the readiness check should verify that topics exist on the broker. Defaults to false. Enabling it requires an admin connection.

Type: _boolean_ | false | `false`

| *health-readiness-timeout* | During the readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready.

Type: _long_ | false | `2000`

| *tracing-enabled* | Whether tracing is enabled (default) or disabled

Type: _boolean_ | false | `true`

| *cloud-events* | Enables (default) or disables the Cloud Event support. If enabled on an _incoming_ channel, the connector analyzes the incoming records and tries to create Cloud Event metadata. If enabled on an _outgoing_ channel, the connector sends the outgoing messages as Cloud Events if the message includes Cloud Event Metadata.

Type: _boolean_ | false | `true`

| *topics* | A comma-separated list of topics to be consumed. Cannot be used with the `topic` or `pattern` properties

Type: _string_ | false |

| *pattern* | Indicates that the `topic` property is a regular expression. Must be used with the `topic` property. Cannot be used with the `topics` property

Type: _boolean_ | false | `false`

| *key.deserializer* | The deserializer classname used to deserialize the record's key

Type: _string_ | false | `org.apache.kafka.common.serialization.StringDeserializer`

| *value.deserializer* | The deserializer classname used to deserialize the record's value

Type: _string_ | true |

| *fetch.min.bytes* | The minimum amount of data the server should return for a fetch request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive.

Type: _int_ | false | `1`

| *group.id* | A unique string that identifies the consumer group the application belongs to.

If not set, defaults to the application name as set by the `quarkus.application.name` configuration property.

If that is not set either, a unique, generated id is used.
It is recommended to always define a `group.id`; the automatic generation is only a convenience for development.

Type: _string_ | false |

| *enable.auto.commit* | If enabled, the consumer's offset will be periodically committed in the background by the underlying Kafka client, ignoring the actual processing outcome of the records. It is recommended to NOT enable this setting and let Reactive Messaging handle the commit.

Type: _boolean_ | false | `false`

| *retry* | Whether or not the connection to the broker is re-attempted in case of failure

Type: _boolean_ | false | `true`

| *retry-attempts* | The maximum number of reconnections before failing. -1 means infinite retry

Type: _int_ | false | `-1`

| *retry-max-wait* | The max delay (in seconds) between 2 reconnects

Type: _int_ | false | `30`

| *broadcast* | Whether the Kafka records should be dispatched to multiple consumers

Type: _boolean_ | false | `false`

| *auto.offset.reset* | What to do when there is no initial offset in Kafka. Accepted values are earliest, latest and none

Type: _string_ | false | `latest`

| *failure-strategy* | Specify the failure strategy to apply when a message produced from a record is acknowledged negatively (nack). Values can be `fail` (default), `ignore`, or `dead-letter-queue`

Type: _string_ | false | `fail`

| *commit-strategy* | Specify the commit strategy to apply when a message produced from a record is acknowledged. Values can be `latest`, `ignore` or `throttled`. If `enable.auto.commit` is true then the default is `ignore` otherwise it is `throttled`

Type: _string_ | false |

| *throttled.unprocessed-record-max-age.ms* | While using the `throttled` commit-strategy, specify the max age in milliseconds that an unprocessed message can be before the connector is marked as unhealthy.

Type: _int_ | false | `60000`

| *dead-letter-queue.topic* | When the `failure-strategy` is set to `dead-letter-queue`, indicates on which topic the record is sent. Default is `dead-letter-topic-$channel`

Type: _string_ | false |

| *dead-letter-queue.key.serializer* | When the `failure-strategy` is set to `dead-letter-queue`, indicates the key serializer to use. If not set, the serializer associated with the key deserializer is used

Type: _string_ | false |

| *dead-letter-queue.value.serializer* | When the `failure-strategy` is set to `dead-letter-queue`, indicates the value serializer to use. If not set, the serializer associated with the value deserializer is used

Type: _string_ | false |

| *partitions* | The number of partitions to be consumed concurrently. The connector creates the specified amount of Kafka consumers. It should match the number of partitions of the targeted topic

Type: _int_ | false | `1`

| *consumer-rebalance-listener.name* | The name set in `javax.inject.Named` of a bean that implements `io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener`. If set, this rebalance listener is applied to the consumer.

Type: _string_ | false |

| *key-deserialization-failure-handler* | The name set in `javax.inject.Named` of a bean that implements `io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler`. If set, deserialization failures happening when deserializing keys are delegated to this handler, which may provide a fallback value.

Type: _string_ | false |

| *value-deserialization-failure-handler* | The name set in `javax.inject.Named` of a bean that implements `io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler`. If set, deserialization failures happening when deserializing values are delegated to this handler, which may provide a fallback value.

Type: _string_ | false |

| *graceful-shutdown* | Whether or not a graceful shutdown should be attempted when the application terminates.

Type: _boolean_ | false | `true`

|===
include::smallrye-kafka-incoming.adoc[]
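
As a hedged illustration (the `prices` channel name is hypothetical), an incoming channel could combine several of the attributes listed above:

[source, properties]
----
# Incoming channel reading from the `prices` topic
mp.messaging.incoming.prices.connector=smallrye-kafka
mp.messaging.incoming.prices.topic=prices
mp.messaging.incoming.prices.value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer
# Start from the earliest offset when there is no committed offset
mp.messaging.incoming.prices.auto.offset.reset=earliest
----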

=== Outgoing channel configuration (writing to Kafka)

@@ -1664,130 +1546,7 @@ Some properties have aliases which can be configured globally:
kafka.bootstrap.servers=...
----

.Outgoing Attributes of the 'smallrye-kafka' connector
[cols="25, 30, 15, 20",options="header"]
|===
|Attribute (_alias_) | Description | Mandatory | Default

| *acks* | The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. Accepted values are: 0, 1, all

Type: _string_ | false | `1`

| *bootstrap.servers*

_(kafka.bootstrap.servers)_ | A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster.

Type: _string_ | false | `localhost:9092`

| *buffer.memory* | The total bytes of memory the producer can use to buffer records waiting to be sent to the server.

Type: _long_ | false | `33554432`

| *close-timeout* | The number of milliseconds to wait for a graceful shutdown of the Kafka producer

Type: _int_ | false | `10000`

| *cloud-events* | Enables (default) or disables the Cloud Event support. If enabled on an _incoming_ channel, the connector analyzes the incoming records and tries to create Cloud Event metadata. If enabled on an _outgoing_ channel, the connector sends the outgoing messages as Cloud Events if the message includes Cloud Event Metadata.

Type: _boolean_ | false | `true`

| *cloud-events-data-content-type*

_(cloud-events-default-data-content-type)_ | Configure the default `datacontenttype` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `datacontenttype` attribute itself

Type: _string_ | false |

| *cloud-events-data-schema*

_(cloud-events-default-data-schema)_ | Configure the default `dataschema` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `dataschema` attribute itself

Type: _string_ | false |

| *cloud-events-insert-timestamp*

_(cloud-events-default-timestamp)_ | Whether or not the connector should automatically insert the `time` attribute into the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `time` attribute itself

Type: _boolean_ | false | `true`

| *cloud-events-mode* | The Cloud Event mode (`structured` or `binary` (default)). Indicates how the Cloud Events are written in the outgoing record

Type: _string_ | false | `binary`

| *cloud-events-source*

_(cloud-events-default-source)_ | Configure the default `source` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `source` attribute itself

Type: _string_ | false |

| *cloud-events-subject*

_(cloud-events-default-subject)_ | Configure the default `subject` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `subject` attribute itself

Type: _string_ | false |

| *cloud-events-type*

_(cloud-events-default-type)_ | Configure the default `type` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `type` attribute itself

Type: _string_ | false |

| *health-enabled* | Whether health reporting is enabled (default) or disabled

Type: _boolean_ | false | `true`

| *health-readiness-enabled* | Whether readiness health reporting is enabled (default) or disabled

Type: _boolean_ | false | `true`

| *health-readiness-timeout* | During the readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready.

Type: _long_ | false | `2000`

| *health-readiness-topic-verification* | Whether the readiness check should verify that topics exist on the broker. Defaults to false. Enabling it requires an admin connection.

Type: _boolean_ | false | `false`

| *key* | A key to use when writing the record

Type: _string_ | false |

| *key.serializer* | The serializer classname used to serialize the record's key

Type: _string_ | false | `org.apache.kafka.common.serialization.StringSerializer`

| *max-inflight-messages* | The maximum number of messages to be written to Kafka concurrently. It limits the number of messages waiting to be written and acknowledged by the broker. You can set this attribute to `0` to remove the limit

Type: _long_ | false | `1024`

| *merge* | Whether the connector should allow multiple upstreams

Type: _boolean_ | false | `false`

| *partition* | The target partition id. -1 to let the client determine the partition

Type: _int_ | false | `-1`

| *retries* | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.

Type: _long_ | false | `2147483647`

| *topic* | The consumed / populated Kafka topic. If neither this property nor the `topics` property is set, the channel name is used

Type: _string_ | false |

| *tracing-enabled* | Whether tracing is enabled (default) or disabled

Type: _boolean_ | false | `true`

| *value.serializer* | The serializer classname used to serialize the payload

Type: _string_ | true |

| *waitForWriteCompletion* | Whether the client waits for Kafka to acknowledge the written record before acknowledging the message

Type: _boolean_ | false | `true`

|===
include::smallrye-kafka-outgoing.adoc[]
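
Similarly, a hedged sketch of an outgoing channel (the `prices-out` channel name is hypothetical):

[source, properties]
----
# Outgoing channel writing to the `prices` topic
mp.messaging.outgoing.prices-out.connector=smallrye-kafka
mp.messaging.outgoing.prices-out.topic=prices
mp.messaging.outgoing.prices-out.value.serializer=org.apache.kafka.common.serialization.DoubleSerializer
----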

== Going further

