

A consumer subscribes to Kafka topics and passes the messages into an Akka Stream.

The underlying implementation uses the KafkaConsumer; see @javadocKafka API for a description of consumer groups, offsets, and other details.


When creating a consumer stream you need to pass in ConsumerSettings (@scaladocAPI) that define things like:

  • de-serializers for the keys and values
  • bootstrap servers of the Kafka cluster
  • group id for the consumer; note that offsets are always committed for a given consumer group
  • Kafka consumer tuning parameters

Scala : @@ snip snip { #settings }

Java : @@ snip snip { #settings }

In addition to programmatic construction of the ConsumerSettings (@scaladocAPI) it can also be created from configuration (application.conf).

When creating ConsumerSettings with the ActorSystem (@scaladocAPI) settings it uses the config section akka.kafka.consumer. The format of these settings files is described in the Typesafe Config Documentation.

@@ snip snip { #consumer-settings }

ConsumerSettings (@scaladocAPI) can also be created from any other Config section with the same layout as above.
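A minimal sketch of both construction styles might look as follows. The broker address, group id, and the `our-kafka-consumer` config section name are placeholders, not values from this document:

```scala
import akka.actor.ActorSystem
import akka.kafka.ConsumerSettings
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer

implicit val system: ActorSystem = ActorSystem("example")

// Programmatic construction; defaults are read from `akka.kafka.consumer`
val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092") // placeholder broker address
    .withGroupId("group1")                  // placeholder group id
    .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")

// Alternatively, base the settings on any other config section with the
// same layout as `akka.kafka.consumer` ("our-kafka-consumer" is hypothetical)
val fromConfig: ConsumerSettings[String, String] =
  ConsumerSettings(
    system.settings.config.getConfig("our-kafka-consumer"),
    new StringDeserializer,
    new StringDeserializer)
```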

See @javadocKafkaConsumer API and @javadocConsumerConfig API for more details regarding settings.

Offset Storage external to Kafka

The Kafka read offset can either be stored in Kafka (see below), or in a data store of your choice.

Consumer.plainSource (@scala[@scaladocConsumer API]@java[@scaladocConsumer API]) and Consumer.plainPartitionedManualOffsetSource can be used to emit ConsumerRecord (@javadocKafka API) elements as received from the underlying KafkaConsumer. They do not have support for committing offsets to Kafka. When using these Sources, either store an offset externally, or use auto-commit (note that auto-commit is disabled by default).

Scala : @@ snip snip { #settings-autocommit }

Java : @@ snip snip { #settings-autocommit }

The consumer application doesn't need to use Kafka's built-in offset storage; it can store offsets in a store of its own choosing. The primary use case for this is to store both the offset and the results of the consumption in the same system, so that results and offsets are stored atomically. This is not always possible, but when it is, it makes the consumption fully atomic and gives "exactly-once" semantics that are stronger than the "at-least-once" semantics you get with Kafka's offset commit functionality.

Scala : @@ snip snip { #plainSource }

Java : @@ snip snip { #plainSource }

For Consumer.plainSource the Subscriptions.assignmentWithOffset specifies the starting point (offset) for a given consumer group id, topic and partition. The group id is defined in the ConsumerSettings.
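A sketch of this pattern, assuming a `ConsumerSettings` instance and a materializer are in scope; `readOffsetFromDb` and `saveResultAndOffsetToDb` are hypothetical helpers standing in for your external offset store:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.TopicPartition

// hypothetical: fetch the last stored offset for this partition
val partition = new TopicPartition("topic1", 0)
val fromOffset: Long = readOffsetFromDb(partition)

Consumer
  .plainSource(consumerSettings, Subscriptions.assignmentWithOffset(partition, fromOffset))
  .mapAsync(parallelism = 1) { record =>
    // hypothetical: persist the processing result and the record's
    // offset atomically in the same external store
    saveResultAndOffsetToDb(record.value, record.offset)
  }
  .runWith(Sink.ignore)
```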

Alternatively, with Consumer.plainPartitionedManualOffsetSource (@scala[@scaladocConsumer API]@java[@scaladocConsumer API]), only the consumer group id and the topic are required on creation. The starting point is fetched by calling the getOffsetsOnAssign function passed in by the user. This function should return a Map of TopicPartition (@javadocAPI) to Long, with the Long representing the starting point. If a consumer is assigned a partition that is not included in the Map that results from getOffsetsOnAssign, the default starting position will be used, according to the consumer configuration value auto.offset.reset. Also note that Consumer.plainPartitionedManualOffsetSource emits tuples of assigned topic-partition and a corresponding source, as in Source per partition.
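The partitioned variant can be sketched like this, where `offsetsFromExternalStore` is a hypothetical lookup into your offset store (partitions missing from the returned Map start according to `auto.offset.reset`), and `consumerSettings` is assumed to be in scope:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.TopicPartition
import scala.concurrent.Future

// hypothetical: return the stored starting offset per assigned partition
def offsetsFromExternalStore(
    partitions: Set[TopicPartition]): Future[Map[TopicPartition, Long]] = ???

val maxPartitions = 16 // upper bound on concurrently consumed partitions

Consumer
  .plainPartitionedManualOffsetSource(
    consumerSettings,
    Subscriptions.topics("topic1"),
    offsetsFromExternalStore)
  // each element is a (TopicPartition, Source) tuple; merge the sources
  .flatMapMerge(maxPartitions, { case (topicPartition, source) => source })
  .runWith(Sink.ignore)
```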

Offset Storage in Kafka - committing

The Consumer.committableSource (@scala[@scaladocConsumer API]@java[@scaladocConsumer API]) makes it possible to commit offset positions to Kafka. Compared to auto-commit this gives exact control of when a message is considered consumed.

This is useful when "at-least-once" delivery is desired: each message is typically delivered exactly once, but in failure cases may be received more than once.

Scala : @@ snip snip { #atLeastOnce }

Java : @@ snip snip { #atLeastOnce }

The above example uses separate mapAsync stages for processing and committing. This guarantees that for parallelism higher than 1 we will keep correct ordering of messages sent for commit.

Committing the offset for each message as illustrated above is rather slow. It is recommended to batch the commits for better throughput, with the trade-off that more messages may be re-delivered in case of failures.

You can use the Akka Stream batch combinator to perform the batching. Note that it will only aggregate elements into batches if the downstream consumer is slower than the upstream producer.

Scala : @@ snip snip { #atLeastOnceBatch }

Java : @@ snip snip { #atLeastOnceBatch }
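Batched committing can be sketched as below, assuming `consumerSettings` is in scope; `business` is a hypothetical processing step returning a Future:

```scala
import akka.kafka.ConsumerMessage.CommittableOffsetBatch
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

Consumer
  .committableSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsync(parallelism = 3) { msg =>
    business(msg.record.value).map(_ => msg.committableOffset) // hypothetical
  }
  // aggregate offsets into a batch while the committing stage is busy
  .batch(max = 20, CommittableOffsetBatch.empty.updated(_))(_.updated(_))
  .mapAsync(parallelism = 3)(_.commitScaladsl())
  .runWith(Sink.ignore)
```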

If you consume from a topic with low activity, where possibly no messages arrive for more than 24 hours, consider enabling periodic commit refresh (the akka.kafka.consumer.commit-refresh-interval configuration parameter); otherwise offsets might expire in the Kafka storage.

For less active topics timing-based aggregation with groupedWithin might be a better choice than the batch operator.

Scala : @@ snip snip { #groupedWithin }

Java : @@ snip snip { #groupedWithin }
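Timing-based aggregation can be sketched like this, again assuming `consumerSettings` is in scope; the group size and interval are arbitrary illustration values:

```scala
import akka.kafka.ConsumerMessage.CommittableOffsetBatch
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import scala.concurrent.duration._

Consumer
  .committableSource(consumerSettings, Subscriptions.topics("topic1"))
  .groupedWithin(10, 15.seconds) // whichever limit is reached first
  .map(group =>
    group.foldLeft(CommittableOffsetBatch.empty)(_ updated _.committableOffset))
  .mapAsync(parallelism = 1)(_.commitScaladsl())
  .runWith(Sink.ignore)
```

Unlike `batch`, `groupedWithin` emits a batch at the latest after the given interval, so commits also happen on mostly idle topics.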

The Consumer.commitWithMetadataSource allows you to add metadata to the committed offset based on the last consumed record.

Note that the first offset provided to the consumer during a partition assignment will not contain metadata. This offset can get committed due to a periodic commit refresh (the akka.kafka.consumer.commit-refresh-interval configuration parameter) and that commit will not contain metadata.

Scala : @@ snip snip { #commitWithMetadata }

Java : @@ snip snip { #commitWithMetadata }

If you commit the offset before processing the message, you get "at-most-once" delivery semantics; this is provided by Consumer.atMostOnceSource. However, atMostOnceSource commits the offset for each message, which is rather slow; batching of commits is recommended.

Scala : @@ snip snip { #atMostOnce }

Java : @@ snip snip { #atMostOnce }
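A minimal sketch, assuming `consumerSettings` is in scope; `business` is a hypothetical processing step. Since each offset is committed before the record is emitted downstream, a failure in processing loses that record rather than redelivering it:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

Consumer
  .atMostOnceSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsync(parallelism = 1)(record => business(record.value)) // hypothetical
  .runWith(Sink.ignore)
```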

Maintaining at-least-once delivery semantics requires care; many risks and solutions are covered in @ref:At-Least-Once Delivery.

Connecting Producer and Consumer

For cases when you need to read messages from one topic, transform or enrich them, and then write to another topic you can use Consumer.committableSource and connect it to a Producer.committableSink. The committableSink will commit the offset back to the consumer when it has successfully published the message.

The committableSink accepts implementations of ProducerMessage.Envelope (@scaladocAPI) that contain the offset to commit for the consumption of the originating message (of type ConsumerMessage.Committable (@scaladocAPI)). See @refProducing messages for the different implementations of Envelope that are supported.

Note that there is a risk that something fails after publishing but before committing, so committableSink has "at-least-once" delivery semantics.

Scala : @@ snip snip { #consumerToProducerSink }

Java : @@ snip snip { #consumerToProducerSink }

As Producer.committableSink commits messages one by one, which is rather slow, prefer a flow combined with batching of commits.

Scala : @@ snip snip { #consumerToProducerFlowBatch }

Java : @@ snip snip { #consumerToProducerFlowBatch }
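The flow variant can be sketched like this, assuming `consumerSettings` and a matching `producerSettings` are in scope; the target topic name is a placeholder. The committable offset rides through the producer stage as the pass-through value:

```scala
import akka.kafka.ConsumerMessage.CommittableOffsetBatch
import akka.kafka.{ProducerMessage, Subscriptions}
import akka.kafka.scaladsl.{Consumer, Producer}
import akka.stream.scaladsl.Sink
import org.apache.kafka.clients.producer.ProducerRecord

Consumer
  .committableSource(consumerSettings, Subscriptions.topics("topic1"))
  .map { msg =>
    // carry the committable offset through the producer as pass-through
    ProducerMessage.Message(
      new ProducerRecord[String, String]("topic2", msg.record.value),
      msg.committableOffset)
  }
  .via(Producer.flow(producerSettings))
  .map(_.message.passThrough)
  // commit in batches instead of one by one
  .batch(max = 20, CommittableOffsetBatch.empty.updated(_))(_.updated(_))
  .mapAsync(parallelism = 3)(_.commitScaladsl())
  .runWith(Sink.ignore)
```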


There is a risk that something fails after publishing but before committing, so this approach also has "at-least-once" delivery semantics.

For stronger delivery guarantees, please read about @reftransactions.


Source per partition

Consumer.plainPartitionedSource (@scala[@scaladocConsumer API]@java[@scaladocConsumer API]), Consumer.committablePartitionedSource, and Consumer.commitWithMetadataPartitionedSource support tracking the automatic partition assignment from Kafka. When a topic-partition is assigned to a consumer, this source will emit a tuple with the assigned topic-partition and a corresponding source. When a topic-partition is revoked, the corresponding source completes.

Backpressure per partition with batch commit:

Scala : @@ snip snip { #committablePartitionedSource }

Java : @@ snip snip { #committablePartitionedSource }

Separate streams per partition:

Scala : @@ snip snip { #committablePartitionedSource-stream-per-partition }

Java : @@ snip snip { #committablePartitionedSource-stream-per-partition }
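Per-partition streams can be sketched as follows, assuming `consumerSettings` is in scope; `business` is a hypothetical processing step and `maxPartitions` bounds the number of concurrently running partition streams:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

val maxPartitions = 16

Consumer
  .committablePartitionedSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsyncUnordered(maxPartitions) { case (topicPartition, source) =>
    // run an independent stream per assigned partition;
    // the inner source completes when the partition is revoked
    source
      .mapAsync(parallelism = 1) { msg =>
        business(msg.record.value).map(_ => msg.committableOffset) // hypothetical
      }
      .mapAsync(parallelism = 1)(_.commitScaladsl())
      .runWith(Sink.ignore)
  }
  .runWith(Sink.ignore)
```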

Join flows based on automatically assigned partitions:

Scala : @@ snip snip { #committablePartitionedSource3 }

Sharing the KafkaConsumer instance

If you have many streams it can be more efficient to share the underlying KafkaConsumer (@javadocKafka API) instance. It is shared by creating a KafkaConsumerActor (@scaladocAPI). You need to create the actor and stop it by sending KafkaConsumerActor.Stop when it is not needed any longer. You pass the ActorRef as a parameter to the Consumer (@scala[@scaladocConsumer API]@java[@scaladocConsumer API]) factory methods.

Scala : @@ snip snip { #consumerActor }

Java : @@ snip snip { #consumerActor }
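Sharing a consumer actor between two manually assigned sources can be sketched like this, assuming `system` and `consumerSettings` are in scope. The Stop message should only be sent once all streams using the actor have completed:

```scala
import akka.actor.ActorRef
import akka.kafka.{KafkaConsumerActor, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.TopicPartition

// actor hosting the shared KafkaConsumer instance
val consumer: ActorRef = system.actorOf(KafkaConsumerActor.props(consumerSettings))

// several sources can read through the same consumer actor
val partition0 = Consumer
  .plainExternalSource[String, String](
    consumer, Subscriptions.assignment(new TopicPartition("topic1", 0)))
  .runWith(Sink.ignore)

val partition1 = Consumer
  .plainExternalSource[String, String](
    consumer, Subscriptions.assignment(new TopicPartition("topic1", 1)))
  .runWith(Sink.ignore)

// stop the actor when it is no longer needed
consumer ! KafkaConsumerActor.Stop
```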

Accessing KafkaConsumer metrics

You can access the underlying consumer metrics via the materialized Control instance:

Scala : @@ snip snip { #consumerMetrics }

Java : @@ snip snip { #consumerMetrics }

Accessing KafkaConsumer metadata

Accessing Kafka consumer metadata is possible as described in @refConsumer Metadata.

Listening for rebalance events

You may set up a rebalance event listener actor that will be notified when your consumer is assigned to or revoked from consuming specific topic partitions. Two kinds of messages will be sent to this listener actor:

  • akka.kafka.TopicPartitionsAssigned and
  • akka.kafka.TopicPartitionsRevoked

Scala : @@ snip snip { #withRebalanceListenerActor }

Java : @@ snip snip { #withRebalanceListenerActor }
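A sketch of such a listener, assuming `system` and `consumerSettings` are in scope:

```scala
import akka.actor.{Actor, ActorLogging, Props}
import akka.kafka.{Subscriptions, TopicPartitionsAssigned, TopicPartitionsRevoked}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

class RebalanceListener extends Actor with ActorLogging {
  def receive: Receive = {
    case TopicPartitionsAssigned(subscription, assigned) =>
      log.info("Assigned: {}", assigned)
    case TopicPartitionsRevoked(subscription, revoked) =>
      log.info("Revoked: {}", revoked)
  }
}

val listener = system.actorOf(Props[RebalanceListener])

// attach the listener to the subscription
Consumer
  .committableSource(
    consumerSettings,
    Subscriptions.topics("topic1").withRebalanceListener(listener))
  .runWith(Sink.ignore)
```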

Controlled shutdown

The Source created with Consumer.plainSource and similar methods materializes to a Consumer.Control (@scala[@scaladocAPI]@java[@scaladocAPI]) instance. This can be used to stop the stream in a controlled manner.

When using external offset storage, a call to Consumer.Control.shutdown() suffices to complete the Source, which starts the completion of the stream.

Scala : @@ snip snip { #shutdownPlainSource }

Java : @@ snip snip { #shutdownPlainSource }

When you are using offset storage in Kafka, the shutdown process involves several steps:

  1. Consumer.Control.stop() to stop producing messages from the Source. This does not stop the underlying Kafka Consumer.
  2. Wait for the stream to complete, so that a commit request has been made for all offsets of all processed messages (via commitScaladsl() or commitJavadsl()).
  3. Consumer.Control.shutdown() to wait for all outstanding commit requests to finish and stop the Kafka Consumer.

To manage this shutdown process, use the Consumer.DrainingControl (@scala[@scaladocAPI]@java[@scaladocAPI]) by combining the Consumer.Control with the sink's materialized completion future in mapMaterializedValue. That control offers the method drainAndShutdown which implements the process described above. It is recommended to use the same shutdown mechanism also when not using batching, to avoid potential race conditions, depending on the exact layout of the stream.

Scala : @@ snip snip { #shutdownCommitableSource }

Java : @@ snip snip { #shutdownCommitableSource }
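The draining shutdown can be sketched like this, assuming `consumerSettings`, a materializer, and an implicit ExecutionContext are in scope; `business` is a hypothetical processing step:

```scala
import akka.Done
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.stream.scaladsl.{Keep, Sink}

val control: DrainingControl[Done] = Consumer
  .committableSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsync(parallelism = 1) { msg =>
    business(msg.record.value).map(_ => msg.committableOffset) // hypothetical
  }
  .mapAsync(parallelism = 1)(_.commitScaladsl())
  .toMat(Sink.ignore)(Keep.both)
  // combine Control with the sink's completion future
  .mapMaterializedValue(DrainingControl.apply)
  .run()

// on shutdown: stops emitting, waits for outstanding commits,
// then stops the underlying Kafka consumer
control.drainAndShutdown()
```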