`CommittableOffsetBatch#commit` clearly explains some requirements for safe usage in its scaladoc (fs2-kafka/modules/core/src/main/scala/fs2/kafka/CommittableOffsetBatch.scala, lines 82 to 95 in fcbb8b2):
```scala
/**
 * Commits the [[offsets]] to Kafka in a single commit.
 * For the batch to be valid and for commit to succeed,
 * the following conditions must hold:<br>
 * - [[consumerGroupIdsMissing]] must be false, and<br>
 * - [[consumerGroupIds]] must have exactly one ID.<br>
 * <br>
 * If one of the conditions above do not hold, there will
 * be a [[ConsumerGroupException]] exception raised and a
 * commit will not be attempted. If [[offsets]] is empty
 * then these conditions do not need to hold, as there
 * is nothing to commit.
 */
def commit: F[Unit]
```
It would be good to document these also in `fs2.kafka.commitBatchWithin` (fs2-kafka/modules/core/src/main/scala/fs2/kafka/package.scala, lines 51 to 61 in 5a44da9), which calls `CommittableOffsetBatch#commit` but does not document the failure conditions.
I experienced an app failing with the error `fs2.kafka.ConsumerGroupException: multiple or missing consumer group ids [topic_foo_id, topic_bar_id]`. After some investigation, I narrowed the issue down: events consumed from two topics were being merged into a single stream, processed, and their offsets committed with a single call to `fs2.kafka.commitBatchWithin`. The issue is slightly vicious because the code ran fine for a while, until events from different topics happened to land in the same batch and blew up at runtime. The solution we settled on was to process the consuming streams separately using `.parJoinUnbounded` instead.
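For illustration, here is a minimal sketch of the two shapes, assuming a recent fs2-kafka-style API (`KafkaConsumer.stream`, `commitBatchWithin`) and hypothetical settings values `settingsFoo`/`settingsBar` configured with distinct group ids (it needs a running broker, so it is a sketch rather than a runnable test):

```scala
import scala.concurrent.duration._
import cats.effect.IO
import fs2.Stream
import fs2.kafka._

// One consuming stream per topic; each commits its own offsets in batches.
def consume(settings: ConsumerSettings[IO, String, String], topic: String): Stream[IO, Unit] =
  KafkaConsumer
    .stream(settings)
    .subscribeTo(topic)
    .records
    .map(_.offset)
    .through(commitBatchWithin(100, 5.seconds))

def program(
  settingsFoo: ConsumerSettings[IO, String, String], // group id "topic_foo_id"
  settingsBar: ConsumerSettings[IO, String, String]  // group id "topic_bar_id"
): Stream[IO, Unit] = {
  // Risky: merging the offsets of consumers with different group ids into
  // one stream lets a single batch mix group ids, and commit then fails
  // with ConsumerGroupException at runtime:
  // consume(settingsFoo, "topic_foo").merge(consume(settingsBar, "topic_bar"))

  // Safer: keep each consuming stream, and hence each commit batch,
  // separate, so every batch carries exactly one consumer group id.
  Stream(
    consume(settingsFoo, "topic_foo"),
    consume(settingsBar, "topic_bar")
  ).parJoinUnbounded
}
```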
(There is also a "Rolls-Royce" solution of making `commitBatchWithin` able to separate and batch commits by topic, but I think warnings in the docs are a good start 😄)
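The "separate and batch by topic" idea can be sketched with plain collections; `Offset` and `splitByTopic` here are illustrative names I made up, not fs2-kafka API:

```scala
// Hypothetical sketch: partition a mixed batch of offsets by topic, so each
// topic's offsets could then be committed as its own single-group batch.
final case class Offset(topic: String, partition: Int, offset: Long)

def splitByTopic(offsets: List[Offset]): Map[String, List[Offset]] =
  offsets.groupBy(_.topic) // preserves per-topic encounter order

val mixed = List(
  Offset("topic_foo", 0, 42L),
  Offset("topic_bar", 0, 7L),
  Offset("topic_foo", 1, 13L)
)

// splitByTopic(mixed) yields two batches, one per topic, each of which
// would be safe to hand to a single-group commit.
val batches: Map[String, List[Offset]] = splitByTopic(mixed)
```

A real implementation inside `commitBatchWithin` would have to group by consumer group id rather than topic name, but the splitting step is the same shape.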
If that sounds OK, I could raise a small PR appending the `commit` scaladoc (starting from "For the batch to be valid...") to the end of the `commitBatchWithin` scaladoc.