KIP-320 implementation, WIP #4122

Closed · wants to merge 13 commits

Conversation

edenhill (Contributor) commented:

Remaining tasks:

  • Write a changelog entry outlining the new behaviour with epochs
    and what it fixes.
    Also mention the new ERR__LOG_TRUNCATION error code that will
    be raised when log truncation is detected, due to unclean leader
    election on the broker.

  • Write test cases using the mock broker. This also requires
    the new OffsetForLeaderEpochRequest to be added to the mock handlers.

  • Run the truncation test suite from the AK system tests to verify that
    the verifiable consumer (backed by librdkafka, preferably through the
    Python client) handles log truncation properly.
    Talk with Jing about running it on Jenkins, and/or try to get it working
    with the local docker/ducker-ak based system testing.
    In the latter case you will need to find a way to get the verifiable
    clients onto the docker images under test.

  • For high-level language bindings the following changes are needed:

    • Add the ERR__LOG_TRUNCATION error type.
    • Make sure that any relevant uses of rd_kafka_topic_partition_list_t
      and rd_kafka_topic_partition_t now respect and retain the
      leader epoch: use get_leader_epoch() to retrieve it and
      set_leader_epoch() to set it (see the sketches after this list).
      Relevant APIs are:
      • seek
      • seek_partitions
      • commit
      • committed
      • position
      • *assign()
      • message_leader_epoch()
    • For .NET: add new method alternatives that take the new
      TopicPartitionOffsetEpoch type.
    • The message object now has a leader_epoch() method on the consumer,
      used when committing or storing offsets outside of Kafka.
    • The two convenience functions rd_kafka_buf_[write|read]_topic_partition_list()
      were changed to take an array of field specifiers, which allows them
      to be reused for various read/write partition array jobs even when
      the field ordering differs.
      However, the calling signature is variadic, which means existing code
      that calls these functions will compile but not work. This has been
      fixed in this PR, of course, but PRs subsequent to this one will need
      to be fixed as well.
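
Below is a minimal C sketch of how a consumer is expected to use the new
leader-epoch plumbing: read the leader epoch of each consumed message,
commit it together with the offset, and handle the new log-truncation
error. It assumes librdkafka with this PR applied, an already-subscribed
consumer handle `rk`, and the conventional public names
(rd_kafka_message_leader_epoch(), rd_kafka_topic_partition_set_leader_epoch(),
RD_KAFKA_RESP_ERR__LOG_TRUNCATION); treat it as an illustration, not as the
final API.

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch only: poll one message, remember its offset and leader epoch,
 * and commit them together so log truncation can be detected (KIP-320).
 * `rk` is assumed to be an already-subscribed consumer handle. */
static void consume_and_commit_once(rd_kafka_t *rk) {
        rd_kafka_message_t *rkm = rd_kafka_consumer_poll(rk, 1000);
        if (!rkm)
                return; /* poll timeout */

        if (rkm->err == RD_KAFKA_RESP_ERR__LOG_TRUNCATION) {
                /* New error: the consumer's position is no longer valid,
                 * e.g. after an unclean leader election truncated the log. */
                fprintf(stderr, "Log truncation detected: %s\n",
                        rd_kafka_message_errstr(rkm));
                rd_kafka_message_destroy(rkm);
                return;
        } else if (rkm->err) {
                fprintf(stderr, "Consume error: %s\n",
                        rd_kafka_message_errstr(rkm));
                rd_kafka_message_destroy(rkm);
                return;
        }

        /* Leader epoch the message was fetched at (-1 if unknown). */
        int32_t leader_epoch = rd_kafka_message_leader_epoch(rkm);

        /* Commit offset+1 together with the leader epoch so the committed
         * position can later be validated against the current leader. */
        rd_kafka_topic_partition_list_t *offsets =
                rd_kafka_topic_partition_list_new(1);
        rd_kafka_topic_partition_t *rktpar =
                rd_kafka_topic_partition_list_add(
                        offsets, rd_kafka_topic_name(rkm->rkt),
                        rkm->partition);
        rktpar->offset = rkm->offset + 1;
        rd_kafka_topic_partition_set_leader_epoch(rktpar, leader_epoch);

        rd_kafka_commit(rk, offsets, 0 /*sync*/);

        rd_kafka_topic_partition_list_destroy(offsets);
        rd_kafka_message_destroy(rkm);
}
```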
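
Along the same lines, a hedged sketch of seeking back to an externally
stored offset together with the leader epoch it was read at, using
rd_kafka_seek_partitions(): the topic name, partition, offset and epoch
values below are purely illustrative.

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch only: restore a position stored outside of Kafka, including
 * its leader epoch, so the fetcher can validate it against the current
 * leader before resuming consumption (KIP-320). */
static void seek_with_epoch(rd_kafka_t *rk) {
        rd_kafka_topic_partition_list_t *parts =
                rd_kafka_topic_partition_list_new(1);
        rd_kafka_topic_partition_t *rktpar =
                rd_kafka_topic_partition_list_add(parts, "mytopic", 0);

        rktpar->offset = 1234;                                 /* stored offset */
        rd_kafka_topic_partition_set_leader_epoch(rktpar, 7);  /* stored epoch  */

        rd_kafka_error_t *error =
                rd_kafka_seek_partitions(rk, parts, 5000 /*timeout ms*/);
        if (error) {
                fprintf(stderr, "Seek failed: %s\n",
                        rd_kafka_error_string(error));
                rd_kafka_error_destroy(error);
        }

        rd_kafka_topic_partition_list_destroy(parts);
}
```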

@edenhill requested a review from emasab on December 21, 2022 at 19:38
emasab (Collaborator) commented on Jan 26, 2023:

Reopened in #4162
