fix: bump default version to 1.0.0 #1791
Conversation
This PR fixes #1787. As explained there:

The current default, 0.8.2, is a 5-year-old version. The problem with defaulting to an older version when the Kafka cluster is newer is manifold:

- it requires Kafka to downconvert consumer fetch responses (e.g. if a newly-configured client produced messages and a default Sarama client is reading them). This hurts broker performance because it costs additional CPU, especially if compression is enabled, in which case the broker has to decompress, downconvert and recompress the messages. Downconversion also defeats zero-copy, because the fetch response now has to be copied into memory
- it requires Kafka to upconvert produce requests' messages
- in general it is hard to support. While the latest Kafka currently supports the oldest versions, it is in the project's best interest to deprecate and eventually drop support for legacy versions. Otherwise it becomes hard to maintain, the test matrix grows, and new features need to work around old-version limitations (no idempotency, no exactly-once). It is easier for Kafka to deprecate/drop support when its ecosystem has already done so

0.11 is the minimum we should default to, as it enables Kafka's v2 message format and avoids expensive upconversion/downconversion in the Kafka broker. It is also required for correctness in some cases (e.g. KIP-101).

1.0.0 is a 3-year-old version at this point and a reasonable default.
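For users who still need the old behaviour after this change, overriding the default is a one-line config tweak. A minimal sketch (the broker address is illustrative, and the Shopify import path is the one used by the 1.x series):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// After this PR the default is V1_0_0_0; clusters older than 1.0.0
	// must opt back in to the legacy protocol explicitly.
	cfg.Version = sarama.V0_8_2_0

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()
}
```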
(force-pushed from d3c19b3 to 921cf9c)
hmm, that doesn't look related
It should be sufficient to use 0.10 as the default and then use ApiVersionRequest to select the highest supported version for each protocol request.
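A rough sketch of the negotiation rule being suggested (illustrative only, not Sarama's implementation; `chooseVersion` is a hypothetical helper): for each API key, take the highest version that both the client and the broker support.

```go
package main

import "fmt"

// chooseVersion is a hypothetical helper illustrating ApiVersions-based
// negotiation: pick the highest version supported by both client and broker,
// or report that there is no overlap.
func chooseVersion(clientMax, brokerMin, brokerMax int16) (int16, bool) {
	v := clientMax
	if brokerMax < v {
		v = brokerMax
	}
	if v < brokerMin {
		return 0, false // no common version range
	}
	return v, true
}

func main() {
	// e.g. the client can build MetadataRequest up to v5 and the broker
	// advertises v0..v9 for that API key.
	if v, ok := chooseVersion(5, 0, 9); ok {
		fmt.Printf("send MetadataRequest v%d\n", v)
	}
}
```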
@edenhill yes, the plan is to migrate to ApiVersionRequest in a v2.x release, but I think this is a reasonable default to leave the 1.x series at, so that any user who doesn't override the version isn't a hindrance to their cluster
@dnwe +1
It is related, because the mock broker is sending a V0 MetadataResponse when the client (at V1_0_0_0) has sent a V5 MetadataRequest
I'm having a bit of a hard time pinpointing and fixing the issue. I took @dnwe's advice and fixed the test to use V5 MetadataResponses, but it still results in a 10-minute timeout with a bunch of threads stuck, apparently when closing the producer and draining the in-flight requests. As a side note, we could benefit from having a single place to define the request version (and we could use this for responses in tests too) - I see we set the metadata request version according to the conf.Version in multiple places in the sender's code. I don't want to increase the scope of this PR, but thought I'd note the potential improvement in case we want to capture it in a GitHub issue
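On the "single place to define the request version" aside, a hypothetical sketch of what such a central mapping could look like (the helper name and the exact version cut-offs are assumptions for illustration, not Sarama's real code):

```go
package sketch

import "github.com/Shopify/sarama"

// metadataRequestVersion is a hypothetical central mapping from the configured
// KafkaVersion to the MetadataRequest version the client should send; the
// cut-off versions shown here are illustrative only.
func metadataRequestVersion(v sarama.KafkaVersion) int16 {
	switch {
	case v.IsAtLeast(sarama.V1_0_0_0):
		return 5
	case v.IsAtLeast(sarama.V0_11_0_0):
		return 4
	case v.IsAtLeast(sarama.V0_10_0_0):
		return 1
	default:
		return 0
	}
}
```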
@stanislavkozlovski it'll be making V3 ProducerRequests too, so I think you'll need V3 ProducerResponses
If you add … However, as a simpler measure you might want to just update the tests to explicitly specify Version = MinVersion in their config, so they continue to exercise the behaviour they used to. We already have (some) tests that exercise specific versions.
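For reference, the simpler measure amounts to a one-line pin in each affected test's config. A minimal sketch assuming the exported sarama.MinVersion constant (the test name is made up):

```go
package sarama_test

import (
	"testing"

	"github.com/Shopify/sarama"
)

// TestLegacyProtocolBehaviour is a made-up example: pinning Version to
// MinVersion keeps the test exercising the pre-1.0.0 request/response
// versions even after the default is bumped.
func TestLegacyProtocolBehaviour(t *testing.T) {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.MinVersion

	if err := cfg.Validate(); err != nil {
		t.Fatal(err)
	}
}
```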
This reverts commit 91e853d.
Thanks @dnwe, I like the approach of switching back the Kafka version, as ensuring each request and response are the same version seems like a large task, probably better deferred to a separate issue. The latest commit adds two test functions
@stanislavkozlovski Thanks for getting this through. Use of NewTestConfig seems fine; I wonder if we should add some more documentation around it 🤔
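For context, a minimal sketch of the shape a NewTestConfig-style helper might take; the defaults it sets here are an assumption for illustration, not necessarily the merged implementation:

```go
package sketch

import "github.com/Shopify/sarama"

// NewTestConfig sketches the idea discussed above: a single constructor that
// tests use instead of NewConfig, so shared test defaults (such as pinning
// Version to MinVersion) live in one place.
func NewTestConfig() *sarama.Config {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.MinVersion
	return cfg
}
```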
SGTM
Thanks for the review @varun06. I had added documentation around the method:
Let me know if you can think of more to say, or whether there's a better place to document it
Need one more approval and then we can merge it.
LGTM — sorry for the delay