
fix: fill in the Fetch{Request,Response} protocol #1582

Merged
merged 1 commit into from
Jan 21, 2020

Conversation

dnwe
Collaborator

@dnwe dnwe commented Jan 17, 2020

In order to consume zstd-compressed records the consumer needs to send
and receive version 10 FetchRequest/FetchResponses, but they need to do
so in a well-formed manner that adheres to the encoding format.

Ref: https://kafka.apache.org/protocol

Signed-off-by: Dominic Evans dominic.evans@uk.ibm.com
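For context, a minimal consumer sketch along these lines (the broker address, topic name, and the github.com/IBM/sarama import path are illustrative assumptions; older releases imported github.com/Shopify/sarama). Setting Config.Version to at least sarama.V2_1_0_0 is what allows the client to use the v10 Fetch protocol and therefore receive zstd-compressed batches:

```go
package main

import (
	"log"

	"github.com/IBM/sarama" // older releases: github.com/Shopify/sarama
)

func main() {
	cfg := sarama.NewConfig()
	// Kafka 2.1.0 is the first version with zstd; advertising at least this
	// version lets the client negotiate the v10 Fetch protocol.
	cfg.Version = sarama.V2_1_0_0

	// "localhost:9092" and "my-topic" are placeholder values.
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatalln(err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition("my-topic", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatalln(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		log.Printf("offset=%d bytes=%d", msg.Offset, len(msg.Value))
	}
}
```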

@dnwe dnwe mentioned this pull request Jan 17, 2020
@dnwe
Collaborator Author

dnwe commented Jan 17, 2020

Note: I know it'd be good to add some more test cases around this; I plan to do those in a follow-up PR.

@d1egoaz
Contributor

d1egoaz commented Jan 17, 2020

I think there is already some testing framework regarding compression algorithms, see 864d473

I have added your commit to my PR so I can run those tests.

On my local tests your patch worked great!

@dnwe
Collaborator Author

dnwe commented Jan 17, 2020

@d1egoaz yeah I just meant extending the basic encode+decode roundtripping tests across all the available protocol versions and checking the output contains the expected fields as you increase the version
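Purely as an illustration of the kind of version-parameterised roundtrip test meant here, a sketch might look like the following. The roundTripper interface and the constructor callbacks are hypothetical stand-ins, not Sarama's actual internal test helpers:

```go
package protocol_test

import (
	"reflect"
	"testing"
)

// roundTripper is a hypothetical stand-in for a request/response struct that
// can encode and decode itself at a given protocol version.
type roundTripper interface {
	Encode(version int16) ([]byte, error)
	Decode(version int16, raw []byte) error
}

// checkRoundTrip encodes a populated message at every version from 0 up to
// maxVersion, decodes the bytes into a fresh empty value, and verifies the
// two match, so field handling can be checked as the version increases.
func checkRoundTrip(t *testing.T, maxVersion int16,
	newPopulated func(v int16) roundTripper,
	newEmpty func(v int16) roundTripper) {
	t.Helper()
	for v := int16(0); v <= maxVersion; v++ {
		in := newPopulated(v)
		raw, err := in.Encode(v)
		if err != nil {
			t.Fatalf("v%d: encode: %v", v, err)
		}
		out := newEmpty(v)
		if err := out.Decode(v, raw); err != nil {
			t.Fatalf("v%d: decode: %v", v, err)
		}
		if !reflect.DeepEqual(in, out) {
			t.Errorf("v%d: roundtrip mismatch\n got: %#v\nwant: %#v", v, out, in)
		}
	}
}
```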

Contributor

@varun06 varun06 left a comment


LGTM

Contributor

@d1egoaz d1egoaz left a comment


regarding the s/MinVersion/MaxVersion change

I'm wondering what the implications of that would be for existing producers/consumers. Would that change any behaviour?

also, I've noticed there are more Min/Max versions that would need to be changed too if we keep following this pattern

@dnwe
Collaborator Author

dnwe commented Jan 20, 2020

@d1egoaz I agree that we should ideally be consistent, and I would potentially submit a follow-up PR to make the change across the board (although my preference would be to just switch to using the ApiVersions call to enable/disable behaviour based on the broker's advertised support, rather than guessing based on the local config setting)

In terms of change-in-behaviour, I think the new behaviour is slightly better and no worse:

  1. the requiredVersion(...) check is only ever used client-side, when sending requests to a broker, as a verification step comparing against the configured sarama.Config version; nothing is checked on incoming responses
  2. the previous behaviour was that if a protocol version wasn't covered by the cases in the requiredVersion switch statement then it would always be permitted regardless, relying on the remote broker to reject it if it didn't understand it. I slightly changed this logic so that if we fall through to the default switch case we now return MaxVersion, which will likely reject it client-side unless the user has configured their sarama Config to the latest Kafka version known by the version of Sarama they are using.

I'd actually be tempted to make the default fallthrough return [4]uint{99, 99, 99, 99} so that it always rejects things client-side if they aren't known in the switch statement
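A hedged sketch of roughly the shape of check being discussed (the version-to-Kafka mappings below are illustrative rather than a copy of Sarama's table, and the import path assumes the current github.com/IBM/sarama module): an unrecognised protocol version falls through to MaxVersion, which the client-side IsAtLeast comparison then rejects unless the configured version is the newest one this build of Sarama knows about.

```go
package main

import (
	"fmt"

	"github.com/IBM/sarama"
)

// requiredKafkaVersion mirrors the shape of the requiredVersion(...) check:
// map a Fetch protocol version to the minimum Kafka version the user must
// have configured before the client will send it.
func requiredKafkaVersion(fetchVersion int16) sarama.KafkaVersion {
	switch fetchVersion {
	case 10:
		return sarama.V2_1_0_0 // fetch v10 arrived alongside zstd in Kafka 2.1
	case 4:
		return sarama.V0_11_0_0
	case 0:
		return sarama.MinVersion
	default:
		// Unknown version: returning MaxVersion means the request is
		// rejected client-side unless the user configured the newest
		// Kafka version this build of Sarama knows about.
		return sarama.MaxVersion
	}
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_0_0_0 // too old for fetch v10

	if !cfg.Version.IsAtLeast(requiredKafkaVersion(10)) {
		fmt.Println("fetch v10 would be rejected client-side (ErrUnsupportedVersion)")
	}
}
```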

@d1egoaz
Contributor

d1egoaz commented Jan 21, 2020

do you want to merge this @dnwe? and I'll rebase #1574 on top of this

@dnwe
Collaborator Author

dnwe commented Jan 21, 2020

@d1egoaz yeah I'm happy for this to be merged. I'm not a committer myself so I can't actually perform the merge 😀

@d1egoaz
Contributor

d1egoaz commented Jan 21, 2020

Huh, I thought you were a committer :D
ping @bai

@d1egoaz d1egoaz merged commit ab4036c into IBM:master Jan 21, 2020
@d1egoaz
Contributor

d1egoaz commented Jan 21, 2020

thanks for the contribution @dnwe

KJTsanaktsidis added a commit to KJTsanaktsidis/sarama that referenced this pull request Feb 13, 2020
…protocol"

This reverts commit ab4036c, reversing
changes made to ae8f056.