
Add protocol produce v6 #869

Merged
merged 17 commits into from
Sep 15, 2020

Conversation

tulios
Owner

@tulios tulios commented Sep 11, 2020

No description provided.

@ankon
Contributor

ankon commented Sep 13, 2020

This is a change related to KIP-219, right?

In that case, I think the throttleTime should be changed to clientSideThrottleTime here so that the client-side throttling is actually applied. See #776, specifically 29779f2
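A minimal sketch of the rename being suggested (the function and field names here are illustrative, not KafkaJS's actual internals). Under KIP-219, the broker sends the response immediately and reports a throttle time; the client itself is expected to pause, so the decoded field is renamed to signal that responsibility:

```javascript
// Hypothetical sketch of the KIP-219 client-side throttling rename.
// Pre-KIP-219, the broker held the connection for `throttleTime`; with
// KIP-219 the client must enforce the pause itself, so we surface the
// value as `clientSideThrottleTime` for the request queue to act on.
const decodeThrottledResponse = rawResponse => {
  const { throttleTime = 0, ...rest } = rawResponse
  return {
    ...rest,
    // The request queue is expected to delay further sends by this amount
    clientSideThrottleTime: throttleTime,
    throttleTime: 0,
  }
}
```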

@tulios
Owner Author

tulios commented Sep 13, 2020

Yes, I noticed that. I made the change locally, but we have to change the client to wait for the pending requests when disconnecting; otherwise, we could lose data with the producer. I am on it.

@tulios
Owner Author

tulios commented Sep 14, 2020

@Nevon @ankon I updated the PR to use the client-side throttling and for the request queue to wait for pending requests before it shuts down and disconnects.
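A simplified illustration of that behavior (this is a sketch, not the actual KafkaJS request queue): destroy() drains the queue of throttled/pending requests before the connection is allowed to close, so buffered produce requests are not dropped:

```javascript
// Illustrative request queue: shutdown flushes pending requests first.
class RequestQueue {
  constructor() {
    this.pending = []
  }

  push(sendFn) {
    // sendFn is an async function that performs the actual send
    this.pending.push(sendFn)
  }

  async drain() {
    while (this.pending.length > 0) {
      const send = this.pending.shift() // oldest request first (FIFO)
      await send()
    }
  }

  async destroy() {
    await this.drain() // flush pending requests before disconnecting
  }
}
```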

@tulios
Owner Author

tulios commented Sep 14, 2020

There is one more detail: the request queue also has to wait for the current in-flight requests. I am making the change.
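Besides the pending queue, requests that were already sent and are awaiting a broker response ("in-flight") must be awaited too. A hypothetical sketch of that tracking (names are illustrative, not KafkaJS code):

```javascript
// Illustrative in-flight tracking: shutdown waits for responses to
// requests that have already been written to the socket.
class InflightTracker {
  constructor() {
    this.inflight = new Set()
  }

  track(responsePromise) {
    this.inflight.add(responsePromise)
    // Remove the entry once the request settles, success or failure
    responsePromise.finally(() => this.inflight.delete(responsePromise))
    return responsePromise
  }

  async waitForInflight() {
    await Promise.all([...this.inflight])
  }
}
```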

@tulios
Owner Author

tulios commented Sep 14, 2020

I spotted a bug in the way the request queue processes pending requests:

const pendingRequest = this.pending.pop()

It processes the most recent request in the queue rather than the oldest one. I am switching from this.pending.pop() to this.pending.shift(), since we want FIFO behavior.
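A minimal demonstration of the difference: Array#pop takes from the tail (LIFO), while Array#shift takes from the head (FIFO), which is the order queued requests should be processed in. The helper below is purely illustrative:

```javascript
// Drain a copy of the queue using the given strategy and record the order
const processAll = (queue, takeNext) => {
  const processed = []
  const pending = [...queue]
  while (pending.length > 0) {
    processed.push(takeNext(pending))
  }
  return processed
}

const requests = ['req-1', 'req-2', 'req-3']
const lifo = processAll(requests, pending => pending.pop())   // buggy: newest first
const fifo = processAll(requests, pending => pending.shift()) // fixed: oldest first
```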

@tulios
Owner Author

tulios commented Sep 14, 2020

I ran many tests, and I am now happy with the solution. Please take a look.

Collaborator

@JaapRood JaapRood left a comment


Looks solid. Reading through the KIP and the comments on this PR makes me think you've covered all the scenarios, most importantly that throttled messages are flushed before disconnecting. Code and tests back this up!

@@ -317,8 +324,13 @@ module.exports = class Connection {

try {
const payloadDecoded = await response.decode(payload)
// KIP-219: If the response indicates that the client-side needs to throttle, do that.

/**
Contributor


Somewhat nit-picky, but this line wouldn't be a JSDoc-y comment, but just a multiline comment, no?

Owner Author


I like the doc block because it highlights the comment. I made the change to give this comment the importance it deserves. I don't think we should argue about comment block style; my intention was just to highlight a good comment 😄

await sendSampleMessages()
})

afterEach(async () => {
await producer.disconnect()
Contributor


This should use the same protection against producer not being assigned, to avoid noisy errors when tests fail due to problems with the cluster (see #795)

@ankon
Copy link
Contributor

ankon commented Sep 15, 2020

LGTM!

@tulios tulios merged commit 7ac93f5 into master Sep 15, 2020
@tulios tulios deleted the add-protocol-produce-v6 branch September 15, 2020 14:28
@tulios tulios linked an issue Sep 16, 2020 that may be closed by this pull request
Successfully merging this pull request may close these issues.

Support protocol produce v4-v6