
Introduce messages fence mechanism for Key_Shared subscription #6977

Closed

Conversation

codelipenghui
Contributor

Fixes #6554

Motivation

Introduce messages fence mechanism for Key_Shared subscription to guarantee ordering dispatching.

A fenced message is a message that cannot be delivered at this time. In a Key_Shared subscription, a newly added consumer must wait until all messages before the first message that would be delivered to it have been acknowledged. The fencedMessagesContainer maintains all fenced messages in a map (mark-delete-position -> fenced messages). When the mark-delete position of the cursor becomes greater than a mark-delete-position key in the fencedMessagesContainer, the corresponding fenced messages are lifted.
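
For readers less familiar with the idea, here is a minimal sketch of how such a container could work. The class name and the generic position/entry types are illustrative placeholders, not the actual classes in this PR:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

/**
 * Minimal sketch of the fence container described above. FencedMessagesContainer
 * and the generic parameters (P for position, E for entry) are illustrative
 * stand-ins, not the real broker classes.
 */
public class FencedMessagesContainer<P extends Comparable<P>, E> {

    // mark-delete-position -> messages fenced until the cursor passes that position
    private final NavigableMap<P, List<E>> fenced = new ConcurrentSkipListMap<>();

    /** Fence messages until the cursor's mark-delete position passes {@code barrier}. */
    public void fence(P barrier, List<E> messages) {
        fenced.computeIfAbsent(barrier, k -> new ArrayList<>()).addAll(messages);
    }

    /**
     * Called when the cursor's mark-delete position advances: every entry keyed
     * strictly before the new position (i.e. the cursor is now greater than the
     * barrier) is removed and returned so it can be dispatched.
     */
    public List<E> lift(P markDeletePosition) {
        List<E> lifted = new ArrayList<>();
        NavigableMap<P, List<E>> head = fenced.headMap(markDeletePosition, false);
        head.values().forEach(lifted::addAll);
        head.clear();
        return lifted;
    }
}
```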

Modifications

Introduce a message fence mechanism for the Key_Shared subscription dispatcher: fenced messages are held in a fencedMessagesContainer keyed by mark-delete position and are released once the cursor's mark-delete position passes that key.

Verifying this change

New unit tests added.

Does this pull request potentially affect one of the following parts:

If yes was chosen, please highlight the changes

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API: (no)
  • The schema: (no)
  • The default values of configurations: (no)
  • The wire protocol: (no)
  • The rest endpoints: (no)
  • The admin cli options: (no)
  • Anything that affects deployment: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (docs)

@codelipenghui self-assigned this May 18, 2020
@codelipenghui added this to the 2.6.0 milestone May 18, 2020
@codelipenghui
Contributor Author

@merlimat Please take a look at this PR. The implementation is based on your idea in #6554 (comment). Thanks for the great idea.

@Huanli-Meng
Contributor

I left my comments, PTAL.

Review comments on site2/docs/reference-configuration.md (outdated, resolved).
@merlimat
Contributor

@codelipenghui This is not quite what I was describing in the comment. I don't think it's desirable to "fence" the individual messages; rather, just keep a pointer to where the consumer joined. I already have a fix (working in prod), but it's backed up on #6791.

@codelipenghui
Contributor Author

@merlimat I have added a pointer to indicate where the consumer joined; once that pointer has been mark-deleted, the dispatcher dispatches messages to the consumer. The fenced messages are just a message buffer: if messages don't belong to this consumer, they can continue to be dispatched to other consumers. If the buffer is full, the dispatcher stops dispatching messages. I'm not sure if this is a good approach; the main idea is to not stop dispatching messages to other consumers.

> I already have a fix (working in prod), but it's backed up on #6791.

Does this mean the fix depends on #6791? Or is the fix only for the consistent hash approach?
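
To make the "pointer to where the consumer joined" idea from this exchange concrete, here is a small hedged sketch of the check, using placeholder consumer and position types rather than the broker's real Consumer/Position classes:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hedged sketch of the "joined position" check discussed above. The type
 * parameters C (consumer) and P (position) are placeholders only.
 */
public class JoinedPositionTracker<C, P extends Comparable<P>> {

    // consumer -> read position recorded at the moment the consumer joined
    private final Map<C, P> recentlyJoined = new LinkedHashMap<>();

    public void consumerJoined(C consumer, P currentReadPosition) {
        recentlyJoined.put(consumer, currentReadPosition);
    }

    /**
     * A newly joined consumer may receive messages only after everything before
     * its join position has been acknowledged (i.e. the pointer was mark-deleted).
     */
    public boolean canDispatchTo(C consumer, P markDeletePosition) {
        P joinedAt = recentlyJoined.get(consumer);
        if (joinedAt == null) {
            return true; // not a recently joined consumer, no restriction
        }
        if (markDeletePosition.compareTo(joinedAt) >= 0) {
            recentlyJoined.remove(consumer); // restriction lifted for good
            return true;
        }
        return false;
    }
}
```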

@codelipenghui
Contributor Author

/pulsarbot run-failure-checks

@Huanli-Meng
Contributor

Should the Key_Shared subscription part of the Messaging section of the docs also be updated?

@feeblefakie

@codelipenghui
Just a question: so this and #6791 solve the same issue with different approaches, but will both be applied?
#6791 looks more promising, but I'm not fully sure.
Could I get some information about this?

@codelipenghui
Contributor Author

codelipenghui commented May 27, 2020

@feeblefakie #6791 introduces a consistent hashing approach for the consumer selector; the current implementation uses a hash range split approach.

This PR is to fix #6554
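
For context on the difference between the two approaches mentioned above, here is a rough sketch of how each selection style maps a key to a consumer. The names are illustrative only and do not match Pulsar's actual selector classes:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

/** Rough illustration of the two key-to-consumer selection styles. */
public class KeyToConsumerSelectors {

    /**
     * Hash range split: a fixed range [0, rangeSize) is divided among consumers,
     * and a key maps to the consumer owning the slot its hash falls into.
     * {@code ranges} maps the start of each consumer's slice to that consumer.
     */
    public static String selectByRangeSplit(NavigableMap<Integer, String> ranges,
                                            String key, int rangeSize) {
        int slot = Math.floorMod(key.hashCode(), rangeSize);
        return ranges.floorEntry(slot).getValue();
    }

    /**
     * Consistent hashing: consumers are placed on a ring; a key is served by the
     * first consumer at or after the key's hash, wrapping around if needed.
     */
    public static String selectByConsistentHash(NavigableMap<Integer, String> ring,
                                                String key) {
        Map.Entry<Integer, String> entry = ring.ceilingEntry(key.hashCode());
        return (entry != null ? entry : ring.firstEntry()).getValue();
    }

    public static void main(String[] args) {
        NavigableMap<Integer, String> ranges = new TreeMap<>();
        ranges.put(0, "consumer-A");       // owns slots [0, 32768)
        ranges.put(32768, "consumer-B");   // owns slots [32768, 65536)
        System.out.println(selectByRangeSplit(ranges, "order-123", 65536));
    }
}
```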

@codelipenghui
Contributor Author

Closed via #7106.
