
Fixed readers backlog stats after data is skipped #7236

Merged — 3 commits merged into apache:master on Jun 16, 2020

Conversation

merlimat
Contributor

Motivation

The metrics for the reader backlog keep increasing when data is dropped because the reader cursor only moves on the next read attempt.
Instead, we should proactively move the cursor forward to the first valid ledger.
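A minimal sketch of that approach, using a simplified position/cursor model (the class and method names here are illustrative, not Pulsar's actual `ManagedLedgerImpl` API): when ledgers are trimmed, any cursor whose mark-delete position still falls inside a deleted ledger is moved up to `(firstNonDeletedLedger, -1)`.

```java
import java.util.List;

public class TrimSketch {
    // A (ledgerId, entryId) pair, ordered first by ledger, then by entry.
    record Position(long ledgerId, long entryId) implements Comparable<Position> {
        public int compareTo(Position o) {
            int c = Long.compare(ledgerId, o.ledgerId);
            return c != 0 ? c : Long.compare(entryId, o.entryId);
        }
    }

    static class Cursor {
        Position markDeleted;
        Cursor(Position p) { markDeleted = p; }
    }

    // Mirrors the idea of the fix: cursors lagging behind the trim point are
    // advanced to (firstNonDeletedLedger, -1); cursors already ahead are untouched.
    static void advanceCursors(List<Cursor> cursors, long firstNonDeletedLedger) {
        Position highestToDelete = new Position(firstNonDeletedLedger, -1);
        for (Cursor c : cursors) {
            if (highestToDelete.compareTo(c.markDeleted) > 0) {
                c.markDeleted = highestToDelete;
            }
        }
    }

    public static void main(String[] args) {
        Cursor lagging = new Cursor(new Position(1, 5)); // inside a deleted ledger
        Cursor ahead   = new Cursor(new Position(9, 3)); // already past the trim point
        advanceCursors(List.of(lagging, ahead), 6);
        System.out.println(lagging.markDeleted); // advanced to ledger 6, entry -1
        System.out.println(ahead.markDeleted);   // unchanged
    }
}
```

Because the lagging cursor now sits at the trim boundary, its backlog count immediately reflects the dropped data instead of waiting for the next read attempt.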

@merlimat merlimat added the type/bug and release/2.6.1 labels Jun 10, 2020
@merlimat merlimat added this to the 2.7.0 milestone Jun 10, 2020
@merlimat merlimat self-assigned this Jun 10, 2020
@merlimat merlimat changed the title Fixed readers backlog after data is skipped Fixed readers backlog stats after data is skipped Jun 10, 2020
PositionImpl highestPositionToDelete = new PositionImpl(firstNonDeletedLedger, -1);

cursors.forEach(cursor -> {
    if (highestPositionToDelete.compareTo((PositionImpl) cursor.getMarkDeletedPosition()) > 0) {
Contributor
Should we add a check for non-durable cursors?

Contributor Author

No need for that: a durable cursor would have already been moved ahead; otherwise we wouldn't be trimming that ledger.

Comment on lines +711 to +712
assertEquals(nonDurableCursor.getNumberOfEntries(), 6);
assertEquals(nonDurableCursor.getNumberOfEntriesInBacklog(true), 6);
Contributor

Why does the durable cursor have a backlog of 5 while the non-durable cursor has 6? Shouldn't they be the same?

Contributor Author

That's because the durable cursor is positioned at the end of the 5th ledger, but not on the 6th, which means only 4 ledgers are deleted. That cursor will move forward on its next mark-delete.

When advancing the non-durable cursor, we advance to the first available ledger; that may be before the durable cursor's mark-delete position, but that's ok.
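The two backlog numbers in the test can be reproduced with a toy model (one entry per ledger, illustrative ledger ids; this is not the actual test topology or Pulsar's backlog implementation): the backlog is the number of entry positions strictly after the cursor's mark-delete position.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class BacklogSketch {
    // Toy backlog: count entry positions (ledger, entry) strictly greater than
    // the cursor's mark-delete position.
    static long backlog(List<long[]> entries, long[] markDeleted) {
        return entries.stream()
                .filter(e -> e[0] > markDeleted[0]
                        || (e[0] == markDeleted[0] && e[1] > markDeleted[1]))
                .count();
    }

    public static void main(String[] args) {
        // Remaining entries live in ledgers 5..10, one entry (id 0) each;
        // earlier ledgers were trimmed away.
        List<long[]> entries = LongStream.rangeClosed(5, 10)
                .mapToObj(l -> new long[]{l, 0})
                .collect(Collectors.toList());

        long[] durable    = {5, 0};  // mark-deleted through the 5th ledger's entry
        long[] nonDurable = {5, -1}; // advanced to the start of the first kept ledger

        System.out.println(backlog(entries, durable));    // 5
        System.out.println(backlog(entries, nonDurable)); // 6
    }
}
```

The durable cursor, sitting at the end of the 5th remaining ledger, sees 5 entries ahead; the non-durable cursor, moved to just before that ledger's first entry, sees 6. The one-entry difference is exactly the entry the durable cursor already mark-deleted.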

…l/ManagedLedgerImpl.java

Co-authored-by: lipenghui <penghui@apache.org>
@codelipenghui codelipenghui merged commit 6b9c90f into apache:master Jun 16, 2020
codelipenghui pushed a commit to streamnative/pulsar-archived that referenced this pull request Jul 14, 2020
(cherry picked from commit 6b9c90f)
wolfstudy pushed a commit that referenced this pull request Jul 29, 2020
(cherry picked from commit 6b9c90f)
huangdx0726 pushed a commit to huangdx0726/pulsar that referenced this pull request Aug 24, 2020
5 participants