
Fixed retention of keys in compaction #11287

Merged — 2 commits into apache:master from compaction-fixes on Jul 13, 2021

Conversation

merlimat
Contributor

Motivation

This change fixes a few issues in the compaction mechanism:

  • When a reader is created, reading from "earliest" message, it should read the compacted data and then continue from the next message.
  • When the compaction consumer starts, it shouldn't seek to the beginning. This causes 2 issues:
    • Rescanning of the topic each time the compaction runs
    • Keys that are being dropped from the topic are also getting dropped from the compacted view, while in fact they should be there until explicitly deleted (with an empty message for a key).
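
The tombstone semantics in the list above can be sketched with a minimal in-memory model. This is a hypothetical `compact` helper for illustration only, not Pulsar's actual implementation: the latest value per key is retained until an empty message for that key explicitly deletes it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CompactionSketch {
    // Build the compacted view: latest value per key; an empty value acts
    // as a tombstone that removes the key from the view.
    static Map<String, String> compact(String[][] messages) {
        Map<String, String> view = new LinkedHashMap<>();
        for (String[] m : messages) {
            String key = m[0], value = m[1];
            if (value == null || value.isEmpty()) {
                view.remove(key);     // explicit delete via empty message
            } else {
                view.put(key, value); // key stays even after the original
                                      // message is dropped by retention
            }
        }
        return view;
    }

    public static void main(String[] args) {
        String[][] msgs = {
            {"a", "1"}, {"b", "2"}, {"a", "3"}, {"b", ""} // "" deletes b
        };
        System.out.println(compact(msgs)); // prints {a=3}
    }
}
```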

The main source of the problem is that when a cursor is created on the "earliest" message, it gets automatically adjusted to the earliest message available to read. This confuses the read-compacted check, which may then think the reader/consumer is already ahead of the compaction horizon.

Modifications

Introduced an `isFirstRead` flag to make sure we double-check the start message id and use `MessageId.earliest` instead of the earliest available message to read on the topic. After the first read, the positioning will be fine.
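
The flag's effect can be sketched with plain longs. This is a hypothetical simplification of the broker's read path (names and types are illustrative, not the real Pulsar classes): on the first read, the position the client requested is used instead of the cursor's auto-adjusted position.

```java
public class FirstReadSketch {
    static final long EARLIEST = -1L;      // stands in for MessageId.earliest

    private boolean isFirstRead = true;
    private final long requestedStartId;   // what the client asked for
    private final long cursorReadPosition; // auto-adjusted by the broker

    FirstReadSketch(long requestedStartId, long cursorReadPosition) {
        this.requestedStartId = requestedStartId;
        this.cursorReadPosition = cursorReadPosition;
    }

    // Position used when checking against the compaction horizon.
    long effectiveStartPosition() {
        if (isFirstRead) {
            isFirstRead = false;
            // Use the requested id (possibly EARLIEST), not the cursor's
            // auto-adjusted position, so the horizon check is not fooled
            // into thinking the reader is already past the horizon.
            return requestedStartId;
        }
        return cursorReadPosition;
    }
}
```

After the first call the sketch falls back to the cursor position, matching the description that "after the first read, the positioning will be fine."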

@merlimat merlimat added type/bug The PR fixed a bug or issue reported a bug release/2.8.1 labels Jul 12, 2021
@merlimat merlimat added this to the 2.9.0 milestone Jul 12, 2021
@merlimat merlimat self-assigned this Jul 12, 2021
@Anonymitaet
Member

Thanks for your contribution. For this PR, do we need to update docs?

@codelipenghui codelipenghui added the doc-not-needed Your PR changes do not impact docs label Jul 13, 2021
Contributor

@codelipenghui codelipenghui left a comment


The change looks good to me, I just left some minor comments about the test.

```java
 * Compaction should retain expired keys in the compacted view
 */
@Test
public void testCompaction() throws Exception {
```
Contributor


For the test, I think we need to verify that the raw data of the topic has been deleted, because the issue only happens after the raw data has been deleted.

Contributor Author


Done


```java
assertTrue(expectedKeys.contains(msg.getKey()), "Unexpected key: " + msg.getKey());

System.err.println("Received: " + msg.getKey());
```
Contributor


Is there any special reason for using `System.err` here?

Contributor

@eolivelli eolivelli left a comment


LGTM

@codelipenghui codelipenghui merged commit feb4ff1 into apache:master Jul 13, 2021
@merlimat merlimat deleted the compaction-fixes branch July 13, 2021 17:11
codelipenghui pushed a commit that referenced this pull request Jul 14, 2021
(cherry picked from commit feb4ff1)
@codelipenghui codelipenghui added the cherry-picked/branch-2.8 Archived: 2.8 is end of life label Jul 14, 2021
Technoboy- added a commit to Technoboy-/pulsar that referenced this pull request Jul 20, 2021
Technoboy- added a commit to Technoboy-/pulsar that referenced this pull request Jul 21, 2021
Technoboy- added a commit to Technoboy-/pulsar that referenced this pull request Jul 23, 2021
@codelipenghui codelipenghui added the cherry-picked/branch-2.7 Archived: 2.7 is end of life label Jul 23, 2021
codelipenghui added a commit to codelipenghui/incubator-pulsar that referenced this pull request Jan 5, 2022
### Motivation

To fix the reader skipping the remaining compacted data when the topic is unloaded.
apache#11287 fixed the data-skipping issue when the reader reads messages for the
first time from the earliest position. But if the reader has consumed some, but not
all, of the messages in the compacted ledger, the start position will not be
`earliest`: the broker will rewind the reader's cursor to the next valid position
of the original topic, so the remaining messages in the compacted ledger will be skipped.

Here are the logs from broker:

```
10:44:36.035 [bookkeeper-ml-scheduler-OrderedScheduler-4-0] INFO  org.apache.pulsar.broker.service.BrokerService - Created topic persistent://xxx/product-full-prod/5126 - dedup is disabled
10:44:36.035 [bookkeeper-ml-scheduler-OrderedScheduler-4-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentTopic - [persistent://xxx/product-full-prod/5126][xxx] Creating non-durable subscription at msg id 181759:14:-1:-1
10:44:36.035 [bookkeeper-ml-scheduler-OrderedScheduler-4-0] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [xxx/product-full-prod/persistent/5126] Created non-durable cursor read-position=221199:0 mark-delete-position=181759:13
10:44:36.035 [bookkeeper-ml-scheduler-OrderedScheduler-4-0] INFO  org.apache.bookkeeper.mledger.impl.ManagedLedgerImpl - [xxx/product-full-prod/persistent/5126] Opened new cursor: NonDurableCursorImpl{ledger=xxx/product-full-prod/persistent/5126, ackPos=181759:13, readPos=221199:0}
10:44:36.035 [bookkeeper-ml-scheduler-OrderedScheduler-4-0] INFO  org.apache.bookkeeper.mledger.impl.ManagedCursorImpl - [xxx/product-full-prod/persistent/5126-xxx] Rewind from 221199:0 to 221199:0
```

There are many compacted messages after `181759:13`, but the broker will not dispatch them to the reader.
The issue can also be reproduced by the unit test added in this PR.

### Modification

If the cursor has `readCompacted = true`, just rewind to the next message after the mark-delete position,
so that the reader can continue reading data from the compacted ledger.

### Verification

Added a new test verifying that the reader gets all compacted and non-compacted messages from the topic
while the topic is being unloaded.
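The rewind change described above can be sketched with plain longs. This is a hypothetical model (the broker actually operates on ledger:entry positions), contrasting the old behavior — rewinding to the original topic's next valid position — with the fix for `readCompacted` cursors.

```java
public class RewindSketch {
    // For readCompacted cursors, rewinding to the next valid position of the
    // original topic can jump past entries that now exist only in the
    // compacted ledger; rewind to the entry right after the mark-delete
    // position instead.
    static long rewindPosition(boolean readCompacted,
                               long markDeletePosition,
                               long nextValidOriginalPosition) {
        if (readCompacted) {
            return markDeletePosition + 1; // continue inside compacted ledger
        }
        return nextValidOriginalPosition;
    }

    public static void main(String[] args) {
        // Modeled on the logs above: mark-delete entry 13, while the original
        // topic's next valid position is far ahead.
        System.out.println(rewindPosition(true, 13L, 221199L));  // prints 14
        System.out.println(rewindPosition(false, 13L, 221199L)); // prints 221199
    }
}
```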
codelipenghui added a commit to codelipenghui/incubator-pulsar that referenced this pull request Jan 7, 2022
codelipenghui added a commit that referenced this pull request Jan 7, 2022
…g. (#13629)

codelipenghui added a commit that referenced this pull request Jan 10, 2022
…g. (#13629)


(cherry picked from commit 07f131f)
eolivelli pushed a commit to eolivelli/pulsar that referenced this pull request Feb 25, 2022
(cherry picked from commit feb4ff1)
(cherry picked from commit 9c39726)
bharanic-dev pushed a commit to bharanic-dev/pulsar that referenced this pull request Mar 18, 2022
Labels

area/compaction · cherry-picked/branch-2.7 · cherry-picked/branch-2.8 · doc-not-needed · release/2.7.4 · release/2.8.1 · type/bug

4 participants