
Pulsar Broker is running compaction on multiple partitions in parallel. #7223

Closed
lukestephenson opened this issue Jun 9, 2020 · 2 comments
Labels
area/broker, help wanted, lifecycle/stale, type/bug

Comments


lukestephenson commented Jun 9, 2020

Describe the bug
A single Pulsar broker should only compact one partition at a time. Compacting multiple partitions in parallel increases memory usage and with it the chance of OutOfMemoryErrors.

To Reproduce
Steps to reproduce the behavior:

  1. Create a topic with many partitions
  2. Publish lots of data to all of the partitions
  3. Use pulsar-admin to set a compaction threshold low enough to trigger compaction for the partitions published to above:
     bin/pulsar-admin namespaces set-compaction-threshold --threshold <some threshold> <tenant/namespace>
  4. Check the logs. Notice how, on one broker, compaction is triggered for multiple partitions within the same second (in the attached screenshot, these logs are all for broker 1).
     [screenshot: broker 1 compaction logs]

Expected behavior
Compaction should only run for one partition at a time on a given broker. The source code suggests this was the intended behaviour, but it doesn't appear to be working that way.
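The expected behaviour can be sketched as follows. This is a minimal, hypothetical illustration (not Pulsar's actual compaction code): routing every compaction task on a broker through a single-threaded executor guarantees at most one runs at a time, regardless of how many partitions become eligible in the same second. All class and method names here are invented for the example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a single-threaded executor serializes all
// compaction work on this broker, so observed concurrency never exceeds 1.
public class CompactionSerializer {
    private final ExecutorService compactionExecutor = Executors.newSingleThreadExecutor();
    private final AtomicInteger running = new AtomicInteger();
    private final AtomicInteger maxObserved = new AtomicInteger();

    public void submitCompaction(String partition) {
        compactionExecutor.submit(() -> {
            int now = running.incrementAndGet();
            maxObserved.accumulateAndGet(now, Math::max);
            try {
                Thread.sleep(10); // stand-in for the actual compaction work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                running.decrementAndGet();
            }
        });
    }

    public int maxConcurrency() {
        return maxObserved.get();
    }

    public void shutdown() throws InterruptedException {
        compactionExecutor.shutdown();
        compactionExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Even if many partitions are submitted in a burst, `maxConcurrency()` stays at 1, which is the behaviour the report expected from a single broker.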

Screenshots
Attached logs

Additional context
Raised initially on Slack: https://apache-pulsar.slack.com/archives/C5Z4T36F7/p1591243377186900

Pulsar version: 2.5.2


sijie commented Jun 10, 2020

@lukestephenson I think we should add throttling logic that lets people define the parallelism for compaction. This would allow people to balance resource usage against the speed of compaction.
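The suggestion above could be sketched with a counting semaphore whose permit count comes from a configurable limit. This is an illustrative sketch only; the setting name `maxConcurrentCompactions` and the class below are hypothetical, not an existing Pulsar API.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of configurable compaction parallelism: a Semaphore
// caps how many compactions may run at once; the permit count would come
// from a broker setting (here the invented "maxConcurrentCompactions").
public class CompactionThrottle {
    private final Semaphore permits;
    private final AtomicInteger running = new AtomicInteger();
    private final AtomicInteger peak = new AtomicInteger();

    public CompactionThrottle(int maxConcurrentCompactions) {
        this.permits = new Semaphore(maxConcurrentCompactions);
    }

    public void compact(String partition) throws InterruptedException {
        permits.acquire(); // blocks once the configured limit is reached
        try {
            int now = running.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);
            Thread.sleep(5); // stand-in for the real compaction work
            running.decrementAndGet();
        } finally {
            permits.release();
        }
    }

    public int peakConcurrency() {
        return peak.get();
    }
}
```

Setting the limit to 1 reproduces the strictly serial behaviour the original report expected, while a higher value trades memory headroom for faster overall compaction.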

@tisonkun
Copy link
Member

tisonkun commented Dec 9, 2022

Closed as stale. Please create a new issue if it's still relevant to the maintained versions.

@tisonkun closed this as not planned on Dec 9, 2022.