
Backport "compaction: Fix incremental compaction for sstable cleanup" to branch-5.1 #14195

Conversation

raphaelsc
Member

After commit c7826aa, sstable runs are cleaned up together.

The procedure that executes cleanup was holding a reference to all input sstables so that it could later retry the same cleanup job on failure.

It turned out not to take into account that incremental compaction exhausts the input set incrementally.

As a result, cleanup was affected by the 100% space overhead: the held references kept already-exhausted input sstables from being deleted until the whole cleanup job completed.

To fix it, cleanup now has its input set updated by removing the sstables that were already cleaned up. On failure, cleanup retries the same job with the remaining sstables that were not yet exhausted by incremental compaction.

A new unit test reproduces the failure and passes with the fix.

Fixes #14035.

Closes #14038

(cherry picked from commit 23443e0)
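To illustrate the idea, here is a minimal, self-contained C++ sketch of the retry-with-remaining-input approach. The names (`sstable`, `cleanup_one`, `run_cleanup`) are hypothetical and this is not the actual ScyllaDB code; it only shows how erasing each sstable from the input set as soon as it is exhausted lets a retry after a failure process only the sstables that still need work.

```cpp
// Sketch only: hypothetical names, not the ScyllaDB cleanup implementation.
#include <iostream>
#include <set>
#include <stdexcept>
#include <string>

using sstable = std::string; // stand-in for a real sstable handle

// Pretend to clean up a single sstable; may throw to simulate a failure.
void cleanup_one(const sstable& sst, bool fail) {
    if (fail) {
        throw std::runtime_error("cleanup failed on " + sst);
    }
    std::cout << "cleaned up " << sst << "\n";
}

// Clean up sstables incrementally. The key point of the fix: each sstable is
// erased from the input set as soon as it is exhausted, so its reference is
// dropped right away and a retry only sees the sstables that remain.
void run_cleanup(std::set<sstable>& input, bool fail_on_last) {
    while (!input.empty()) {
        auto it = input.begin();
        cleanup_one(*it, fail_on_last && input.size() == 1);
        input.erase(it); // drop the reference now, not at the end of the job
    }
}

int main() {
    std::set<sstable> input{"sst-1", "sst-2", "sst-3"};
    try {
        run_cleanup(input, /*fail_on_last=*/true);
    } catch (const std::exception& e) {
        std::cout << "retrying after: " << e.what() << "\n";
        run_cleanup(input, /*fail_on_last=*/false); // only "sst-3" remains
    }
}
```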

@raphaelsc raphaelsc requested a review from nyh as a code owner June 9, 2023 17:36
@raphaelsc raphaelsc requested a review from denesb June 9, 2023 17:36
@raphaelsc raphaelsc force-pushed the branch-5.1-with-incremental-cleanup-fix branch from dfa4a5b to f08a136 on June 9, 2023 17:43
@scylladb-promoter
Contributor

denesb pushed a commit that referenced this pull request Jun 13, 2023
@denesb
Contributor

denesb commented Jun 13, 2023

Queued as 97985a6.

@DoronArazii

@denesb why is this not closed?

@denesb
Contributor

denesb commented Jul 10, 2023

@denesb why is this not closed?

GitHub auto-close doesn't work on branches other than the main branch (master). I checked and the commit has made it in, so closing manually.

@denesb denesb closed this Jul 10, 2023