
[v23.2.x] Show true partition count in cluster health API #13398

Conversation

vbotbuildovich (Collaborator)

Backport of PR #13296

... in order to split out aggregate_reports so it can be
separately tested.

(cherry picked from commit 4c161a7)
In addition to the explicit lists of leaderless partitions and
under-replicated partitions, also include a count for each of these
lists. The lists remain subject to truncation, while each count is the
full count as if no truncation had occurred.

Issue redpanda-data#11378.

(cherry picked from commit d64e5aa)
In order to solve issue redpanda-data#11378, we will produce untruncated counts
of leaderless and under-replicated partitions. To do this, we need
to keep adding partitions to a set/map in places where we previously
would not have, so that we get a correct de-duplicated count
of these elements.
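
A minimal sketch of that idea, assuming a simplified partition identifier and a hypothetical collector type (illustrative names, not the actual Redpanda code): the exact count lives in a set, while only a bounded example list is materialized for the response.

```cpp
// Illustrative sketch only: track the exact de-duplicated count in a set
// while keeping at most 128 example partitions for the API response.
#include <cstddef>
#include <unordered_set>
#include <vector>

struct ntp { // simplified partition identifier (hypothetical)
    int topic_id;
    int partition_id;
    bool operator==(const ntp&) const = default; // C++20
};

struct ntp_hash {
    std::size_t operator()(const ntp& n) const noexcept {
        return std::hash<int>{}(n.topic_id) * 31u
               ^ std::hash<int>{}(n.partition_id);
    }
};

class leaderless_collector {
public:
    static constexpr std::size_t max_listed = 128;

    void add(const ntp& p) {
        // Always insert into the set: the count stays exact even after
        // the example list has hit its truncation limit.
        const bool inserted = _seen.insert(p).second;
        if (inserted && _listed.size() < max_listed) {
            _listed.push_back(p);
        }
    }

    std::size_t count() const { return _seen.size(); }
    const std::vector<ntp>& truncated_list() const { return _listed; }

private:
    std::unordered_set<ntp, ntp_hash> _seen;
    std::vector<ntp> _listed;
};
```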

For very large numbers of partitions, this could be a performance
problem, so I wrote this benchmark.

Results for 100k partitions and 10 nodes (the config in the
benchmark):

| test | iterations | median | inst |
| --- | ---: | ---: | ---: |
| original | 134 | 1.380ms | 6400321.7 |
| original_unlimited | 34 | 25.006ms | 141876040.5 |
| current | 66 | 7.363ms | 42025980.6 |

`original` is the code before the change, which stops adding elements
to the set once the set's size reaches 128. `original_unlimited` is the same
code with the limit removed, which is the easiest way to get the counts.
`current` is the actual code checked in with the immediately prior changes,
which uses the `collector` to collect per-topic lists of partition IDs.

So the new code is about 5x slower than the old approach, which didn't
count all partitions, but at least 3x faster than the direct approach to
counting the partitions. Importantly, even for this very large test case
we are well below the reactor stall threshold (though well above the
default task quota). Overall, a time of less than 10 ms seems fine
for a command which seems to take ~35 ms even on a totally unloaded
single-node system over localhost.
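
The per-topic grouping is what keeps `current` well below `original_unlimited`: only one hash lookup per entry is keyed on the topic, and each per-topic partition list is short. A rough sketch of the idea under those assumptions (simplified types, not the benchmarked code):

```cpp
// Illustrative only: de-duplicate and count by grouping partition ids per
// topic instead of inserting every (topic, partition) key into one big set.
#include <algorithm>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using partition_id = int;

std::size_t count_per_topic(
  const std::vector<std::pair<std::string, partition_id>>& entries) {
    std::unordered_map<std::string, std::vector<partition_id>> by_topic;
    for (const auto& [topic, pid] : entries) {
        by_topic[topic].push_back(pid); // one topic-keyed hash lookup per entry
    }
    std::size_t total = 0;
    for (auto& [_, pids] : by_topic) {
        // Per-topic lists are small, so sort + unique is cheap.
        std::sort(pids.begin(), pids.end());
        pids.erase(std::unique(pids.begin(), pids.end()), pids.end());
        total += pids.size();
    }
    return total;
}
```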

(cherry picked from commit ced4a02)
Unit tests for the health report aggregation code that we've changed
during this series.

Checks that the expected results are obtained in some simple scenarios,
that list truncation occurs at the expected size, and that the count
remains correct even in that case.

Issue redpanda-data#11378.

(cherry picked from commit 7fbd532)
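
A hypothetical shape of the truncation/count check described above, reusing the illustrative `leaderless_collector` from the earlier sketch rather than the real test fixture:

```cpp
// Hypothetical check, using the leaderless_collector sketched earlier.
#include <cassert>

int main() {
    leaderless_collector c;
    for (int p = 0; p < 200; ++p) { // well past the 128-entry limit
        c.add(ntp{0, p});
    }
    c.add(ntp{0, 0}); // a duplicate must not inflate the count
    assert(c.truncated_list().size() == leaderless_collector::max_listed);
    assert(c.count() == 200);
    return 0;
}
```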
This change adds counts of leaderless and under-replicated
partitions to the cluster_health_overview API. The API already
contained explicit lists of such partitions, so what does the
count add? Both lists are truncated at 128 entries, while the count
reports the true value.

This can be shown in the rpk health command so that progress can be
monitored even when the lists are beyond the truncation limit.
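
Conceptually, the overview payload now carries both the truncated lists and the exact totals; a sketch of that shape with illustrative field names (not copied from the diff):

```cpp
// Illustrative field names; the real struct lives in Redpanda's cluster code.
#include <cstddef>
#include <vector>

struct ntp { // simplified partition identifier (hypothetical)
    int topic_id;
    int partition_id;
};

struct cluster_health_overview_sketch {
    // Truncated to at most 128 entries each.
    std::vector<ntp> leaderless_partitions;
    std::vector<ntp> under_replicated_partitions;
    // Exact de-duplicated totals, unaffected by the truncation above.
    std::size_t leaderless_count{0};
    std::size_t under_replicated_count{0};
};
```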

Fixes redpanda-data#11378.

(cherry picked from commit 58b699e)
@vbotbuildovich added this to the v23.2.x-next milestone on Sep 12, 2023
@vbotbuildovich added the kind/backport (PRs targeting a stable branch) label on Sep 12, 2023
@piyushredpanda marked this pull request as ready for review on September 18, 2023 18:22
@piyushredpanda (Contributor)

/ci-repeat 1

@ballard26 self-requested a review on September 19, 2023 02:34
@piyushredpanda merged commit 625c020 into redpanda-data:v23.2.x on Sep 19, 2023
26 checks passed
@piyushredpanda modified the milestones: v23.2.x-next, v23.2.9 on Sep 19, 2023
Labels: area/redpanda, kind/backport (PRs targeting a stable branch)