[v23.3.x] Fixed large allocation in kafka::wait_for_leaders #16432

Conversation

vbotbuildovich (Collaborator)

Backport of PR #16287
Fixes: #16430, Fixes: #16431

Previously we used a simple `std::vector` of futures to make waiting for
partition leaders concurrent. Using a vector has a drawback when dealing
with a large number of topics and partitions, since it may require
allocating a large contiguous chunk of memory for the futures vector. In
this particular case we cannot use a fragmented vector or chunked fifo,
as `when_all` uses a plain vector internally.
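A minimal sketch of the previous pattern, for illustration only. The element type and the per-partition `wait_for_leader` helper are hypothetical stand-ins, not the actual Redpanda code:

```cpp
#include <seastar/core/future.hh>
#include <seastar/core/when_all.hh>
#include <vector>

// Hypothetical stand-in for the per-partition leader wait used in the PR.
seastar::future<> wait_for_leader(int partition_id);

// Old approach (sketch): one future per partition, collected in a
// std::vector and joined with when_all. The vector needs a single
// contiguous allocation proportional to the partition count, and
// when_all stores its results in a plain std::vector as well, so a
// fragmented vector or chunked fifo cannot be substituted here.
seastar::future<> wait_for_leaders_old(const std::vector<int>& partitions) {
    std::vector<seastar::future<>> futures;
    futures.reserve(partitions.size());
    for (int p : partitions) {
        futures.push_back(wait_for_leader(p));
    }
    return seastar::when_all(futures.begin(), futures.end()).discard_result();
}
```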

To make sure no large chunk of memory is allocated while waiting for the
partition leaders, the logic was changed to use
`seastar::max_concurrent_for_each`.
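A sketch of the new pattern under the same assumptions as above; the concurrency limit of 128 is illustrative and not the value used in the PR:

```cpp
#include <seastar/core/future.hh>
#include <seastar/core/loop.hh>
#include <vector>

// Hypothetical stand-in for the per-partition leader wait used in the PR.
seastar::future<> wait_for_leader(int partition_id);

// New approach (sketch): max_concurrent_for_each walks the range directly
// with a bounded number of in-flight futures, so no futures vector sized
// to the partition count is ever allocated.
seastar::future<> wait_for_leaders_new(const std::vector<int>& partitions) {
    return seastar::max_concurrent_for_each(
      partitions.begin(), partitions.end(), 128, [](int p) {
          return wait_for_leader(p);
      });
}
```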

Fixes: redpanda-data#15908

Signed-off-by: Michal Maslanka <michal@redpanda.com>
(cherry picked from commit 8d0b584)
Sometimes the producer swarm may be stopped after the topic is recreated,
leading to a test failure. Added a check that restarts the producer if
necessary.

Signed-off-by: Michal Maslanka <michal@redpanda.com>
(cherry picked from commit a18fdcd)
@vbotbuildovich vbotbuildovich added this to the v23.3.x-next milestone Feb 1, 2024
@vbotbuildovich vbotbuildovich added the kind/backport (PRs targeting a stable branch) label Feb 1, 2024
@piyushredpanda piyushredpanda merged commit 80c282f into redpanda-data:v23.3.x Feb 3, 2024
19 checks passed
@piyushredpanda piyushredpanda modified the milestones: v23.3.x-next, v23.3.5 Feb 7, 2024
Labels
area/redpanda, kind/backport (PRs targeting a stable branch)
3 participants