Adjust maximum value for memory_share_for_fetch in MemoryStressTest.test_fetch_with_many_partitions #11533
Conversation
The test still fails at 0.8; getting it down to 0.7.
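As a rough sketch of what the change amounts to (the actual test is part of Redpanda's ducktape suite and its API differs; `cluster.set_config` and `run_fetch_workload` below are hypothetical placeholders, not the real helpers):

```python
# Illustrative sketch only: the real test lives in Redpanda's ducktape
# suite; cluster.set_config and run_fetch_workload are hypothetical
# stand-ins, not the actual test API.

MEMORY_SHARE_FOR_FETCH = 0.7  # lowered from 0.8, which still OOMs here

def test_fetch_with_many_partitions(cluster):
    # Reserve the given fraction of kafka memory for fetch, then drive a
    # fetch-heavy workload across many partitions; the test fails if a
    # broker is killed by an out-of-memory condition during the workload.
    cluster.set_config("memory_share_for_fetch", MEMORY_SHARE_FOR_FETCH)
    run_fetch_workload(cluster, partitions=1000)
```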
CI failures in https://buildkite.com/redpanda/redpanda/builds/31611#0188d489-fb1c-4494-ad16-477c664829b7 are not relevant.
This seems fine. My only question is: was there a reason 0.8 was selected in the past? Are we just putting a band-aid over an actual problem by reducing this?
@michael-redpanda 0.8 was the maximum that was supposed to avoid OOM, but apparently that's not right (see this comment). As for the band-aid: the memory semaphore solution is itself a band-aid, see this.
All CI failures are irrelevant.
/backport v23.1.x |
/backport v22.3.x |
Failed to run cherry-pick command. I executed the commands below:
Failed to run cherry-pick command. I executed the commands below:
The failing case is an OOM when the memory reserved for fetch is 80% of kafka memory, but apparently some knobs in memory control do not accurately reflect what is going on with allocations. For this specific test, the setting should go down gradually until the crash is gone, and that should become the highest recommended setting for now.
Getting it down to 0.7.
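The "lower it gradually until the crash is gone" procedure can be pictured as the sketch below. Everything here is hypothetical: `run_stress_test` is a stand-in for a real ducktape run (assumed to return True when the run finishes without an OOM kill), and the simulated threshold simply mirrors the behavior reported in this PR.

```python
def run_stress_test(memory_share_for_fetch: float) -> bool:
    # Stand-in for the real ducktape run: pretend anything above 0.7
    # OOMs, matching the behavior reported in this PR.
    return memory_share_for_fetch <= 0.7

def find_max_safe_share(start=0.8, step=0.05, floor=0.5):
    # Probe decreasing values of memory_share_for_fetch until the OOM
    # no longer reproduces; that value becomes the recommended maximum.
    share = start
    while share >= floor:
        if run_stress_test(memory_share_for_fetch=share):
            return share  # highest probed setting that avoids the OOM
        share = round(share - step, 2)
    return None  # nothing in the probed range survived

print(find_max_safe_share())  # -> 0.7 with these placeholder numbers
```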
Fixes #11458
Backports Required
Release Notes