Fix oversized allocations in kafka::fetch_session_cache #26299
Conversation
```cpp
size_t
memory_usage_lower_bound(const chunked_hash_map<K, V, Hash, EqualTo>& m) {
    return m.bucket_count()
           * sizeof(typename chunked_hash_map<K, V>::bucket_type)
```
I guess this undercounts the bucket vector capacity but it should be good enough, especially with large values.
Yeah, as far as I can tell there isn't a publicly accessible way to get the bucket vector's capacity, unfortunately. Otherwise the bound could have been made a lot tighter.
Force-pushed fb32224 to 8bb7227
This PR adds the `memory_usage_lower_bound` function for `chunked_hash_map`s to get a lower-bound estimate of the memory allocated by the map. It also switches from `absl::flat_hash_map` to `chunked_hash_map` in fetch sessions to avoid an oversized allocation.

Backports Required
Release Notes