Options to limit max memory usage #70
As a consumer or as a producer? As a producer, you can limit MaxBufferedMessages and then, application-side, limit the size of what you're producing, so that you know there's an upper bound of messages * record_size. As a consumer, you can limit AllowedConcurrentFetches and then limit FetchMaxBytes, so that you know at most n_concurrent_fetches * fetch_max_bytes will be buffered internally (with one caveat: if a single record exceeds FetchMaxBytes, Kafka will still return it).
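A minimal sketch of the consumer-side bound described above, assuming the `kgo` package from franz-go. Note that in recent franz-go releases the concurrency option is spelled `kgo.MaxConcurrentFetches`; if your version uses `AllowedConcurrentFetches`, adjust the name. The broker address and topic are placeholders:

```go
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	const (
		concurrentFetches = 2        // upper bound on in-flight fetch requests
		fetchMaxBytes     = 16 << 20 // 16 MiB per fetch response
	)

	// Worst-case internally buffered fetch data is roughly
	// concurrentFetches * fetchMaxBytes (with the caveat above: a single
	// record larger than fetchMaxBytes is still returned whole).
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		kgo.ConsumeTopics("my-topic"),     // placeholder topic
		kgo.MaxConcurrentFetches(concurrentFetches),
		kgo.FetchMaxBytes(fetchMaxBytes),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	for {
		fetches := cl.PollFetches(context.Background())
		fetches.EachRecord(func(r *kgo.Record) {
			fmt.Println(string(r.Value)) // process each record
		})
	}
}
```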
(I may change …)
One last note: inflation from decompression cannot really be controlled. If a small fetch inflates to 1G when decompressed, the client can't know that going into the decompression. It's best to limit this on the producer side by setting …
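And a hedged sketch of the producer-side bound from the first reply: the thread's `MaxBufferedMessages` is spelled `kgo.MaxBufferedRecords` in current franz-go, and `maxRecordSize`, the broker, and the topic are illustrative placeholders. The application itself must enforce the record-size cap:

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	const (
		maxBuffered   = 1_000    // cap on records buffered inside the client
		maxRecordSize = 64 << 10 // 64 KiB, enforced application-side
	)

	// Worst-case buffered produce memory is roughly
	// maxBuffered * maxRecordSize: the client holds at most maxBuffered
	// records, and the application never produces larger ones.
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		kgo.MaxBufferedRecords(maxBuffered),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	value := []byte("hello") // application guarantees len(value) <= maxRecordSize
	cl.Produce(context.Background(), &kgo.Record{Topic: "my-topic", Value: value},
		func(_ *kgo.Record, err error) {
			if err != nil {
				// handle or log the produce error
			}
		})
	cl.Flush(context.Background())
}
```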
No, I'm not using compression =) Yes, my question is about the consumer.
All fine, thanks!
I saw in the docs many options to limit memory per partition, per broker, per fetch, and so on.
But in microservices we need to cap the overall memory usage, not per-partition or per-broker, and the number of brokers and partitions does not matter because it can change over time.
Can you suggest how to limit this?