KAFKA-8832: Limit the maximum size read by a fetch request on the Kafka server. #7252
base: trunk
Conversation
I found that Kafka does not limit, on the server side, the amount of data read per fetch request. This can cause the broker to fail with OutOfMemory errors: if a client is misconfigured with a very large fetch.message.max.bytes (for example 100 MB) and the broker receives many fetch requests at the same moment, the broker runs out of memory. So I think this is a bug.
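To make the failure mode concrete, here is a sketch of the two configurations involved. The client-side property is the real old-consumer setting named above; the broker-side property name is hypothetical, illustrating what this PR proposes rather than any released Kafka config:

```properties
# Client side (old consumer): maximum bytes fetched per partition per request.
# Nothing today stops an operator from setting this very large.
fetch.message.max.bytes=104857600    # 100 MB

# Proposed broker-side guard (hypothetical name, per this PR's intent):
# cap the total size the broker will read for any single fetch request,
# so one misconfigured client cannot exhaust broker memory.
fetch.request.max.bytes=10485760     # 10 MB
```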
Looking forward to your attention! @guozhangwang @stanislavkozlovski @junrao @gwenshap @ijuma
@lordcheng10 Since you are proposing to add a new config to the Kafka brokers, it needs to go through a KIP discussion as a public API change; could you file a KIP? Also, I'm not sure the OOM is caused by the large fetch.message.max.bytes config: during fetch request handling, Kafka brokers usually do NIO zero-copy, i.e. they do not allocate extra memory to hold the fetched data before handing it to the I/O layer. Could you elaborate on how you determined that the OOM is related to the large value of this config?
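The zero-copy path mentioned here is the JVM's FileChannel#transferTo, which Kafka's FileRecords#writeTo delegates to on the broker. A minimal sketch of that pattern, assuming a plain file as the source and a file channel standing in for the socket channel (the class and method names below are illustrative, not Kafka's own):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Transfer a range of file bytes to a destination channel without
    // copying them through a user-space buffer, the way the broker's
    // fetch response path hands log-segment bytes to the socket.
    static long sendFileRange(FileChannel src, long position, long count,
                              FileChannel dest) throws IOException {
        long transferred = 0;
        while (transferred < count) {
            long n = src.transferTo(position + transferred,
                                    count - transferred, dest);
            if (n <= 0) break; // destination cannot accept more right now
            transferred += n;
        }
        return transferred;
    }

    public static void main(String[] args) throws IOException {
        // A 64 KB temp file stands in for a log segment; a second temp
        // file stands in for the client's socket channel.
        Path srcFile = Files.createTempFile("log-segment", ".bin");
        Path destFile = Files.createTempFile("socket-sink", ".bin");
        Files.write(srcFile, new byte[64 * 1024]);
        try (FileChannel src = FileChannel.open(srcFile, StandardOpenOption.READ);
             FileChannel dest = FileChannel.open(destFile, StandardOpenOption.WRITE)) {
            long sent = sendFileRange(src, 0, 64 * 1024, dest);
            System.out.println("transferred=" + sent);
        }
    }
}
```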
Thank you for your attention. I put a screenshot of the error log from that time in Jira; I think it can explain the problem: https://issues.apache.org/jira/browse/KAFKA-8832
@guozhangwang Thank you for your attention! When a client fetches data, the Kafka server reads the disk data while sending the response; the method is org.apache.kafka.common.record.FileRecords#writeTo. But the socket uses buffers, so when a single fetch is too large, many network buffers accumulate in memory, and the total memory consumption becomes large.
|
We should limit the maximum size read by a fetch request on the Kafka server.
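The proposed guard boils down to clamping whatever size the client asked for against a broker-side ceiling before any read is issued. A minimal sketch under the assumption of a single hypothetical cap (the constant and method names are illustrative; neither is an actual Kafka config or API):

```java
public class FetchLimit {
    // Hypothetical broker-side cap on bytes read per fetch request;
    // not an actual Kafka configuration.
    static final int BROKER_MAX_FETCH_BYTES = 10 * 1024 * 1024; // 10 MB

    // Clamp the size the broker will actually read for one fetch
    // request, regardless of what the client requested.
    static int effectiveFetchSize(int clientRequestedBytes) {
        return Math.min(clientRequestedBytes, BROKER_MAX_FETCH_BYTES);
    }

    public static void main(String[] args) {
        // A 100 MB request is clamped; a 1 MB request passes through.
        System.out.println(effectiveFetchSize(100 * 1024 * 1024));
        System.out.println(effectiveFetchSize(1024 * 1024));
    }
}
```

With such a clamp in place, a misconfigured client still gets data, just in smaller responses, instead of driving the broker to OutOfMemory.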