
in producer, max_block_ms should be larger than request_timeout_ms by default #994

@hicqu

Description

Suppose we have a Kafka producer initialized with the default arguments: 60s for max_block_ms and 30s for request_timeout_ms. Then we run a loop:

while True:
    producer.send(topic, value)
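Spelled out explicitly, the configuration in question looks like this (the broker address is hypothetical; the two timeout values are kafka-python's documented defaults):

```python
# Settings for the scenario above. bootstrap_servers is a
# hypothetical address; the two timeouts are kafka-python's defaults.
config = {
    "bootstrap_servers": "localhost:9092",  # hypothetical broker
    "max_block_ms": 60_000,        # send() may block up to 60s
    "request_timeout_ms": 30_000,  # in-flight requests fail after 30s
}

# The default relationship at the heart of this issue:
assert config["max_block_ms"] > config["request_timeout_ms"]
```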

After we kill all processes in the Kafka cluster, I want the producer to raise a KafkaTimeoutError.

But when max_block_ms > request_timeout_ms, the exception is never raised.

Only when max_block_ms < request_timeout_ms is the exception raised:

KafkaTimeoutError: Failed to allocate memory within the configured max blocking time

My analysis:

When the Kafka producer tries to send, it blocks for at most request_timeout_ms; after that it drops those messages and cleans the send buffer, so the producer can put new messages into the buffer. The result is that we never receive any notice, but our messages are in fact dropped.
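The timing argument can be modeled in a few lines of plain Python (no broker needed; this is an illustrative sketch of the analysis above, not kafka-python internals):

```python
def blocked_time_ms(max_block_ms, request_timeout_ms):
    """How long send() actually blocks waiting for buffer space.

    With the broker down, buffer space is only freed when in-flight
    batches expire after request_timeout_ms, so the wait ends at
    whichever limit is reached first.
    """
    return min(max_block_ms, request_timeout_ms)

# With the defaults (60s / 30s), expired batches are dropped and the
# buffer is freed before the max_block_ms limit is reached, so send()
# never raises KafkaTimeoutError:
assert blocked_time_ms(60_000, 30_000) < 60_000

# Only when max_block_ms < request_timeout_ms does send() hit its
# blocking limit first and raise:
assert blocked_time_ms(10_000, 30_000) == 10_000
```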

So, I think request_timeout_ms should never be larger than max_block_ms. What do you think?
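If such a constraint were adopted, a constructor-time sanity check might look like the following (a hypothetical helper; kafka-python performs no such validation today):

```python
def check_timeouts(max_block_ms=60_000, request_timeout_ms=30_000):
    """Reject configurations where request_timeout_ms exceeds max_block_ms.

    Hypothetical helper illustrating the proposed constraint; the
    defaults shown are kafka-python's current defaults.
    """
    if request_timeout_ms > max_block_ms:
        raise ValueError(
            "request_timeout_ms (%d) must not exceed max_block_ms (%d)"
            % (request_timeout_ms, max_block_ms)
        )
    return True

check_timeouts()  # the current defaults (60s / 30s) pass this check
```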
