Confluent.Kafka.KafkaException: 'Local: Queue full' when running BeginProduce example #703
Comments
Can you increase the message timeout and the retries to a higher value?
You're trying to send messages faster than librdkafka can get them delivered to Kafka, so the local queue is filling up. You can increase the queue size, but what you should do if you get this exception is catch it, then wait for some time before continuing. If you get the exception, you can be sure the message was not sent to Kafka, so you can be confident that retrying will not result in duplicate messages (due to this exception, at least). Marked as an enhancement as a reminder to note this in the README.
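The catch-wait-retry pattern described above can be sketched as follows. This is a minimal illustration: `produce_with_retry` is a hypothetical helper, and `QueueFullStub` is a stand-in for the real `confluent_kafka.Producer` so the retry logic can be shown without a broker.

```python
def produce_with_retry(producer, topic, value, max_retries=5, backoff_s=0.5):
    """Catch-and-retry for 'Local: Queue full'.

    A BufferError means the message was never enqueued, so retrying
    cannot create duplicates (for this error, at least).
    """
    for _ in range(max_retries):
        try:
            producer.produce(topic=topic, value=value)
            return
        except BufferError:
            # Queue full: poll() serves delivery callbacks (which drains
            # the queue) and doubles as the "wait a bit" before retrying.
            producer.poll(backoff_s)
    raise BufferError("local queue still full after %d retries" % max_retries)


class QueueFullStub:
    """Illustrative stand-in for confluent_kafka.Producer: raises
    BufferError on the first `fail_times` calls, then accepts."""
    def __init__(self, fail_times):
        self.fail_times = fail_times
        self.produced = []

    def produce(self, topic, value):
        if self.fail_times > 0:
            self.fail_times -= 1
            raise BufferError("Local: Queue full")
        self.produced.append((topic, value))

    def poll(self, timeout):
        pass  # the real client serves delivery callbacks here


p = QueueFullStub(fail_times=2)
produce_with_retry(p, "my_topic", b"payload", backoff_s=0)
print(p.produced)  # [('my_topic', b'payload')]
```

With the real client the shape is the same; only the producer construction differs.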
And you probably want to set
Yes, good point. I've found
I encountered a similar error while using confluent-kafka-python. I inject network faults in my tests, with packet loss over 10%, and when the 'BufferError: Local: Queue full' error appears, most of the messages are lost. Can this be solved by adjusting the parameters? The network condition needs to be checked, I suppose.
Is there a way to get the current queue usage? I am producing messages from Flask requests, and it would be nice to block and flush when the queue is full instead of getting errors. Flushing at every write seems too slow. Currently I'm doing this:

```python
try:
    producer.produce(
        topic=my_topic,
        value=data,
    )
except BufferError:
    logger.warning('Buffer error, the queue must be full! Flushing...')
    producer.flush()
    logger.info('Queue flushed, will write the message again')
    producer.produce(
        topic=my_topic,
        value=data,
    )
```
Are you calling `poll()`? Assuming you are doing this, it's intended that when you get a Queue full error, you should just wait a bit (no need to call `flush`) and try again. You should also configure the queue size so that this is not expected under normal operation. The statistics callback will tell you internal queue sizes, I believe.
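For reference, a sketch of the configuration this advice points at. `queue.buffering.max.messages`, `queue.buffering.max.kbytes`, and `statistics.interval.ms` are librdkafka settings and `stats_cb` is the confluent-kafka-python hook for the statistics callback; the broker address and the chosen values are assumptions to adjust for your workload.

```python
import json

def on_stats(stats_json_str):
    """Statistics callback: librdkafka hands us a JSON blob every
    statistics.interval.ms. The top-level 'msg_cnt' field is the
    number of messages currently sitting in the local queue."""
    stats = json.loads(stats_json_str)
    return stats.get("msg_cnt", 0)

conf = {
    "bootstrap.servers": "localhost:9092",     # assumed broker address
    "queue.buffering.max.messages": 500_000,   # grow the local queue
    "queue.buffering.max.kbytes": 1_048_576,
    "statistics.interval.ms": 5_000,
    "stats_cb": on_stats,
}
# producer = confluent_kafka.Producer(conf)    # needs a real broker
print(on_stats('{"msg_cnt": 3}'))  # -> 3
```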
I'm not. The documentation says "Polls the producer for events", but I don't understand what that means. I expect events to come only from the consumer; does it refer to "acknowledge" events?
Callbacks (delivery notification, global error notification, statistics) are called as a side effect of calling `poll`. More info is in this blog post: https://www.confluent.io/blog/kafka-python-asyncio-integration/. Each pending callback has an associated event inside librdkafka, and these will accumulate if `poll` is not being called.
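The poll-driven callback mechanics can be illustrated with a stub. `FakeProducer` below is hypothetical, not the real `confluent_kafka.Producer`, but the real client behaves the same way: an `on_delivery` callback never fires on its own; it only runs inside a `poll()` (or `flush()`) call.

```python
from collections import deque

class FakeProducer:
    """Illustrative stand-in: delivery callbacks fire only during poll()."""
    def __init__(self):
        self._pending = deque()  # completed deliveries awaiting poll()

    def produce(self, topic, value, on_delivery=None):
        # Pretend the broker acked immediately; park the callback.
        if on_delivery is not None:
            self._pending.append(lambda: on_delivery(None, (topic, value)))

    def poll(self, timeout=0):
        """Serve queued callbacks, as librdkafka does; return count served."""
        served = 0
        while self._pending:
            self._pending.popleft()()
            served += 1
        return served


acks = []
p = FakeProducer()
p.produce("t", b"v", on_delivery=lambda err, msg: acks.append(msg))
# Nothing has reached the callback yet -- poll() hasn't run.
p.poll(0)  # now the delivery report callback fires
print(acks)  # [('t', b'v')]
```

This is why a produce loop that never calls `poll()` sees callbacks (and their internal events) accumulate.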
Also I noticed only now that this is the issue tracker for the dotnet library -_- |
Thanks for the advice. Yes, I did call the `poll()` method in the producer. The code is simplified as follows:
Say I set totalMsgNumber=10000; will the producer accumulate all the messages in the buffer and flush them after the loop ends? I just want to emulate a scenario where the producer sends messages continuously.
As soon as you call `produce`, messages are queued to be sent to the broker, and this happens automatically. In practice, messages will be sent to the broker almost immediately with default settings, though you can control this with the linger/batching settings. You only need to call `flush` if you want to block until all outstanding messages have been delivered.
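A minimal produce-loop sketch matching that description. `StubProducer` is a stand-in so the loop runs without a broker; with the real client you would construct `confluent_kafka.Producer` and keep the same loop shape: `poll(0)` inside the loop to serve callbacks, one `flush()` at the end only if you need to block until delivery completes.

```python
class StubProducer:
    """Illustrative stand-in for confluent_kafka.Producer."""
    def __init__(self):
        self.sent = 0

    def produce(self, topic, value):
        self.sent += 1   # real client enqueues; delivery happens in background

    def poll(self, timeout):
        pass             # real client serves delivery callbacks here

    def flush(self, timeout=None):
        return 0         # real client blocks until the local queue is empty


producer = StubProducer()
total_msg_number = 10_000
for i in range(total_msg_number):
    producer.produce("my_topic", value=("msg %d" % i).encode())
    producer.poll(0)     # messages are already in flight; this serves callbacks
producer.flush()         # only needed to block until everything is delivered
print(producer.sent)     # 10000
```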
Also, I have a question: can increasing the number of partitions avoid the queue full problem?
Description
I am getting the error "Confluent.Kafka.KafkaException: 'Local: Queue full'" when trying to run the basic producer example (with p.BeginProduce); all settings are default.
It works until I set the for loop to 100k; after that it starts throwing this error.