Description
Search before asking
- I searched in the issues and found nothing similar.
Version
Pulsar broker (and other components): v3.1.2
Pulsar client: go sdk v0.8.1 / v0.9.0
Minimal reproduce step
As the incident happened in a production environment, I tried to simulate the conditions based on these ideas:
- bookie slow response
- broker pending requests high
Unfortunately, the direct memory increase could not be reproduced with the following steps:
- set the broker config `maxPendingPublishRequestsPerConnection` to 1
- set the client `OperationTimeout` and other timeout-related parameters to a low value (like 100ms)
- create a producer with high load and produce messages to `persistent://public/default/t1`
- to simulate bookie slow response, either:
  - unload the topic
  - kill the bookie in charge of the topic
Then two kinds of error occurred:
- send operation:
  2024/03/06 08:12:33 sendMessage failed: producer.Send: message send timeout: TimeoutError
- creating a new producer on the same connection:
  ERRO[0012] Failed to create producer at send PRODUCER request error="request timed out" topic="persistent://public/default/t1-partition-0"
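For reference, the broker-side setting from the reproduce steps can be written as a broker.conf fragment (the value shown is the one used in the reproduction attempt, not the default; the client-side timeouts were set on the Go client's options):

```properties
# Allow only one in-flight publish request per connection so the
# broker-side backpressure (disabled auto-read) is easy to trigger.
maxPendingPublishRequestsPerConnection=1
```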
What did you expect to see?
- DirectMemory should not show a peak increase
- Producer should not encounter TimeoutError
What did you see instead?
- DirectMemory kept increasing
- Producer sends timed out, and creating a new producer on the connection failed
Anything else?
This issue is rather rare: we have used Pulsar for years, always keeping the version fresh (from v2.2.x -> v3.2.0), and have never encountered such a problem.
I have dug into the source code, and a possible reason for the DirectMemory growth is that ledger.asyncAddEntry did not complete (its async executor callback never fired), so, as the code comment below indicates, the buffer was not released.
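To make the suspected leak pattern concrete, here is a minimal self-contained sketch (this is NOT the actual ManagedLedger/ServerCnx code, just an illustration of the hypothesis): the entry buffer is retained before the asynchronous write and only released inside the completion callback, so if the callback never fires, the reference count never drops and the direct memory backing the buffer is never reclaimed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical names throughout; only the retain/release-in-callback
// pattern mirrors the suspected broker behavior.
class LeakSketch {
    // Stand-in for a reference-counted direct buffer (like Netty's ByteBuf).
    static class RefCountedBuffer {
        int refCnt = 1;
        void retain() { refCnt++; }
        void release() { refCnt--; }
    }

    // Stand-in for ledger.asyncAddEntry: it stores the completion callback
    // instead of running it, simulating a bookie write that never completes.
    static final List<Runnable> pendingCallbacks = new ArrayList<>();

    static void asyncAddEntry(RefCountedBuffer buf, Runnable onComplete) {
        buf.retain();                     // held for the duration of the write
        pendingCallbacks.add(onComplete); // never invoked -> never released
    }

    public static void main(String[] args) {
        RefCountedBuffer buf = new RefCountedBuffer();
        asyncAddEntry(buf, buf::release);
        // The callback never fires, so refCnt stays at 2 instead of 1.
        System.out.println("refCnt=" + buf.refCnt);
    }
}
```

If this hypothesis is right, every stalled asyncAddEntry pins one buffer, which would match the monotonic DirectMemory growth observed.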
Here are also some other findings that may help identify the root cause.
The code in charge of toggling pulsar_broker_throttled_connections is org.apache.pulsar.broker.service.ServerCnx#startSendOperation and org.apache.pulsar.broker.service.ServerCnx#enableCnxAutoRead.
That means during that period of time, ServerCnx's pendingSendRequest should have reached maxPendingSendRequests, which is 1000 in our configuration.
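A simplified sketch of that toggling mechanism as I understand it (again, hypothetical code, not the real ServerCnx): each in-flight publish increments a counter; when the counter reaches the limit, channel auto-read is disabled and the connection counts as throttled; completions decrement the counter and re-enable auto-read once it falls back under the limit.

```java
// Hypothetical sketch of the pending-send throttle; field and method
// names only loosely follow ServerCnx.
class ThrottleSketch {
    final int maxPendingSendRequests;
    int pendingSendRequests = 0;
    boolean autoRead = true;            // whether the socket is being read
    int throttledConnectionsMetric = 0; // cf. pulsar_broker_throttled_connections

    ThrottleSketch(int maxPendingSendRequests) {
        this.maxPendingSendRequests = maxPendingSendRequests;
    }

    // Called when a publish request arrives (cf. ServerCnx#startSendOperation).
    void startSendOperation() {
        if (++pendingSendRequests >= maxPendingSendRequests && autoRead) {
            autoRead = false;             // stop reading from the socket
            throttledConnectionsMetric++; // connection now counted as throttled
        }
    }

    // Called when the write completes (cf. ServerCnx#enableCnxAutoRead).
    void completedSendOperation() {
        if (--pendingSendRequests < maxPendingSendRequests && !autoRead) {
            autoRead = true;
            throttledConnectionsMetric--;
        }
    }
}
```

The key point for this issue: if completions never arrive (the asyncAddEntry callback never fires), pendingSendRequests never drops, auto-read stays off, and every subsequent command on that connection, including new Producer commands, is starved.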
Also, I can confirm that no publish rate limit is set on any of the topics.
While disableCnxAutoRead is in effect, the Producer command on the connection cannot succeed, so we see many logs like the one below:
2024-03-04T09:29:56,302+0000 [pulsar-io-3-1] INFO org.apache.pulsar.broker.service.ServerCnx - [/10.120.159.82:59748] Closed producer before its creation was completed. producerId=61
Then the producer will try to reconnect, which results in a large number of metadata store operations in org.apache.pulsar.broker.service.BrokerService#getOrCreateTopic.
- For what it's worth, the broker / bookie / zookeeper logs show no Exception or error, and their CPU/Memory/JVM metrics (except broker DirectMemory) seem fine.
The peaks around 17:40 are from when I restarted the brokers in question.
- As we have deployed our Pulsar clusters on AWS, my first guess was a networking issue, but after confirming with AWS technical support and looking into the CloudWatch metrics, both networking and disk metrics seem fine:
- No packet drop or high networking usage
- Disk queue depth is reasonable, and so are the read/write metrics.
Are you willing to submit a PR?
- I'm willing to submit a PR!

