do not always retry write to pipe if we get blocking errors #519
If you have a RabbitMQ event subscription, the event_rabbitmq module will shm_alloc rmq events and write pointers to the structs to the pipe.
If the node you are connected to goes down, the pipe starts to fill. Once it reaches its maximum capacity (65535 bytes, roughly 2700 events given the 8-byte pointer size), the write call will return EAGAIN, and the while loop becomes an infinite loop until pointers start getting pulled off the pipe. This causes massive CPU consumption and blocks any process that generates an event for event_rabbitmq.
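For illustration only, this is roughly the pattern that causes the spin; it is a hypothetical sketch of a non-blocking pipe write retried unconditionally, not the actual module source (function and variable names are invented):

```c
/* Hypothetical sketch of the problematic pattern (not the module's code):
 * an 8-byte pointer is written to a non-blocking pipe and the write is
 * retried unconditionally, so EAGAIN on a full pipe becomes a busy loop. */
#include <errno.h>
#include <unistd.h>

static int push_event(int pipe_fd, void *ev)
{
    ssize_t rc;

    do {
        rc = write(pipe_fd, &ev, sizeof(ev));
    } while (rc < 0 && (errno == EAGAIN || errno == EINTR));
    /* while the consumer is stuck, EAGAIN repeats and this spins the CPU */

    return rc == sizeof(ev) ? 0 : -1;
}
```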
Attempts to publish to RabbitMQ time out after 3 minutes (the default system TCP timeout), so only one event every 3 minutes is pulled from the pipe while the AMQP node is down.
You can reproduce the issue by setting up a proxy with an event_rabbitmq subscription, adding an iptables rule to block access to the AMQP node, and sending traffic to the proxy that generates events until you hit the maximum pipe size (~2703 event pointers).
This commit changes the logic to simply retry the write 3 times, then abort.
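As a rough sketch of the bounded-retry behaviour described here (the retry count of 3 comes from this description; the names and the caller's clean-up are assumptions, not code from the patch):

```c
/* Hypothetical sketch of the fix described above: give up after a few
 * EAGAINs instead of spinning, and let the caller drop the event. */
#include <errno.h>
#include <unistd.h>

#define PIPE_WRITE_RETRIES 3   /* retry count taken from the PR description */

static int push_event_bounded(int pipe_fd, void *ev)
{
    int tries;
    ssize_t rc;

    for (tries = 0; tries < PIPE_WRITE_RETRIES; tries++) {
        rc = write(pipe_fd, &ev, sizeof(ev));
        if (rc == sizeof(ev))
            return 0;                       /* pointer queued successfully */
        if (rc < 0 && errno != EAGAIN && errno != EINTR)
            break;                          /* hard error: stop retrying */
    }
    return -1;  /* caller is assumed to shm_free the event and log the drop */
}
```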