Conversation

panta
Contributor

@panta panta commented Dec 4, 2017

The asyncsender could potentially block before, and it never should.

This adds the `queue_circular` and `queue_maxsize` options to `asyncsender.FluentSender`. Circular buffer mode is on by default, to avoid potentially infinite memory allocations (and OOM kills...).
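The drop-oldest semantics of a circular queue can be sketched with a plain `queue.Queue`; this is a hypothetical stdlib analogue of the `queue_circular` behavior, not the library's actual code:

```python
import queue

def circular_put(q, item):
    # Non-blocking put: when the bounded queue is full, discard the
    # oldest queued event to make room. (Illustrative sketch only;
    # single-threaded here, so no race between get and put.)
    try:
        q.put(item, block=False)
    except queue.Full:
        try:
            q.get(block=False)  # drop the oldest not-yet-sent event
        except queue.Empty:
            pass
        q.put(item, block=False)

q = queue.Queue(maxsize=3)
for i in range(5):
    circular_put(q, i)
print(list(q.queue))  # the two oldest events (0 and 1) were discarded
```

With `maxsize=3` and five events queued, the queue ends up holding only the three newest events, which is exactly the lossy trade-off debated below.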

@coveralls

coveralls commented Dec 4, 2017


Coverage increased (+2.0%) to 93.353% when pulling c0e04a4 on panta:master into af4fd66 on fluent:master.

@coveralls

Coverage increased (+2.0%) to 93.353% when pulling 32781a9 on panta:master into af4fd66 on fluent:master.


@arcivanov
Member

Sorry, but I would vote against it. This behavior (circular buffer) will cause silent dropping of log messages. When you print to stdout/stderr or any other pipe, once the buffer runs out and there is nothing to consume it, the application stalls; this is expected behavior. There is also an overflow handler in the Sender which allows one to handle overflow as one sees fit.
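The overflow-handler idea mentioned above can be sketched with the stdlib: on a full queue, hand the event to a caller-supplied callback instead of blocking or dropping silently. This is only an illustrative analogue; the real Sender's overflow handler has its own signature.

```python
import queue

def put_or_overflow(q, item, overflow_handler):
    # Try a non-blocking put; if the bounded queue is full, delegate
    # the event to the caller's handler (log it, count it, spill it
    # to disk, etc.) rather than losing it silently.
    try:
        q.put(item, block=False)
    except queue.Full:
        overflow_handler(item)

overflowed = []
q = queue.Queue(maxsize=2)
for event in ["a", "b", "c", "d"]:
    put_or_overflow(q, event, overflowed.append)
print(list(q.queue), overflowed)  # newest events went to the handler
```

Unlike the circular mode, nothing is dropped without the application's knowledge: every event either sits in the queue or passes through the handler.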

@panta
Contributor Author

panta commented Dec 4, 2017

@arcivanov I see your point, and I agree. What about setting the `queue_circular` default to False and making the queue block by default when full? That way the expected behaviour would be preserved, but high-throughput, low-latency applications like ours could still opt into the possibly lossy handling.
In any case, with a large enough queue, losses shouldn't happen as long as the log producer doesn't outpace the sender for a prolonged period (a situation that would indicate an underlying problem anyway).
If this is okay with you, I'll proceed with the necessary adjustments.
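The proposed blocking default can also be sketched with the stdlib. With a bounded `queue.Queue` and a consumer thread, `put()` simply blocks when the queue is full, so back-pressure slows the producer instead of discarding events (hypothetical analogue, not the patch itself):

```python
import queue
import threading

q = queue.Queue(maxsize=2)  # small bounded queue; put() blocks when full
received = []

def consumer():
    # Drain the queue until a sentinel value arrives.
    while True:
        item = q.get()
        if item is None:
            break
        received.append(item)

t = threading.Thread(target=consumer)
t.start()
for i in range(10):
    q.put(i)  # blocks (back-pressure) rather than dropping events
q.put(None)   # sentinel: tell the consumer to stop
t.join()
print(received)  # all 10 events delivered, none lost
```

Every event survives even though the queue only ever holds two at a time, which is the lossless behavior the default should preserve.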

@arcivanov
Member

If the default expected behavior is preserved then any additional functionality is obviously welcome. I would put a big fat warning along the lines of "If you turn this on, you may lose messages silently."

@panta panta force-pushed the master branch 2 times, most recently from c13e35f to 164fed5 on December 5, 2017 09:03
panta added 7 commits December 5, 2017 10:10
… size.

NOTE: by default the async sender queue is blocking and non-circular.

Enabling the `queue_circular` async sender option makes the sender queue circular and non-blocking, which is useful in high-throughput, low-latency applications.

Please note that with circular mode enabled, back-pressure can cause the queue to fill up and start discarding events (the oldest not yet sent).
@coveralls


Coverage increased (+1.5%) to 92.857% when pulling 6ac8458 on panta:master into af4fd66 on fluent:master.


@panta
Contributor Author

panta commented Dec 5, 2017

OK, the default is now the blocking queue, and the README has a section on how to enable the non-blocking circular queue, along with a warning about the possible consequences.

@repeatedly repeatedly merged commit 3178185 into fluent:master Dec 6, 2017
@repeatedly
Member

I understood the situation. Thanks for the patch!

