If the subscriber is unable to write logs for any reason, the logger's doAppend gets stuck in an infinite waiting loop because the writer cannot overtake the reader. Below is the thread dump captured at that moment.
Also below is the code snippet where the requesting thread is stuck. The snippet is taken from reactor's RingBuffer:
@Override
public long next(int n)
{
    if (n < 1)
    {
        throw new IllegalArgumentException("n must be > 0");
    }

    long current;
    long next;

    do
    {
        current = cursor.getAsLong();
        next = current + n;

        long wrapPoint = next - bufferSize;
        long cachedGatingSequence = gatingSequenceCache.getAsLong();

        if (wrapPoint > cachedGatingSequence || cachedGatingSequence > current)
        {
            long gatingSequence = RingBuffer.getMinimumSequence(gatingSequences, current);

            if (wrapPoint > gatingSequence)
            {
                if (spinObserver != null) {
                    spinObserver.run();
                }
                LockSupport.parkNanos(1); // TODO, should we spin based on the wait strategy?
                continue;
            }

            gatingSequenceCache.set(gatingSequence);
        }
        else if (cursor.compareAndSet(current, next))
        {
            break;
        }
    }
    while (true);

    return next;
}
I can see that the wait strategy is not being used to handle this situation. Is there any way to get out of it, for example by throwing a timeout exception or something else rational? I know we may lose logs for some time, but at least the requesting threads would not hang because of this issue.
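One way out of the unbounded spin described above would be a bounded claim: park with a deadline and throw a TimeoutException instead of waiting forever. Below is a minimal, self-contained sketch of that idea; the class and field names (TimedSequencer, cursor, minGatingSequence) are hypothetical stand-ins, not reactor's actual internals.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

// Hypothetical sketch of a next(n) variant that gives up after a deadline
// instead of spinning indefinitely when the buffer is full.
public class TimedSequencer {
    private final AtomicLong cursor = new AtomicLong(-1);
    private final AtomicLong minGatingSequence = new AtomicLong(-1);
    private final int bufferSize;

    public TimedSequencer(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Called by the consumer side as it finishes reading events.
    public void consumerAdvance(long sequence) {
        minGatingSequence.set(sequence);
    }

    public long next(int n, long timeout, TimeUnit unit) throws TimeoutException {
        if (n < 1) {
            throw new IllegalArgumentException("n must be > 0");
        }
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (true) {
            long current = cursor.get();
            long next = current + n;
            long wrapPoint = next - bufferSize;
            if (wrapPoint > minGatingSequence.get()) {
                // Buffer is full: wait a little, but only until the deadline.
                if (System.nanoTime() >= deadline) {
                    throw new TimeoutException("ring buffer full for longer than timeout");
                }
                LockSupport.parkNanos(1_000); // bounded wait instead of an unbounded spin
            } else if (cursor.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}
```

A caller could then catch the TimeoutException, drop the log event, and move on rather than hanging the requesting thread.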
simonbasle changed the title from "AysnAppender queueLoggingEvent method stuck when Ringbuffer is full with unread data" to "AsyncAppender queueLoggingEvent method stuck when Ringbuffer is full with unread data" on Dec 29, 2016.
noorulhaq changed the title to "AsyncAppender queueLoggingEvent method stuck when Ringbuffer is full with unread events" on Dec 29, 2016.
This does not seem to be a reactor API issue. It happened because the subscribers were unable to write logs for some reason, which caused the log publisher to keep waiting for them to read events from the RingBuffer. Ideally, the log writer should be monitored to avoid this kind of situation. I have developed an extension of reactor-logback that uses a Hystrix circuit breaker to monitor logging: if there is any hiccup in file logging, the circuit opens and the fallback is executed.
If you are interested in avoiding cascading failures caused by file logging with reactor-logback, see https://github.com/noorulhaq/reactor-logback-hystrix
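For readers who just want the shape of that fallback, here is a minimal, dependency-free sketch of the circuit-breaker idea. This is illustrative only, not the actual reactor-logback-hystrix code, and it omits half-open recovery: after a few consecutive append failures the breaker opens and events are dropped instead of blocking the calling thread.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative circuit breaker around an append operation (hypothetical,
// not the reactor-logback-hystrix implementation).
public class AppendBreaker {
    private final int threshold;
    private final AtomicInteger consecutiveFailures = new AtomicInteger();

    public AppendBreaker(int threshold) {
        this.threshold = threshold;
    }

    public boolean isOpen() {
        return consecutiveFailures.get() >= threshold;
    }

    /** Returns true if the event was appended, false if it was dropped. */
    public boolean append(Runnable doAppend) {
        if (isOpen()) {
            return false; // fallback: drop the event rather than block
        }
        try {
            doAppend.run();
            consecutiveFailures.set(0); // success resets the failure count
            return true;
        } catch (RuntimeException e) {
            consecutiveFailures.incrementAndGet();
            return false;
        }
    }
}
```

The key design point is the same as in Hystrix: the logging call path fails fast once the breaker is open, so a slow or broken file writer cannot cascade into stalled application threads.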