Describe the bug
I have a K8s cluster, where I deploy:
Fluentd StatefulSet
(I'm using the Logging-operator to deploy it.)
FluentBit sends all logs to Fluentd; Fluentd processes the logs and sends everything to Elasticsearch.
In my installation, I have 50 pods of Fluentd.
In the FluentBit logs I periodically see:
[2022/10/01 06:02:46] [error] [upstream] connection #1158 to fluentd:24240 timed out after 10 seconds
[2022/10/01 06:02:46] [error] [output:forward:forward.0] no upstream connections available
[2022/10/01 06:02:46] [ warn] [engine] failed to flush chunk '1-1664603612.450502668.flb', retry in 8 seconds: task_id=658, input=tail.0 > output=forward.0 (out_id=0)
When I check Fluentd I see a big Recv-Q:
and sometimes Fluentd stops listening on port 24240.
How can I fix it?
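One thing I'm considering on the FluentBit side is tuning the forward output's networking options, since the default connect timeout is 10 seconds (matching the "timed out after 10 seconds" errors above). This is only an illustrative sketch, not my real configuration; the values are assumptions:

```
[OUTPUT]
    Name                       forward
    Match                      *
    Host                       fluentd
    Port                       24240
    Workers                    2
    net.connect_timeout        30
    net.keepalive              on
    net.keepalive_idle_timeout 30
```

Raising net.connect_timeout and keeping connections alive should reduce how often FluentBit has to re-establish connections against a busy Fluentd, but it would not address why Fluentd's Recv-Q grows in the first place.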
To Reproduce
Install the Logging-operator with many Flows (more than 3000).
Expected behavior
Everything works without a growing Recv-Q on Fluentd and without errors in FluentBit.
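As I understand it, a growing Recv-Q on the forward port can mean a single Fluentd worker cannot drain the socket fast enough. A multi-worker sketch for the Fluentd side (the worker count of 4 is just an example, not a recommendation):

```
<system>
  workers 4
</system>
<source>
  @type forward
  port 24240
  bind 0.0.0.0
</source>
```

With the Logging-operator this would have to be expressed through its CRDs rather than raw Fluentd config, so treat this only as the shape of the setting I mean.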
Your Environment
Your Configuration
Your Error Log