Describe the bug
Fluentd runs fine and logs everything at startup, but when other containers get new deployments, their logs stop showing up in Elasticsearch. Once we restart fluentd, they show up again.
What happened:
Once another deployment cycles (after a new version of it is deployed), its logs stop showing up.
What you expected to happen:
I expect to be able to redeploy other tools as often as I like without losing logs.
How to reproduce it (as minimally and precisely as possible):
This is a pretty minimal setup. We have a basic chart that depends on version 11.9.0; nothing else is installed by this chart. Other components in the cluster are also installed via Helm charts.
values.yaml (only put values which differ from the defaults)
```yaml
fluentd-elasticsearch:
  image:
    pullPolicy: IfNotPresent
  # Specify where fluentd can find logs
  hostLogDir:
    varLog: /var/log
    dockerContainers: /var/lib/docker/containers
    libSystemdDir: /usr/lib64
  elasticsearch:
    hosts:
      - elasticnode.internal:80
    scheme: 'http'
    ssl_version: TLSv1_2
    auth:
      enabled: true
      user: "usernam"
      password: "password"
    logstash:
      enabled: true
      prefix: "eksstaging"
    buffer:
      enabled: true
      # ref: https://docs.fluentd.org/configuration/buffer-section#chunk-keys
      chunkKeys: ""
      type: "file"
      path: "/var/log/fluentd-buffers/kubernetes.system.buffer"
      flushMode: "interval"
      retryType: "exponential_backoff"
      flushThreadCount: 2
      flushInterval: "5s"
      retryForever: true
      retryMaxInterval: 30
      chunkLimitSize: "256M"
      queueLimitLength: 20
      overflowAction: "block"
  # If you want to change args of fluentd process
  # by example you can add -vv to launch with trace log
  fluentdArgs: "--no-supervisor -q"
```
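For context on the retry settings above: `retryType: "exponential_backoff"` with `retryForever: true` means failed flushes are retried with doubling waits, capped at `retryMaxInterval: 30`. A minimal sketch of the resulting wait schedule, assuming fluentd's default base wait of 1 s and a doubling factor (neither is set explicitly in this values file, so both are assumptions):

```python
def retry_waits(attempts, base=1.0, factor=2.0, max_interval=30.0):
    """Approximate exponential-backoff wait times (seconds) per retry,
    capped at max_interval (fluentd's retry_max_interval)."""
    return [min(base * factor**n, max_interval) for n in range(attempts)]

print(retry_waits(7))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0]
```

So after a handful of failed flushes, fluentd settles into retrying every 30 seconds indefinitely rather than dropping chunks.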
Version of Helm and Kubernetes:
Helm Version:
Kubernetes Version:
AWS EKS cluster
Nodes are AWS managed node groups, all defaults.
Which version of the chart:
11.9.0
install command
```shell
helm upgrade --install ops-fluentd-elasticsearch . \
  --namespace ops-fluentd-elasticsearch -f acc-values.yaml
```
chart.yml
Anything else we need to know: