Fluentd-elasticsearch pods restarted sporadically. #398

Closed
kgmhk opened this issue Sep 14, 2020 · 1 comment

kgmhk commented Sep 14, 2020

Is this a request for help?:

yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Version of Helm and Kubernetes:

Kubernetes - v1.12.3
Helm - v2.12.1

Which chart in which version:

v3.0.2 - https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch

What happened:

The fluentd pod restarts sporadically with the logs below:

/usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient.rb:27: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:61: warning: The called method `initialize_client' is defined here
2020-09-10 21:47:08 +0000 [error]: [filter_kubernetes_metadata] Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting.Error while watching pods: too old resource version: 260646453 (260665751)
2020-09-10 21:47:08.333073499 +0000 fluent.error: {"message":"[filter_kubernetes_metadata] Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting.Error while watching pods: too old resource version: 260646453 (260665751)"}
#<Thread:0x00007f217634e3a8 /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/filter_kubernetes_metadata.rb:276 run> terminated with exception (report_on_exception is true):
/usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:64:in `rescue in set_up_pod_thread': Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting. (Fluent::UnrecoverableError)
    from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:39:in `set_up_pod_thread'
    from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/filter_kubernetes_metadata.rb:276:in `block in configure'
/usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:134:in `block in process_pod_watcher_notices': Error while watching pods: too old resource version: 260646453 (260665751) (RuntimeError)
    from /usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient/watch_stream.rb:28:in `block in each'
    from /usr/local/bundle/gems/http-4.4.1/lib/http/response/body.rb:37:in `each'
    from /usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient/watch_stream.rb:25:in `each'
    from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:111:in `process_pod_watcher_notices'
    from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:41:in `set_up_pod_thread'
    from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/filter_kubernetes_metadata.rb:276:in `block in configure'
2020-09-10 21:47:08 +0000 [error]: unexpected error error_class=Fluent::UnrecoverableError error="Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting."
2020-09-10 21:47:08.334167530 +0000 fluent.error: {"error":"#<Fluent::UnrecoverableError: Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting.>","message":"unexpected error error_class=Fluent::UnrecoverableError error=\"Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting.\""}
  2020-09-10 21:47:08 +0000 [error]: /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:64:in `rescue in set_up_pod_thread'
  2020-09-10 21:47:08 +0000 [error]: /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/kubernetes_metadata_watch_pods.rb:39:in `set_up_pod_thread'
  2020-09-10 21:47:08 +0000 [error]: /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.4.6/lib/fluent/plugin/filter_kubernetes_metadata.rb:276:in `block in configure'
2020-09-10 21:47:08 +0000 [error]: Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting. error_class=Fluent::UnrecoverableError error="Exception encountered parsing pod watch event. The connection might have been closed. Retried 10 times yet still failing. Restarting."
  2020-09-10 21:47:08 +0000 [error]: suppressed same stacktrace
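
As far as I can tell from the trace, the metadata filter's pod watch keeps failing with "too old resource version", and after 10 retries it raises Fluent::UnrecoverableError, which makes fluentd exit and the kubelet restart the container. A simplified sketch of that give-up path (not the plugin's actual code; `watch_pods_once` is a made-up stand-in for the kubeclient watch call):

MAX_RETRIES = 10

# Made-up stand-in for the kubeclient pod watch; here it always fails
# the same way the logs show.
def watch_pods_once
  raise 'Error while watching pods: too old resource version: 260646453 (260665751)'
end

retries = 0
begin
  watch_pods_once
rescue => e
  retries += 1
  retry if retries < MAX_RETRIES
  # After exhausting the retries, the real plugin raises Fluent::UnrecoverableError,
  # which stops the fluentd worker; Kubernetes then restarts the pod.
  # Here we just warn so the sketch runs to completion.
  warn "Exception encountered parsing pod watch event. Retried #{MAX_RETRIES} times " \
       "yet still failing. Restarting. (#{e.message})"
end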

What you expected to happen:

I don't want the pods to be restarted.

How to reproduce it (as minimally and precisely as possible):

It happens sporadically.

Anything else we need to know:

This is the fluentd-elasticsearch values.yaml:

fluentd-elasticsearch:
  image:
    repository: "fluentd_elasticsearch/fluentd"
    tag: "v3.0.2"
    pullPolicy: "IfNotPresent"

  elasticsearch:
    auth:
      enabled: true
      user: "elastic"
      password: ""
    hosts: ["elasticsearch-master:9200"]
    logstash:
      prefix: "fluentd-${record['kubernetes']['namespace_name']}"
    outputType: "elasticsearch_dynamic"

  configMaps:
    useDefaults:
      systemConf: false
      containersInputConf: true
      systemInputConf: false
      forwardInputConf: false
      monitoringConf: false
      outputConf: true

kgmhk commented Sep 15, 2020

Hello,

I found that the issue is in fluent-plugin-kubernetes_metadata_filter and that it was fixed in 2.5.1:
fabric8io/fluent-plugin-kubernetes_metadata_filter#226

I updated to v3.0.4 and the issue has been solved.
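
From what I understand, the fix makes the watch recover from the "too old resource version" (HTTP 410 Gone) condition by re-fetching a fresh resourceVersion and resuming the watch, instead of raising an unrecoverable error. A rough sketch of that pattern (not the plugin's actual code; `list_pods` and `watch_pods` are made-up stand-ins for the kubeclient calls):

# Made-up list call: would return the current pods plus the collection's resourceVersion.
def list_pods
  { resource_version: '260665751', items: [] }
end

# Made-up watch call: fails once with a stale resourceVersion, then succeeds.
def watch_pods(resource_version:)
  @failed_once ||= false
  unless @failed_once
    @failed_once = true
    raise "Error while watching pods: too old resource version: 260646453 (#{resource_version})"
  end
  puts "watch resumed from resourceVersion #{resource_version}"
end

resource_version = list_pods[:resource_version]
begin
  watch_pods(resource_version: resource_version)
rescue => e
  raise unless e.message.include?('too old resource version')
  # Instead of giving up, refresh the resourceVersion with a new list call
  # and restart the watch from there.
  resource_version = list_pods[:resource_version]
  retry
end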

Thanks.

kgmhk closed this as completed Sep 15, 2020