Super huge log messages from journald - misconfigured filter_concat? #82

Open · Labels: bug (Something isn't working)

nvtkaszpir opened this issue May 18, 2021 · 1 comment
nvtkaszpir commented May 18, 2021

Describe the bug
I've discovered that I get super huge messages from kubelet, which runs under journald on Ubuntu 18.04.

Version of Helm and Kubernetes:
Helm Version:

$ helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:47:43Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
11.11.0

What happened:
Noticed that Elasticsearch rejects messages above a certain size.
Apparently some messages from kubelet are insanely huge, because multiple messages from journald get concatenated into a single record (see the sketch below).
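My hedged guess at why this blows up (still to be confirmed against the chart's rendered config): fluent-plugin-concat buffers records and only emits when it sees the next "start" line or a timeout, so a concat filter whose start pattern rarely matches the lines kubelet writes to journald will keep appending records into a single event. A minimal sketch of that failure mode, where the tag, key and regexp are my assumptions, not the chart's literal defaults:

<filter systemd.kubelet>
  @type concat
  key MESSAGE
  # if this pattern almost never matches kubelet's journald output, every
  # record until the next match (or a timeout flush) is appended to the
  # same buffered event, producing one enormous message
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}T/
  separator "\n"
</filter>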

What you expected to happen:
Only specific (continuation) messages should be concatenated, not everything.
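In other words, the concat filter should only merge genuine continuation lines and give up quickly on everything else. A minimal sketch of what I mean, assuming klog-style kubelet output (tag, key and regexp are illustrative):

<filter systemd.kubelet>
  @type concat
  key MESSAGE
  # klog lines start with I/W/E/F followed by MMDD, e.g. "I0518 10:15:32.123456 ..."
  multiline_start_regexp /^[IWEF]\d{4}\s/
  separator "\n"
  flush_interval 5        # give up on incomplete events after 5 seconds
  timeout_label @NORMAL   # route timed-out events onwards instead of to the error stream
</filter>

If I understand the plugin correctly, timeout_label needs a matching <label @NORMAL> section that continues the normal output pipeline; otherwise timed-out events end up in the error stream.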

How to reproduce it (as minimally and precisely as possible):

Actually running the defaults from values.yaml, so maybe it is an issue with the Ubuntu setup or the default journald configuration?
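One way to rule out journald itself: compare the raw MESSAGE lengths in the journal with the record sizes fluentd ships. If the journal lines are small but the shipped records are huge, the concatenation happens in fluentd, not in journald:

# largest kubelet MESSAGE lengths seen in the journal over the last 10 minutes
journalctl -u kubelet -o json --since "10 min ago" | jq -r '.MESSAGE | length' | sort -n | tail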

Anything else we need to know:
Still investigating.

I haven't seen this with this chart before (I was mainly using GKE back then), but now it has surfaced on-prem with https://github.com/NVIDIA/deepops.

nvtkaszpir added the bug (Something isn't working) label on May 18, 2021
jhuebner79 commented:

I think my installation may suffer from the same problem, as I also get random Fluent::Plugin::Buffer::BufferChunkOverflowError errors which I cannot attribute to my own single concat filter.

It happens in three out of three EKS clusters.
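For what it's worth, as far as I can tell Fluent::Plugin::Buffer::BufferChunkOverflowError is raised when a single emitted record is larger than the output buffer's chunk_limit_size, which fits the theory of one runaway concatenated event rather than ordinary traffic volume. A sketch of where that limit sits (values are illustrative, not the chart's defaults):

<match **>
  @type elasticsearch
  host elasticsearch-master        # illustrative
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.buffer
    # any single record above this size triggers BufferChunkOverflowError;
    # raising it only hides the symptom, the concat filter needs to be scoped instead
    chunk_limit_size 8MB
  </buffer>
</match>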
