
EKS control plane logs should have their source based on the log stream name #372

Closed
robmp opened this issue Oct 15, 2020 · 4 comments

Comments

@robmp
Contributor

robmp commented Oct 15, 2020

While the changes introduced in #371 and #365 that set the source of EKS control plane logs to eks are an improvement, I think they're not sufficient.

EKS log groups in CloudWatch have multiple log streams, e.g.:

[screenshot: list of log streams in an EKS CloudWatch log group]

These come from different control plane components and have different formats (some glog, others json).

I think it would make sense to have the log source based on the log stream name, e.g. kube-apiserver, kube-scheduler etc.

@tianchu
Contributor

tianchu commented Oct 15, 2020

@robmp are you suggesting having the source be kube-apiserver or kube-scheduler?

@robmp
Contributor Author

robmp commented Oct 16, 2020

@robmp are you suggesting having the source be kube-apiserver or kube-scheduler?

I'm suggesting that the source should be the same as the log stream name (without the random suffix).
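
For illustration, a minimal sketch of what I mean (hypothetical; the exact stream-name format and the length of the random hex suffix are assumptions based on the screenshot above, not the forwarder's current behaviour):

```python
import re

# Assumption: EKS control plane log streams look like
# "kube-apiserver-<random hex>", "kube-scheduler-<random hex>", etc.
EKS_STREAM_SUFFIX = re.compile(r"-[0-9a-f]{8,}$")

def source_from_log_stream(log_stream_name: str) -> str:
    """Strip the trailing random suffix, e.g.
    'kube-scheduler-0123456789abcdef' -> 'kube-scheduler'."""
    return EKS_STREAM_SUFFIX.sub("", log_stream_name)
```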

@tianchu
Contributor

tianchu commented Oct 16, 2020

@robmp Thanks for the clarification! I think it makes more sense to keep the source as eks, while putting what you proposed in a new attribute, e.g., kube_service: kube-apiserver (just an example).

In fact, it would be better to add this to the default eks log pipeline as a Grok parser rule, which would provide a consistent user experience for eks logs ingested through other means (e.g., the Datadog Kinesis Firehose destination). In the meantime, you can update the log pipeline in your Datadog account that processes eks logs: add a Grok parser rule that parses the log stream name and writes the result to a new attribute you can query on. To make it an out-of-the-box (OOB) experience, please submit a feature request to support@datadoghq.com.
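
As a rough sketch of the extraction such a Grok rule would perform (the attribute names aws.awslogs.logStream and kube_service here are assumptions for illustration, not an exact pipeline configuration):

```python
import re

# Hypothetical equivalent of a Grok rule applied to the CloudWatch log
# stream name: capture everything before the trailing random suffix into
# a new attribute, e.g. kube_service: kube-apiserver.
STREAM_PATTERN = re.compile(r"^(?P<kube_service>.+?)-[0-9a-f]{8,}$")

def enrich(attributes: dict) -> dict:
    match = STREAM_PATTERN.match(attributes.get("aws.awslogs.logStream", ""))
    if match:
        attributes["kube_service"] = match.group("kube_service")
    return attributes
```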

I'm going to close the issue for now, since I believe we would NOT make this change in the forwarder.

tianchu closed this as completed Oct 16, 2020
@pawelpesz

The current handling of EKS CloudWatch logs is really inconsistent across the Datadog infrastructure. Only the audit and scheduler logs are correctly recognised, and corresponding built-in pipelines ("Kubernetes audit" and "Kube Scheduler (glog)", respectively) are added for them. This is covered by PR #406, which you @tianchu reviewed some months after this issue was closed.

Datadog expects source:kube-apiserver in at least two places I was able to locate:

  • Default glog pipeline
  • Kubernetes API Server Overview dashboard

It would be great if you could take another look at this issue.
