
Add namespace_labels to kubernetes metadata #6544

Closed
peterbosalliandercom opened this issue Dec 13, 2022 · 13 comments · Fixed by #8279

Comments

@peterbosalliandercom

What is the problem:
We are using Fluent Bit for our log collection. The output flows through Fluentd and Logstash into Elasticsearch. In Elasticsearch we find that only a subset of the Kubernetes metadata reaches the index. We use kubernetes.namespace_labels to identify which tenant the logs come from, via the following label placed on the namespace by the Capsule operator (https://capsule.clastix.io): `capsule.clastix.io/tenant: tenantxxx`. In Logstash we want to split indexes per tenant using this namespace label.

What we would like:
Fluent Bit should fetch namespace labels and include them in the output sent to Fluentd.

Background
We are using the BanzaiCloud logging-operator (https://banzaicloud.com/docs/one-eye/logging-operator), so the configuration is fixed and defaults to using Fluent Bit for Kubernetes metadata. We cannot change that, so there is no way to fix this without breaking the operator's functionality (related issue: kube-logging/logging-operator#704). The only workaround is to override everything the operator does (RBAC and the Fluentd config), but then the whole point of using the operator is lost.

@bilbof

bilbof commented Jan 16, 2023

Same issue #3865

kubernetes.namespace_labels isn't fetched by the kubernetes filter plugin (https://github.com/fluent/fluent-bit/blob/master/plugins/filter_kubernetes), unlike the Fluentd equivalent (https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/blob/f131c26b60999a0902c6219f10d290cccfbd03da/lib/fluent/plugin/kubernetes_metadata_common.rb#L51), so this is a missing feature for folks migrating from Fluentd.

@patrick-stephens

It sounds like a good idea so feel free to submit a PR and ping me if you need approval for CI.

@genofire

I would really like this feature.

@zanloy

zanloy commented Jun 7, 2023

This is the only thing holding me back from switching from fluentd to fluent-bit. We have a large multi-tenant k8s environment and use namespace labels to help process logs and forward them to the correct OpenSearch backend depending on the tenant label. I would imagine this is a fairly common use-case where you might have different logging pipelines based on namespace variables.

@sa9226

sa9226 commented Jun 8, 2023

We are looking for this feature too. Is there an ETA for it, please?

@lanrekkeeg

lanrekkeeg commented Jun 12, 2023

I was able to resolve this issue by implementing a Lua filter. In the filter I extract the namespace name from the Kubernetes metadata using the field record["kubernetes"]["namespace_name"]. With that name I make an API call to the Kubernetes API server to retrieve the corresponding namespace object, since the service account was already mounted in the pod. After obtaining the namespace information, I add an additional field called namespace_meta to the record, containing the fetched namespace metadata.
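For readers looking for a concrete starting point, the steps described above can be sketched as a fluent-bit Lua filter. This is not the original author's code: it assumes the image ships LuaSocket, LuaSec and lua-cjson (the stock fluent-bit image does not, which is exactly the `module 'ssl' not found` error reported below), and the function and field names are illustrative.

```lua
-- Sketch of the approach described above (assumptions: LuaSocket, LuaSec
-- and lua-cjson are available in the fluent-bit image).
local https = require("ssl.https")
local ltn12 = require("ltn12")
local cjson = require("cjson")

local TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
local APISERVER  = "https://kubernetes.default.svc"
local cache = {}  -- namespace name -> fetched metadata, to avoid an API call per record

local function read_token()
  local f = io.open(TOKEN_PATH, "r")
  if not f then return nil end
  local token = f:read("*a")
  f:close()
  return token
end

-- fluent-bit calls this for every record passing through the filter
function add_namespace_meta(tag, timestamp, record)
  local k8s = record["kubernetes"]
  if type(k8s) ~= "table" or not k8s["namespace_name"] then
    return 0, timestamp, record          -- 0 = keep record unmodified
  end
  local ns = k8s["namespace_name"]
  if cache[ns] == nil then
    local chunks = {}
    local ok, code = https.request{
      url     = APISERVER .. "/api/v1/namespaces/" .. ns,
      headers = { Authorization = "Bearer " .. (read_token() or "") },
      sink    = ltn12.sink.table(chunks),
      -- verification against the mounted in-cluster CA is omitted in this sketch
    }
    if ok and code == 200 then
      local obj = cjson.decode(table.concat(chunks))
      cache[ns] = {
        labels      = obj.metadata.labels,
        annotations = obj.metadata.annotations,
      }
    end
  end
  if cache[ns] then
    record["namespace_meta"] = cache[ns]
    return 2, timestamp, record          -- 2 = record modified, keep timestamp
  end
  return 0, timestamp, record
end
```

The function would be registered with a `[FILTER]` section of type `lua`, pointing `script` at this file and `call` at `add_namespace_meta`. The RBAC for the pod's service account must allow `get` on namespaces.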

@sa9226

sa9226 commented Jun 12, 2023

@lanrekkeeg, thank you so much for your suggestion. Would you mind sharing a piece of code, as I'm totally new to Lua? It would be really helpful.

@sa9226

sa9226 commented Jun 13, 2023

@lanrekkeeg, we came up with a Lua script to make the API call but are running into `/fluent-bit/config/containerd.lua:14: module 'ssl' not found`. Trying to find out whether you built a custom image that adds these missing modules. Please advise; appreciate your help.

@gopikishanm

Hi @lanrekkeeg @sa9226, can you share the pseudocode to read namespace labels? I am facing the same issue.

@loggspark

@genofire Here is a blog post with sample code and logging. Hope it helps.
https://techandtutor.wordpress.com/2023/09/14/how-to-fetch-namespace-metadata-of-pod-via-fluentbit/

@patrick-stephens

Ta for that @loggspark , we could also look to cross-post that to the Fluent blog if you're up for it? @agup006

ryanohnemus added a commit to ryanohnemus/fluent-bit that referenced this issue Dec 12, 2023
Closes fluent#6544
Addresses fluent#3865

Signed-off-by: ryanohnemus <ryanohnemus@gmail.com>
@github-actions

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

@github-actions github-actions bot added the Stale label Dec 24, 2023
@ryanohnemus

@patrick-stephens Can you add the exempt-stale label and re-open this one when you're back in the new year? I have a PR open and ready for review for this.

@github-actions github-actions bot removed the Stale label Dec 28, 2023
edsiper pushed a commit that referenced this issue Mar 12, 2024
- filter_kubernetes: add kubernetes_namespace metadata

Closes #6544
Addresses #3865

- filter_kubernetes: Add Tests for namespace labels & annotations
- also updates error text for kubelet vs kube api upstream errors
- get_namespace_api_server_info passes in meta->namespace
  instead of ctx->namespace which may not be set at time of call

filter_kubernetes: make pod meta fetching optional

now that you can fetch namespace labels/annotations OR
pod labels/annotations, do not attempt to fetch pod
labels/annotations if neither of them are requested via config

- fix cache for namespace meta

---------

Signed-off-by: ryanohnemus <ryanohnemus@gmail.com>
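With the change above merged, namespace metadata can be requested from filter_kubernetes directly, without a Lua workaround. A minimal configuration sketch, assuming the option names introduced by the merged PR; check the filter_kubernetes documentation for your fluent-bit release, since exact option names and defaults may differ:

```
[FILTER]
    Name                   kubernetes
    Match                  kube.*
    # request namespace labels/annotations in addition to pod metadata
    namespace_labels       on
    namespace_annotations  on
```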