Add namespace_labels to kubernetes metadata #6544
Comments
Same issue as #3865: kubernetes.namespace_labels isn't fetched by the kubernetes filter plugin (https://github.com/fluent/fluent-bit/blob/master/plugins/filter_kubernetes), unlike fluentd's kubernetes_metadata filter (https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/blob/f131c26b60999a0902c6219f10d290cccfbd03da/lib/fluent/plugin/kubernetes_metadata_common.rb#L51), so this is a missing feature for folks upgrading.
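For illustration, a record enriched by the fluentd plugin carries the namespace labels roughly as in the sketch below; the pod name, app label, and tenant value are invented, and the point is the kubernetes.namespace_labels field that the Fluent Bit kubernetes filter does not populate.

```json
{
  "log": "some application log line",
  "kubernetes": {
    "pod_name": "app-7d4b9c6f5-x2x9z",
    "namespace_name": "tenant-a",
    "labels": { "app": "example" },
    "namespace_labels": { "capsule.clastix.io/tenant": "tenant-a" }
  }
}
```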
It sounds like a good idea, so feel free to submit a PR and ping me if you need approval for CI.
I would really like this feature.
This is the only thing holding me back from switching from fluentd to fluent-bit. We have a large multi-tenant k8s environment and use namespace labels to help process logs and forward them to the correct OpenSearch backend depending on the tenant label. I would imagine this is a fairly common use case where you might have different logging pipelines based on namespace variables.
We are looking for this feature too. Any ETA for this feature, please?
I was able to resolve this issue by implementing a Lua filter. In the Lua filter, I extracted the namespace name from the record's Kubernetes metadata and used it to fetch the labels for that namespace.
@lanrekkeeg, thank you so much for your suggestion. Would you mind sharing a piece of code, as I'm totally new to Lua? It would be really helpful.
@lanrekkeeg, we came up with a Lua script to make the API call but are running into /fluent-bit/config/containerd.lua:14: module 'ssl' not found. Trying to see if you built a custom image adding these missing modules. Please suggest; appreciate your help.
Hi @lanrekkeeg @sa9226, can you share the pseudocode to read namespace labels? I am facing the same issue.
@genofire Here is the blog post with sample code and logging. Hope it will help.
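The approach described above (extract the namespace name in a Lua filter and attach the labels for that namespace) might look roughly like the sketch below. It assumes the namespace-to-labels mapping has already been exported out of band into a Lua table file, for example by a sidecar or cron job querying the Kubernetes API, so no ssl/http Lua modules are needed inside the Fluent Bit container; the file path, the cb_enrich_namespace function name, and the table contents are all hypothetical.

```lua
-- Hypothetical sketch of a Fluent Bit Lua filter that enriches records with
-- namespace labels. The mapping file is assumed to be generated out of band
-- (e.g. by a sidecar running kubectl) and to return a Lua table such as:
--   return { ["tenant-a"] = { ["capsule.clastix.io/tenant"] = "tenant-a" } }
local ok, namespace_labels = pcall(dofile, "/fluent-bit/config/namespace_labels.lua")
if not ok or type(namespace_labels) ~= "table" then
    namespace_labels = {}
end

-- Fluent Bit calls this once per record: return 0 to keep the record as-is,
-- or 2 to keep the original timestamp but use the modified record.
function cb_enrich_namespace(tag, timestamp, record)
    local k8s = record["kubernetes"]
    if k8s == nil or k8s["namespace_name"] == nil then
        return 0, timestamp, record
    end

    local labels = namespace_labels[k8s["namespace_name"]]
    if labels == nil then
        return 0, timestamp, record
    end

    k8s["namespace_labels"] = labels
    return 2, timestamp, record
end
```

It would be wired in with a lua filter whose script option points at this file and whose call option is cb_enrich_namespace.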
Ta for that @loggspark, we could also look to cross-post that to the Fluent blog if you're up for it? @agup006
Closes fluent#6544 Addresses fluent#3865 Signed-off-by: ryanohnemus <ryanohnemus@gmail.com>
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 5 days. Maintainers can add the exempt-stale label.
@patrick-stephens Can you add the exempt-stale label to this issue?
- filter_kubernetes: add kubernetes_namespace metadata. Closes #6544, addresses #3865.
- filter_kubernetes: add tests for namespace labels & annotations; also updates error text for kubelet vs kube api upstream errors.
- get_namespace_api_server_info passes in meta->namespace instead of ctx->namespace, which may not be set at the time of the call.
- filter_kubernetes: make pod meta fetching optional; now that you can fetch namespace labels/annotations or pod labels/annotations, do not attempt to fetch pod labels/annotations if neither is requested via config.
- fix cache for namespace meta

Signed-off-by: ryanohnemus <ryanohnemus@gmail.com>
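With the change above merged, pulling namespace metadata should only need a filter configuration switch. A minimal sketch follows; the Namespace_Labels and Namespace_Annotations option names are inferred from the commit description and may differ from the released ones, so check the filter_kubernetes documentation for your Fluent Bit version.

```
[FILTER]
    Name                   kubernetes
    Match                  kube.*
    Merge_Log              On
    # Option names inferred from the commit description above; verify against
    # the filter_kubernetes docs for your Fluent Bit release.
    Namespace_Labels       On
    Namespace_Annotations  Off
```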
What is the problem:
We are using Fluent Bit for our log collection. The output flows through fluentd to Logstash to ES. In ES we find that we only get a subset of the Kubernetes metadata in the index. We are using kubernetes.namespace_labels to identify which tenant the logging comes from. We use the following label, which is set by the Capsule (https://capsule.clastix.io) operator, to identify the tenant: capsule.clastix.io/tenant: tenantxxx. In Logstash we want to split indexes per tenant by using this namespace label.
What would we like:
Fluent Bit grabbing the namespace labels and outputting them to fluentd.
Background
We are using the BanzaiCloud logging-operator (https://banzaicloud.com/docs/one-eye/logging-operator), so the configuration is fixed and defaults to using Fluent Bit for the Kubernetes metadata. We cannot change that, so there is no way to fix this without breaking the operator functionality (related issue: kube-logging/logging-operator#704). The only way is by overriding everything the operator does (RBAC and the fluentd config), but then the whole point of using the operator is lost.