fluentd multiplexed outputer #242
Comments
If I understand correctly, you want to point each Fluent Bit to a Fluentd named after the namespace, like fluentd-kube-system. If so, I think it is easier to configure this in your YAML file using the Downward API to obtain your namespace: https://kubernetes.io/docs/user-guide/downward-api/ Once the namespace is in an environment variable, you can use it in Fluent Bit, e.g.:
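The configuration snippet from this comment was not preserved; a minimal sketch of the Downward API approach might look like the following, where the variable name `POD_NAMESPACE` and the Fluentd service-name pattern are illustrative assumptions, not from the original comment:

```yaml
# Fluent Bit DaemonSet container spec (fragment): expose the pod's own
# namespace as an environment variable via the Downward API.
# "POD_NAMESPACE" is an illustrative name.
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```

Fluent Bit configuration files support `${VAR}` environment-variable expansion, so the variable could then be referenced in an output section:

```
# Hypothetical per-namespace Fluentd service name built from the env var.
[OUTPUT]
    Name    forward
    Match   *
    Host    fluentd.${POD_NAMESPACE}.svc.cluster.local
    Port    24224
```

Note that for a DaemonSet this resolves to Fluent Bit's own namespace, not the namespace of the pods whose logs it collects, which is the limitation the next reply points out.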
If that is not what you mean, please let me know.
No, that's not what I meant. I'm thinking: have one Fluent Bit per system, but be able to send the logs to different in-cluster Fluentd instances based on the namespace of the pod whose logs are being collected. For example, with two namespaces, A can launch a Fluentd and B can launch a Fluentd, while the admin launches a Fluent Bit DaemonSet across all nodes. In this mode, all traffic out of a pod in A gets sent to A's Fluentd, and B's to B's Fluentd.
Now I get it. Since the namespace already exists in the file name and in the system, the goal would be to implement a way for filter_kubernetes to add this namespace name to the tag, so records can then be routed using a specific match pattern. I will think a bit more about how to implement it.
@kfox1111 just following up: is your goal to have a Fluentd aggregator per namespace, so that the above implementation proposal makes routing easier?
@edsiper I think he was trying to have dynamic routing based on the filtered data. Something like this:
If the log is from a given namespace, it gets routed to that namespace's Fluentd.
To be more specific, if we have an event:
We want to use the value from the record's namespace field to decide where it is routed.
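The example event and field references in these comments were lost. The idea can be sketched with the `rewrite_tag` filter, a Fluent Bit feature added in later versions (it did not exist when this issue was opened), which copies a record value into the tag; the `ns.` prefix here is an illustrative choice:

```
# Enrich records with Kubernetes metadata; filter_kubernetes adds a
# nested "kubernetes" map that includes "namespace_name".
[FILTER]
    Name     kubernetes
    Match    kube.*

# Re-tag each record so the namespace becomes part of the tag, e.g.
# kube.var.log.containers... -> ns.kube-system.kube.var.log.containers...
[FILTER]
    Name     rewrite_tag
    Match    kube.*
    Rule     $kubernetes['namespace_name'] ^(.+)$ ns.$kubernetes['namespace_name'].$TAG false
```

With the namespace in the tag, a per-namespace Match pattern in an [OUTPUT] section can route each stream to a different destination.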
Yes, that's what I'm interested in. The person maintaining the namespaced set of services (the tenant user) and the person maintaining the Fluent Bit daemon/config (the k8s admin) could be on completely different teams with different rights. Letting the logs flow to processes maintained at the tenant level would let users self-service the processing/storage of their own aggregated logs without involving the k8s admin.
Any update on this? |
I would love to have something like this. Any update? |
Any updates? |
👍 |
Any update, or a suggestion for how to implement this behaviour with the latest version?
For K8s, could we come up with a filter/output plugin that forwards traffic to the Fluentd located in the namespace of the pod whose logs are being collected?
And as a bonus, could the K8s service name be overridden by a pod attribute?
That would allow the K8s cluster admin to set up one Fluent Bit DaemonSet for everyone, while each project sets up its own filtering rules in its own Fluentd(s) running under its own control.
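Assuming the namespace is embedded in the tag, as proposed earlier in the thread, the admin-owned DaemonSet config could match each namespace to a tenant-owned Fluentd service. The `ns.` tag prefix and the service names below are hypothetical:

```
# Tenant "team-a" receives only logs from its own namespace.
[OUTPUT]
    Name    forward
    Match   ns.team-a.*
    Host    fluentd.team-a.svc.cluster.local
    Port    24224

# Tenant "team-b" likewise; each tenant runs and configures its own Fluentd.
[OUTPUT]
    Name    forward
    Match   ns.team-b.*
    Host    fluentd.team-b.svc.cluster.local
    Port    24224
```

Each tenant can then apply its own filtering and storage rules inside its Fluentd, without involving the cluster admin.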