
Fluentd config to Parse the log field correctly #458

@rajivml

Description


With the default fluentd config, fields such as logger, severity, tenant-id, trace-id, and every other field inside the JSON log field are crammed into one big JSON blob and are therefore not queryable.

To index these fields correctly, so that every key inside the JSON log field becomes queryable, the fluentd config has to be modified as shown below, and right now there is no mechanism to pass this filter in.

Could you please add an option that injects this config automatically at infra setup time, so that this extra manual step is avoided?

Step 1: edit the fluentd ConfigMap:

kubectl -n logging edit configmap fluentdconf
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @type parser
      format json
      key_name log
      reserve_time true
      reserve_data true
      remove_key_name_field true
      emit_invalid_record_to_error false
    </filter>
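To illustrate what this filter changes, here is a minimal Python sketch of the parser filter's effect on a single record (the function name and the sample record are illustrative, not fluentd API): the JSON string under `log` is expanded into top-level fields, the original data is kept (`reserve_data true`), the `log` key is dropped (`remove_key_name_field true`), and unparseable records pass through untouched (`emit_invalid_record_to_error false`).

```python
import json

def parse_log_field(record, key_name="log", reserve_data=True,
                    remove_key_name_field=True):
    """Sketch of fluentd's parser filter: expand the JSON string stored
    under record[key_name] into top-level fields of the record."""
    try:
        parsed = json.loads(record[key_name])
        if not isinstance(parsed, dict):
            raise ValueError("log field is not a JSON object")
    except (KeyError, ValueError):
        # emit_invalid_record_to_error false: keep the record as-is
        return record
    out = dict(record) if reserve_data else {}
    if remove_key_name_field:
        out.pop(key_name, None)
    out.update(parsed)
    return out

# Hypothetical record, shaped like a container log line:
record = {"log": '{"severity": "INFO", "tenant-id": "t-42"}',
          "stream": "stdout"}
print(parse_log_field(record))
# severity and tenant-id become top-level, queryable fields
```

With the filter in place, Elasticsearch indexes `severity` and `tenant-id` as their own fields instead of substrings of one opaque `log` blob.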
Step 2: change a parameter such as requests or limits so that the pods under the DaemonSet get restarted with the updated ConfigMap:

kubectl -n logging edit ds/fluentd
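As an aside, editing a resource field just to force a restart can be avoided: a rollout restart achieves the same pod recreation directly. A sketch, assuming the DaemonSet is named fluentd in the logging namespace as above:

```shell
# Restart the fluentd pods so they pick up the updated ConfigMap
kubectl -n logging rollout restart daemonset/fluentd
# Wait until all pods are back up with the new config
kubectl -n logging rollout status daemonset/fluentd
```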


    Labels

    type::feature (an enhancement to an existing add-on or feature)
