Nested JSON parsing stopped working with fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch #2073
Maybe the problem is that kubernetes-metadata-filter introduced breaking changes.
Thanks. In case anyone else wonders how to combine nested JSON parsing with Kubernetes fields, this is what works for me (in kubernetes.conf):
Hey @arikunbotify, can you please share your full configuration? I have been troubleshooting this problem for days now and my log messages are not passed as JSON to both.
@calinah I totally forgot to mention that I switched to: I think this is the relevant config part:

Hope it helps.
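The config snippet itself did not survive the copy above. For anyone landing on this thread, the usual shape of this fix — a sketch only, not necessarily the exact config referenced — is a `parser` filter, placed after the `kubernetes_metadata` filter, that re-parses the JSON string Docker stores in the `log` field:

```
# Sketch only: re-parse the escaped JSON that Docker puts in "log".
# Options shown are typical defaults, not the commenter's verbatim config.
<filter kubernetes.**>
  @type parser
  key_name log        # the field that holds the escaped JSON string
  reserve_data true   # keep the kubernetes/docker fields already on the record
  <parse>
    @type json
  </parse>
</filter>
```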
@arikunbotify Sorry to dredge this up, but what is your strategy for adding the filter to the daemonset? I'm attempting to load it via a ConfigMap and am not having much luck. I would love to avoid the init-container solution I see here:
Since this feature used to work, why can't you just add that config to the Docker image by default, so everyone doesn't need to override it manually with custom ConfigMaps?
We are having this parsing issue and followed @arikunbotify's example, but the log field is not broken out into individual fields in Kibana: it is still a single log entry, and the JSON still shows escape characters.

Any advice? We want the Kibana table results to show:
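For reference, this symptom usually means the parser filter never ran on the record: it reaches Elasticsearch with the application's JSON still embedded as one escaped string. Illustrative made-up records (not from this thread) before and after a working parser filter:

```
# Before: "log" is a single escaped string
{"log":"{\"level\":\"info\",\"msg\":\"started\"}\n","stream":"stdout"}

# After a working parser filter: the nested fields become top-level fields
{"level":"info","msg":"started","stream":"stdout"}
```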
@Datise
The following worked for me:

fluentd-config-map.yml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <match fluent.**>
      @type null
    </match>
    <match kubernetes.var.log.containers.**fluentd**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kibana**.log>
      @type null
    </match>
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head false
      <parse>
        @type json
        json_parser oj
        time_format %Y-%m-%dT%H:%M:%S
      </parse>
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    <filter kubernetes.var.log.containers.**>
      @type parser
      <parse>
        @type json
        json_parser oj
        time_format %Y-%m-%dT%H:%M:%S
      </parse>
      key_name log
      replace_invalid_sequence true
      emit_invalid_record_to_error true
      reserve_data true
    </filter>
    <match kubernetes.**>
      @type elasticsearch
      @log_level debug
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}" # remove these lines if not needed
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}" # remove these lines if not needed
      logstash_format true
      logstash_prefix fluentd
      logstash_dateformat %Y%m%d
      include_tag_key true
      reload_connections true
      log_es_400_reason true
      <buffer>
        flush_thread_count 8
        flush_interval 5s
        chunk_limit_size 2M
        queue_limit_length 32
        retry_max_interval 30
        retry_forever true
      </buffer>
    </match>

fluentd-daemonset.yml:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch-1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.default"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENT_UID
              value: "0"
            - name: FLUENT_ELASTICSEARCH_USER
              value: "foo"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "bar"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: fluentd-config
              mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluentd-config
          configMap:
            name: fluentd-config

elasticsearch image:
In our case, running
In our case, the JSON logs failing to parse had a
I had an issue with this config (and the original from https://github.com/fluent/fluentd-kubernetes-daemonset/tree/master/docker-image/v1.11/debian-graylog/conf) where my JSON log was parsed correctly but the k8s metadata was packed in a
I'm having the same issue as @peetasan |
For those wondering why the "fixed" version might still not work (thanks fluentd, really making me work to get my logs ingested): using multi_format together with the filter causes the following error to arise.

Below is the config that works for me; it also excludes the fluent logs, which the previous config still breaks on. It breaks out the Kubernetes metadata as well, and looks like the following within Kibana.
Sorry to necrobump, but this StackOverflow answer worked for me; it handles multiple formats using the Multi format parser plugin:
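The gist of that approach, assuming the fluent-plugin-multi-format-parser plugin is installed (a sketch, not the linked answer verbatim): try JSON first, and fall back to treating the line as plain text, so non-JSON log lines no longer raise parse errors:

```
# Sketch: requires fluent-plugin-multi-format-parser to be installed
<filter kubernetes.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type multi_format
    <pattern>
      format json   # try JSON first
    </pattern>
    <pattern>
      format none   # fall back to the raw line, so nothing is dropped
    </pattern>
  </parse>
</filter>
```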
Hi,
I'm using fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch, and after updating to the new image (based on 0.12.43, and after solving the UID=0 issue reported here) I've stopped getting parsed nested objects. I get the kubernetes and docker fields parsed, but the inner message in "log", which is standard JSON from the application I run, is no longer parsed.
Has anyone encountered this issue with the new image?
(Also, the image based on 0.12.33 doesn't start at all for some reason, and I can't find older version tags to try.)
Best,
AA