Hello. I have been facing the same issue for the last couple of days.
Changing key message to key log does seem to fix the parsing of those large log entries, but after that change I stopped getting the app/backend logs streamed at all.
I also added max_lines 65536, which I hoped would help, but still no luck. What else might be causing this?
Thanks
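One thing that might explain the missing logs (an assumption on my part, not something confirmed in this thread): fluent-plugin-concat buffers chunks until it sees the end-of-record condition, and as far as I know, events that time out without a timeout_label leave the normal pipeline via the error path, so ordinary single-chunk lines can disappear downstream. A minimal sketch of a concat filter that routes timed-out events back into a normal flow; the tag pattern, label name, and output are illustrative placeholders, not the chart's actual values:

<filter kubernetes.**>
  @type concat
  key log
  multiline_end_regexp /\n$/   # a chunk ending in "\n" closes the record
  separator ""                 # glue chunks back together without extra characters
  flush_interval 5             # stop waiting for a closing chunk after 5 seconds
  timeout_label @NORMAL        # send timed-out (ordinary, single-chunk) events to @NORMAL
</filter>

<label @NORMAL>
  # Placeholder: timed-out events should pass through the same downstream
  # processing/output as everything else.
  <match **>
    @type stdout
  </match>
</label>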
Describe the bug
My application logs very long messages (more than 16K characters) in a single entry. The problem is that such entries are split into two separate records in Elasticsearch/Kibana.
Example Docker logs:
{"log":"09:25:23.626 very_long_message_that_is_cut_after_16_k_characters...","stream":"stdout","time":"2021-11-25T09:25:23.629585 122Z"} {"log":"09:25:23.629 rest_of_very_long_message\n","stream":"stdout","time":"2021-11-25T09:25:23.629585122Z"}
I assume that the problem is related to the concat plugin configuration: https://github.com/kokuwaio/helm-charts/blob/main/charts/fluentd-elasticsearch/templates/configmaps.yaml#L158
The problem seems to be fixed when I change key message to key log in helm-charts/charts/fluentd-elasticsearch/templates/configmaps.yaml (line 161 in 43fde9d).
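For reference, a minimal sketch of what the rendered filter would look like after that change, assuming the chart follows the usual fluentd-elasticsearch concat block (the @id, tag pattern, and surrounding options here are illustrative and may differ from the actual template):

# Rejoin the partial lines that Docker's json-file driver produces for
# messages longer than 16K; only the final chunk ends with "\n".
<filter kubernetes.**>
  @id filter_concat
  @type concat
  key log                      # was: key message
  multiline_end_regexp /\n$/
  separator ""
</filter>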
Version of Helm and Kubernetes:
Helm Version: 3.7.1
Kubernetes Version: 1.19
Which version of the chart: 13.1.0
What happened: A long log entry is split into two records in Elasticsearch/Kibana.
What you expected to happen: A long log entry should be saved as one record in Elasticsearch/Kibana.
How to reproduce it (as minimally and precisely as possible): Run a container that generates a very long log entry, at least 16K characters.