I'm unclear whether this is a problem with the plugin, my configuration, or my understanding of what is supposed to happen. I have the following block in my td-agent.conf. Log lines that pass through this block without a CONTAINER_NAME field get dropped and never reach my log aggregation system. My understanding was that lines whose CONTAINER_NAME matches ^k8s_ would get a new tag, kubernetes.journal.container, and lines that do not match would still be available for processing later in my td-agent.conf; they just would not get the new tag.
The entire config is below in case order matters or I have something else eating the logs.
# Do not directly collect fluentd's own logs to avoid infinite loops.
<match fluent.**>
@type file
path /var/log/td-agent.log
</match>
<source>
@type systemd
path /var/log/journal
pos_file /var/log/journal.pos
tag journal
read_from_head true
</source>
# This rewriterule is causing log messages to be lost.
<match journal>
@type rewrite_tag_filter
rewriterule1 CONTAINER_NAME ^k8s_ kubernetes.journal.container
log_level trace
</match>
<filter kubernetes.**>
@type kubernetes_metadata
use_journal true
</filter>
# For debugging - write all logs to local disk to verify the journey from journald to fluentd is successful
#<match journal>
# @type file
# path /var/log/all.log
#</match>
<match **>
@type aws-elasticsearch-service
log_level info
include_tag_key true
port 9200
logstash_format true
buffer_chunk_limit 16M
# Cap buffer memory usage to 16MiB/chunk * 128 chunks = 2GiB
buffer_queue_limit 128
flush_interval 60s
# Never wait longer than 30 seconds between retries.
max_retry_wait 30
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
# Use multiple threads for processing.
num_threads 8
# Assigns ID to be used by the datadog plugin
id elasticsearch
slow_flush_log_threshold 60.0
request_timeout 60s
<endpoint>
url "#{ENV['ELASTICSEARCH_URI']}"
region "#{ENV['ELASTICSEARCH_REGION']}"
</endpoint>
</match>
<source>
@type monitor_agent
bind 0.0.0.0
port 24220
</source>
<source>
@type debug_agent
bind 127.0.0.1
port 24230
</source>
When I remove the rewrite-rule stanza, the logs without the CONTAINER_NAME field flow through into my Elasticsearch cluster. Let me know if you have any ideas.
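For reference, fluent-plugin-rewrite-tag-filter (in the v1 rewriterule syntax used above) drops any event that matches none of its rules, which would explain the behavior described. One possible workaround, a sketch not tested against this exact setup, is to add a catch-all rule that re-tags non-matching events so they continue down the pipeline. The tag journal.retagged here is hypothetical; it simply needs to differ from journal so the events are not re-consumed by the same match block:

<match journal>
  @type rewrite_tag_filter
  # Events whose CONTAINER_NAME starts with k8s_ get the kubernetes tag.
  rewriterule1 CONTAINER_NAME ^k8s_ kubernetes.journal.container
  # Catch-all: re-tag everything else so it is not dropped. This assumes
  # every journald record carries a MESSAGE field, which journald normally
  # guarantees.
  rewriterule2 MESSAGE .+ journal.retagged
</match>

In the config above, journal.retagged events would then fall through to the final <match **> block and reach Elasticsearch, while kubernetes.journal.container events still pass the kubernetes_metadata filter first.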
Thanks. Derek
Hello.
We only use fluentd with the journald input. We are running td-agent 0.12.31 with the following plugins and versions:
td-agent-gem install --no-document fluent-plugin-kubernetes_metadata_filter -v 0.26.2
td-agent-gem install --no-document fluent-plugin-elasticsearch -v 1.9.2
td-agent-gem install --no-document fluent-plugin-systemd -v 0.0.7
td-agent-gem install --no-document fluent-plugin-rewrite-tag-filter -v 1.5.5
td-agent-gem install --no-document fluent-plugin-dd -v 0.1.8
td-agent-gem install --no-document fluent-plugin-aws-elasticsearch-service -v 0.1.6