Bug 1722380: Logging data from all projects are stored to .orphaned indexes with Elasticsearch #1680
Conversation
@richm: This pull request references a valid Bugzilla bug. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
/test logging
Bug 1722380: Logging data from all projects are stored to .orphaned indexes with Elasticsearch
https://bugzilla.redhat.com/show_bug.cgi?id=1722380

Cause: Fluentd is unable to correctly determine the docker log driver. It thinks the log driver is journald when it is json-file. Fluentd then looks for the `CONTAINER_NAME` field in the record to hold the kubernetes metadata and it is not present.

Consequence: Fluentd is not able to add kubernetes metadata to records. Records go to the .orphaned index. Fluentd spews lots of errors like this:

```
[error]: record cannot use elasticsearch index name type project_full: record is missing kubernetes field
```

Fix: Fluentd should not rely on reading the docker configuration file to determine if the record contains kubernetes metadata. It should look at both the record tag and the record data and use whatever kubernetes metadata it finds there.

Result: Fluentd can correctly add kubernetes metadata and assign records to the correct indices no matter which log driver docker is using.

Records read from files under `/var/log/containers/*.log` will have a fluentd tag like `kubernetes.var.log.containers.**`. This applies both to CRI-O and docker file logs. Kubernetes records read from journald with `CONTAINER_NAME` will have a tag like `journal.kubernetes.**`. There is no CRI-O journald log driver yet, and it is not clear how those records will be represented, but hopefully they will follow the same `CONTAINER_NAME` convention, in which case they will Just Work.

Using the string value of `'nil'` will cause the fluentd config parser to turn this into the ruby `nil` value.

(cherry picked from commit 33011c5)
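The fix described above can be sketched in Ruby: instead of reading the docker daemon configuration to guess the log driver, inspect the record's tag and fields directly. The helper name `extract_container_info` and the exact parsing details are illustrative, not the actual fluent-plugin code; the tag and `CONTAINER_NAME` formats follow the conventions named in the commit message.

```ruby
# Tag produced for logs read from /var/log/containers/*.log (both CRI-O
# and docker json-file): the file name encodes pod, namespace, container,
# and a 64-hex-digit container id.
FILE_TAG_RE = /\Akubernetes\.var\.log\.containers\.
               (?<pod>[^_]+)_(?<ns>[^_]+)_(?<container>.+)-
               (?<docker_id>[a-f0-9]{64})\.log\z/x

# Hypothetical sketch of tag/record-based metadata detection.
def extract_container_info(tag, record)
  if (m = FILE_TAG_RE.match(tag))
    # File-based logs: everything we need is in the tag itself.
    { 'pod_name'       => m[:pod],
      'namespace_name' => m[:ns],
      'container_name' => m[:container] }
  elsif record['CONTAINER_NAME']
    # journald log driver: CONTAINER_NAME looks like
    # k8s_<container>.<hash>_<pod>_<namespace>_<uid>_<attempt>
    parts = record['CONTAINER_NAME'].split('_')
    { 'pod_name'       => parts[2],
      'namespace_name' => parts[3],
      'container_name' => parts[1].split('.').first }
  else
    nil # no kubernetes metadata found -> record would go to .orphaned
  end
end
```

Either source yields the same metadata hash, so the downstream index-name logic no longer depends on which log driver docker happens to be using.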
Force-pushed the branch from a8d8fe6 to 07a580d.
Tests are passing now - please review.
/lgtm
/cherrypick release-3.10
@richm: #1680 failed to apply on top of branch "release-3.10":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
https://bugzilla.redhat.com/show_bug.cgi?id=1722380

Cause: Fluentd is unable to correctly determine the docker log driver. It thinks the log driver is journald when it is json-file. Fluentd then looks for the `CONTAINER_NAME` field in the record to hold the kubernetes metadata and it is not present.

Consequence: Fluentd is not able to add kubernetes metadata to records. Records go to the .orphaned index. Fluentd spews lots of errors like this:

```
[error]: record cannot use elasticsearch index name type project_full: record is missing kubernetes field
```

Fix: Fluentd should not rely on reading the docker configuration file to determine if the record contains kubernetes metadata. It should look at both the record tag and the record data and use whatever kubernetes metadata it finds there.

Result: Fluentd can correctly add kubernetes metadata and assign records to the correct indices no matter which log driver docker is using.

Records read from files under `/var/log/containers/*.log` will have a fluentd tag like `kubernetes.var.log.containers.**`. This applies both to CRI-O and docker file logs. Kubernetes records read from journald with `CONTAINER_NAME` will have a tag like `journal.kubernetes.**`. There is no CRI-O journald log driver yet, and it is not clear how those records will be represented, but hopefully they will follow the same `CONTAINER_NAME` convention, in which case they will Just Work.

(cherry picked from commit 33011c5)
manual cherrypick of #1678
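The index-routing consequence described in the bug can also be sketched: with kubernetes metadata present, a record is routed to a per-project index, and without it, it falls into `.orphaned`. The `project.<namespace>.<uid>.<date>` scheme shown here follows the OpenShift aggregated-logging convention for the `project_full` index name type; the helper name is illustrative.

```ruby
# Hedged sketch: derive the Elasticsearch index name from a record's
# kubernetes metadata, falling back to the ".orphaned" index from the
# bug report when the metadata is missing.
def elasticsearch_index(record, time = Time.now.utc)
  k8s = record['kubernetes']
  if k8s && k8s['namespace_name'] && k8s['namespace_id']
    format('project.%s.%s.%04d.%02d.%02d',
           k8s['namespace_name'], k8s['namespace_id'],
           time.year, time.month, time.day)
  else
    # This is the path the bug exercised: no kubernetes field on the
    # record, so every project's logs landed in .orphaned.
    format('.orphaned.%04d.%02d.%02d', time.year, time.month, time.day)
  end
end
```

This makes the failure mode easy to see: the fix does not change this routing logic at all, it only ensures the `kubernetes` field is actually populated so the first branch is taken.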