Newest patch version is not a patch version #180
Same here. This broke our cluster all of a sudden. I've managed to get the logging to work again by setting …, and I've tried appending this to the config, but it breaks all parsing. I've also tried using the 1.2 version of the Docker image, but that pod gets into a crash loop because it is unable to access journalctl logs. I'm not really sure what is going on there. If someone can help out with the parsing configuration so that the JSON will be parsed again, that would be amazing!
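For what it's worth, a minimal sketch of the kind of JSON re-parsing filter usually involved here, assuming the raw container line lives in the `log` field; the ConfigMap name, namespace, and file name are placeholders (not from this thread), and how the image picks up extra config files varies by tag:

```yaml
# Hypothetical ConfigMap carrying an extra Fluentd filter that re-parses
# the raw `log` field of Kubernetes container logs as JSON.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-extra-config   # placeholder name
  namespace: kube-system
data:
  parse-json.conf: |
    # Re-parse the `log` field as JSON, keeping the original
    # fields alongside the parsed ones (reserve_data).
    <filter kubernetes.**>
      @type parser
      key_name log
      reserve_data true
      <parse>
        @type json
      </parse>
    </filter>
```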
@pkeuter I haven't had time to look for a permanent solution (like upgrading and figuring out the new configuration). I have resorted to rolling back to the last known good version, unfortunately without a tag: the image we currently use is fluent/fluentd-kubernetes-daemonset@sha256:fe67be752f17dd4a66b0c88a46b1d937bff1fa9bd653c5be18880a6b744744cb.
Hey @pvanderlinden, thanks for the message! I saw that in your previous post indeed, but it's good to know that you haven't been able to find a definitive solution yet. I was hoping that maybe @repeatedly (or another maintainer) would be able to point us in the right direction. Using an untagged version is a good workaround, but I'd rather try to "fix" it.
Most of the issues you mentioned are due to the fact that fluentd is no longer running as root, which means it cannot access …

… and it is parsed properly.
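For reference, a minimal sketch of what restoring root access can look like on the DaemonSet side, assuming the image's entrypoint honors FLUENT_UID (it does in many fluentd-kubernetes-daemonset tags, but verify for yours); the image tag shown is a placeholder:

```yaml
# Hypothetical excerpt of the DaemonSet pod spec: two ways to let fluentd
# read root-owned log files and the journal again.
spec:
  containers:
    - name: fluentd
      image: fluent/fluentd-kubernetes-daemonset:v1.2-debian-elasticsearch  # placeholder tag
      env:
        - name: FLUENT_UID  # consulted by the image's entrypoint in many tags
          value: "0"
      securityContext:
        runAsUser: 0        # generic Kubernetes alternative: run the container as root
```

Either route trades away the hardening that the non-root change introduced in exchange for the old behavior.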
I agree @pvanderlinden, this was in no way a patch-level change. It caused us a lot of grief, and real loss of trust in the project 😞
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.
This issue was automatically closed after being stale for 30 days.
We used to use tag: v0.12-alpine-elasticsearch
As an emergency fix I had to roll back to the last known good version, which was fluent/fluentd-kubernetes-daemonset@sha256:fe67be752f17dd4a66b0c88a46b1d937bff1fa9bd653c5be18880a6b744744cb.
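Pinning by digest rather than a mutable tag is what makes such a rollback stick; as a sketch, the relevant container line (the digest is the one quoted above, the surrounding spec is assumed):

```yaml
# Pin the image by digest so a node upgrade cannot silently pull a
# different build published behind the same tag.
image: fluent/fluentd-kubernetes-daemonset@sha256:fe67be752f17dd4a66b0c88a46b1d937bff1fa9bd653c5be18880a6b744744cb
```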
When the image was updated because of a node upgrade, everything stopped working. After looking into it, this "patch" version changed fluentd from running as root to non-root, causing all kinds of errors. The new version with the workaround doesn't parse JSON anymore, though, and leaves the original message as-is.
The issues: