I deployed strimzi-canary about a month ago and just recently got alerted to problems with zookeeper instances in the associated clusters filling up their log volumes. Historically, this volume was static at under 5% usage, but since deploying strimzi-canary, every single zookeeper instance has shown a steady linear increase.
Inspecting the log files shows records such as the following accounting for 99% of the usage (these are apparently binary files, excuse the formatting):
The growth rate varies from cluster to cluster in ways I can't figure out. One reached 100%, but other, even larger clusters with the same-sized log volume are still under 20% (though climbing). I'm working on configuring zookeeper to automatically purge these logs regularly, but I was wondering if this is a known issue, whether there's any way to reduce the logging volume in general, and why one cluster blew up so much more than the others.
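For reference, this is roughly the auto-purge configuration I'm experimenting with. It's a sketch against the Strimzi `Kafka` custom resource (the cluster name `my-cluster` is a placeholder); the two properties are the standard ZooKeeper `autopurge.*` options, which default to auto-purge being disabled:

```yaml
# Sketch: enable ZooKeeper's built-in snapshot/txn-log purging via the
# Strimzi Kafka CR. "my-cluster" is a placeholder name.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  zookeeper:
    config:
      # Keep only the 3 most recent snapshots and their transaction logs
      # (3 is also ZooKeeper's minimum for this setting).
      autopurge.snapRetainCount: 3
      # Run the purge task every hour. The ZooKeeper default is 0,
      # which disables auto-purge entirely.
      autopurge.purgeInterval: 1
```

This should cap growth, but it obviously doesn't explain why the write volume differs so much between clusters.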