Describe the bug
Tested with grafana/loki:1.6.1 (noticed with this version but have not tried older ones) and also with grafana/loki:k32-164f5cd, as asked by ewelch on Slack.
Whenever I do a rolling restart of my ingesters, I can see all of them panic with:
I did not see any OOMKilling happening, nor are any of the local PVCs full (I run this with smaller resources than the upstream loki mixin would set).
To Reproduce
Steps to reproduce the behavior:
Start Loki v1.6.1 on Kubernetes using the upstream loki mixin
Do a rolling restart of the ingester statefulset: kubectl rollout restart -n loki ingester
For reference, in Slack it was suggested this might be a bug due to having 2 entries in my schema config because, when I upgraded to 1.6, I changed the new index period to 24h.
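For context, a two-entry schema config of the kind described might look like the sketch below. The dates, stores, and schema versions are illustrative assumptions, not taken from this report; the relevant detail is the older entry with its original index period alongside a newer entry using a 24h period:

```yaml
schema_config:
  configs:
    # Original entry from before the upgrade (values are hypothetical)
    - from: 2020-01-01
      store: boltdb
      object_store: filesystem
      schema: v9
      index:
        prefix: index_
        period: 168h
    # New entry added when upgrading to 1.6, with a 24h index period
    - from: 2020-09-01
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
```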
For reference, this is my Ingester manifest and config:
Expected behavior
Clean shutdown
Environment: