
Istio Mixer (v1.4) creates too many watches of secrets #19481

Closed
logicalhan opened this issue Dec 9, 2019 · 0 comments · Fixed by #19492


logicalhan commented Dec 9, 2019

Bug description

We have observed multiple occurrences of istio-mixer creating too many watches on secrets, overwhelming the kube-apiserver.

Expected behavior

The expected behavior is not to create 20k+ watches on secrets in a cluster with only 3 nodes.
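For context, a minimal client-go sketch of the kind of bounded watching one would expect: a single shared informer per resource type, so the number of watch connections stays constant no matter how many secrets exist. This is not Mixer's actual code or the eventual fix, just an illustration of the standard pattern.

```go
// Hypothetical sketch, not Mixer's implementation: one shared informer for all
// secrets means one watch stream against the apiserver, instead of one watch
// per secret.
package main

import (
	"log"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared informer factory -> one LIST+WATCH for the Secret resource.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	secrets := factory.Core().V1().Secrets().Informer()
	secrets.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			s := obj.(*v1.Secret)
			log.Printf("secret added: %s/%s", s.Namespace, s.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, secrets.HasSynced)
	<-stop // block; a real controller would run its work loop here
}
```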

I suspect this is the underlying cause of #18167 and probably also of #19414.

What we have observed is that the kube-apiserver gets saturated by watches from istio-mixer, which we were able to deduce from the `apiserver_registered_watchers` metric. This eventually causes the apiserver to OOM, which triggers a restart by the kubelet. Once the apiserver is restarted, the watch connections are resumed, and because there are so many of them, rate-limiting kicks in and 429s start getting returned. Since we have no request prioritization (yet) in the kube-apiserver, `healthz` also gets 429'd, which causes the liveness probes to fail and the apiserver to enter a crash loop.
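As a concrete way to reproduce this observation, the watcher counts can be read straight off the apiserver's `/metrics` endpoint. A minimal sketch follows, assuming `kubectl proxy` is listening locally on port 8001 (the proxy and the port are assumptions for illustration, not part of the report):

```go
// Hypothetical sketch: dump the apiserver_registered_watchers series from the
// kube-apiserver metrics endpoint; an unusually large value on the Secret
// series is what pointed at the watch saturation described above.
// Assumes `kubectl proxy` is listening on 127.0.0.1:8001.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8001/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// Print only the per-resource watcher counts.
		if strings.HasPrefix(line, "apiserver_registered_watchers") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```

The same data can also be pulled as a one-off check with `kubectl get --raw /metrics | grep apiserver_registered_watchers`, provided the caller is authorized to read the metrics endpoint.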
