
Consul webhook injector is not able to register services if one (or many) of worker nodes are down. #779

Closed
TomasKohout opened this issue Oct 12, 2021 · 2 comments · Fixed by #991
Labels
type/bug Something isn't working

Comments


TomasKohout commented Oct 12, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Overview of the Issue

We upgraded to the latest Consul 1.10.2 and the latest Helm chart and ran a disaster scenario in which we switched off a whole DC. We then hit an issue where Consul Connect pods were unable to start because their service was not registered in Consul.

The main issue is that the webhook injector has a function that tries to deregister services on all Consul agents, but some of those agents are unreachable and their pods are stuck in the Terminating state. After force-deleting those agent pods, the webhook started to behave as expected.

A remedy could be to filter out agent pods whose containers are not ready.
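
A minimal sketch of such a filter, assuming the controller already has the Consul client agent Pods listed from the Kubernetes API (package and function names are illustrative, not the actual consul-k8s code):

```go
// Package agentfilter sketches the proposed readiness filter.
package agentfilter

import (
	corev1 "k8s.io/api/core/v1"
)

// agentIsReachable reports whether an agent pod looks usable: it is not
// being deleted and at least one of its containers reports Ready.
// Pods stuck in Terminating on a dead node carry a deletion timestamp
// and have no ready containers.
func agentIsReachable(pod corev1.Pod) bool {
	if pod.DeletionTimestamp != nil {
		return false
	}
	for _, status := range pod.Status.ContainerStatuses {
		if status.Ready {
			return true
		}
	}
	return false
}

// filterReachableAgents keeps only the agent pods worth contacting for
// service (de)registration.
func filterReachableAgents(pods []corev1.Pod) []corev1.Pod {
	reachable := make([]corev1.Pod, 0, len(pods))
	for _, pod := range pods {
		if agentIsReachable(pod) {
			reachable = append(reachable, pod)
		}
	}
	return reachable
}
```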

Reproduction Steps

  1. Kill a worker node ungracefully.
  2. Consul Connect-injected pods will hang on their init container.

Quick fix

  1. Force-delete the Consul agent pod that is stuck in the Terminating phase (see the example below): kubectl delete pod consul-agent-example --force --wait=false
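
For example (namespace and pod names below are placeholders):

```shell
# Find the Consul client agent pod stuck in Terminating on the dead node.
kubectl get pods -n <consul-namespace> -o wide | grep Terminating

# Force-delete it without waiting for graceful termination.
kubectl delete pod consul-agent-example -n <consul-namespace> --force --wait=false
```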

Logs

2021-10-12T08:38:47.996Z	ERROR	controller.endpoints	failed to deregister endpoints on all agents	{"name": "prometheus-node-exporter", "ns": "system-monitoring", "error": "Get \"http://10.121.0.107:8500/v1/agent/services?filter=Meta%5B%22k8s-service-name%22%5D+%3D%3D+%22prometheus-node-exporter%22+and+Meta%5B%22k8s-namespace%22%5D+%3D%3D+%22system-monitoring%22+and+Meta%5B%22managed-by%22%5D+%3D%3D+%22consul-k8s-endpoints-controller%22\": dial tcp 10.121.0.107:8500: i/o timeout"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/home/kohy/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:298
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/home/kohy/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/home/kohy/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:214

Expected behavior

Environment details

  • k8s version: 1.20.11
  • bare metal
  • Calico
TomasKohout added the type/bug label on Oct 12, 2021
@kschoche (Contributor) commented:

Hi @TomasKohout ! Thanks for filing this issue.
Would you be able to provide more information on reproducing this? I'm a little confused about the approach, because you mentioned deleting a Pod, but that Pod would not exist if you'd power-cycled the node it was on.
Could you clarify which pods/nodes you restarted and what their configuration was?
Thanks!


TomasKohout commented Oct 19, 2021

@kschoche sorry for the late reply. 🙂

The issue is that if a node is removed ungracefully, pods on that node will appear as Running for a short period (the node lease plus the not-ready toleration) and then switch to Terminating.

The problem is that etcd still contains those pods even when they are in the Terminating phase, and the webhook tries to de/register against all Consul agent pods; the agent on the killed node is no longer reachable, so the injector gets stuck.

Once you force-delete that dead Consul agent pod, the webhook injector starts working again.

I think I mixed up the reproduction steps and the quick-fix steps. Sorry for that; I've updated the reproduction steps.

TomasKohout changed the title from "Consul webhook injector is not able to register services if one (or many) of worker nodes is down." to "Consul webhook injector is not able to register services if one (or many) of worker nodes are down." on Nov 3, 2021