1.13.1 kube-proxy ipvs don't work when pod ip is changed #72270
Comments
/sig network
I have the same problem. Is there a known fix?
Might be related to #71071; if so, v1.13.2 has the fix.
Yes, there is definitely a problem with
One better: if this still exists, please reopen.
Having the same problem in v1.13.8; logs:
The logs don't indicate a problem; this is expected behavior with graceful termination.
When updating a deployment, requests to the pods through the service IP time out on some nodes. After restarting kube-proxy on those nodes, the timeouts disappear.
What happened:
In Kubernetes 1.13.1, when a pod changes, the kube-proxy log (info level) shows the LVS real-server IP being updated, but the output of 'ipvsadm -Ln' is not updated correctly. Only some nodes hit this, and only sometimes.
On the affected nodes, the kube-proxy log (info level) is missing the lines:
graceful_termination.go:160] Trying to delete rs xxxxxx
and
graceful_termination.go:173] Deleting rs xxxxx
What you expected to happen:
The NodePort on the corresponding node should remain accessible after the pod change.
How to reproduce it (as minimally and precisely as possible):
In version 1.13.1, update a deployment, then run
ipvsadm -Ln
on each node and check the NodePort entries.
Anything else we need to know?:
Environment:
Kubernetes version: 1.13.1
/kind bug