NodePort service for hostNetwork pod fails when both Ready and Terminating pods are present #114440
@jason-i-vv: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig network
/wg multitenancy
/retitle NodePort service for hostNetwork pod fails when both Ready and Terminating pods are present

The API docs say "No address will appear in both Addresses and NotReadyAddresses in the same subset", but I think that means "no completely identical EndpointAddress", not "no IP". Also, kube-proxy doesn't actually operate on the Endpoints any more anyway; it operates on EndpointSlices, which are slightly different, and the distinction there is which of the endpoint's conditions (Ready, Serving, Terminating) are set. I think we probably do want to indicate that there are simultaneous Ready and Terminating endpoints for the service on the same node, but the proxy code needs to be more careful to interpret this situation correctly... /cc @robscott
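The interpretation described above can be sketched in Go. This is a minimal illustration using a hypothetical `Endpoint` struct and `selectEndpoints` helper, not kube-proxy's actual code path: prefer ready endpoints, and fall back to serving-but-terminating endpoints only when nothing is ready.

```go
package main

import "fmt"

// Endpoint is a hypothetical simplification of a discovery.v1 EndpointSlice
// endpoint and its three conditions.
type Endpoint struct {
	Addr        string
	Ready       bool
	Serving     bool
	Terminating bool
}

// selectEndpoints prefers ready endpoints; only if none are ready does it
// fall back to endpoints that are still serving while terminating.
func selectEndpoints(eps []Endpoint) []Endpoint {
	var ready, fallback []Endpoint
	for _, ep := range eps {
		switch {
		case ep.Ready:
			ready = append(ready, ep)
		case ep.Serving && ep.Terminating:
			fallback = append(fallback, ep)
		}
	}
	if len(ready) > 0 {
		return ready
	}
	return fallback
}

func main() {
	// Same node IP appears twice: once ready, once terminating.
	eps := []Endpoint{
		{Addr: "10.0.0.1", Ready: true},
		{Addr: "10.0.0.1", Serving: true, Terminating: true},
	}
	fmt.Println(len(selectEndpoints(eps))) // 1: the ready endpoint wins
}
```

With this rule, a terminating duplicate of the same IP never hides the ready endpoint, which is the careful interpretation the comment asks for.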
I'm not sure I'm completely understanding this, but it sounds like we may need to update the kube-proxy code that handles the case where more than one endpoint exists for the same IP: kubernetes/pkg/proxy/endpointslicecache.go, lines 312 to 317 in 53906cb.
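Handling duplicates for the same IP amounts to a keyed deduplication. Here is a hedged sketch using a hypothetical `dedupeByAddr` helper, not the real endpointslicecache.go logic: keep one endpoint per address, letting a ready entry win over a not-ready duplicate.

```go
package main

import "fmt"

// Endpoint is a hypothetical simplification of an EndpointSlice endpoint.
type Endpoint struct {
	Addr  string
	Ready bool
}

// dedupeByAddr keeps one endpoint per IP, preferring a ready entry over a
// not-ready one when duplicates exist, while preserving first-seen order.
func dedupeByAddr(eps []Endpoint) []Endpoint {
	index := map[string]int{} // addr -> position in out
	var out []Endpoint
	for _, ep := range eps {
		i, seen := index[ep.Addr]
		if !seen {
			index[ep.Addr] = len(out)
			out = append(out, ep)
			continue
		}
		if ep.Ready && !out[i].Ready {
			out[i] = ep // ready entry wins over a stale not-ready duplicate
		}
	}
	return out
}

func main() {
	eps := []Endpoint{
		{Addr: "192.168.1.10", Ready: false}, // terminating pod on the node IP
		{Addr: "192.168.1.10", Ready: true},  // healthy pod on the same node IP
	}
	fmt.Println(dedupeByAddr(eps)) // [{192.168.1.10 true}]
}
```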
A reproducer would be very useful. It seems like an issue with the endpoints controller, no? There is an e2e test that forces a disruption which can serve as a basis for the reproducer: kubernetes/test/e2e/network/service.go, lines 2387 to 2394 in 9edd4d8.
As far as I know, kube-proxy checks the EndpointSlice conditions, so this issue could be solved by updating either the endpoints controller or the endpointslice controller. Either way, the endpoint's condition would end up Ready when both Ready and Terminating pods are present. My first approach is to update the endpoints controller in my PR. Do you suggest some other way?
This test was written to expect the duplicate IP in both the ready and not-ready sets, so I am not sure my idea of removing the duplicate IP from the not-ready set is correct.
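The removal being proposed is a simple set difference. A minimal sketch, using a hypothetical `pruneNotReady` helper rather than the actual controller change:

```go
package main

import "fmt"

// pruneNotReady removes from notReady any IP that also appears in ready, so
// that a valid address is never masked by a stale not-ready duplicate.
func pruneNotReady(ready, notReady []string) []string {
	readySet := make(map[string]bool, len(ready))
	for _, ip := range ready {
		readySet[ip] = true
	}
	out := []string{}
	for _, ip := range notReady {
		if !readySet[ip] {
			out = append(out, ip)
		}
	}
	return out
}

func main() {
	fmt.Println(pruneNotReady(
		[]string{"192.168.1.10"},
		[]string{"192.168.1.10", "192.168.1.11"},
	)) // [192.168.1.11]
}
```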
Which version are you using? I think the problem here is that the endpoints object is retaining the PodIP of the evicted pod, and that is wrong. This was fixed by #110255; please check that you are running a version with that patch included. It seems you are running 1.22.4, and this was fixed in v1.22.10, so you should update to the latest stable version.
Got it, I will check it.
Any update here?
Still checking on release-1.22.
@thockin I checked the stable version release-1.22, and this problem was indeed solved. Sorry for taking so long.
Thanks for following up!
What happened?
We had a NodePort service using port 30002, forwarding to a hostNetwork pod listening on port 8002. Now it is broken: we can't access nodeIp:30002, but nodeIp:8002 works. As you can see, 30002 doesn't have any target, and the same IP address appears in both Addresses and NotReadyAddresses. There are some invalid pods and one valid pod, which is why the endpoints object looks like this.
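The masking can be illustrated with a hypothetical `naiveTargets` helper, assuming a consumer that disqualifies any IP listed as not ready (illustration only, not kube-proxy's code):

```go
package main

import "fmt"

// naiveTargets drops every address that also appears in NotReadyAddresses.
// Under this naive rule, an IP listed in both sets is masked entirely, and
// the service ends up with no backend at all: the symptom reported above.
func naiveTargets(addresses, notReady []string) []string {
	bad := make(map[string]bool, len(notReady))
	for _, ip := range notReady {
		bad[ip] = true
	}
	out := []string{}
	for _, ip := range addresses {
		if !bad[ip] {
			out = append(out, ip)
		}
	}
	return out
}

func main() {
	// The node IP appears in both sets because ready and terminating
	// hostNetwork pods share the same address.
	fmt.Println(naiveTargets([]string{"192.168.1.10"}, []string{"192.168.1.10"})) // []
}
```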
What did you expect to happen?
If an IP address is in both readyAddrs and notReadyAddrs, it should be removed from notReadyAddrs to make sure that valid IP addresses are not masked by invalid ones. Then the URL http://nodeIP:30002 we call will always be accessible whether or not the deployment has invalid pods.

How can we reproduce it (as minimally and precisely as possible)?

The pods are BestEffort, so they will be evicted first.

Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version