Support spec.externalIPs and NodePort services #11
@bootc Thanks for your comment. Now k8s adds IPVS rules for
The problem I ran into is that on nodes that are not running pods, the service is unreachable. The rules inserted by this controller work around that.
@bootc I tested with the following service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-ipv4
  namespace: default
spec:
  ...
  externalTrafficPolicy: Local
  ipFamily: IPv4
  externalIPs:
  - 192.168.0.54
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.0.53
```

And I checked the IPVS rules on the node that doesn't have a my-nginx pod. Below are the IPVS rules.
As you can see, IPVS has a load-balancing rule for the 192.168.0.54 address on that node. I think that if your app can't access `spec.externalIPs`, that's not an IPVS issue.
Interesting. For me, on Kubernetes 1.20, the behaviour is the same for externalIP and LoadBalancer IPs: on nodes with active pods, I only have the pod IPs that are on the local node, and on nodes with no pods I have no entries at all. I believe that in Kubernetes 1.21 this may be resolved by the addition of an
Please show your service info and the result of the command below on the node that doesn't have the pod.
I'm not sure why you don't believe me, but here is the service info on a node that isn't running a pod:

```
# ipvsadm -L -n -t 10.0.255.59:1883
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.255.59:1883 rr
```

And the same on a node running a pod:

```
# ipvsadm -L -n -t 10.0.255.59:1883
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.255.59:1883 rr
  -> 10.42.18.163:1883            Masq    1      0          0
```

This behaviour is identical for services using `spec.externalIPs` and LoadBalancer IPs.
Sorry if you were offended. It wasn't that I didn't believe you; I just wanted to keep track of the relevant history. I think it depends on the Kubernetes (kube-proxy) version. Thanks, I will work on it.
Currently the controller only supports `LoadBalancer` services, and only handles the IP addresses found in the `.status.loadBalancer.ingress` field. The controller should also handle the IP addresses in the `.spec.externalIPs` field, which can also be used to direct traffic into a cluster. Finally, because externalIPs can also be used with NodePort services, I feel that the controller should accept those too.
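To illustrate the request, a NodePort service can carry the same `spec.externalIPs` field while having no `.status.loadBalancer.ingress` entries at all. A minimal sketch (the service name, selector, and addresses are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport   # hypothetical name
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Local
  externalIPs:
  - 192.168.0.55            # hypothetical external address
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    run: my-nginx
```

Under this proposal, the controller would pick up 192.168.0.55 from `.spec.externalIPs` even though the service has no load-balancer status to read.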