
Defining a custom loadBalancerSourceRanges in an AWS NLB service is not respected #57212

Closed
aledbf opened this issue Dec 14, 2017 · 24 comments · Fixed by #74692
Labels: kind/bug (Categorizes issue or PR as related to a bug.)

Comments

@aledbf
Member

aledbf commented Dec 14, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Creating an NLB service with a custom loadBalancerSourceRanges does not respect the specified ranges.

What you expected to happen:

The security group rules should be created with the defined ranges, not 0.0.0.0/0.
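
For reference, a minimal Service manifest that reproduces this would look roughly like the following (the name, selector, and CIDR are illustrative, not taken from the original report):

apiVersion: v1
kind: Service
metadata:
  name: example-nlb                 # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24                  # expected to be the only CIDR allowed to reach the nodePorts
  selector:
    app: example                    # illustrative selector
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http

With the current controller, the nodes' security group still ends up with a 0.0.0.0/0 ingress rule on the allocated nodePort rather than one restricted to 203.0.113.0/24.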

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 14, 2017
@aledbf
Member Author

aledbf commented Dec 14, 2017

/assign @micahhausler

@k8s-ci-robot
Contributor

@aledbf: GitHub didn't allow me to assign the following users: micahhausler.

Note that only kubernetes members can be assigned.

In response to this:

/assign @micahhausler

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Dec 14, 2017
@aledbf
Member Author

aledbf commented Dec 14, 2017

/sig aws

@aledbf
Member Author

aledbf commented Dec 14, 2017

ping @micahhausler

@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Dec 14, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2018
@aledbf
Member Author

aledbf commented Mar 14, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2018
@micahhausler
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 13, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 11, 2018
@2rs2ts
Contributor

2rs2ts commented Sep 14, 2018

As a general rule in AWS, NLBs don't have security groups.

@micahhausler
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 14, 2018
@2rs2ts
Contributor

2rs2ts commented Sep 14, 2018

If you wanted to accomplish this with the NLB, k8s would have to manipulate the security groups of the nodes themselves.

@micahhausler
Member

micahhausler commented Sep 14, 2018

That is what the controller does right now: it edits the security groups of the nodes.

@2rs2ts
Contributor

2rs2ts commented Sep 14, 2018

Do you mean that's how it handles the classic ELB LoadBalancer Service types? Because it changes the rules on the SGs on the ELB for that. Or do you mean something else? (Forgive my ignorance, I'm merely trying to be helpful)

@micahhausler
Member

You are right, NLBs do not have Security Groups. The current NLB controller opens up the nodePort on the nodes' security group to 0.0.0.0/0 in order to allow traffic. When a user defines loadBalancerSourceRanges, it should respect the IP CIDRs they specify.

Classic ELBs do have security groups; that is not covered in this issue.

@2rs2ts
Contributor

2rs2ts commented Sep 14, 2018

gotcha, thanks, sorry for the distraction

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 13, 2018
@jrnt30

jrnt30 commented Jan 10, 2019

tl;dr - The current controller doesn't seem to reconcile updates to loadBalancerSourceRanges properly for existing target groups; however, it does seem to do the right thing when a new target group is created in conjunction with a change.

Initial Deployment

Our initial deployment did not include any restrictions via loadBalancerSourceRanges.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePorts for http and https created and unrestricted (0.0.0.0/0)

Update - Add Net New loadBalancerSourceRanges

We then wanted to lock down our ingress, so we simply added loadBalancerSourceRanges to the spec, as sketched below. This did not result in any changes to the Security Group for the nodes at all.
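
The change amounted to adding the field to the existing spec, roughly as follows (the CIDR is a placeholder, written in the same masked form the logs further down use):

spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerSourceRanges:
  - A.A.A.A/32        # placeholder CIDR, masked the same way as in the logs below
  # ports, selector, and annotations unchanged from the initial deployment above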

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePorts for http and https created and unrestricted (0.0.0.0/0)

Forced Update - Adjust ports by removing nodePort

We tried "tricking" the Controller into doing additional work by removing the nodePort attribute (sketched below). This does result in a meaningful change being propagated to the Security Group; however, it also means that a new Target Group is created, which requires the health checks to initialize and results in traffic being dropped until they pass.
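
Roughly, the edit looked like this (the nodePort value shown is the one visible in the logs below and is only illustrative):

  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    # nodePort: 32063   <- deleting this previously-allocated value is what forces a new
    #                      nodePort (and therefore a new Target Group) to be created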

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • New nodePort for the modified port has explicit ingress rules for the IPs listed in loadBalancerSourceRanges
  • Original nodePorts *still have* the unrestricted 0.0.0.0/0 rule present

Update - Append new IP to loadBalancerSourceRanges

Adding another IP to the list did not result in any changes.

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePort has ingress for only the initial IP that was defined in the previous step.
  • New IP was not added to the Security Group ingress rules

Update - Remove all IPs

Finally, we tried deleting loadBalancerSourceRanges altogether. This resulted in no changes to the rules.

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePort has ingress for only the initial IP that was defined in the previous step.
  • No rule created for 0.0.0.0/0

Logs

There are some log statements here that also indicate issues. Specifically, it seems the assessment of additions/removals is incorrect when adjustments to loadBalancerSourceRanges occur.

Initial Creation

These seemed fine.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:26:48.251487       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15560409", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.213824       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.214234       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 32063} {https TCP 443 {1 0 https} 31994}], map[service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.215605       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15561167", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer

Update with new IP logs

No updates to the SG to remove the 0.0.0.0/0 rules, and no introduction of our new rule.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.215633       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15561167", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [] -> [A.A.A.A/32]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.463376       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.463417       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.463534       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:45.965417       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:45.965631       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15561167", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Removal of node port - Force of new ingress

Looked pretty good

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.502166       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.502241       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 31911} {https TCP 443 {1 0 https} 31503}], map[service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.503075       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15562183", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.724077       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.724113       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.724123       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514453       1 aws_loadbalancer.go:650] Adding rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514496       1 aws_loadbalancer.go:651] Adding rule for client traffic from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514509       1 aws_loadbalancer.go:650] Adding rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514517       1 aws_loadbalancer.go:651] Adding rule for client traffic from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514529       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514539       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514549       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514610       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514623       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514632       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.577979       1 aws.go:2791] Existing security group ingress: sg-026f781d751fc164b [ {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 32063,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 32063
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "-1",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   UserIdGroupPairs: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       GroupId: "sg-026f781d751fc164b",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       UserId: "182258455885"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       GroupId: "sg-055ab035be8ef7fe4",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       UserId: "182258455885"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 22,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "A.A.A.A/32"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "B.B.B.B/32"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "10.0.0.0/16"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "C.C.C.C/32"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 22
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31994,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31994
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 30645,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "10.0.0.0/16",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/health=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 30645
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } ]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.578273       1 aws.go:2819] Adding security group ingress: sg-026f781d751fc164b [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31911,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "A.A.A.A/32",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31911
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31503,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "A.A.A.A/32",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31503
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager }]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager W0110 15:38:38.838869       1 aws_loadbalancer.go:725] Revoking ingress was not needed; concurrent change? groupId=sg-026f781d751fc164b
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.861796       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.862571       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15562183", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Appending of new IP

Looks a bit strange: the logs indicate it's going to remove the existing IP (which should remain, since we just appended another item) when it should be adding the new rule as well.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.740474       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.740542       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 31911} {https TCP 443 {1 0 https} 31503}], map[service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.740978       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563248", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [A.A.A.A/32] -> [A.A.A.A/32 D.D.D.D/32]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.741006       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563248", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.987059       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.987106       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.987118       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240474       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240516       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240527       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240632       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240645       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240655       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager W0110 15:45:50.302613       1 aws_loadbalancer.go:725] Revoking ingress was not needed; concurrent change? groupId=sg-026f781d751fc164b
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.324063       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.324168       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563248", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Removal of an IP (one that was never propagated to the SG in the first place)

This seems to indicate it would be deleting the incorrect IP; however, nothing actually occurred because no rule was ever created for IP #2.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.096429       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.096513       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 31911} {https TCP 443 {1 0 https} 31503}], map[service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.097251       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563772", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [A.A.A.A/32 D.D.D.D/32] -> [A.A.A.A/32]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.097395       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563772", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.353097       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.353137       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.353147       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587503       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587540       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587553       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587566       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587630       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587643       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager W0110 15:49:22.659825       1 aws_loadbalancer.go:725] Revoking ingress was not needed; concurrent change? groupId=sg-026f781d751fc164b
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.703219       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.703326       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563772", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Removal of ALL IPs from loadBalancerSourceRanges

This indicates a REMOVAL of 0.0.0.0/0 instead of a removal of the existing IPs and the CREATION of an ingress rule on 0.0.0.0/0.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:14.947751       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15564496", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [A.A.A.A/32] -> []
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:14.948198       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15564496", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:15.188303       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:15.188344       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:15.188513       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250192       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([0.0.0.0/0]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250232       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250244       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250256       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([0.0.0.0/0]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250333       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250346       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.318052       1 aws.go:2879] Removing security group ingress: sg-026f781d751fc164b [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31994,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31994
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 32063,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 32063
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager }]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.592089       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.592838       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15564496", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

@jrnt30

jrnt30 commented Jan 15, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 15, 2019
@M00nF1sh
Contributor

I'm working on a fix for this; it should be available around next week.

@spiffxp
Member

spiffxp commented Feb 24, 2019

/assign @M00nF1sh

@tewing-riffyn

Hi @M00nF1sh, thank you for the PR. Do you know when it's expected to be merged and available to test?

I'm anxiously awaiting this merge so we can use Elastic IPs to offer static addresses to our web clients. Currently we use ELBs along with an annotation that assigns an additional handcrafted security group containing our whitelist. The update in this ticket will allow us to trust that loadBalancerSourceRanges in our Helm values are propagated to the node SG.
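
(Assuming the annotation referred to here is the in-tree extra-security-groups one, that interim workaround looks roughly like the following; the security group ID is a placeholder:)

metadata:
  annotations:
    # attach an additional, manually maintained SG containing the whitelist to the classic ELB
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-0123456789abcdef0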

I'm wondering what happens if 5 NLBs point to the same worker nodes, each using 2 ports, with each NLB Service containing 15 loadBalancerSourceRanges. That would make 5 x 2 x 15 = 150 rules in a single SG. A single SG can't contain them all, and I assume it will fail to add the SG rules beyond the maximum. I'm curious whether the failure would be silent or whether failing to add all of the SG rules would cause the Service creation to fail.

AWS documentation says I can get the number of rules per SG increased (default 60), and I can increase or decrease the maximum number of SGs allowed to be attached to a network interface, but the "security groups per network interface" limit multiplied by the "rules per security group" limit can't exceed the hard limit of 1000.
https://aws.amazon.com/premiumsupport/knowledge-center/increase-security-group-rule-limit/

This limit applies across the entire AWS account, so I need to consider other non-k8s deployments in our AWS account that rely on using 5 attached SGs. This tells me I can ask AWS to increase the number of rules per SG to 200 without decreasing the 5 SGs per interface we use elsewhere.

Thank you,
-Terry

@tewing-riffyn

@jrnt30 - Your Jan 10 post in this issue should win an award.

@joshimhoff

+1 to @tewing-riffyn's question about when @M00nF1sh's PR will be ready for use in EKS.
