Load balancer security group not being created with IPv6 ingress rules #887

Closed
orrc opened this issue Mar 7, 2019 · 5 comments · Fixed by #991
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


orrc commented Mar 7, 2019

When creating an ALB ingress with the following annotations, almost everything appears to be set up correctly: an internet-facing, dualstack ALB is created, along with the correct target groups.

However, the associated security group (labelled "managed LoadBalancer securityGroup by ALB Ingress Controller") is created with only two rules, one for each port (80 and 443), each with the IPv4 CIDR 0.0.0.0/0. IPv6 clients are unable to connect unless I manually add a ::/0 rule for each port to the security group.

kubernetes.io/ingress.class: alb

alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/ip-address-type: dualstack
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'

alb.ingress.kubernetes.io/actions.redirect-to-https: '{"Type": "redirect", "RedirectConfig":{"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'

alb.ingress.kubernetes.io/target-type: ip

alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<snip>

I see that the list of CIDRs added to the security group comes from the alb.ingress.kubernetes.io/inbound-cidrs annotation and defaults to 0.0.0.0/0, but when I try to add ::/0 for IPv6 to this list, it's rejected.
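For reference, this is the kind of value I'm trying, which gets rejected (assuming the comma-separated CIDR list format the annotation takes):

alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0, ::/0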

Am I missing something in my configuration to allow IPv6 to work when the ingress controller creates ALBs?
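For completeness, a minimal sketch of the Ingress these annotations sit on (the backend service name is a placeholder, and this uses the extensions/v1beta1 API current at the time):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ip-address-type: dualstack
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-service   # placeholder backend
              servicePort: 80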

Logs when creating a new ingress
09:19:55.054314       1 association.go:224] kube-system/test-ingress: creating securityGroup 1173f105-kubesystem-testin-8be6:managed LoadBalancer securityGroup by ALB Ingress Controller
09:19:55.170031       1 tags.go:69] kube-system/test-ingress: modifying tags {  kubernetes.io/namespace: "kube-system",  kubernetes.io/ingress-name: "test-ingress",  kubernetes.io/cluster-name: "dev"} on sg-0f1dc3f17f3cdc32c
09:19:55.275894       1 security_group.go:50] kube-system/test-ingress: granting inbound permissions to securityGroup sg-0f1dc3f17f3cdc32c: [{    FromPort: 80,    IpProtocol: "tcp",    IpRanges: [{        CidrIp: "0.0.0.0/0",        Description: "Allow ingress on port 80 from 0.0.0.0/0"      }],    ToPort: 80  },{    FromPort: 443,    IpProtocol: "tcp",    IpRanges: [{        CidrIp: "0.0.0.0/0",        Description: "Allow ingress on port 443 from 0.0.0.0/0"      }],    ToPort: 443  }]
09:19:55.457910       1 lb_attachment.go:30] kube-system/test-ingress: modify securityGroup on LoadBalancer arn:aws:elasticloadbalancing:eu-west-1:<snip>:loadbalancer/app/1173f105-kubesystem-testin-8be6/3fb6225cee1a3f32 to be [sg-0f1dc3f17f3cdc32c]
09:19:55.682808       1 association.go:224] kube-system/test-ingress: creating securityGroup instance-1173f105-kubesystem-testin-8be6:managed instance securityGroup by ALB Ingress Controller
09:19:55.767053       1 tags.go:69] kube-system/test-ingress: modifying tags {  kubernetes.io/cluster-name: "dev",  kubernetes.io/namespace: "kube-system",  kubernetes.io/ingress-name: "test-ingress"} on sg-0a69ef0b7210dd33f
09:19:55.870881       1 security_group.go:50] kube-system/test-ingress: granting inbound permissions to securityGroup sg-0a69ef0b7210dd33f: [{    FromPort: 0,    IpProtocol: "tcp",    ToPort: 65535,    UserIdGroupPairs: [{        GroupId: "sg-0f1dc3f17f3cdc32c"      }]  }]
09:19:56.188585       1 instance_attachment.go:87] kube-system/test-ingress: attaching securityGroup sg-0a69ef0b7210dd33f to ENI eni-03fffdea715ad30c3
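For comparison, here is a sketch (in YAML, following the EC2 IpPermissions shape visible in the log above) of what each granted permission would need to contain for IPv6 clients to be allowed; the Ipv6Ranges entry is the part that's missing:

- FromPort: 80
  ToPort: 80
  IpProtocol: tcp
  IpRanges:
    - CidrIp: "0.0.0.0/0"
  Ipv6Ranges:
    - CidrIpv6: "::/0"   # the rule the controller never adds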

M00nF1sh (Collaborator) commented Mar 8, 2019

Hi,
This is a missing gap in IPv6 support.
/kind bug
As a temporary mitigation, you can do the following (sketched below):

  1. Manually create a security group sg-xxxx that allows both IPv4 and IPv6 traffic.
  2. Modify the node security groups to allow inbound traffic from sg-xxxx.
  3. Add the annotation alb.ingress.kubernetes.io/security-groups: sg-xxxx to the ingress. (This tells the controller to use that SG for the LB instead of creating a new one.)
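For example, step 1 sketched as a CloudFormation resource (equivalent rules can be created in the EC2 console or via the AWS CLI; the resource name and VPC ID are placeholders), followed by the step 3 annotation:

AlbAccessSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow IPv4 and IPv6 ingress to the ALB
    VpcId: vpc-xxxx    # placeholder: your cluster's VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: "0.0.0.0/0"
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIpv6: "::/0"
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: "0.0.0.0/0"
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIpv6: "::/0"

# On the Ingress (step 3):
metadata:
  annotations:
    alb.ingress.kubernetes.io/security-groups: sg-xxxx   # the SG created above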

@k8s-ci-robot k8s-ci-robot added the kind/bug label Mar 8, 2019
fejta-bot commented Jun 6, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jun 6, 2019
fejta-bot commented Jul 7, 2019

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jul 7, 2019
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
