fix two targetGroup related bug #1635
Conversation
Let's pls add a comment why port=1 has no impact on the datapath as we override with a target
/lgtm otherwise
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: fawadkhaliq, kishorj, M00nF1sh

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm
…ause downtime The actual issue was fixed in kubernetes-sigs#1635.
* fix two targetGroup related bug
* address PR comments
* address PR comments
…ause downtime (kubernetes-sigs#1658) The actual issue was fixed in kubernetes-sigs#1635.
Changes done
Details:
1. In both AWSALBIngressController and AWSLoadBalancerController (v2.0.0), we used the backendPort as specified as the resourceID (so it can be either a port name or a port number). Ideally, different resourceIDs should result in different TargetGroups.
2. In v2.0.0, even when the model contains two targetGroups, only a single targetGroup will be created since the TG name is the same, thanks to the ELBv2 API's idempotency. This works, but it relies on the ELBv2 API's implementation details and is hacky.
3. An alternative design is to always use the numeric port as the resourceID and, to stay backwards-compatible with backend ports referenced by name, add logic in the targetGroup cleanup to ignore TargetGroup ARNs that are still in use. We decided not to take this approach as it's hacky. (Also, in the future we might add extra logic to compress TargetGroup usage across Ingresses, so there is no need to compress for this edge case now.)
Test done
Pre steps:
compatibility test with AWSALBIngressController:
- echo: http
- echo: 80
- echo-ip: http
- echo-ip: 80

(`echo` is instance TargetType, `echo-ip` is ip TargetType; both `http` and `80` refer to the same servicePort.) The controller added `"elbv2.k8s.aws/cluster":"m00nf1sh-dev"` tags to resources, and kept old resources like LoadBalancer/TargetGroups.

compatibility test with AWSLoadBalancerController:
- echo: http
- echo: 80
- echo-ip: http
- echo-ip: 80

(`echo` is instance TargetType, `echo-ip` is ip TargetType; both `http` and `80` refer to the same servicePort.)