Dual-stack: make nodeipam compatible with existing single-stack clusters when the dual-stack feature gate becomes enabled by default #90439
Conversation
Thanks for the PR, but my personal opinion is to postpone all dual-stack patches that are not addressing specific bugs until the KEP is resolved. This is addressing a bug in the upgrade process, but that's exactly one of the points being discussed.
/priority important-soon
Force-pushed from a1d9481 to 1ced4a1
The code needs a little more reorganizing than this. It was wrong to have separate `setNodeCIDRMaskSizes` / `setNodeCIDRMaskSizesDualStack` behavior based on whether the feature gate is set. The distinction in behavior should be based solely on whether the configuration is single-stack or dual-stack.

So looking at `serviceCIDR` / `secondaryServiceCIDR` is right, except I think it would make more sense to look at `clusterCIDRs` instead. (In theory either both should be single-stack or both should be dual-stack, so it doesn't matter; but the node CIDRs are allocated out of the cluster CIDRs, so logically it makes more sense to be looking at that.)
For backward compatibility (both with the behavior before the dual-stack code was added, and with the behavior after it was added but before now), the behavior needs to be:

- if `clusterCIDRs` contains only an IPv4 CIDR, then:
  - you can pass `--node-cidr-mask-size` regardless of the feature gate setting, without getting any warnings, and that will set `nodeCIDRMaskSizeIPv4`
  - you can pass `--node-cidr-mask-size-ipv4` at least if the feature gate is enabled, and that will set `nodeCIDRMaskSizeIPv4`. (It would be fine to also allow `--node-cidr-mask-size-ipv4` when the feature gate is not enabled...)
  - you get an error if you pass `--node-cidr-mask-size-ipv6`, because there's no IPv6 cluster CIDR
- if `clusterCIDRs` contains only an IPv6 CIDR, then exactly as above with "4" and "6" swapped
- if `clusterCIDRs` contains both an IPv4 CIDR and an IPv6 CIDR, then you have to pass `--node-cidr-mask-size-ipv4` and `--node-cidr-mask-size-ipv6`, and it's an error to pass `--node-cidr-mask-size`.
Force-pushed from 1ced4a1 to db71cd4
/test pull-kubernetes-e2e-kind-ipv6
Force-pushed from db71cd4 to e518935
/test pull-kubernetes-node-e2e
Hi @aojea @danwinship, is it OK with you now?
so actually at this point maybe it would make sense to merge `setNodeCIDRMaskSizes` and `getNodeCIDRMaskSizes` into a single function?

So when I said that, I was thinking that you could just do a single loop over `clusterCIDRs` and pick the right flag to use to assign the `nodeMaskCIDRs` value for each one, and it would simplify the code. But I guess that doesn't really work, and the previous organization, where `getNodeCIDRMaskSizes` does the error checking and `setNodeCIDRMaskSizes` does the assignment, is probably cleaner.
I'm going to approve, but I want @khenidak to decide if this is too big of a rebase pain for his mega changes that are coming.

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: SataQiu, thockin. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
This doesn't touch Services at all; it's just about …
…ers when dual-stack feature gate become enabled by default Signed-off-by: SataQiu <1527062125@qq.com>
Force-pushed from b50500b to ec1efc3
```diff
@@ -156,13 +156,13 @@ func startNodeIpamController(ctx ControllerContext) (http.Handler, bool, error) {
 	}

 	var nodeCIDRMaskSizeIPv4, nodeCIDRMaskSizeIPv6 int
-	if utilfeature.DefaultFeatureGate.Enabled(features.IPv6DualStack) {
+	if dualStack {
```
`dualStack` will be `true` when `--cluster-cidr` is configured as dual-stack (at least one CIDR from each IP family), regardless of whether the `IPv6DualStack` feature gate is enabled. So even if the `IPv6DualStack` feature gate is enabled by default, if the user does not set `--cluster-cidr` as dual-stack, we will not enter this `if` branch, and `--node-cidr-mask-size` remains valid as before.
oh, well, that's simple. So this version means that if there were users who were using nodeipam, and who enabled …

/hold cancel
/retest

Review the full test history for this PR. Silence the bot with an …
Hi @danwinship, this PR has been waiting for a very long time; can we merge it now?
@dims Can you help set the milestone label for the PR?
@SataQiu did something happen to this PR? I can't see most of the changes 🤔 — is it my browser?
/test pull-kubernetes-e2e-kind-ipv6
no, it just turned out that the final fix was much simpler than the intermediate versions; the old (current git) code does "if the …

In theory we could also allow using …
👍 thanks
/retest

2 similar comments
What type of PR is this?
/kind bug
What this PR does / why we need it:
Dual-stack: make nodeipam compatible with existing single-stack clusters when the dual-stack feature gate becomes enabled by default.
Which issue(s) this PR fixes:
Fixes #90251
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: