DualStack apiserver support #2438
/sig network
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
This is still very much desired ;-)
/remove-lifecycle stale
/milestone v1.23
/stage beta
Hi @danwinship! 1.23 Enhancements team here. Just checking in as we approach enhancements freeze at 11:59pm PST on Thursday 09/09. Here's where this enhancement currently stands:
For this one, looks like we'll need the kep.yaml updated to reflect the current stage and latest milestone. It also looks like you'll still need to complete a PRR. Thanks!
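For anyone following along, the kep.yaml fields being discussed look roughly like this (an illustrative sketch of the KEP metadata template; the exact values shown here are examples of an alpha target in 1.23, not a statement of what was merged):

```yaml
# kep.yaml (illustrative excerpt)
# "stage" and "latest-milestone" are the fields the Enhancements team
# asks to have updated each cycle; values below are examples only.
stage: alpha
latest-milestone: "v1.23"
milestone:
  alpha: "v1.23"
```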
@salaxander sorry, the initial description hadn't been updated in a while. This did not go alpha in 1.22 and thus is not scheduled to go beta in 1.23 (but it should hopefully be alpha, and I think we are good there, because you don't need a completed PRR to go to alpha).
@danwinship sounds good! Then we're all good once the KEP update merges.
If you want to hit 1.23 you need a PRR soon. Should be pretty simple.
/stage alpha
Hi, 1.23 Enhancements Lead here 👋. With enhancements freeze now in effect, this enhancement has not met the criteria for the freeze and has been removed from the milestone.
Feel free to file an exception to add this back to the release. If you plan to do so, please file this as early as possible. Thanks!
Hi @danwinship, 1.24 Enhancements Team here. With Code Freeze approaching at 18:00 PDT on Tuesday, March 29th 2022, the enhancement status is: (update)
Hi @danwinship and @thockin 👋 1.24 Release Comms team here. We have an opt-in process for the feature blog delivery. If you would like to publish a feature blog for this issue in this cycle, then please opt in on this tracking sheet. The deadline for submissions and the feature blog freeze is scheduled for 01:00 UTC Wednesday 23rd March 2022 / 18:00 PDT Tuesday 22nd March 2022. Other important dates for delivery and review are listed here: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24#timeline. For reference, here is the blog for 1.23. Please feel free to reach out any time to me or on the #release-comms channel with questions or comments. Thanks!
Hi, 1.24 Enhancements Lead here 👋. With code freeze now in effect, this enhancement has not met the criteria for the freeze and has been removed from the milestone. As a reminder, the criterion for code freeze is: all PRs to the kubernetes/kubernetes repo have merged by the code freeze deadline. Thanks!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
No progress for 1.26
/lifecycle stale
/remove-lifecycle stale
(from the KEP)
I like the idea of an EndpointSlice per API server and recommending that as the discovery mechanism. There can still be a reconciler making a best effort to update a single-stack Endpoints based on those EndpointSlices; legacy support is important. The individual EndpointSlices then each have their own metadata, which allows things like (say) annotating one of the EndpointSlices to record self-observations, or labelling the EndpointSlice with API server identity (cf https://kubernetes.io/docs/concepts/architecture/leases/#api-server-identity). It shouldn't be a scaling issue as few clusters have more than 9 API servers.
Another benefit, potentially: If I (mis)configure two API servers to have the same IPv4 address, a mechanism with a single EndpointSlice per address family leads both API servers to conclude that their identity is reconciled. With an EndpointSlice per API server, the conflict allows all API servers in the cluster to spot the clash and report this via metrics.
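To make the per-apiserver layout concrete, here is a hypothetical sketch of what two such slices for one API server could look like. This is not from the KEP: the names, labels, and addresses are invented; only the `discovery.k8s.io/v1` EndpointSlice field names (`addressType`, `endpoints`, `ports`) are real API.

```yaml
# Hypothetical sketch: one EndpointSlice per API server per address
# family. Object names, the identity label, and addresses are invented.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: kubernetes-apiserver-a-ipv4   # invented name
  namespace: default
  labels:
    kubernetes.io/service-name: kubernetes
addressType: IPv4
endpoints:
  - addresses: ["192.0.2.10"]
ports:
  - name: https
    port: 6443
    protocol: TCP
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: kubernetes-apiserver-a-ipv6   # invented name
  namespace: default
  labels:
    kubernetes.io/service-name: kubernetes
addressType: IPv6
endpoints:
  - addresses: ["2001:db8::10"]
ports:
  - name: https
    port: 6443
    protocol: TCP
```

Under this layout, a legacy reconciler could still aggregate the IPv4 slices into the single-stack `kubernetes` Endpoints object for older clients, while each slice's own metadata carries per-apiserver identity.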
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Anyone touching this in 1.29? @danwinship
Dual-stack apiserver support
- (k/enhancements) update PR(s):
- (k/k) update PR(s):
- (k/website) update PR(s): WIP dual-stack apiserver documentation website#32034

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.