
Add IPv4/IPv6 dual-stack support #563

Closed
leblancd opened this issue Apr 18, 2018 · 181 comments · Fixed by #808 or #3103
Labels
sig/network Categorizes an issue or PR as relevant to SIG Network. stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status
@leblancd

leblancd commented Apr 18, 2018

Feature Description

  • One-line feature description (can be used as a release note):
    IPv4/IPv6 dual-stack support and awareness for Kubernetes pods, nodes, and services
  • Primary contact (assignee): @leblancd
  • Responsible SIGs: sig-network
  • Design proposal link (community repo): Add IPv4/IPv6 dual stack KEP (old)
  • KEP: https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/563-dual-stack
  • Link to e2e and/or unit tests: TBD
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @thockin @dcbw @luxas
  • Approver (likely from SIG/area to which feature belongs): @thockin
  • Feature target (which target equals to which milestone):
    • Alpha release target 1.11
    • Beta release target 1.20
    • Stable release target 1.23

Corresponding kubernetes/kubernetes Issue: kubernetes/kubernetes#62822
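To make "dual-stack awareness for pods, nodes, and services" concrete, here is a rough sketch of where addresses of both families surface in the API. The field names below match what eventually shipped for this KEP (pod `status.podIPs`, node `spec.podCIDRs`, service `spec.clusterIPs`); the addresses themselves are made-up examples.

```yaml
# Hypothetical object fragments showing where dual-stack addresses surface.
# Field names match what eventually shipped; all addresses are examples.
# Pod: one IP per family in status.
status:
  podIPs:
    - ip: 10.244.1.4
    - ip: fd00:10:244:1::4
---
# Node: one pod CIDR per family in spec.
spec:
  podCIDRs:
    - 10.244.1.0/24
    - fd00:10:244:1::/64
---
# Service: one ClusterIP per family.
spec:
  clusterIPs:
    - 10.96.0.10
    - fd00:10:96::a
```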

@leblancd
Author

Cross Reference with kubernetes/kubernetes: Issue #62822

@justaugustus
Member

justaugustus commented Apr 20, 2018

Thanks for the update!

/assign @leblancd
/kind feature
/sig network
/milestone 1.11

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. sig/network Categorizes an issue or PR as relevant to SIG Network. labels Apr 20, 2018
@justaugustus justaugustus added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Apr 20, 2018
@justaugustus justaugustus added this to the v1.11 milestone Apr 20, 2018
@justaugustus justaugustus added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Apr 29, 2018
@idvoretskyi
Member

@leblancd any design document available?

/cc @thockin @dcbw @luxas @kubernetes/sig-network-feature-requests

@leblancd
Author

leblancd commented May 8, 2018

@idvoretskyi - No design doc yet, but we'll start collaborating on one shortly.

@sb1975

sb1975 commented May 10, 2018

Does this mean Kubernetes Ingress will support dual-stack?
Does this mean CNI plugins (e.g. Calico) would need to run dual-stack (both BIRD and BIRD6 daemons, for example)?

@leblancd
Author

@sb1975 - Regarding dual-stack ingress support, that's something we'll need to hash out, but here are my preliminary thoughts:

  • Dual stack ingress support will mostly depend upon which ingress controller you use (whether it's supported and how it's implemented). Existing ingress controllers will probably need some changes to support dual-stack.
  • I expect that the ingress configuration for a typical ingress controller won't change (the config might e.g. still map an L7 address to a service name / service port, with no mention of V4/V6 family)
  • In the case where a service has endpoint pods that are dual-stack, the ingress controller(s) might need changes to map ingress packets based on the packets' family, i.e. map IPv4 ingress packets to an IPv4 endpoint, and map IPv6 ingress packets to an IPv6 endpoint. For the purposes of load-balance weighting, a dual-stack endpoint should count as a single endpoint target.
  • We might want to consider FUTURE support for having an ingress controller map across V4/V6 families (map ingress IPv4 packets to an IPv6 backend, and vice versa), but our initial development will be for strict dual-stack (i.e. separate, independent stacks).

Regarding Calico and other CNI plugins:

  • The CNI plugins won't HAVE TO run in dual-stack mode if a cluster scenario doesn't require dual-stack; they should still be able to run IPv4-only or IPv6-only (if the plugin supports it).
  • Dual-stack support will probably require changes in the various CNI plugins, but that work is considered outside the scope of this Kubernetes issue (we're focusing on making Kubernetes work for any arbitrary dual-stack plugin, probably using the bridge plugin as a reference), and the CNI work will be done separately on a case-by-case basis.
  • For Calico specifically, I'm not an expert, but I believe that a single BIRD daemon can be configured to handle both IPv4 and IPv6 routes (search for "template bgp" here: http://bird.network.cz/?get_doc&v=20&f=bird-3.html#ss3.1). That said, although Calico already supports dual-stack addresses at the pod level, there might be changes required to get the BGP routing working for both families.
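As a concrete illustration of the CNI side, a dual-stack Calico setup would presumably carve out one address pool per family. Below is a minimal sketch using Calico's v3 IPPool API; the pool names and CIDRs are made up, and nothing here is mandated by this KEP.

```yaml
# Hypothetical Calico IPPools for a dual-stack cluster (example CIDRs).
# One pool per address family; each pod gets an address from both.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ipv4-pool
spec:
  cidr: 10.244.0.0/16
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ipv6-pool
spec:
  cidr: fd00:10:244::/64
```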

@sb1975

sb1975 commented May 15, 2018

@leblancd: So here is the scenario:

  1. Let's say we will use the NGINX ingress controller.
  2. I am exposing my services via Ingress.
  3. I am running my pods configured for dual-stack.
  4. I am trying to reach the service remotely using A and AAAA DNS records. (Hope all of these work.)
  5. In summary: I want to connect to pod interfaces using either IPv4 or IPv6 addresses, as resolved by my own queries for A and/or AAAA records for the pod service name.

Can I get involved in this initiative to help with testing, documentation, and architecture? I'd need some guidance, though. Also, how can I keep track of the progress on this?

@leblancd
Author

@sb1975 - Good question re. the NGINX ingress controller with dual-stack. I'm not an expert on the NGINX ingress controller (maybe someone more familiar can jump in), but here's how I would see the work flow:

  • When you try to reach the Kube services from outside, your DNS controller should resolve the service with A and AAAA DNS records for the ingress controller. It would be your client/app's choice to use the A vs. the AAAA record to reach the ingress controller. So external access to the ingress controller would be dual-stack.
  • At the NGINX ingress controller, NGINX would then look at the L7 URL (regardless of whether the request arrived in an IPv4 or IPv6 packet) and load balance to upstream endpoints. If the ingress controller's load balancer is configured with ipv6=on (which is the default, see https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/#configuring-http-load-balancing-using-dns), and the service endpoint(s) are dual-stack, then the upstream configuration should have both IPv4 and IPv6 entries for each dual-stack endpoint. As designed, the NGINX load balancer treats the IPv4 entry and the IPv6 entry for an endpoint as separate servers. (See the line in the aforementioned doc: "If a domain name resolves to several IP addresses, the addresses are saved to the upstream configuration and load balanced.") This can be considered good news or bad news. The good news is that load balancing will be done across IPv4 and IPv6 endpoints (giving you some redundancy), where e.g. an incoming IPv4 request could get mapped to either an IPv4 or an IPv6 endpoint. The potential bad news is load-balancing granularity: a connection to an IPv4 endpoint and a connection to the corresponding IPv6 endpoint will be treated (for load-balancing purposes) as 2 loads on separate endpoints, rather than 2 separate loads on the same endpoint. If this granularity is a concern, someone could disable load balancing to IPv6 (or to IPv4, if there's a config knob for that?), so that load balancing would be to IPv4-only endpoints. Or maybe the NGINX load balancer could be modified to treat a connection to an IPv4 address and a connection to its corresponding IPv6 address as 2 loads on the same endpoint. (See the sketch just below for what a dual-stack service might look like on the Kubernetes side.)
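For reference, here is a sketch of the dual-stack Service shape that this work eventually produced; the ipFamilyPolicy and ipFamilies fields landed once dual-stack reached beta and did not exist when this comment was written. The name, selector, and ports are hypothetical.

```yaml
# Sketch of a dual-stack Service using the API fields this KEP eventually
# added (ipFamilyPolicy / ipFamilies). Names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-svc
spec:
  ipFamilyPolicy: RequireDualStack   # allocate one ClusterIP per family
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```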

As for helping and getting involved, this would be greatly appreciated! We're about to start working in earnest on dual-stack (it's been a little delayed by the work in getting CI working for IPv6-only). I'm hoping to come out with an outline for a spec (Google Doc or KEPs WIP doc) soon, and would be looking for help in reviewing, and maybe writing some sections. We'll also DEFINITELY need help with official documentation (beyond the design spec), and with defining and implementing dual-stack E2E tests. Some of the areas which I'm still a bit sketchy on for the design include:

  • How are health/liveness/readiness probes affected or handled with dual-stack?
  • Will there be an impact on network policies?
  • Load balancer concerns?
  • Cloud provider plugin concerns?
  • L3/L4 ingress concerns?

If you've thought about any of these, maybe you could help with those sections?

We're also considering an intermediate "dual-stack at the edge" (with IPv6-only inside the cluster) approach, where access from outside the cluster to K8s services would be dual-stack, but this would be mapped (e.g. via NGINX ingress controller) to IPv6-only endpoints inside the cluster (or use stateless NAT46). Pods and services in the cluster would need to be all IPv6, but the big advantage would be that dual-stack external access would be available much more quickly from a time-to-market perspective.

@caseydavenport
Member

/milestone 1.12

@justaugustus
Member

@leblancd / @caseydavenport - I'm noticing a lot of discussion here and a milestone change.
Should this be pulled from the 1.11 milestone?

@leblancd
Author

@justaugustus - Yes, this should be moved to 1.12. Do I need to delete a row in the release spreadsheet, or is there anything I need to do to get this changed?

@justaugustus
Member

@leblancd I've got it covered. Thanks for following up! :)

@justaugustus justaugustus modified the milestones: v1.11, v1.12 Jun 1, 2018
@justaugustus justaugustus removed the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Jun 1, 2018
@justaugustus
Member

@leblancd @kubernetes/sig-network-feature-requests --

This feature was removed from the previous milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.

If so, please ensure that this issue is up-to-date with ALL of the following information:

  • One-line feature description (can be used as a release note):
  • Primary contact (assignee):
  • Responsible SIGs:
  • Design proposal link (community repo):
  • Link to e2e and/or unit tests:
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target equals to which milestone):
    • Alpha release target (x.y)
    • Beta release target (x.y)
    • Stable release target (x.y)

Set the following:

  • Description
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

Please note that the Features Freeze is July 31st, after which any incomplete Feature issues will require an Exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

  • Docs deadline (open placeholder PRs): 8/21
  • Test case freeze: 8/28

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

/cc @justaugustus @kacole2 @robertsandoval @rajendar38

@justaugustus
Member

@leblancd --
Feature Freeze is today. Are you planning on graduating this to Beta in Kubernetes 1.12?
If so, can you make sure everything is up-to-date, so I can include it on the 1.12 Feature tracking spreadsheet?

@leblancd
Author

Hi @justaugustus - Beta status will need to slip into Kubernetes 1.13. We are making (albeit slow) progress on the design KEP (kubernetes/community#2254), and we're getting close to re-engaging with the CI test PR, but the Kubernetes 1.12 target was a bit too optimistic.

I'll update the description/summary above with the information you requested earlier. Thank you for your patience.

@justaugustus justaugustus modified the milestones: v1.12, v1.13 Jul 31, 2018
@thockin
Member

thockin commented Sep 8, 2021

Good question - what are the rules to trigger PRR? Just the PRR files, right? Which, ugh, were not included.

WE NEED TOOOOOOOOOLS

@johnbelamaric
Member

If the stage changes, then it is supposed to validate whether a PRR approval exists for that stage. Maybe it's broken?

@johnbelamaric
Member

Oh... the status is wrong; it should be "implementable" until it's completely done. The CI probably only checks "implementable" KEPs for that validation.
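For anyone following along: the status and stage referred to here live in the KEP's kep.yaml metadata, roughly shaped like the fragment below. Field names follow the enhancements repo conventions; the values are illustrative (taken from the targets listed at the top of this issue), not copied from the actual file.

```yaml
# Illustrative kep.yaml fragment; values are examples, not the real
# 563-dual-stack metadata.
title: IPv4/IPv6 dual-stack support
kep-number: 563
owning-sig: sig-network
status: implementable   # should remain "implementable" until fully implemented
stage: stable
latest-milestone: "v1.23"
milestone:
  alpha: "v1.11"
  beta: "v1.20"
  stable: "v1.23"
```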

@lachie83
Member

lachie83 commented Sep 8, 2021

Thanks @johnbelamaric @thockin. Heads up that we see this, will update accordingly, and will reach out should we have any questions.

@Priyankasaggu11929
Member

Thanks @lachie83 for making the changes. The Enhancement is all good for the enhancements freeze now. :)

Thanks @johnbelamaric @thockin for pointing to the issue. It was a useful learning experience.

@lachie83
Member

Code changes - kubernetes/kubernetes#104691

@thockin thockin moved this from Beta gated (merged) to GA (merged, gate not removed) in Obsolete: SIG-Network KEPs (see https://github.com/orgs/kubernetes/projects/148) Sep 25, 2021
@Priyankasaggu11929
Member

Priyankasaggu11929 commented Nov 8, 2021

Hello @lachie83 👋

Checking in once more as we approach 1.23 code freeze at 6:00 pm PST on Tuesday, November 16.

Please ensure the following items are completed:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PRs are fully merged by the code freeze deadline.

As always, we are here to help should questions come up.

Thank you so much! 🙂

@lachie83
Member

@Priyankasaggu11929 - Confirming that we are good to go for the release. Doc updates recently merged via kubernetes/website#29386

@Priyankasaggu11929
Member

Thanks so much for the update, @lachie83. This enhancement's status is all green & tracked. 🙂

@bridgetkromhout
Member

Tests update is in kubernetes/test-infra#24488 (the bot added it to 1.24 but it's for 1.23).

@thockin
Member

thockin commented Feb 16, 2022

Reminder: Remove gate in 1.26, milestone updated

@rhockenbury

/stage stable

@k8s-ci-robot k8s-ci-robot added stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status and removed stage/beta Denotes an issue tracking an enhancement targeted for Beta status labels Sep 11, 2022
@rhockenbury rhockenbury removed the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Sep 20, 2022
@khenidak
Contributor

khenidak commented Oct 3, 2022

@thockin the gate no longer exists in code except in the cloud provider area.
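(For reference, the gate being discussed is IPv6DualStack. Before its removal, enabling it explicitly on a kubelet looked roughly like the sketch below; this is illustrative only, since the gate defaulted on from beta, was locked to true at GA, and current releases no longer accept it.)

```yaml
# Illustrative KubeletConfiguration fragment enabling the (now-removed)
# IPv6DualStack feature gate; newer releases reject this, as the gate is gone.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  IPv6DualStack: true
```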

@thockin thockin modified the milestones: v1.26, v1.27 Jan 5, 2023
@thockin thockin moved this from GA (merged, gate not removed) to GA (merged, gate removed) in Obsolete: SIG-Network KEPs (see https://github.com/orgs/kubernetes/projects/148) Jan 5, 2023