Support and/or exploit ipv6 #1443

Closed
bgrant0607 opened this Issue Sep 25, 2014 · 38 comments

@bgrant0607
Member

bgrant0607 commented Sep 25, 2014

Self-explanatory. @MalteJ mentioned IPv6 in #188, and multiple partners have mentioned IPv6 as an attractive solution for the k8s networking model, which allocates IP addresses fairly freely, for both pods and (with ip-per-service) services.

@pires

Member

pires commented Dec 23, 2014

+1

@MalteJ

MalteJ commented Dec 23, 2014

If you are interested in IPv6 with Docker, have a look at my PR moby/moby#8947 and feel free to test, review, and upvote :)

@pires

Member

pires commented Dec 23, 2014

Will do @MalteJ. Thanks

@MalteJ

MalteJ commented Jan 9, 2015

OK, the Docker IPv6 pull request is merged. Now it's time for k8s IPv6 ;)

@aanm

Contributor

aanm commented Mar 15, 2016

Any ETA?

@MalteJ

MalteJ commented Mar 15, 2016

I don't think anyone is working on this.

@pires

Member

pires commented Mar 15, 2016

GCP and AWS don't support IPv6, so I'd figure this is very low-priority stuff.

@SuperQ

SuperQ commented Mar 15, 2016

What part of IPv6 isn't working? I've been able to scrape targets over v6 just fine.

For example:

    target_groups:
      - targets: ['[::1]:9090']

There is some strangeness with host:port specs in Go:
https://golang.org/src/net/ipsock.go#L107
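For reference, the quirk is that Go's net.SplitHostPort rejects unbracketed IPv6 literals (the port colon would be ambiguous), while net.JoinHostPort adds the brackets back. A minimal demo of the stdlib behavior:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Bracketed IPv6 literals parse cleanly: the brackets let
	// SplitHostPort tell the port colon from the address colons.
	host, port, err := net.SplitHostPort("[::1]:9090")
	fmt.Println(host, port, err) // ::1 9090 <nil>

	// Unbracketed IPv6 literals are rejected.
	_, _, err = net.SplitHostPort("::1:9090")
	fmt.Println(err) // address ::1:9090: too many colons in address

	// JoinHostPort adds the brackets back whenever the host
	// contains a colon.
	fmt.Println(net.JoinHostPort("::1", "9090")) // [::1]:9090
}
```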

On Tue, Mar 15, 2016 at 4:14 PM, André Martins notifications@github.com wrote:

Any ETA?



@SuperQ

SuperQ commented Mar 15, 2016

I'm sorry, too many mailing lists. I got this confused with the Prometheus list. SIGH :)


@philips

Contributor

philips commented Dec 3, 2016

From @thockin via Twitter, known things that would have to happen to make this work:

  • API: IP & CIDR fields
  • iptables kubelet & proxy
  • CNI & bridge driver
  • grep and fix all places we have To4, ParseIP, ParseCIDR
@leblancd

Contributor

leblancd commented Mar 16, 2017

Before we start auditing and/or implementing the various IPv6 pieces outlined above, I think we need to come to an agreement on the following: Do we impose a restriction on the maximum IPv6 prefix length (i.e. minimum IPv6 subnet space) that is allocated to a node? My strong preference would be to limit the length of an IP prefix that is allocated to a node to 64 bits or shorter (i.e. subnet space equivalent to a /64 or larger). This would allow for 64 bits of interface ID, which is the de facto standard for IPv6. I believe that this is a reasonable restriction, but I haven't seen it stated/written anywhere in Kubernetes or CNI documentation. There are several reasons for imposing this restriction:

  • Reasons outlined in RFC7421. Especially considering IPv6 features listed in Section 4.1 that depend on there being a 64-bit interface ID. Not sure if we'd need any particular feature listed here, but the code would be more future-proof if it allows for a 64-bit interface ID.
  • To avoid collisions in the IPv6 Neighbor Discovery cache: See the discussion on this CNI pull request: containernetworking/cni#394.
  • (Less important) To avoid crazy 128-bit arithmetic operations when selecting an IPv6 subnet for a node. For example, there's this line in the newCIDRset() function in pkg/controller/node/cidr_set.go (see the sketch after this list):
    maxCIDRs := 1 << uint32(subNetMaskSize - clusterMaskSize)
    This 32-bit operation currently works only for IPv4. For IPv6, this would have to be a 128-bit operation if there were no restriction on subNetMaskSize (and Go has no native 128-bit integer type). If subNetMaskSize is restricted to a max of 64, then a uint64 works in place of the uint32.
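A minimal sketch of how the computation becomes a plain uint64 operation once the /64 cap is in place (illustrative only, not the actual cidr_set.go code):

```go
package main

import "fmt"

// maxSubnets returns how many node subnets of size /subNetMaskSize fit
// into a cluster CIDR of size /clusterMaskSize. With subNetMaskSize
// capped at 64 (and clusterMaskSize > 0), the shift distance stays
// below 64, so a plain uint64 suffices; no 128-bit arithmetic needed.
func maxSubnets(clusterMaskSize, subNetMaskSize int) uint64 {
	return uint64(1) << uint(subNetMaskSize-clusterMaskSize)
}

func main() {
	// A /48 cluster space carved into /64 node subnets
	// yields 2^16 = 65536 node subnets.
	fmt.Println(maxSubnets(48, 64)) // 65536
}
```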

Allocating a /64 space per node may sound wasteful, but it's very reasonable when you're starting with a /48 or a /50 cluster space. A /48 or /50 cluster space is reasonable esp. when it's private (a.k.a. ULA) address space.

Aside from this /64-vs-no-limit issue, some colleagues and I have started looking into the instances of To4(), ParseIP(), and ParseCIDR() in the Kubernetes code, and how these might have to change for IPv6/dual-stack support. The not-so-good news is that there are lots of places where these are called. The good news is that 'net' library calls such as ParseIP() and ParseCIDR() are indifferent to whether they're operating on IPv6 or IPv4 addresses: they do the right thing according to what they're passed. Similarly, the 'net' structures such as IP and IPNet work equally well for IPv4 and IPv6. Another help is that IPv4 addresses can be represented internally by their IPv4-mapped IPv6 address ::ffff:[ipv4-address] (see RFC 4291, Sect. 2.5.5.2) using a 16-byte slice. In this way, a 'net' IP address can hold any of:

  • a 4-byte slice (IPv4)
  • a 16-byte slice (IPv4-mapped IPv6)
  • a 16-byte slice (IPv6 address)

The 'net' functions/utilities can work with any of these and can differentiate between them as needed. In fact, the To4() function can be used to distinguish between the last two bullets above.

For existing calls to To4(), these can probably be replaced with a call to To16(), or the call to To4() can simply be removed, leaving the IP unmodified.
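For illustration, the stdlib behavior described above can be seen directly in a small self-contained snippet:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	v4 := net.ParseIP("10.0.0.1") // stored internally in the 16-byte IPv4-mapped form
	v6 := net.ParseIP("fd00::1")  // 16-byte IPv6
	fmt.Println(len(v4), len(v6)) // 16 16

	// To4() returns non-nil only for addresses representable as IPv4,
	// so it distinguishes the IPv4-mapped form from a true IPv6 address.
	fmt.Println(v4.To4() != nil) // true
	fmt.Println(v6.To4() != nil) // false

	// To16() works for both families and leaves IPv6 addresses unchanged.
	fmt.Println(v4.To16() != nil, v6.To16() != nil) // true true
}
```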
@pires

Member

pires commented Mar 17, 2017

Do we impose a restriction on the maximum IPv6 prefix length (i.e. minimum IPv6 subnet space) that is allocated to a node?

I believe we should.

My strong preference would be to limit the length of an IP prefix that is allocated to a node to be 64 bits of prefix or shorter (...)

You've outlined more details than I could think of. The arithmetic concerns are spot on!

Now, off the top of my head, I think most issues will happen with:

  • kube-proxy and the kubelet, which expose a few flags related to IP addresses and deal a lot with iptables - I'm left wondering how to properly manage virtual IPv6 addresses (for services);
  • CNI plug-ins;
  • Bootstrapping new clusters & the upgrade story (if we'll be supporting it) for existing IPv4 clusters:
    • how addresses are managed, i.e. persisted in storage;
    • moving components from IPv4 to IPv6, e.g. controller-manager --cluster-cidr, where this component stops managing IPv4 and starts managing IPv6;
    • a potential dependency on external DNS, since:
      • component configuration still depends on knowing IPs beforehand, e.g. kubelet --cluster-dns=10.100.0.10
      • or kubeadm join --discovery-token 123456.abcdefghij <one or more apiserver IPv6 addresses>

k8s-merge-robot added a commit that referenced this issue Aug 17, 2017

Merge pull request #48228 from danehans/kubeadm_v6masterep
Automatic merge from submit-queue

Updates Kubeadm Master Endpoint for IPv6

**What this PR does / why we need it**:
Previously, kubeadm would use ip:port to construct a master
endpoint. This works fine for IPv4 addresses, but not for IPv6.
Per [RFC 3986](https://www.ietf.org/rfc/rfc3986.txt), IPv6 requires the IP to be encased in brackets
when being joined to a port with a colon.

This patch updates kubeadm to support wrapping a v6 address with
[] to form the master endpoint URL. Since this functionality is
needed in multiple areas, a dedicated util function was created
for this purpose.
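A minimal sketch of such a helper, assuming it builds on Go's net.JoinHostPort (which brackets any host containing a colon, per the RFC 3986 rule above); the masterEndpoint name is hypothetical, not necessarily what the PR uses:

```go
package main

import (
	"fmt"
	"net"
)

// masterEndpoint is a hypothetical illustration of the kind of helper
// the PR describes: net.JoinHostPort brackets IPv6 literals and leaves
// IPv4 addresses and hostnames untouched.
func masterEndpoint(host, port string) string {
	return "https://" + net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(masterEndpoint("10.0.0.1", "6443"))  // https://10.0.0.1:6443
	fmt.Println(masterEndpoint("fd00::100", "6443")) // https://[fd00::100]:6443
}
```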

**Which issue this PR fixes**
Fixes Issue kubernetes/kubeadm#334

**Special notes for your reviewer**:
As part of a bigger effort to add IPv6 support to Kubernetes:
Issue #1443
Issue #47666

**Release note**:
```NONE
```
/area kubeadm
/area ipv6
/sig network
/sig cluster-ops
@valentin2105

valentin2105 commented Sep 15, 2017

@leblancd Thanks for your IPv6 work on k8s!
Any idea in which release kube-proxy will start to handle ip6tables for Services, and especially Service/ExternalIPs?

@dims

Member

dims commented Sep 15, 2017

cc @sadasu

leblancd added a commit to leblancd/kubernetes that referenced this issue Sep 18, 2017

Fix kube-proxy to use proper iptables commands for IPv6 operation
For iptables save and restore operations, kube-proxy currently uses
the IPv4 versions of the iptables save and restore utilities
(iptables-save and iptables-restore, respectively). For IPv6 operation,
the IPv6 versions of these utilities need to be used
(ip6tables-save and ip6tables-restore, respectively).

Both this change and PR kubernetes#48551 are needed to get Kubernetes services
to work in an IPv6-only Kubernetes cluster (along with setting
'--bind-address ::0' on the kube-proxy command line). This change
was alluded to in a discussion on services for issue kubernetes#1443.

fixes kubernetes#50474
@leblancd

Contributor

leblancd commented Sep 28, 2017

I have created a Kubernetes IPv6 deployment guide based on a forked release containing several outstanding PRs. The guide will be transitioned into upstream Kubernetes documentation for the 1.9 release. Feel free to use the guide in the interim for test/dev efforts. I would appreciate any feedback.

@leblancd

Contributor

leblancd commented Sep 28, 2017

@valentin2105 : Services/externalIPs should work with required cherry picks. See the Kubernetes IPv6 deployment guide which refers to Kubernetes IPv6 Version v1.9.0-alpha.0.ipv6.0. I would appreciate any feedback.

@lachie83

Member

lachie83 commented Sep 28, 2017

@leblancd thank you very much for putting this together!

@leblancd

Contributor

leblancd commented Sep 28, 2017

@lachie83 : YW on behalf of the IPv6 working group: @danehans, @pmichali, @rpothier, @aanm, and the sig-network team.

k8s-merge-robot added a commit that referenced this issue Oct 11, 2017

Merge pull request #50478 from leblancd/v6_iptables_cmds
Automatic merge from submit-queue (batch tested with PRs 52520, 52033, 53626, 50478). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix kube-proxy to use proper iptables commands for IPv6 operation

For iptables save and restore operations, kube-proxy currently uses
the IPv4 versions of the iptables save and restore utilities
(iptables-save and iptables-restore, respectively). For IPv6 operation,
the IPv6 versions of these utilities need to be used
(ip6tables-save and ip6tables-restore, respectively).

Both this change and PR #48551 are needed to get Kubernetes services
to work in an IPv6-only Kubernetes cluster (along with setting
'--bind-address ::0' on the kube-proxy command line). This change
was alluded to in a discussion on services for issue #1443.

fixes #50474



**What this PR does / why we need it**:
This change modifies kube-proxy so that it uses the proper commands for iptables save and
iptables restore for IPv6 operation. Currently kube-proxy uses 'iptables-save' and 'iptables-restore'
regardless of whether it is being used in IPv4 or IPv6 mode. This change fixes kube-proxy so
that it uses 'ip6tables-save' and 'ip6tables-restore' commands when kube-proxy is being run
in IPv6 mode.
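A sketch of the kind of selection this implies (illustrative only; the Protocol type and function names here are hypothetical, not the actual kube-proxy code):

```go
package main

import "fmt"

// Protocol stands in for kube-proxy's notion of the address family it
// is managing; the type and names here are illustrative only.
type Protocol int

const (
	ProtocolIPv4 Protocol = iota
	ProtocolIPv6
)

// saveRestoreCommands picks the save/restore binaries for the address
// family. IPv6 rules live in a separate ip6tables ruleset, so the v4
// tools never see them.
func saveRestoreCommands(p Protocol) (save, restore string) {
	if p == ProtocolIPv6 {
		return "ip6tables-save", "ip6tables-restore"
	}
	return "iptables-save", "iptables-restore"
}

func main() {
	fmt.Println(saveRestoreCommands(ProtocolIPv4)) // iptables-save iptables-restore
	fmt.Println(saveRestoreCommands(ProtocolIPv6)) // ip6tables-save ip6tables-restore
}
```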

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50474

**Special notes for your reviewer**:

**Release note**:

```release-note NONE
```

k8s-merge-robot added a commit that referenced this issue Oct 13, 2017

Merge pull request #47621 from danehans/ipallocator
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Updates RangeSize error message and tests for IPv6.

**What this PR does / why we need it**:
Updates the RangeSize function's error message and tests for IPv6. Converts RangeSize unit test to a table test and tests for success and failure cases. This is needed to support IPv6. Previously, it was unclear whether RangeSize supported IPv6 CIDRs. These updates make IPv6 support explicit.
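As an illustration of the pattern, a sketch of such a table test (the rangeSize helper below is a simplified, hypothetical stand-in, not the PR's actual RangeSize implementation):

```go
package ipallocator

import (
	"math"
	"net"
	"testing"
)

// rangeSize is a simplified, hypothetical stand-in for the RangeSize
// function the PR describes: the number of addresses in a CIDR, capped
// at math.MaxInt64 for very large IPv6 ranges that overflow an int64.
func rangeSize(subnet *net.IPNet) int64 {
	ones, bits := subnet.Mask.Size()
	if bits-ones >= 63 {
		return math.MaxInt64
	}
	return int64(1) << uint(bits-ones)
}

// TestRangeSize shows the table-test shape: named cases covering both
// IPv4 and IPv6 CIDRs, success and edge cases alike.
func TestRangeSize(t *testing.T) {
	testCases := []struct {
		name string
		cidr string
		want int64
	}{
		{"IPv4 /24", "10.0.0.0/24", 256},
		{"IPv6 /120", "fd00::/120", 256},
		{"IPv6 /64 (capped)", "fd00::/64", math.MaxInt64},
	}
	for _, tc := range testCases {
		_, subnet, err := net.ParseCIDR(tc.cidr)
		if err != nil {
			t.Fatalf("%s: bad CIDR: %v", tc.name, err)
		}
		if got := rangeSize(subnet); got != tc.want {
			t.Errorf("%s: got %d, want %d", tc.name, got, tc.want)
		}
	}
}
```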

**Which issue this PR fixes**
Partially fixes Issue #1443

**Special notes for your reviewer**:
/area ipv6

**Release note**:
```NONE
```
@valentin2105

valentin2105 commented Nov 14, 2017

I wrote this post in case it helps people deploy IPv6-only Kubernetes: https://opsnotice.xyz/kubernetes-ipv6-only/

k8s-merge-robot added a commit that referenced this issue Nov 15, 2017

Merge pull request #45551 from danehans/node_v6
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Adds Support for Node Resource IPv6 Addressing

**What this PR does / why we need it**:
This PR adds support for the following:

1. A node resource to be assigned an IPv6 address.
2. Expands IPv4/v6 address validation checks.

**Which issue this PR fixes**:
Fixes Issue #44848 (in combination with PR #45116).

**Special notes for your reviewer**:
This PR is part of a larger effort, Issue #1443 to add IPv6 support to k8s.

**Release note**:
```
NONE
```
@burtonator

burtonator commented Dec 16, 2017

What's needed to get this implemented? We would love to see IPv6 at Scalefastr, as it works really well for our use case.

IPv6 is awesome when you have a whole /64 and plenty of IPs to work with on the host machine.

@danehans

danehans commented Dec 18, 2017

@burtonator IPv6 will be added as an alpha feature in the 1.9 release. You can use kubeadm to deploy an IPv6 Kubernetes cluster by specifying an IPv6 address for --apiserver-advertise-address and using brackets around the IPv6 master address for kubeadm join --token <token> [<master-ip>]:<master-port>. The above information and other specifics will be part of the 1.9 release documentation. Prior to 1.9, @leblancd created the kube-v6 project to test Kubernetes with IPv6.

@valentin2105

valentin2105 commented Dec 18, 2017

I think being able to disable ClusterIP is an important point for IPv6:

#57069

@leblancd

Contributor

leblancd commented Dec 18, 2017

@burtonator What CNI network plugin do you use?

@fejta-bot

fejta-bot commented Jun 14, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@leblancd

Contributor

leblancd commented Jun 14, 2018

This issue has been used to follow support for IPv6-only clusters (alpha support in Kubernetes 1.9).
For dual-stack support, new issues have been filed and a design document has been proposed:
#62822
kubernetes/enhancements#563
kubernetes/community#2254

@fejta-bot

fejta-bot commented Jul 14, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot

fejta-bot commented Aug 13, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
