Figuring out IPv6 support #245

Closed
dcbw opened this Issue Jun 10, 2016 · 16 comments

dcbw (Member) commented Jun 10, 2016

There are a number of considerations:

  1. dual-stack configuration: @steveeJ has said in #235 (comment) (and I agree) that a CNI plugin should be able to accept IPv4-only, IPv6-only, and combined IPv4/IPv6 IPAM configuration. This requires a spec change, because there is currently only one ipam section. @achanda has done some of the work already.
  2. CNI standard IPAM plugins: none of the IPAM plugins currently support IPv6 IPAM, and none return IPv6 configuration.
  3. CNI main plugins: none of the main plugins currently support configuring IPv6 on an interface, and all returned IPv6 information is entirely ignored, even when non-CNI IPAM plugins that do return IPv6 configuration are used.
  4. CNI pkg/ip: ConfigureInterface() currently ignores IPv6 configuration

There are a number of PRs open that fix all of these in different ways.

  • #110 covers host-local (point 2) and ConfigureInterface() (point 4)
    • host-local: less clean/correct than #233; I think this hunk should be dropped
    • ConfigureInterface(): allows dual-stack configuration so better than #234
  • #234 covers ConfigureInterface() (point 4), and while small, it does not allow dual-stack interface configuration. I think this should be dropped in favor of #110 (as long as #110 drops the host-local hunk)
  • #233 covers host-local (point 2), and I think it could be merged as-is. It will allow host-local to allocate and return IPv6 addresses.
  • #235 covers the ptp plugin (point 3) but does not allow dual-stack configuration.
  • #137 covers IPv6 config (point 1) and host-local (point 2). It has the spec change to add an 'ipam6' section to the config. It also has testcases, which none of the others do, but needs a rebase and some additional validation.

My suggested plan of action on these is to:

  1. Bless (or not) adding an additional "ipam6" key to the host-local plugin's configuration as proposed in #137. This would provide dual-stack host-local configuration.
  2. Bless (or not) adding an 'IPVersion' field to host-local to help validation (see https://github.com/containernetworking/cni/pull/137/files#r58019103).
  3. Have @aanm rework #110 to drop the host-local hunk, and then merge it. This fixes point 4 above.
  4. Drop #234, since it doesn't allow dual-stack.
  5. Drop #233 in favor of #137, since #233 doesn't support dual-stack.
  6. Ask @fnordahl to rework #235 to handle dual-stack by making setupContainerVeth() generic and calling that function once for v4 and once for v6 if necessary. This fixes part of point 3.
  7. Ask @achanda to rebase #137, rework the testcases to pick up the changes to pkg/ns/ns.go, add some validation, and handle some upcoming review comments. This fixes points 1 and 2.
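For concreteness, a dual-stack network configuration under the proposed "ipam6" key might look like the following sketch (the key name follows #137's proposal; the field layout and subnet values are illustrative assumptions, not taken from the PRs):

```json
{
    "name": "dual-stack-example",
    "type": "bridge",
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    },
    "ipam6": {
        "type": "host-local",
        "subnet": "2001:db8:1::/64"
    }
}
```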
achanda (Contributor) commented Jun 10, 2016

Thanks for the summary, @dcbw. I can rework my PR once we have a decision.

achanda (Contributor) commented Jun 23, 2016

@dcbw @steveeJ I was wondering whether it would make sense to get on a Google Hangout to decide this?

jbrzozowski commented Nov 8, 2016
@dcbw I wanted to chime in here; I'm not sure this is the right place, and I trust you will guide me elsewhere if I should direct this post somewhere else. I work for Comcast and run much of the company's IPv6-related efforts, and I was involved in some of our work to add IPv6 support to OpenStack a while back. Some things that may need to be considered, from our point of view as a broad adopter of IPv6, are IPv6-only containers and container networking. Additionally, regarding the IP provisioning process, there have been many interesting advancements (some of which we have deployed) that might be worth considering as they pertain to IPv6 addressing and configuration. For example, have we looked at using IPv6 router advertisements for the assignment of addresses, prefixes, and configuration information in lieu of, or in addition to, DHCPv6?

I will pause here to see if I should direct these posts elsewhere. I am very much interested in contributing to the advancement of IPv6 support for containers, container networking, and Kubernetes.


prabhakar-pal commented Dec 29, 2016
Any idea when IPv6 support is targeted for CNI?


rahulwa commented Mar 16, 2017

Please prioritize this. IPv6 support is becoming common among providers these days (AWS, Scaleway, Hetzner, Online.net).
Our company is moving to an IPv6-only network for internal traffic because it makes cross-cloud setups easy to secure. (For example, on AWS a globally routable IPv6 range can be configured at the VPC level. This makes it easy to build cross-provider setups securely, since we already know the IPv6 address range of every machine in that VPC. It especially helps with autoscaling.)

squeed (Member) commented Mar 16, 2017

@rahulwa this is certainly next on the list. Part of the process is gathering use cases. Some questions for you (and anyone else who would like to chime in):

  1. Do you intend to run in a dual-stack environment?
  2. How will you allocate addresses to each host?
    • Do you expect to run SLAAC?
    • Do you expect to run DHCP-PD?
  3. Do you intend to run an overlay network?
radhus commented Mar 16, 2017

@squeed Our immediate use cases are currently:

  1. IPv6-only environments are more likely than dual-stack ones.
  2. This will probably differ between environments, but I think we're initially looking at SLAAC for the hosts plus DHCP-PD: maybe PD to get a prefix that pods can get addresses from, and SLAAC for the host (Kubernetes node) itself (it doesn't have to be in the same prefix). (Updated a bit after some discussion.)
  3. No, it feels unnecessary with IPv6 if you assume global addresses per pod, routed via the host.
vzcambria (Contributor) commented Mar 16, 2017
@squeed: We need this ASAP as well.

  1. We run dual stack now (I'll explain shortly). We'll need dual stack as long as legacy apps hang around and until e.g. orchestration software finally speaks native IPv6.
  2. We run SLAAC now, but need both the ability to use SLAAC and host-local 'static' IPv6 address allocation
  3. Overlay in the future; but anticipate putting e.g. vtep as a member of a "CNI Bridge" such that CNI doesn't even need to know about the Overlay.

Poor Man's Dual-Stack:

Depending on distro, Linux is set to use IPv6 out of the box; else we set it up:

$ sudo ip link add name ds-example type bridge
$ sudo ip link show ds-example
107: ds-example: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 32:f1:e7:03:d4:9f brd ff:ff:ff:ff:ff:ff
$ sudo ip addr add 192.168.0.1/24 dev ds-example
$ sudo ip addr show ds-example
107: ds-example: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 32:f1:e7:03:d4:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global ds-example
       valid_lft forever preferred_lft forever
    inet6 fe80::30f1:e7ff:fe03:d49f/64 scope link tentative 
       valid_lft forever preferred_lft forever
$ 

Based on the HW Addr at link creation time, I now have a Link Local Address.
SLAAC will also create an address (not shown above) based off the same HW Addr as soon as the prefix is advertised.

We use SLAAC since CNI currently doesn't let us "statically" create an IPv6 address. To reach the router, either the physical link is added to the Linux bridge (macvlan doesn't need this step) or we run a radvd container that just advertises a /64 to the CNI network.

As shown, we need to create an IPv4 address for this. Just being able to create an IPv6 "interface" so that LL and SLAAC can kick in would be a start.
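The radvd approach described above only needs a one-stanza config; a minimal sketch (the ds-example interface name and the documentation prefix 2001:db8:1::/64 are placeholders):

```
interface ds-example
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```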

Having Static IPv6 per interface is a must. Ideally, we can statically configure multiple addresses. For NFV, the static address would be Link Local fe80::1, allowing containers to hard code their default router "default via fe80::1". For the real world, global, ULA or both.

HTH


rahulwa commented Mar 17, 2017

@squeed Our use cases match @radhus's on points 1 and 3. On point 2, most likely we are going to use DHCP-PD (if I understood correctly), as AWS assigns one /64 or /128 IPv6 address to each machine, and since we are trying to run Kubernetes on top of those machines we need to make further subnets. For readability:

  1. IPv6-only.
  2. DHCP-PD.
  3. No, because the IPv6 addresses handed out by the providers are globally routable.
squeed (Member) commented Mar 17, 2017

Thanks for the input. I'll be offline for a week; please feel free to continue commenting.

squeed (Member) commented Mar 17, 2017
I suspect we'll have to extend the spec to allow multiple IPAM fields.

Separately, I see the use for a SLAAC ipam plugin that does nothing but return the generated IP.


TimWolla commented Mar 31, 2017
  1. Dual Stack.
  2. SLAAC.
  3. No.
csuttles commented Apr 13, 2017
Do you intend to run in a dual-stack environment?

Yes, but single-stack IPv6 is more important than dual-stack.

How will you allocate addresses to each host?

Stateful DHCPv6.

Do you expect to run SLAAC?

No.

Do you expect to run DHCP-PD?

No.

Do you intend to run an overlay network?

No.


zdzichu commented Apr 13, 2017

  1. IPv6 only
  2. Mostly SLAAC
  3. No.

@bboreham bboreham added the area/ipv6 label May 19, 2017

bboreham (Member) commented Aug 16, 2017

This has pretty much been done now, and remaining issues are in https://github.com/containernetworking/plugins

@bboreham bboreham closed this Aug 16, 2017
