
*: add IPv6 support #248

Open
Nurza opened this Issue Jul 10, 2015 · 24 comments


Nurza commented Jul 10, 2015

Hello, I am using Flannel on CoreOS 717.1.0 with IPv6, but it fails to launch. When I look at the logs, I see the message:

Failed to find IPv4 address for interface eth1.

Is Flannel compatible with IPv6?

Thank you.
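For context on the error above: flannel needs an IPv4 address on the interface it selects, so an IPv6-only interface makes startup fail. A minimal sketch of the kind of lookup involved (findIPv4 is an illustrative helper written for this thread, not flannel's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// findIPv4 scans an interface's addresses and returns the first IPv4
// address it finds. On an IPv6-only interface every address fails the
// To4() check, so the lookup errors out -- mirroring the log message above.
func findIPv4(ifaceName string) (net.IP, error) {
	iface, err := net.InterfaceByName(ifaceName)
	if err != nil {
		return nil, err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}
	for _, addr := range addrs {
		if ipnet, ok := addr.(*net.IPNet); ok {
			if v4 := ipnet.IP.To4(); v4 != nil {
				return v4, nil
			}
		}
	}
	return nil, fmt.Errorf("failed to find IPv4 address for interface %s", ifaceName)
}

func main() {
	ip, err := findIPv4("eth1")
	fmt.Println(ip, err)
}
```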

Contributor

eyakubovich commented Jul 10, 2015

@Nurza flannel currently does not support IPv6. However, let's leave this issue open, as we should add IPv6 support in the future.

pierrebeaucamp commented Oct 15, 2015

+1

Contributor

eyakubovich commented Oct 15, 2015

@patrickhoefler Can I ask about your use case for IPv6? Are you running out of RFC1918 IPv4 addresses? Or want to migrate to IPv6 throughout your org? Something else?

I'm not against IPv6 support, but it is work, and considering that flannel is usually run in RFC 1918 address space, I don't see it as a high-priority item. But I would like to know more.

Contributor

patrickhoefler commented Oct 16, 2015

@eyakubovich You can definitely ask me, but it would probably make more sense to ask @Nurza instead ;)

Nurza commented Oct 16, 2015

Because I own a few servers with free IPv6 blocks (IPv4 is SO expensive) and I would like to use flannel with them.

fskale commented Oct 19, 2015

I think there are enough use cases that require IPv6 addresses. I'm working on one, and the lack of IPv6 support in Kubernetes and flannel forces me to use a single Docker host with pipework. Docker supports IPv6 via the command-line options --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64". I'm working with Unique Local Unicast Addresses. The use case is clear: IPv6-only, with DNATs and SNATs to the pods. Personally, I think you should give IPv6 more priority. Meanwhile I'm writing something like kiwi to track changes to a pod and then issue pipework commands, so I can eventually go back to Kubernetes.
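For reference, the same Docker settings can also live in the daemon configuration file instead of the command line (a minimal sketch; the ULA prefix below is a placeholder value, not one from this thread):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/64"
}
```

With this in /etc/docker/daemon.json, the daemon assigns containers addresses out of the given /64 without needing the CLI flags.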

@jonboulle jonboulle changed the title from IPv6 issue to *: add IPv6 support Oct 19, 2015

colinrgodsey commented Nov 4, 2015

+1

EDIT: also pitching in a use case: microservices. The bare-metal or VM approach generally has you putting a few microservices on one node (one IP, segregated by port per service), purely to conserve strictly allocated resources, mostly IPv4 addresses. These aren't even public per se; we have pretty high demand internally at our company for private 10.x.y.z subnets, and it's getting hard to reserve even a /20.

With Docker you're basically running a 'machine' per service (or can; there's not much reason to bunch things together), so you naturally need more IPs than normal. IP addresses are also really the only strictly allocated resource when using Docker (at least in the flannel/fabric way). IPv6 definitely fixes this.

goacid commented Dec 22, 2015

+1

hwinkel commented Jan 31, 2016

+1

atta commented Jan 31, 2016

+1

@jonboulle jonboulle added this to the v1.0.0 milestone Jan 31, 2016

glennswest commented Mar 17, 2016

Currently, if you start doing sizing, you can get 300-400 hosts in a rack (Supermicro MicroBlades), so two racks will go over the flannel/IPv4 limits. If you look at maximums, the flannel/IPv4 limit sounds OK, at around 261K containers max, but in reality that's only 1-2 racks today. If you actually start applying devops/microservices to real-world apps, the container counts explode. I did a sizing exercise for a day-trading app I'm designing, and it comes to about 1.8 million containers.

There are only a small number of container types (20+), yet they're reused in a lot of ways across the app. And there are 4,000 equities, with an average of 450-plus containers per equity.

Hardware config:
https://www.linkedin.com/pulse/building-data-center-deep-neural-networks-thinking-wall-glenn-west
App overview:
https://www.linkedin.com/pulse/using-containers-docker-change-world-glenn-west

If you consider a real Wall Street system, or look at credit risk or a lot of typical commercial apps, this is not a lot of hardware, or even a really big app. The trading app would be a lot bigger if you did equities at global scale across multiple exchanges.

Looking at hardware placement, you would actually want to spread this across multiple data centers, multiple floors within a data center, and multiple racks. Once you look at it that way, the number of nodes needs to be bigger.

choppsv1 commented Apr 11, 2016

+1. Use case: no IPv4 in the network infrastructure. None of my internal servers (next-gen internet provider infrastructure) have IPv4 addresses.

phillipCouto commented May 12, 2016

+1

stevemcquaid commented Jun 9, 2016

+1 - this is probably the most critical issue for my company.

kkirsche commented Jun 9, 2016

Instead of +1's, please use GitHub's +1 feature via the smiley face in the upper right of the original post. This will help keep things organized and allow the actual discussion of how to solve this to happen, rather than polluting the issue with +1's. Thank you for your understanding and cooperation.

kkirsche commented Mar 16, 2017

Mobile applications for Apple platforms are also required to support IPv6 (https://developer.apple.com/news/?id=05042016a), which may increase the desire to transition for some use cases.

@tomdee tomdee self-assigned this Mar 17, 2017

@tomdee tomdee removed their assignment May 19, 2017

oneiros-de commented Jul 8, 2017

Just curious since this ticket will be two years old soon: Is anybody working on it?

burton-scalefastr commented Dec 21, 2017

It's surprising that this isn't implemented yet, since IPv6 is very attractive: most hosts have a full /64 to play with.

Member

tomdee commented Jan 9, 2018

I'm trying to work out what to prioritize here, as I see a few different things that "IPv6 support" could mean:

  1. Adding IPv6 support for the control plane. This means using IPv6 for contacting the etcd server or the kubernetes API server (I presume both of these support IPv6?)
  2. Using IPv6 addresses for containers with an IPv6 host network. This should work for almost all backends (though IPIP doesn't support IPv6, and some of the cloud providers might not either).
  3. Using IPv6 addresses for containers with an IPv4 host network. This might be useful for running a large number of containers on a host when there is a limited IPv4 private address range available. This would only work with backends that encapsulate data (e.g. vxlan)
  4. Using IPv4 addresses for containers with an IPv6 host network. This would be useful for running in environments that only support IPv6 on the hosts. Again, this would only work on backends that support encapsulation.

For both 3) and 4) there would be the issue of getting traffic between containers and hosts outside the flannel network (NAT64 could be used).

There's also the possibility of providing containers with both IPv4 and IPv6 addresses. This could just be an extension of 2) or it could involve elements of 3) and/or 4) if it would it be useful to do this even when the hosts don't have both IPv4 and IPv6 connectivity.

Is doing just 1) and 2) (and maybe adding dual-stack support) going to be enough for people, or is there any desire to get 3) and 4) done too? I'd love to hear people's thoughts on this.
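To make option 2 (plus dual stack) concrete, here is a purely hypothetical sketch of what an extended flannel net-conf might look like. The EnableIPv6 and IPv6Network keys are illustrative assumptions for this discussion, not an existing flannel API; only keys like Network, SubnetLen, and Backend exist today, and the prefixes are placeholders:

```json
{
  "Network": "10.0.0.0/16",
  "EnableIPv6": true,
  "IPv6Network": "fd00:10::/48",
  "Backend": {
    "Type": "vxlan"
  }
}
```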

choppsv1 commented Jan 10, 2018

@tomdee We have no IPv4 addresses; we don't need them, nor do we want the complexity of running dual stack. So I guess this means: 1 and 2.

abh commented Jan 11, 2018

For us the use case is similar. We can do pure IPv6 (and have a load balancer in front of the cluster for IPv4 ingress), but IPv4 itself is hard.

For a project at work we're out of IPv4 addresses. Even allocating a not-to-be-routed /19 in 10/8 is difficult.

rahulwa commented Jan 11, 2018

I believe 1 and 2 should be good enough.

petrus-v commented Jan 11, 2018

In our context 1 and 2 would be great!
