Adds support for IPv6 containers #229

Open · wants to merge 3 commits

42wim commented Aug 15, 2015

Uses SERVICE_IPV6PORTS=port1/proto,port2/proto as metadata
(because IPv6 ports aren't mapped and can't be found by inspecting the container).

The specified ports are mapped as ExposedPorts for that container.
The service.ID is suffixed with ":ipv6" so as not to clash with IPv4 services listening on the same port.

Contributor

progrium commented Aug 15, 2015

What about something like SERVICE_<port>_IPV6=<tcp/udp>?

42wim commented Aug 15, 2015

That's an option, although it wouldn't be consistent with the way SERVICE_<port>_X behaves, as it only modifies ports that are exposed or explicitly published (which is not the case for IPv6).

But if you want to have it implemented this way, I can modify it.

Contributor

progrium commented Aug 15, 2015

Depends on how you look at it. It's just extra metadata on port X.


42wim commented Aug 16, 2015

Modified it to use SERVICE_<port>_IPV6=<tcp/udp>.
If no protocol is specified, it listens on both tcp and udp (e.g. for DNS).
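
For illustration, a minimal compose snippet for the DNS case (the dns service and example/dns image are placeholders, and leaving the value empty as the way to not specify a protocol is an assumption, not confirmed by the PR):

dns:
  image: example/dns
  environment:
    # No protocol given, so port 53 is registered for both tcp and udp
    - SERVICE_53_IPV6=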

42wim referenced this pull request Aug 18, 2015: Add netfilter (ipv6) support #234 (closed)

42wim added a commit to 42wim/registrator-work that referenced this pull request Aug 20, 2015

progrium added the investigate label Sep 1, 2015

Contributor

progrium commented Sep 4, 2015

assign @andyshinn

andyshinn self-assigned this Sep 4, 2015

42wim commented Sep 5, 2015

Rebased the patch. Besides SERVICE_<port>_IPV6=<tcp/udp>, it now adds support for the metadata SERVICE_NAME_IPV6=name and SERVICE_NAME_IPV4=name.

When you specify SERVICE_NAME_IPV6 it will override SERVICE_NAME for IPv6 services. (SERVICE_NAME_IPV4 does the same for IPv4 services.)

Use case: consul and elasticsearch with both IPv4 and IPv6 nodes.
The IPv4 nodes get port 19200 mapped (because of the NAT) and the IPv6 nodes (no NAT) are directly reachable on the default port 9200. If you have a single SERVICE_NAME=elastic you'll get both IPv6 and IPv4 addresses, but the ports differ for each stack.

Specifying SERVICE_NAME=elastic and SERVICE_NAME_IPV4=elastic-legacy now gives you elastic.service.consul containing only the IPv6 addresses and elastic-legacy.service.consul containing the IPv4 services.
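
As an illustration, a hypothetical compose snippet for the elasticsearch use case above (the service definition and port mapping are sketched from the description, not taken from the PR itself):

elasticsearch:
  image: elasticsearch
  ports:
    - "19200:9200"
  environment:
    - SERVICE_NAME=elastic
    # IPv4 registrations get their own name, elastic-legacy.service.consul
    - SERVICE_NAME_IPV4=elastic-legacy
    # Register the unmapped port 9200 directly over IPv6
    - SERVICE_9200_IPV6=tcp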

Contributor

andyshinn commented Sep 9, 2015

It doesn't look like this was rebased properly against our master. The branch is missing commits from about 20 days ago. Can you check again?

Contributor

andyshinn commented Sep 10, 2015

This seems to work as advertised. Here is a docker-compose.yml I used for testing:

consul:
  image: gliderlabs/consul-server:0.5
  net: "host"
  ports:
    - "8500:8500"
  command: "-bootstrap-expect 1"
registrator:
  build: ../..
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  net: "host"
  command: consul://localhost:8500
redis:
  image: redis
  ports:
    - "6379"
  environment:
    - SERVICE_6379_IPV6=tcp
    - SERVICE_NAME_IPV6=redis6

SRV and AAAA records seem to be fine:

Andys-MBP:docker-redis andy$ dig +noall +answer +additional @192.168.99.101 -p 8600 redis6.service.consul SRV
redis6.service.consul.  0   IN  SRV 1 1 6379 dev.node.dc1.consul.
dev.node.dc1.consul.    0   IN  AAAA    fe80::242:ac11:22
Andys-MBP:docker-redis andy$ dig +noall +answer +additional @192.168.99.101 -p 8600 redis6.service.consul AAAA
redis6.service.consul.  0   IN  AAAA    fe80::242:ac11:22
Andys-MBP:docker-redis andy$ dig +noall +answer +additional @192.168.99.101 -p 8600 redis6.service.consul A

HTTP API as well:

Andys-MBP:docker-redis andy$ curl http://192.168.99.101:8500/v1/catalog/service/redis6
[{"Node":"dev","Address":"10.0.2.15","ServiceID":"dev:consul_redis_1:6379:ipv6","ServiceName":"redis6","ServiceTags":null,"ServiceAddress":"fe80::242:ac11:22","ServicePort":6379}]

LGTM after proper rebase.

andyshinn added the needs: rebase label and removed the investigate label Sep 10, 2015

Contributor

andyshinn commented Sep 10, 2015

Also, it looks like you have a bunch of IPv6 stuff in for Consul in the last 15 days: https://github.com/hashicorp/consul/search?utf8=%E2%9C%93&q=author%3A42wim&type=Issues. Is it best to wait for a Consul release before releasing this in Registrator?

jovandeginste commented Sep 10, 2015

@andyshinn people not using IPv6 will not see any difference; people using it and waiting for the Consul changes will appreciate that registrator already works with it :-)

42wim commented Sep 10, 2015

Andy, I've rebased against master.
I agree with @jovandeginste; the issues I fixed are ones you mostly encounter when you run IPv6-only services.

seeder commented Sep 30, 2015

A somewhat disappointing implementation, treating IPv6 as worse than IPv4:

  1. you can discover the exposed ports
  2. just slap them in with the IPv6 address
  3. done

That should be the default behaviour, not something enabled by cryptic env vars.

Handling bad containers should be done by fixing them, or at worst with cryptic env vars.

42wim commented Sep 30, 2015

Well, you can only discover them if they are specified in the Dockerfile, which is not something you can assume for every image.
Also, the behaviour should be exactly the same for IPv4 and IPv6; can you give an example where IPv6 is treated worse?
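
For context, a hypothetical compose snippet showing the distinction being discussed: expose declares a port without publishing it, and only declared ports show up when inspecting the container (the service here is a placeholder, echoing andyshinn's test setup):

redis:
  image: redis
  # Declared but not published: discoverable by inspection, but over IPv4
  # only reachable with a mapping; over IPv6 reachability depends on ip6tables
  expose:
    - "6379"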

jovandeginste commented Oct 3, 2015

@42wim I think he means you may have to explicitly add _IPV6=... although we tried to make sure you can have it with the fewest 'extra' env vars.

@seeder the main concern here is to not break existing (v4) things. Perhaps you have an exposed management port which is not published (mapped over v4) and you want it to remain like that over v6? Also consider that exposing ports is suddenly more documentation than necessity: any port in your container may be reachable over v6 depending on your ip6tables...

It might be useful to add a startup parameter to registrator called 'ipv6-first' or similar, which does more or less what you suggest: register any exposed (!= published) ports over IPv6 unless some env var says not to.


seeder commented Oct 6, 2015

@jovandeginste, yes, that's what I mean.

If you are concerned about people breaking stuff, be aware that switching IPv6 on in Docker is a conscious decision; they know what they are doing and what can happen.

Having every exposed port published when using container-internal addressing is exactly what people want.
This is consistent with the current IPv4 behaviour in Docker.
Which ports to expose is not registrator's domain, it's Docker's, and Docker already controls it. If you find the IPv6 implementation in Docker lacking features, address that in Docker, not in registrator.

Ideally, a simple flag on the registrator command line to enable or disable the use of IPv6/IPv4 addresses when using internal addressing should be the maximum extent of mandatory regulation here.

Environment vars like those proposed above should be optional rather than a limitation of any kind.

Contributor

progrium commented Oct 7, 2015

I would imagine you don't use just IPv4 or IPv6 across containers, so a simple switch might not be good enough. Definitely interested in simpler implementations, though...

seeder commented Oct 8, 2015

You don't need "OR" there; it can be "AND". It's just a matter of treating them both as first-class citizens. On the other hand, nothing stops people spawning two registrators, one per address family, each with its own rules. After all, we are in the world of containers.

mpalmer commented Nov 16, 2015

I've got to say, I was a bit surprised to find that my IPv6-enabled containers weren't having their exposed ports registered with the IPv6 address of the container. It's certainly the way I expected registrator to work.

42wim and others added some commits Aug 15, 2015

Handle IPv6 address in http health check
An IPv6 address needs to be surrounded by square brackets when calling a
service with the http library (e.g. http://[2a00:1450:4013:c01::5e]:80/foo)

42wim added a commit to 42wim/registrator-work that referenced this pull request Jul 28, 2016

arentrue added a commit to rbkmoney/registrator that referenced this pull request Jan 26, 2017

arentrue added a commit to rbkmoney/registrator that referenced this pull request Jan 27, 2017
