Remote drivers are (wrongly) assumed to be global #486

Closed
thockin opened this Issue Sep 2, 2015 · 101 comments

Comments

thockin (Contributor) commented Sep 2, 2015

https://github.com/docker/libnetwork/blob/master/drivers/remote/driver.go#L32

It should be possible to write local-only drivers.

mavenugo (Contributor) commented Sep 2, 2015

ping @squaremo @tomdee @shettyg. Since you worked on the remote driver implementation more closely, can you please help with defining a proper registration mechanism to determine whether the remote driver wants to be a local-scoped or a global-scoped driver?

squaremo (Contributor) commented Sep 2, 2015

It may be difficult to do this without modifications to the plugin subsystem. My best guess at doing without that is to have two kinds of driver; "Implements": ["NetworkDriver"] means a globally scoped driver, and "Implements": ["LocalNetworkDriver"] means a locally scoped driver. (Immediately obvious problem: what does both mean?)

I agree that this is a gap; but I wonder if it is exactly what e.g., kubernetes needs. What assumptions does libnetwork make about locally scoped drivers?
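Concretely, under the existing plugin handshake a driver declares its type in its /Plugin.Activate response, so the proposal above would split that declaration in two. A sketch, where "NetworkDriver" is the real type today and "LocalNetworkDriver" is only the hypothetical name suggested here:

```
POST /Plugin.Activate

{"Implements": ["NetworkDriver"]}       <- globally scoped (today's only option)
{"Implements": ["LocalNetworkDriver"]}  <- hypothetical locally scoped variant
```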

mavenugo (Contributor) commented Sep 2, 2015

@squaremo I don't think we would need to change the plugin subsystem for this. We could add an explicit call for capability negotiation before having to register the plugin with libnetwork.

If it is a locally scoped driver, then libnetwork will not distribute the network or endpoint information or require a KV store to back these guarantees.
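As a sketch of what such a negotiation could look like, the daemon could issue one extra call after activation and the driver could answer with its scope. The endpoint name, field name, and default below are all illustrative, not a settled API:

```
POST /NetworkDriver.GetCapabilities

{"Scope": "local"}   <- or "global"; an absent field could default to
                        "global" so existing plugins keep working
```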

thockin (Contributor) commented Sep 2, 2015

For example, Kubernetes endpoints are not going to be used across hosts.
If we end up implementing our own driver, IPAM decisions are local
decisions. We assign a CIDR per host. Those IP addresses are not usable on
any other host.


mavenugo (Contributor) commented Sep 2, 2015

@thockin as we discussed, this is a good reason to be a local-scoped driver.

shettyg commented Sep 2, 2015

@mavenugo
I am likely missing a piece of the puzzle here. A locally scoped remote driver can't work if Docker provides a UUID instead of a name. For example:

On host-1:
docker network create -d openvswitch foo

My driver currently receives just a UUID. On a different host, if I run:

docker service publish my-service.foo

I will likely get a "foo" network not found error.

What do you have in mind for a locally scoped driver? Can we get names instead of UUIDs?
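To make the gap concrete: in the current remote driver API, the CreateNetwork hook carries only the generated identifier, never the user-supplied name "foo". An abridged payload sketch, with a placeholder standing in for the real generated value:

```
POST /NetworkDriver.CreateNetwork

{
  "NetworkID": "<generated UUID>",
  "Options": {}
}
```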

shettyg commented Sep 2, 2015

Also, I think a locally scoped driver will work if commands like "docker network ls" ask the local driver to list networks instead of trying to list UUIDs on its own. The local driver can return UUIDs and names, which Docker then lists. So in theory the local drivers do the job of libkv.

tomdee (Contributor) commented Sep 2, 2015

The suggestion from @squaremo sounds sensible to me. The plugin already has to do a handshake to establish its capabilities, so adding a different capability for "local" plugins sounds like a good idea.

thockin (Contributor) commented Sep 2, 2015

Is "On a different host" compatible with a locally scoped driver?


shettyg commented Sep 2, 2015

@thockin
So when you say "local" drivers, you are in effect saying that commands like "docker network ls" etc. will simply not return information provided to a locally scoped driver?

Maybe if you give an end-to-end workflow of the locally scoped driver that you have in mind, I will understand better.

tomdee (Contributor) commented Sep 2, 2015

@shettyg That's an interesting thought. You're treating locally scoped drivers as a way to sidestep the multi-host parts of libnetwork, which, as you point out, only works if libnetwork defers more control to the driver.

mavenugo (Contributor) commented Sep 2, 2015

@shettyg a locally scoped driver will make sure libnetwork doesn't synchronize the networks and endpoints, and hence yes, it is up to the orchestration system to determine how this is handled across multiple hosts.
In the case of k8s, it will work, because it requires its own subnet space per host and doesn't require L2 mobility. And hence I suggested to open this PR. The drivers can determine what they prefer.

shettyg commented Sep 2, 2015

@mavenugo
Got it. In that case, @squaremo's suggestion is a nice starting point.

shettyg commented Sep 2, 2015

Another thought. Isn't a locally scoped driver the same as starting the Docker daemon with a libkv store that is local-only?

mavenugo (Contributor) commented Sep 2, 2015

@shettyg @squaremo @tomdee I don't think having another plugin type is a good idea. This is a property of the network driver and must be honored as such. Introducing another plugin type would call for more changes to the libnetwork core to look for more plugin types, when the functionality provided by the driver is the same.

Hence my suggestion is to add a capability negotiation in the Plugin API and exchange this info.

squaremo (Contributor) commented Sep 2, 2015

> Is "On a different host" compatible with a locally scoped driver?

Not in general. Because each host will assume it's acting only locally, it will cons a new UUID for a network it hasn't seen on that host. If "LocalScope" is being used to mean "let me do my own co-ordination", this is going to fail to do the expected thing.

mavenugo (Contributor) commented Sep 3, 2015

@squaremo correct. The bridge driver today is a local-scoped driver and it doesn't depend on the distributed states. The same, I think, will work for k8s and for other drivers such as the macvlan and ipvlan plugins.

squaremo (Contributor) commented Sep 3, 2015

@mavenugo Fair point about plugin types; minor shame to have another handshake exchange, but I agree it is better overall.

squaremo (Contributor) commented Sep 3, 2015

> bridge driver today is a localscoped driver and it doenst depend on the distributed states

Right; this leaves systems that do their own co-ordination high and dry, unfortunately.

mrjana (Contributor) commented Sep 3, 2015

@squaremo Are you looking for a notion of cluster to be provided to the drivers by libnetwork?

squaremo (Contributor) commented Sep 3, 2015

> Are you looking for a notion of cluster to be provided to the drivers by libnetwork?

This wouldn't help, since it would still be the case that drivers only see UUIDs, and these are constructed assuming that Docker's is the only shared state. So I would lean towards giving the drivers the information they need to do their own co-ordination, which pretty much means the user-supplied names.

thockin (Contributor) commented Sep 3, 2015

Yeah, this would be better. I can then have kubernetes orchestrate each
individual docker node to create a network "kubernetes" and use that in all
my docker run calls, without having that try to synchronize across nodes
if I happen to have a libkv driver installed for some other reason.


mavenugo (Contributor) commented Sep 3, 2015

@thockin k8s can still do that. If you are planning on using a local-scope driver and the network is created by k8s, it has the mapping between name <-> network-id across all the hosts.

thockin (Contributor) commented Sep 3, 2015

If I use a local-scope driver, won't every node have a different network ID for the "kubernetes" network?


mavenugo (Contributor) commented Sep 3, 2015

@thockin yes, it will be, just like the docker0 bridge today, which is different on each host.
If you want the "kubernetes" network to have the exact same ID across all the hosts, then you are essentially looking for a globally scoped driver.

thockin (Contributor) commented Sep 3, 2015

I don't want Docker to try to manage it globally because we have our own
API. I don't want to implement generic KV store on top of our structured
API. I'm forced to use local drivers, but I need the name so I can find
info in my own API. Your surrogate key is not useful to me.


mavenugo (Contributor) commented Sep 3, 2015

@thockin can you please help explain what you mean by "Your surrogate key is not useful to me"?
AFAIK, I don't own any surrogate key ;). Jokes apart, I would really like to understand your concern here so that we can find a balance between the Docker users and the Kubernetes users. Please note that Docker addresses more use-cases than Kubernetes.

jainvipin commented Sep 3, 2015

@tomdee, @shettyg

> @shettyg That's an interesting thought. You're treating locally scoped drivers as a way to sidestep multi-host parts of libnetwork. Which as you point out, only works if libnetwork defers more control to the driver.

If this works, then I would not have to think two ways to implement drivers when it runs as a plugin in Kubernetes vs natively as remote driver on libnetwork. Assume that I have KV store available.

I imagine by more control you mean providing network name in the API.

thockin (Contributor) commented Sep 3, 2015

Sorry, that came off snippier than I meant it.

My primary key is the network name. That's the key my API knows (or will know). That key exists before I ever call "docker network create".

I don't want docker to try to manage my driver globally because I already have a global control plane. So I have to use a local driver.

Because I have a local driver, every node is going to "docker network create" and get a different UUID.

When we join a container to a namespace you are only telling me the UUID. My driver cannot use that UUID to look anything up in my own API.

When I start offering multiple Networks, this just gets worse. I have to make my node-agent keep MORE side-band state that maps the UUID (returned from 'docker network create' right?) to network name and then publish that state to my driver. I can probably do that, but surely you see how this is a terrible hack just to work around docker.

I know other people have asked for network name - why is docker stonewalling the community on this seemingly tiny thing?
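The side-band state described above would amount to a per-node map from Docker's generated UUID back to the user-supplied name, maintained by the node agent. A minimal sketch; the type and function names are invented for illustration, and nothing like this exists in libnetwork itself:

```go
package main

import "fmt"

// networkCache is the hypothetical node-agent state: a per-host map from
// the UUID Docker generates to the name the agent asked for, which the
// agent must then publish to the driver out-of-band.
type networkCache struct {
	nameByID map[string]string
}

func newNetworkCache() *networkCache {
	return &networkCache{nameByID: make(map[string]string)}
}

// record is called once per host, right after `docker network create`
// returns the generated UUID for the requested name.
func (c *networkCache) record(uuid, name string) {
	c.nameByID[uuid] = name
}

// lookup is what the driver would use when libnetwork hands it only a UUID.
func (c *networkCache) lookup(uuid string) (string, bool) {
	name, ok := c.nameByID[uuid]
	return name, ok
}

func main() {
	c := newNetworkCache()
	c.record("fake-uuid-1234", "kubernetes") // the UUID is per-host; fake here
	if name, ok := c.lookup("fake-uuid-1234"); ok {
		fmt.Println("network name:", name) // prints "network name: kubernetes"
	}
}
```

Every node would have to maintain this map and keep the driver in sync with it, which is exactly the duplicated state that passing the name through the driver API would remove.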

dcbw commented Sep 4, 2015

The current side-band hack to retrieve network name is made worse by the fact that the docker+libnetwork remote driver API is synchronous, so the driver cannot query docker during a driver operation or docker will deadlock. Instead the remote driver must cache the UUID and then somehow right after the CreateNetwork() hook request the network name from the docker API. That's obviously racy since there's no guarantee that docker will receive the ListNetworks() request from the driver before libnetwork calls the driver again with CreateEndpoint() or some other call.

mavenugo (Contributor) commented Sep 4, 2015

@thockin @dcbw @squaremo I think we went way off tangent to the intent of the PR.

@thockin do you still see a need for this issue to be resolved?
@squaremo Can you please share your thoughts on the remote API implementation for the request raised in this Issue?

thockin (Contributor) commented Sep 4, 2015

Yes! Without the ability to have local-scope drivers, I can't use them at all, I think. But once we have local drivers, I need the name.

lxpollitt commented Sep 4, 2015

The key from my point of view is that (almost all) existing SDN solutions have their own control plane. So @thockin's comments around control plane, state and KV stores are not Kubernetes specific. If introducing the idea of a "local" remote driver can solve that while maintaining a consistent UX for Docker users then that is a huge win for everyone.

To maintain a consistent UX for Docker users though, things like docker network ls need to work across all hosts networked by the underlying SDN without the user having to run the same docker network create on every host. (If that's not the case then we have not maintained a consistent UX, which I believe is one of @mavenugo & @mrjana main focusses for libnetwork.) That in turn means that Docker libnetwork needs to defer the state ownership of network creation (including the network name) to the "local" remote driver. e.g. When a user runs docker network ls, libnetwork will need to ask the driver for the list of networks.

What do people think?
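Deferring state ownership that way would imply a driver-side listing call that does not exist today. A hypothetical request/response sketch, with the endpoint and fields invented for illustration:

```
GET /NetworkDriver.ListNetworks    <- hypothetical endpoint

{
  "Networks": [
    {"NetworkID": "<driver-known ID>", "Name": "foo"},
    {"NetworkID": "<driver-known ID>", "Name": "bar"}
  ]
}
```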

squaremo (Contributor) commented Sep 4, 2015

I think we went way off tangent to the intent of the PR.

If one is very literal about the description, maybe. I think the intention was to make it possible to use libnetwork in a distributed setting without involving all of its kv-store machinery, and from that point of view, the whole discussion was pertinent.

Can you please share your thoughts on the remote api implementation for the PR request ?

I don't mind doing that (at some point), but I think it is necessary to go further and address the other things that came up in discussion.


mavenugo commented Sep 4, 2015

If one is very literal about the description, maybe. I think the intention was to make it possible to use libnetwork in a distributed setting without involving all of its kv-store machinery,

@squaremo agreed & the reason this issue came about was to support exactly that requirement. We need a PR to back that. We can absolutely continue the other discussions, without delaying getting this issue resolved.


dcbw commented Sep 4, 2015

That in turn means that Docker libnetwork needs to defer the state ownership of network creation (including the network name) to the "local" remote driver. e.g. When a user runs docker network ls, libnetwork will need to ask the driver for the list of networks.

That would be a more perfect world I suppose, but how about a simpler first-step approach of (a) building libnetwork with the simple builtin local KV store (eg #466) to ensure docker restart keeps networks around, (b) having the control plane/Kubernetes add/remove networks via the docker API when it wants to, and (c) telling users who do 'docker network add type=' to just Not Do That?

If people would rather take longer and do it right from the start, that's fine too of course...


thockin commented Sep 5, 2015

I'm a big fan of incrementalism, these days


mavenugo commented Sep 5, 2015

@dcbw sounds good & this issue is one of the first steps. Would be great if someone can back it with a PR.

Can you please elaborate on this

(c) telling users who do 'docker network add type=' to just Not Do That?


squaremo commented Sep 5, 2015

(b) having the control plane/Kubernetes add/remove networks from via the docker API when it wants to

Doesn't that run into either the "network only exists locally" problem or the "libnetwork thinks each host has a different network" problem?


mavenugo commented Sep 5, 2015

@squaremo

Doesn't that run into either the "network only exists locally" problem or the "libnetwork thinks each host has a different network" problem?

For locally managed networks, it's up to the orchestration entity to create the network on the "required" hosts and the corresponding (locally scoped) driver to manage the forwarding. Examples of such drivers are the Mac/IPVlan drivers, where the orchestration manages each host's network with the appropriate configuration (subnet range to use, etc.)


mavenugo commented Sep 8, 2015

@thockin yes and that is the reason we have to make progress step by step. To me, having this issue resolved and providing the labels support gets us 90% (or even 100%) of all the requirements that we are discussing here.


thockin commented Sep 8, 2015

libnetwork will need to ask the driver for the list of networks.

direct impact on the user experience that libnetwork guarantees to the user.

hmm, I'm not sure I see that. If installing a FooBar driver suddenly made all of the FooBar networks appear in 'docker network ls', I would actually be pretty impressed - that would be a great user experience.


mavenugo commented Sep 8, 2015

@thockin

hmm, I'm not sure I see that. If installing a FooBar driver suddenly made all of the FooBar networks appear in 'docker network ls', I would actually be pretty impressed - that would be a great user experience.

What if the foobar driver decides to consume user-created foobaz network in some of the nodes, because it feels like it :)


FlorianOtel commented Sep 8, 2015

@mavenugo

Also, I would recommend folks to review #8951 where the discussion for native multi-host networking functionality in docker. If someone feels that docker should not do native multi-host networking, then they can always question the basic premise of this and we can have an open discussion on that. Infact we received similar comments from the vendor community that resulted in a polarizing effect moby/moby#8951 (comment).

Point taken. In the spirit of that disclaimer, I work for Nuage Networks and we do have a multi-host SDN solution (which predates Docker).

Back to your point, it's not that I think Docker shouldn't have a multi-host solution -- on the contrary. Yes, I'm aware that my (current) vendor affiliation does make me look biased, but -- hard as it may be to believe -- I do firmly believe that synchronising network state across nodes is a very hard problem. On second thought, make that I know first hand it is a hard problem, exacerbated by the scale and speed of the state changes imposed by the fast lifecycle of network containers (we have customers spinning up/down mid-hundreds of Docker containers per host in fast sequence). The fact that those distributed platforms have their own control plane distribution mechanism, separate from but needing to be kept in sync with my/the SDN vendor's state distribution mechanism, is a challenge we know first hand.

This fundamentally challenges #8951

And that I read as directing me there, the topic is closed, or both. Understand.

@thockin

This is the kludge I mentioned elsewhere. You're forcing me to lie to your API and work around it. Is that really the best we can do?

Apparently so to me / so far. Let's settle for the ability of driver to select Local (host-only) context, agree that lying to libnetwork is what we can do, let it do its own thing and we all keep going on with our merry lives.


thockin commented Sep 8, 2015

@mrjana

GlobalScope by definition means libnetwork needs to have a cluster-wide consistent view of the objects backed by those drivers. Isn't it? Why else would we have the term GlobalScope if libnetwork should not do "stuff" with it? If some other entity is always going to be the control plane why would we even bother designing and implementing libnetwork? Basically you are suggesting docker/libnetwork should not have it's own control plane. Why shouldn't docker try to enable distributed apps as it tried to enable successfully, the monolithic apps.

Then "Scope" is a complete misnomer. Had it originally been called useLibKV bool it would not have been a debate. And no, I don't think ANYONE suggested libnetwork (or properly docker) should not have a control plane, just that it should not insist on being the ONLY control plane. And before you say it doesn't have to be, please keep in mind that this issue was JUST filed, and the defaulting to global scope (and thereby libkv) has existed for the entirety of the libnetwork development cycle.

Resolution of this issue, combined with the new-found clarity give us 75% of what we need. Labels and network name will be 24 of the remaining 25%. I reserve 1% for future problems TBD. :)


mavenugo commented Sep 8, 2015

@FlorianOtel

Point taken.

We go back a long way. I need no disclaimer from you ;)


thockin commented Sep 8, 2015

What if the foobar driver decides to consume user-created foobaz in some of the nodes, because it feels like it :)

The user installed the foobar driver. You ought not go too far in protecting a user from themselves - it only makes you insane.


mavenugo commented Sep 8, 2015

@thockin and that is the exact reason we are trying to keep the management plane separated from the data plane. That is the premise of most of the SDN solutions out there. We expect the network driver to provide the data-plane guarantees, making sure the plumbing is done in the best possible way with their own awesome sauce. The management plane should not be mixed with this, and that is one of the fundamental points that we are trying to bring across.

In case you have not noticed, we just have another proposal on an IPAM driver. We heard the end-user requests to keep IP address management composable with the network drivers, and hence are trying to keep the management of IP addresses isolated and flexible for users and vendors alike. We can certainly do much more for various aspects of the management plane as well (hint: more coming).


mrjana commented Sep 8, 2015

please keep in mind that this issue was JUST filed, and the defaulting to global scope (and thereby libkv) has existed for the entirety of the libnetwork development cycle.

Well, libnetwork is not that old :-) and is still a maturing project. I will be the first to say that we still have a few loose ends like this to tie up. It was certainly not our intention to keep remote plugins as GlobalScope only


mavenugo commented Sep 8, 2015

please keep in mind that this issue was JUST filed, and the defaulting to global scope (and thereby libkv) has existed for the entirety of the libnetwork development cycle.

Well, libnetwork is not that old :-) and is still a maturing project. I will be the first to say that we still have a few loose ends like this to tie up. It was certainly not our intention to keep remote plugins as GlobalScope only

... and ... we are still in experimental mode ;). So technically, this is a bug that is found during the experimental phase.


FlorianOtel commented Sep 8, 2015

@thockin

And no, I don't think ANYONE suggested libnetwork (or properly docker) should not have a control plane, just that it should not insist on being the ONLY control plane.

+1


squaremo commented Sep 8, 2015

I even said in my previous post that this does not mean we don't want other control planes. In fact we want them all to work with docker. But driver api is not the place to make them work.

Where is the place to make them work, @mrjana?


mrjana commented Sep 8, 2015

@squaremo As I've mentioned in my other posts, the place to make them work is north of the docker/libnetwork api, very similar to what k8s is trying to do. The driver api is definitely not the place to expect a gateway to interface into a control plane. The control plane doesn't even belong in that layer. Once you have a control plane integrated on top of the docker api, you can have the driver portion of your solution plugged into docker using the driver api.


erikh commented Sep 8, 2015

Most of the problems @thockin & co have been discussing are real problems I have been working around too. The network name, in particular, should be propagated to plugins. I have experimented with a plugin-per-network, plugin-per-tenant-per-network, plugin-that-calls-back-into-docker, etc. None of these solutions are good, even if they all do technically work. Labels and options get us some of the way but we're still playing "fit the square peg into the round hole", looking for a loophole to push a network name in.

Docker should formally clarify the expectations of network plugins from a capability perspective, so we can design one that fits your model. Either that, or open up the capabilities of libnetwork to plugin authors so they can become more than what the creators expect.

I personally favor the latter. :)


bboreham commented Sep 9, 2015

@mrjana while it is natural for an orchestrator like Kubernetes to live north of docker, I struggle to see how a pure network implementation could have its control plane do anything north of the api. Can you expand a bit more how that would work?

Disclaimer: I work on weave Net.


rade commented Sep 9, 2015

@mrjana

the driver api was not designed with the goal of enabling vendor control planes. It was designed to integrate various low level networking technologies and as a way to hide those details from app developers.

AFAICT everybody who is trying to integrate their networking tech with Docker really wants the former, i.e. integrate both their data plane and control plane. AND they want to do so in a way that fits in with the concepts and UI Docker provides for networking, rather than opting out of that completely.


mrjana commented Sep 9, 2015

while it is natural for an orchestrator like Kubernetes to live north of docker, I struggle to see how a pure network implementation could have its control plane do anything north of the api. Can you expand a bit more how that would work?

@bboreham When I mentioned "control plane" I was referring mostly to the management side of the control plane, i.e. a control plane which has its own UI/API and has different constructs and abstractions than Docker's Networks/Endpoints etc. Such control planes most certainly need to sit on top of docker and treat docker as a low-level ingredient.

For control planes who want to use Docker's Networks/Endpoints abstraction there is no problem, until those control planes try to do things like advertise their drivers as LocalScope but really want GlobalScope behavior. There can't be any middle ground here. Either the solution embraces the docker's abstractions and API/UI or provides their own abstractions and API/UI north of docker

This was mainly a response to somebody suggesting sending docker network ls to the driver and my response to that is driver api is not the place to extend/customize docker's api/ui

The only thing that can happen in the future is provide a remote api extension point, very similar to ClusterHQ's powerstrip solution if the ecosystem wants management plane hooks. But that's where it should happen if it ever happens and the shape of such an extension point is yet to be determined.


mrjana commented Sep 9, 2015

@rade Please see my previous response


dcbw commented Sep 9, 2015

So I'm back from vacation and I'd like to summarize my understanding of the discussion so far WRT kubernetes. It seems like @thockin and k8s would be happy if:

  1. libnetwork should remove the restriction that 'remote' drivers are assumed to be global scope
  2. libnetwork should merge the labels PR so that control-plane-defined data can actually be passed to the driver
  3. libnetwork should pass the network name to the driver in the CreateNetwork request
  4. libnetwork would prefer that consumers that opt out of the docker/libnetwork control plane simply use docker as a mechanism and not UI/API

Does that sound right to everyone?


dcbw commented Sep 9, 2015

For control planes who want to use Docker's Networks/Endpoints abstraction there is no problem, until those control planes try to do things like advertise their drivers as LocalScope but really want GlobalScope behavior. There can't be any middle ground here. Either the solution embraces the docker's abstractions and API/UI or provides their own abstractions and API/UI north of docker

@mrjana As I understand it, "GlobalScope" really means "use Docker/libnetwork control plane", right? A driver that has its own control plane should currently advertise itself as LocalScope, which is where the confusion stems from. That driver is actually "global" scope, since it has its own control plane and could be orchestrating multiple instances of docker across nodes, just that docker has no knowledge of that.

So it would be clearer to just deprecate the Scope type, and instead add a new set of flags to the Capability type with one currently defined value: UseLibnetworkControlPlane. The 'overlay' driver would set this capability, and remote drivers could opt-in to it as well. All other drivers simply pass nothing for this flag.

If that makes sense I'll do a PR for it?

@WeiZhang555

WeiZhang555 commented Sep 10, 2015

Contributor

First, I'm sorry that I didn't read all of your comments, so I may have missed part of the discussion. I'm doing some work on libnetwork and have found a possible solution for this; the related PR is already pushed. The code may need a little reorganising, but let's first see if this solution is OK for you.
With that patch, we can use docker daemon --label=com.docker.network.driver.MyNetPlugin.scope=local to set our private network plugin to run in local scope instead of the default global scope, making the most of the label capability.
You guys can have a look and tell me your opinion. :)


@mavenugo

mavenugo commented Sep 10, 2015

Contributor

@WeiZhang555 thank you for actually pushing a PR to get it resolved :-) Have added comments with my opinion. PTAL.


@dcbw

dcbw commented Sep 10, 2015

@mavenugo would you object to changing the Scope definitions into just a single capability for whether or not the plugin wants to use the libnetwork control plane? I think that would remove some of the confusion we have here.


@lxpollitt

lxpollitt commented Sep 10, 2015

@dcbw I would say that strictly speaking, local-scoped drivers still use libnetwork's control plane. e.g. docker network ls will be handled by libnetwork because libnetwork is (currently) the master source of truth. This is a deliberate design choice by the libnetwork team, which they see as key to some subtle UX differences they think are really important.

My view (and I know there will be many others) is that local-scoped and global-scoped do have clear meanings currently and we should respect that. Of course that's completely separate from the desire many of us have expressed for a third kind of driver that uses libnetwork to provide the UX (as close as possible to existing drivers) but does not use libnetwork's control plane. I would love to see that happen, but I'm not sure changing the existing scope definitions is the best way of achieving that. I think it might just confuse things further.


@dcbw

dcbw commented Sep 10, 2015

Ok, so per #139 libnetwork will never, ever pass the network name down to drivers. @thockin is that a deal-breaker for the k8s side?


@dcbw

dcbw commented Sep 10, 2015

My view (and I know there will be many others) is that local-scoped and global-scoped do have clear meanings currently and we should respect that.

@lxpollitt I'm not sure I'd agree with "clear meanings". The meaning for GlobalScope is:

// GlobalScope represents the driver capable of providing networking services for containers across hosts

which is certainly true of drivers that do this outside of docker's control plane, yet these drivers have to advertise themselves as LocalScope. Perhaps just updating the API doc to specify that GlobalScope explicitly opts the driver into the docker control plane would be enough to clarify things.


@thockin

thockin commented Sep 10, 2015

Contributor

It isn't a deal breaker - we can force users to pass that redundantly as a label or we can snoop docker in the background. They are ugly and unfriendly answers, but if that's what docker thinks is the best way to do things, that's what we'll do (assuming we pursue libnetwork and kubernetes).

I don't like pushing boulders up hills, so I am going to stop assuming that it's open to significant changes and instead just work around stuff like this.

On Thu, Sep 10, 2015 at 9:03 AM, Dan Williams notifications@github.com wrote:

Ok, so per #139 libnetwork will never, ever pass the network name down to drivers. @thockin is that a deal-breaker for the k8s side?

Reply to this email directly or view it on GitHub: #486 (comment).


@rade

rade commented Sep 10, 2015

The labels hack does not work in all cases. Specifically it wouldn't work for dynamically attaching containers to a network, since labels can only be set at container creation time.


@WeiZhang555

WeiZhang555 commented Sep 11, 2015

Contributor

@mavenugo

@shettyg @squaremo @tomdee i dont think having another plugin endpoint is a good idea. This is a property of the network driver and must be honored as such. Introducing another plugin type will call for more changes to the libnetwork core to look for more plugin types, when the functionality provided by the driver is the same.

I agree with you that the functionality provided by the driver is the same, but why would it call for more changes to the libnetwork core? I have tried to bring it to reality and found that it doesn't need too many changes.

Hence my suggestion is to add a capability negotiation in the Plugin API & exchange this info,

Do you mean another negotiation after the handshake, or negotiation during the handshake? Does this need to change the whole plugin mechanism?


@mavenugo

mavenugo commented Sep 11, 2015

Contributor

@WeiZhang555 yes, the current changes are small, but I am more worried about the long-term impact and the correctness of the design. Once we start to treat it as a different plugin, even a slight change to the driver API in one of the plugins will force libnetwork to handle two different cases at the level of driver interaction. It can quickly become unmanageable.
So, IMO, since this is just a property of the network driver, with all other APIs remaining exactly the same, the approach must be to keep the plugin type the same, but introduce a way to get the driver capability (immediately after the initial plugin handshake). For example, we could introduce a new rpc endpoint, something like

d.call("GetCapability", nil, &api.DriverCapabilityResponse{})

can replace today's code that statically defines the capability as GlobalScope.

c := driverapi.Capability{
        Scope: driverapi.GlobalScope,
}

just before invoking the dc.RegisterDriver here : https://github.com/docker/libnetwork/blob/master/drivers/remote/driver.go#L33

Do you think we can get that done ?


@mrjana

mrjana commented Sep 11, 2015

Contributor

@rade I don't think anybody is talking about container labels here. These are network labels attached to network objects and created during network creation time.


@WeiZhang555

WeiZhang555 commented Sep 12, 2015

Contributor

@mavenugo Got your point; you gave a really clear and detailed explanation. I'd like to take another shot if you still need this. I think it won't take much time to get that done. :)


@mavenugo

mavenugo commented Sep 12, 2015

Contributor

Absolutely. Please go ahead.

-Madhu

On Sep 11, 2015, at 8:30 PM, zhangwei_cs notifications@github.com wrote:

@mavenugo Got your point, you give a really clear and detailed illustration, I'd like to take another shot if you still need this, I think it won't take much time to get that done. :)




@mavenugo

mavenugo commented Sep 15, 2015

Contributor

Closing this issue via #516. Thanks @WeiZhang555 for the patch & everyone for a good discussion.


@blaggacao

blaggacao commented Feb 8, 2018

I read it all and very thankfully. Now, I'm a little less ignorant on how moby was born. 😉

