
race conditions lead to duplicate docker networks #20648

Closed
rpeleg1970 opened this issue Feb 24, 2016 · 19 comments
Labels
area/networking kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. version/1.10

Comments

@rpeleg1970

When using docker network create, or via docker-compose - concurrent calls will create a duplicate network.

Output of docker version:

Client:
 Version:      1.10.2
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   c3959b1
 Built:        Mon Feb 22 21:37:01 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.2
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   c3959b1
 Built:        Mon Feb 22 21:37:01 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 5
 Running: 5
 Paused: 0
 Stopped: 0
Images: 24
Server Version: 1.10.2
Storage Driver: devicemapper
 Pool Name: docker-8:1-262711-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.007 GB
 Data Space Total: 107.4 GB
 Data Space Available: 37.2 GB
 Metadata Space Used: 4.063 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.143 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/231072.231072/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
 Metadata loop file: /var/lib/docker/231072.231072/devicemapper/devicemapper/metadata
 Library Version: 1.02.77 (2012-10-15)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins: 
 Volume: local
 Network: bridge null host
Kernel Version: 3.13.0-67-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.955 GiB
Name: vagrant-ubuntu-trusty-64
ID: XHSN:N4BF:J6HX:PVNW:GP7J:QES5:QYLN:W7YY:NFJK:4OKF:M2GH:6GHD
WARNING: No swap limit support

Provide additional environment details (AWS, VirtualBox, physical, etc.):
vagrant box ubuntu trusty, running on virtual-box 4.3.34 on mac mini OSX 10.10.5

List the steps to reproduce the issue:

  1. docker network create xyz & docker network create xyz

Describe the results you received:
network xyz is created twice:

vagrant@vagrant-ubuntu-trusty-64:~$ docker network ls
NETWORK ID          NAME                DRIVER
f0be1507dc2b        xyz                 bridge              
f091a7d18a7f        xyz                 bridge              
9bfb77c3d0c1        none                null                
fb8f222d4bb7        host                host                

Describe the results you expected:
Expected is a single network. This causes confusion with connected services.

Provide additional info you think is important:
The original issue was caused by 3 docker-compose calls that started concurrently, each launching a different service from the same compose file; this caused the network to be created 3 times, and each service ended up on a different instance.

@thaJeztah thaJeztah added kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. area/networking labels Feb 24, 2016
@thaJeztah
Member

Thanks for reporting! The possibility of having duplicate networks was previously discussed in #18864, and is by design. However, I can see this being problematic in compose, if services can end up connected to different networks with the same name.

@rpeleg1970
Author

Thanks @thaJeztah for the quick reply, and sorry for missing the duplicate report.
Should I take this to the compose team? Anyway, just to be clear, the workaround is simple enough: we manually create the default network upfront, which ensures a single instance with that name.

@thaJeztah
Member

@rpeleg1970 I think it's fine to keep it open here as well, but reporting it to the compose team may be a good thing, because this will more likely hit compose users. Check if there's an existing issue for it first (they just released 1.6.2, so also check whether this issue may be resolved in that release).

@skohanim

@thaJeztah I believe this shouldn't be happening when "check duplicate" is set, such as with the docker cli: https://github.com/docker/docker/blob/v1.10.2/api/client/network.go#L78
This doesn't happen if we run serially:
docker network create xyz && docker network create xyz
instead of in parallel:
docker network create xyz & docker network create xyz

It looks like the bug is two clients racing to create a non-duplicate network and both succeeding, because check+create isn't atomic:

client 1 - check - doesn't exist
client 2 - check - doesn't exist
client 1 - create
client 2 - create
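This interleaving can be reproduced deterministically with a small sketch (not Docker's actual code: an in-memory list stands in for the daemon's network store, and a barrier forces both "clients" to finish their check before either creates):

```python
import threading

networks = []                 # stands in for the daemon's network store
barrier = threading.Barrier(2)

def create_network(name):
    # 1. check: is the name taken? (both clients pass this check)
    exists = name in networks
    barrier.wait()            # force both threads to check before either creates
    # 2. create: check+create is not atomic, so both append
    if not exists:
        networks.append(name)

t1 = threading.Thread(target=create_network, args=("xyz",))
t2 = threading.Thread(target=create_network, args=("xyz",))
t1.start(); t2.start(); t1.join(); t2.join()
print(networks)  # ['xyz', 'xyz'] -- a duplicate, mirroring `docker network ls`
```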

@rade

rade commented Feb 24, 2016

@skohanim

I believe this shouldn't be happening when there is "check duplicate" set

From #18864 (comment): "The checkDuplicate option is just there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions."

rpeleg1970 added a commit to rpeleg1970/docker that referenced this issue Mar 2, 2016
…work name and create new network

Signed-off-by: Ron Peleg <ron.peleg@gmail.com>
rpeleg1970 added a commit to rpeleg1970/docker that referenced this issue Mar 2, 2016
Signed-off-by: Ron Peleg <ron.peleg@gmail.com>
@rpeleg1970
Author

Hi, I added 2 pull requests: one is simpler, the other more optimized. See #20878 and #20877.
Hope one of them sticks!
Ron

rpeleg1970 added a commit to rpeleg1970/docker that referenced this issue Mar 3, 2016
Signed-off-by: Ron Peleg <ron.peleg@gmail.com>
Signed-off-by: ronp@winter <ron.peleg@trusteer.com>
rpeleg1970 added a commit to rpeleg1970/docker that referenced this issue Mar 3, 2016
Signed-off-by: Ron Peleg <ron.peleg@gmail.com>
@rpeleg1970
Author

Sorry for the mess - I closed all the others and now have a single pull request open, with a single commit in it, implementing the review comments.

@thaJeztah
Member

@rpeleg1970 no worries, really! Thanks so much for working on this

@thaJeztah
Member

We've had far worse situations with people resubmitting a new PR for each typo fixed, haha

@rpeleg1970
Author

I'll keep that in mind for next time ;)

Cheers
R


@BSWANG
Contributor

BSWANG commented Apr 8, 2016

Will this problem also occur on cross-host global network creation? The PRs above use a local lock to solve this problem. Should we use libkv's lock to ensure network creation doesn't collide? @thaJeztah

Here is a distributed creation example:

import docker
import multiprocessing

default1 = "192.168.99.101:2375"
default2 = "192.168.99.102:2375"
networkName = "multi-host-network"


def createNetwork(host):
    # docker-py's pre-2.0 client; check_duplicate is only a best-effort check
    client = docker.Client(base_url=host)
    print(client.create_network(name=networkName, driver="overlay", check_duplicate=True))
    client.close()


if __name__ == "__main__":
    pool = multiprocessing.Pool(5)
    pool.map(createNetwork, (default1, default2))
    pool.close()
    pool.join()

After running this test, I got two networks with the same name "multi-host-network":

docker network ls
NETWORK ID          NAME                 DRIVER
36a03ed3b92d        multi-host-network   overlay
ddb9afde2e03        multi-host-network   overlay
33c8003ae961        docker_gwbridge      bridge
f61baf59cb82        bridge               bridge
c4f37afc77ba        none                 null
1866fd032f2e        host                 host

@vdemeester
Member

This is still reproducible in 18.02 (using the CLI)
cc @thaJeztah @selansen @ctelfer @ddebroy

@AkihiroSuda
Member

@vdemeester This is by design
#18864 (comment)

@vdemeester
Member

Hmm, so should we close this issue then? 😇

@BenTheElder

This is pretty frustrating to deal with for situations/tools like compose (which doesn't seem to have really solved this?).

The compose team seems to be repeatedly stating that this is a bug in the engine that should be solved here, at most they seem to be passing checkDuplicate, which appears to be best effort only.

#40901 (comment) suggests that if two networks with the same name are created, even if you use the IDs you will not be able to start containers attached to them ...

Is this really still intended / supported behavior? It seems pretty surprising / inconsistent with volumes and containers, and a bit user-hostile.

@eyJhb

eyJhb commented Jul 2, 2020

I suggest that YES, it is OK to have multiple networks with the same name; I do not care, it might even be useful (??).
BUT! As stated in my issue #40901, if a container is created with the network ID specified, it should continue to use that ID instead of the name.

Anyone have any suggestions for this? Please do comment in the thread.
Maybe @thaJeztah you have some time for this?

@lucasbasquerotto

lucasbasquerotto commented Jul 19, 2021

If you create the network in a script, you can easily solve the duplicated-network issue by executing a run-one command to create the network (instead of calling it directly), to make sure that there aren't 2 processes trying to create the same network.

More details can be seen in the SO answer: https://stackoverflow.com/a/68448059/4850646

This may not be so easily achievable with docker-compose though, unless you run it from a script that you can easily change and you know the names of the networks beforehand.

That said, this is just a workaround, and I think that a proper solution should be in the docker engine (if having duplicate named networks is by design, I would expect at least some flag to disable this behaviour, or some CLI & compose option to make sure that the network name is unique, even if by default duplicates are allowed).
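For a single host, the run-one idea can be sketched with an advisory file lock. This is only an illustration under stated assumptions: an in-memory list stands in for the actual `docker network create` call, the lock path is arbitrary, and, as noted below, a host-local lock cannot help with a shared remote engine or multiple hosts:

```python
import fcntl
import os
import tempfile
import threading

lock_path = os.path.join(tempfile.gettempdir(), "net-create.lock")
networks = []  # stands in for the daemon's network store

def create_once(name):
    # serialize check+create under an exclusive advisory lock (like run-one)
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)
        if name not in networks:
            networks.append(name)
        # lock is released when the file is closed

threads = [threading.Thread(target=create_once, args=("xyz",)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(networks)  # ['xyz'] -- a single network, even with concurrent callers
```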

@BenTheElder

If you're sharing a remote docker engine, or another tool may create the network, that solution won't be sufficient.

Currently in KIND our approach is to check for and remove duplicate networks after creation by deterministically sorting them (including by creation time) and only use the resulting network by name not ID (so we don't care if the network created by our process winds up being a deleted duplicate). This seems to work fine. https://github.com/kubernetes-sigs/kind/blob/754da2484d288aa41d87606fb859153a3c5cb9f6/pkg/cluster/internal/providers/docker/network.go#L128
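The deterministic-sort part of that approach can be sketched as follows; the field names and structure here are assumptions for illustration, not KIND's actual types (sort duplicates by creation time, break ties by ID, keep the first, delete the rest):

```python
def networks_to_delete(networks):
    """Given dicts with 'id' and 'created' fields (as one might collect from
    inspecting each duplicate), keep the oldest network (ties broken by id)
    and return the ids of the duplicates to remove."""
    ordered = sorted(networks, key=lambda n: (n["created"], n["id"]))
    return [n["id"] for n in ordered[1:]]

# duplicate networks named "xyz", using the IDs from the report above
dupes = [
    {"id": "f091a7d18a7f", "created": "2016-02-24T10:00:01Z"},
    {"id": "f0be1507dc2b", "created": "2016-02-24T10:00:00Z"},
]
print(networks_to_delete(dupes))  # ['f091a7d18a7f'] -- keep older f0be1507dc2b
```

Because the sort is deterministic, every concurrent creator converges on the same survivor, so it doesn't matter whose create "won".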

akerouanton added a commit to akerouanton/docker that referenced this issue Aug 17, 2023
Fixes moby#18864, moby#20648, moby#33561, moby#40901.

[This GH comment][1] makes clear network name uniqueness has never been
enforced due to the eventually consistent nature of Classic Swarm
datastores:

> there is no guaranteed way to check for duplicates across a cluster of
> docker hosts.

And this is further confirmed by other comments made by @mrjana in that
same issue, eg. [this one][2]:

> we want to adopt a schema which can pave the way in the future for a
> completely decentralized cluster of docker hosts (if scalability is
> needed).

This decentralized model is what Classic Swarm was. It's been superseded
since then by Docker Swarm, which has a centralized control plane.

To circumvent this drawback, the `NetworkCreate` endpoint accepts a
`CheckDuplicate` flag. However it's not perfectly reliable as it won't
catch concurrent requests.

Due to this design decision, API clients like Compose have to implement
workarounds to make sure names are really unique (eg.
docker/compose#9585). And the daemon itself has seen a string of issues
due to that decision, including some that aren't fixed to this day (for
instance moby#40901):

> The problem is, that if you specify a network for a container using
> the ID, it will add that network to the container but it will then
> change it to reference the network by using the name.

To summarize, this "feature" is broken, has no practical use and is a
source of pain for Docker users and API consumers. So let's just remove
it for _all_ API versions.

[1]: moby#18864 (comment)
[2]: moby#18864 (comment)

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
akerouanton added a commit to akerouanton/docker that referenced this issue Aug 17, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Aug 17, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 8, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 8, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 8, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 11, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 11, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 11, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 11, 2023
akerouanton added a commit to akerouanton/docker that referenced this issue Sep 12, 2023
@akerouanton
Member

#46251 has been merged. Starting with the next release, it won't be possible to create networks with duplicate names (even when doing concurrent API calls). So let me close this one.

akerouanton added a commit to akerouanton/docker that referenced this issue Dec 20, 2023