race conditions lead to duplicate docker networks #20648
Thanks for reporting! The possibility of having duplicate networks was previously discussed in #18864, and is by design; however, I can see this being problematic in Compose, if services are potentially connected to different networks with the same name.

Thanks @thaJeztah for the quick reply, and sorry for missing the duplicate report.

@rpeleg1970 I think it's fine to keep it open here as well, but reporting to the compose team may be a good thing, because this will more likely hit Compose users. Be sure to check if there's an existing issue for this (they just released 1.6.2, so be sure to check if this issue may be resolved in that release).
@thaJeztah I believe this shouldn't be happening when "check duplicate" is set, as the docker cli does: https://github.com/docker/docker/blob/v1.10.2/api/client/network.go#L78 It looks like the bug is two clients racing to create a non-duplicate network and both succeeding, because check+create isn't atomic:

- client 1 - check - doesn't exist
- client 2 - check - doesn't exist
- client 1 - create - succeeds
- client 2 - create - succeeds

From #18864 (comment): "The checkDuplicate option is just there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions."
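The check-then-create race described above can be sketched outside of docker entirely. This is a minimal illustration, not the daemon's code: a shared list stands in for the daemon's network store, and a `threading.Lock` shows how making check+create a single unit prevents the duplicate.

```python
import threading
import time

networks = []            # stands in for the daemon's network store
lock = threading.Lock()

def create_racy(name):
    # check - doesn't exist
    if name not in networks:
        time.sleep(0.05)  # window in which the other client also checks
        # create - succeeds
        networks.append(name)

def create_atomic(name):
    with lock:            # check+create now runs as one unit
        if name not in networks:
            networks.append(name)

results = {}
for create in (create_racy, create_atomic):
    networks.clear()
    threads = [threading.Thread(target=create, args=("xyz",)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    results[create.__name__] = networks.count("xyz")

print(results)  # typically {'create_racy': 2, 'create_atomic': 1}
```

With the racy version, both "clients" pass the existence check before either creates, so the network ends up created twice; with the lock held across the whole section, the second caller sees the first caller's network.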
…work name and create new network Signed-off-by: Ron Peleg <ron.peleg@gmail.com>
Sorry for the mess - I closed all the others and now have a single pull request open, with a single commit in it, implementing the review comments.

@rpeleg1970 no worries, really! Thanks so much for working on this

We've had far worse situations with people resubmitting a new PR for each typo fixed, haha

I'll keep in mind for next time ;) Cheers
Will this problem occur on cross-host global network creation? The PRs above use a local lock to solve this problem. Should we use libkv's lock to ensure network creation has no collisions? @thaJeztah here is a distributed-creation example:

```python
import docker
import multiprocessing

default1 = "192.168.99.101:2375"
default2 = "192.168.99.102:2375"
networkName = "multi-host-network"

def createNetwork(host):
    client = docker.Client(base_url=host)
    print(client.create_network(name=networkName, driver="overlay", check_duplicate=True))
    client.close()

if __name__ == "__main__":
    pool = multiprocessing.Pool(5)
    pool.map(createNetwork, (default1, default2))
    pool.close()
    pool.join()
```

After running this test, I got two networks with the same name "multi-host-network".
This is still reproducible in 18.02 (using the cli).

@vdemeester This is by design

hum so should we close this issue then ? 😇

This is pretty frustrating to deal with for situations / tools like Compose (which seems to have not really solved this?). The Compose team seems to be repeatedly stating that this is a bug in the engine that should be solved here; at most they seem to be passing checkDuplicate, which appears to be best-effort only. #40901 (comment) suggests that if two networks with the same name are created, even if you use the IDs you will not be able to start containers attached to them... Is this really still intended / supported behavior? It seems pretty surprising / inconsistent with volumes and containers, and a bit user-hostile.

I suggest that YES, it is OK to have multiple networks with the same name; I do not care, it might be useful (??). Anyone have any suggestions for this? Please do comment in the thread.
If you create the network in a script, you can work around the duplicated-network issue by serializing the `docker network create` calls, e.g. under a file lock. More details can be seen in the SO answer: https://stackoverflow.com/a/68448059/4850646 This may not be so easily achievable outside a shell script. That said, this is just a workaround, and I think that a proper solution should be in the docker engine (if having duplicated named networks is by design, I would expect at least some flag to disable this behaviour, or some CLI & compose option to make sure that the network name is unique, even if by default duplicates are allowed).
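The script-level workaround can be sketched as follows. This is a minimal illustration under assumptions, not the SO answer verbatim: `fake_docker_network_create` is a stand-in for shelling out to `docker network create`, and the lock-file path is arbitrary. Note that `fcntl.flock` only serializes callers on the same host, so it won't help with a shared remote engine.

```python
import fcntl
import os
import tempfile

created = []  # simulates the daemon's list of existing networks

def fake_docker_network_create(name):
    # Stand-in for `docker network create`: a non-atomic check-then-create.
    if name not in created:
        created.append(name)

def ensure_network(name, lockfile):
    # Hold an exclusive file lock across check+create so that concurrent
    # scripts on the same host serialize instead of racing.
    with open(lockfile, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            fake_docker_network_create(name)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

lockfile = os.path.join(tempfile.gettempdir(), "net-xyz.lock")
ensure_network("xyz", lockfile)
ensure_network("xyz", lockfile)
print(created)  # ['xyz']: only one network, however many callers
```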
If you're sharing a remote docker engine, or another tool may create the network, that solution won't be sufficient. Currently in KIND our approach is to check for and remove duplicate networks after creation by deterministically sorting them (including by creation time), and to only use the resulting network by name, not ID (so we don't care if the network created by our process winds up being a deleted duplicate). This seems to work fine. https://github.com/kubernetes-sigs/kind/blob/754da2484d288aa41d87606fb859153a3c5cb9f6/pkg/cluster/internal/providers/docker/network.go#L128
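KIND's cleanup strategy can be sketched roughly as follows. This is an illustrative reduction, not KIND's Go code: the dicts stand in for `docker network inspect` output, the actual deletion of the extras (`docker network rm <id>`) is left out, and the exact sort key (creation time, then ID as a tiebreaker) is an assumption about what "deterministically sorting" means.

```python
def dedupe_networks(networks):
    # Sort duplicates deterministically so every concurrent creator picks
    # the same survivor; everything after the first entry gets removed.
    survivors = sorted(networks, key=lambda n: (n["Created"], n["Id"]))
    keep, extras = survivors[0], survivors[1:]
    return keep, extras

# Two networks with the same name, as might result from the race above.
duplicates = [
    {"Id": "b2", "Name": "kind", "Created": "2023-01-02T00:00:00Z"},
    {"Id": "a1", "Name": "kind", "Created": "2023-01-01T00:00:00Z"},
]
keep, extras = dedupe_networks(duplicates)
print(keep["Id"])                 # a1: the oldest network wins
print([n["Id"] for n in extras])  # the newer duplicate would be removed
```

Because every participant computes the same survivor independently, no coordination is needed; referring to the network by name afterwards works even if your own created network was one of the deleted duplicates.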
Fixes moby#18864, moby#20648, moby#33561, moby#40901.

[This GH comment][1] makes clear network name uniqueness has never been enforced due to the eventually consistent nature of Classic Swarm datastores:

> there is no guaranteed way to check for duplicates across a cluster of
> docker hosts.

And this is further confirmed by other comments made by @mrjana in that same issue, eg. [this one][2]:

> we want to adopt a schema which can pave the way in the future for a
> completely decentralized cluster of docker hosts (if scalability is
> needed).

This decentralized model is what Classic Swarm was. It's been superseded since then by Docker Swarm, which has a centralized control plane.

To circumvent this drawback, the `NetworkCreate` endpoint accepts a `CheckDuplicate` flag. However it's not perfectly reliable as it won't catch concurrent requests.

Due to this design decision, API clients like Compose have to implement workarounds to make sure names are really unique (eg. docker/compose#9585). And the daemon itself has seen a string of issues due to that decision, including some that aren't fixed to this day (for instance moby#40901):

> The problem is, that if you specify a network for a container using
> the ID, it will add that network to the container but it will then
> change it to reference the network by using the name.

To summarize, this "feature" is broken, has no practical use and is a source of pain for Docker users and API consumers. So let's just remove it for _all_ API versions.

[1]: moby#18864 (comment)
[2]: moby#18864 (comment)

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
#46251 has been merged. Starting with the next release, it won't be possible to create networks with duplicate names (even when doing concurrent API calls), so let me close this one.
When using `docker network create`, or via `docker-compose`, concurrent calls will create a duplicate network.

Output of `docker version`:

Output of `docker info`:

Provide additional environment details (AWS, VirtualBox, physical, etc.):
vagrant box ubuntu trusty, running on VirtualBox 4.3.34 on a Mac mini, OS X 10.10.5

List the steps to reproduce the issue:

```
docker network create xyz & docker network create xyz
```

Describe the results you received:
network `xyz` is created twice.

Describe the results you expected:
Expected is a single network. This causes confusion with connected services.

Provide additional info you think is important:
The original issue was caused by 3 docker-compose calls that started concurrently, each launching a different service in a compose file; this caused the network to be created 3 times, and each service was on a different instance.