Error from daemon: Cannot start container ... container already joined endpoint #14169

Closed
Stellaverse opened this issue Jun 24, 2015 · 7 comments

@Stellaverse

Hi folks,

I realize this issue has been reported when using --net=host, but I'm hitting the same error without that option as well.

Install experimental version:

$ wget -qO- https://experimental.docker.com/ | sh

Docker Version:

Client version: 1.8.0-dev
Client API version: 1.20
Go version (client): go1.4.2
Git commit (client): f39b9a0
OS/Arch (client): linux/amd64
Experimental (client): true
Server version: 1.8.0-dev
Server API version: 1.20
Go version (server): go1.4.2
Git commit (server): f39b9a0
OS/Arch (server): linux/amd64
Experimental (server): true

Docker info:

Containers: 7
Images: 11
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 25
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 1
Total Memory: 992.5 MiB
Name: ip-172-31-31-102
ID: ZMSB:WBYM:2W3C:24LL:54CA:XEPU:IIIO:T725:EGCI:DDOL:WWCW:76EK
WARNING: No swap limit support
Experimental: true

uname -a

Linux ip-172-31-31-102 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Create a network and run a container:

sudo docker network create srv
sudo docker run -d --name Hello --publish-service hello.srv stellaverse/hello

And then another:

sudo docker run -d --name Hello2 --publish-service hello.srv stellaverse/hello

And this is the resulting error:

Error response from daemon: Cannot start container 01aadd181ba357ce8d8af5d4a9bdfc0cd2272e0fdf957450300083c967f24f46: a container has already joined the endpoint

Hope that helps narrow things down.

@GordonTheTurtle

Hi!

Please read this important information about creating issues.

If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information.

This is an automated, informational response.

Thank you.

For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues


BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:

docker version:
docker info:
uname -a:

Provide additional environment details (AWS, VirtualBox, physical, etc.):

List the steps to reproduce the issue:
1.
2.
3.

Describe the results you received:

Describe the results you expected:

Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO

@mavenugo
Contributor

@Stellaverse this is as per design. A service can be backed by only one container, so trying to create multiple containers with the same service (--publish-service hello.srv) is an invalid configuration. Can you try publishing a different service name for each of these containers?
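
For illustration, a minimal sketch of that suggestion with hypothetical service names hello1.srv and hello2.srv, so that each service is backed by exactly one container:

sudo docker run -d --name Hello --publish-service hello1.srv stellaverse/hello
sudo docker run -d --name Hello2 --publish-service hello2.srv stellaverse/hello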

@Stellaverse
Author

Well, that sucks. Guess I go back to Weave.

My use case (and I believe everyone's use case) is that I want to run multiple containers under the same service name (domain) and have my overlay/bridge network automatically balance load across all available containers at that endpoint, across my cluster.

If I have to supply a unique service endpoint for all containers I may as well stick with my User-Data.sh script.

@mavenugo
Contributor

@Stellaverse We are trying to gather user feedback for these experimental features under #14083. Can you please add your comment to that PR?

In general, the reason we designed a 1-1 service-to-backend mapping is to give users the flexibility to choose their own favorite load-balancing solution (instead of the Docker platform prescribing one by default). As an example, you could publish an HAProxy container via --publish-service, and behind that HAProxy container there could be multiple containers attached to a different network. The HAProxy container can be connected to both the provider and consumer networks, in effect providing the load-balancing service that you are looking for. Replacing the HAProxy with another container is just as easy with the service detach & service attach commands.
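
A rough sketch of that pattern, assuming two hypothetical networks (front and back) and the stock haproxy image; the exact syntax of the experimental service attach/detach commands is omitted here:

sudo docker network create front
sudo docker network create back
sudo docker run -d --name lb --publish-service hello.front haproxy
sudo docker run -d --name Hello --publish-service hello1.back stellaverse/hello
sudo docker run -d --name Hello2 --publish-service hello2.back stellaverse/hello

The lb container would then additionally be attached to the back network (using the experimental service attach command mentioned above), with its HAProxy config pointed at the hello1.back and hello2.back endpoints, so it can balance between them.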

@Stellaverse
Author

I see your point, and I appreciate the composability, but I've already got the setup you describe without using docker network at all.

On each instance in my cluster I've got one container called Proxy (node http-proxy) listening on 80/443. All requests from the outside internet (my ELB) hit Proxy and are then routed to the appropriate internal endpoint (10.0.0.1:3001, 10.0.0.2:3002, 10.0.0.3:3003, for example) using a bunch of glue that pulls the IP/port from the env vars created in the Proxy container at launch via --link. If I want a second instance of my container on the same host, I have to give it a different container name and link it up as well.
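
As a rough illustration of that --link wiring (the port 3000 and the node-proxy image name below are hypothetical):

sudo docker run -d --name hello1 stellaverse/hello
sudo docker run -d --name Proxy -p 80:80 -p 443:443 --link hello1:hello1 node-proxy

Inside Proxy, --link injects env vars such as HELLO1_PORT_3000_TCP_ADDR and HELLO1_PORT_3000_TCP_PORT (assuming the image exposes 3000), which the glue code reads to build its routing table; a second backend needs a new container name and another --link.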

I'm not seeing the advantage of using docker network to get around this. If I still have to use a second endpoint (e.g. hello2.srv), why not just continue using --link and eliminate the extra complexity altogether?

@mavenugo
Contributor

@Stellaverse I understand your point as well. But the difference here is that docker network with the overlay driver is multi-host enabled, so the internal endpoints can live on multiple hosts while still being on the same network. A single service backed by, say, the HAProxy container on the front-end network can distribute load across multiple backend endpoints spread over the hosts, without having to spawn the externally facing service on each and every host. There is also no need for --link.
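
For illustration, a sketch of that multi-host flow, assuming the experimental daemons on each host are already wired to a shared key-value store so the overlay driver can span hosts, and again using the stock haproxy image as a hypothetical front end:

sudo docker network create -d overlay srv
sudo docker run -d --name lb --publish-service hello.srv haproxy
sudo docker run -d --name Hello --publish-service hello1.srv stellaverse/hello
sudo docker run -d --name Hello2 --publish-service hello2.srv stellaverse/hello

The network is created once and is visible from every host; lb runs on one host as the single externally facing service, while the Hello backends can be started on any other hosts and still be reachable from lb over the overlay network.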

@Stellaverse
Author

Ah, so I can have multiple containers publishing the same service domain (e.g. hello.srv), but there can only be 1 per host?
