[FEATURE REQUEST] Be able to use all the Portainer built-in functionalities in all the containers running in a swarm cluster #461
Currently, when running under Docker Swarm mode, Portainer has visibility of services and their tasks, but from a task you cannot get the logs, a console session, etc.
In the containers tab you only have access to the containers running on the host Portainer is connected to. If you want this functionality for containers running on other nodes of the cluster, you need to add the connection details manually, which is not a valid solution in an elastic swarm cluster where nodes are added and removed continuously.
Proposal: from the service view, when going to the task list, provide the same functionality that we currently have in the containers section.
Docker will probably add more built-in functionality to support this natively in the future, but I have not seen this feature on their roadmap. It shouldn't be hard to get a working solution with relatively little effort.
How to do it?
Thanks to the "global service" concept in Swarm mode, we can run a service while ensuring that each node runs one of its tasks. If the cluster adds more nodes, Swarm is responsible for launching a new task on each of them; it's completely automatic.
Thanks to this, and the possibility of exposing the Docker socket via a service, it shouldn't be hard for Portainer to connect to each task of this service.
To be more concrete, if I launch this configuration:
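The original configuration is not preserved in this thread; a minimal sketch of what such a global socket-proxy service could look like (service name and image are illustrative, not from the original post):

```shell
# Run one socket-proxy task on every node (global mode). Each task
# republishes the node's local Docker socket on TCP port 2375
# inside the service's network.
docker service create \
  --name docker-proxy \
  --mode global \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  alpine/socat \
  tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
```

Because the service is global, Swarm automatically starts an extra proxy task on every node that joins the cluster.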
This version of Portainer would be aware of the docker-proxy service, internally doing a DNS call like:
Once it knows the endpoints, it can connect to each Docker daemon at those addresses in the same way it connects to the local socket daemon.
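The exact DNS call is not preserved in the thread; in Swarm mode, the embedded DNS server resolves `tasks.<service-name>` to the IP of every running task of that service, so the lookup might be sketched as (service name assumed from the discussion above):

```shell
# From any container attached to the same overlay network:
# tasks.docker-proxy returns one A record per running task.
nslookup tasks.docker-proxy
```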
An extra thought on using the IP addresses of the nodes exposed by the API:
A Swarm-mode cluster can be created with only the manager exposing port 2375 to manage the cluster; all workers can run exposing only the Unix socket. So this approach cannot be used.
@man4j that's simple: at the moment you can't see all the containers in the cluster because the Docker API does not expose that (see it as the equivalent of
The motto of Portainer is: simplicity. We should be able to add this feature in Portainer and make it simple and easy to enable for our users.
Besides, even if you enable TCP on a worker (which might only be your case), some users will use a TLS setup, and others will want to secure their workers by not enabling a TCP socket on them at all. We need to provide a solution that works in all these cases.
I think it's complicated today to detect the swarm cluster endpoint list (with access) based on the existing API.
On the swarm manager, we can get all linked endpoints to get an aggregated view of containers/images/networks, and list them with a new "endpoint" column to see where each object is defined.
Like the mechanism of the endpoints.json file, it could be interesting if Portainer could check DNS-SRV records to auto-discover endpoints.
Docker Swarm embeds a DNS server by default for communication between containers.
Prometheus, for example, uses that to discover targets dynamically.
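As a hedged illustration of why SRV records suit endpoint discovery (the record name below is hypothetical): unlike a plain A lookup, an SRV answer carries a port and target host per entry, so agents listening on different ports could still be found.

```shell
# Query SRV records for a hypothetical agent service; each answer line
# contains priority, weight, port, and target hostname.
dig +short SRV _portainer-agent._tcp.example.com
```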
I think it could be great if we can propose a stack (docker-compose v3) with portainer and a
Latest thoughts on the subject.
Following a discussion with @bvis on Slack, each agent should actually be created inside an overlay network (Portainer should be started in this overlay network too). The communication between the agents and the main Portainer instance would then happen within this overlay network, removing the need to expose the agent on each node.
It implies the following:
As authentication and UAC are stored inside the main Portainer instance, any query must be executed against the Portainer instance.
In this context, the aggregated response would have UAC applied and access to resources filtered.
Service creation would be something similar to:
```shell
# Requires the creation of an overlay network when deploying Portainer
$ docker network create \
  --driver overlay \
  portainer-network

# Deploy Portainer
$ docker service create \
  --name portainer \
  --network portainer-network \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=//var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock

# Automated from Portainer
$ docker service create \
  --name portainer-agent \
  --network portainer-network \
  --mount type=bind,src=//var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer-agent
```
Multiple solutions can be used to register the agents into the main Portainer instance:
Still need to figure out which way would be best. Note that I'm not sure how to retrieve the Portainer API location in (B) (should we use the browser URL? e.g. when browsing http://myportainer.domain.com the API should be located at
By default, Swarm workers only expose the Docker engine via the socket.
We could secure the communication between the agents using the following methods:
A list of things that we still need to think about:
I personally prefer option (a), as it makes the entire swarm cluster deployment more compact, but it's not a hard task to deploy a global service of Portainer agents.
I propose considering the gRPC protocol for secure and fast communication with the Portainer agents. http://www.grpc.io/
I've been following this with interest as it would make Portainer much more useful to us.
Rather than needing to expose each instance with
If it's any use: for another project I used Nginx with SSL certificates to proxy the Docker socket ports. Something similar would mean you would only need to run Portainer on the "master", with a "socket-proxy" container on the overlay networks. This proxies all the REST calls OK, but I don't know if it would work with the tty/shell remote access.
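The Nginx configuration itself isn't included in the comment; since socat-based agents come up elsewhere in this thread, the same idea can be sketched with a TLS-wrapped socat proxy (paths and port are illustrative assumptions):

```shell
# Expose the local Docker socket over TLS on port 2376. verify=1 requires
# clients to present a certificate signed by the given CA, so only trusted
# Portainer instances can reach the daemon.
socat OPENSSL-LISTEN:2376,fork,reuseaddr,cert=/certs/server.pem,cafile=/certs/ca.pem,verify=1 \
  UNIX-CONNECT:/var/run/docker.sock
```

As the commenter notes for Nginx, plain REST calls proxy fine; whether the streamed tty/exec endpoints survive such a proxy would need testing.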
Deploying agents inside an overlay network might work well when deploying Portainer inside that network as a service. But Portainer aims to manage different Swarm clusters from the same instance; this solution won't work in that case.
We might want to be able to support both:
In both cases, it's a simple deployment configuration for the agent but it impacts how the agents will be registered inside the Portainer instance (the DNS-SRV records solution is working for the first case only, so we need to determine a solution to register agents that will support both cases).
The idea is to provide a compose file deploying 2 services:
the aggregator should discover the socat agents via DNS-SRV
we have to establish rules on API calls
Then we can tell users to deploy this compose file on their swarm cluster and plug Portainer into the aggregator endpoint as usual.
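A minimal sketch of such a stack, assuming the two services described above (service names, the aggregator image, and ports are all invented for illustration):

```shell
# Write a hypothetical v3 compose file: a global socat agent on every
# node, plus an aggregator that discovers agents via the embedded DNS.
cat > portainer-stack.yml <<'EOF'
version: "3"
services:
  agent:
    image: alpine/socat
    command: tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global
  aggregator:
    image: example/aggregator   # hypothetical image; resolves tasks.agent
    ports:
      - "2375:2375"
EOF

# Deploy the stack; Portainer is then pointed at the aggregator endpoint.
docker stack deploy -c portainer-stack.yml portainer
```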
I think this feature is too big to be just a Portainer feature, but I'm keen to work on it in the future.
@ncresswell - I shot you guys 1.76943 LTC.
Here is the transaction ID.
Thanks for all your hard work on this project!