
Add support for docker secrets #77

Open
ArwynFr opened this issue Jul 30, 2020 · 2 comments
Labels
enhancement New feature or request

Comments


ArwynFr commented Jul 30, 2020

Docker secrets are a Docker Swarm feature. Each secret is mounted into the container as a file, by default at /run/secrets/<secret_name>, containing the secret's value. The container entrypoint can then read sensitive configuration such as passwords or keys from that file, without storing it in an image layer, an environment variable, or the Docker stack file. Secrets can be managed manually through the docker secret commands, or created from a file on the host machine.
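For illustration, a secret can be created and attached to a swarm service like this (the secret name, service name, and image here are hypothetical):

```shell
# Create a secret from stdin (a file path can be given instead of "-")
printf 'my-api-key-value' | docker secret create seq_api_key -

# Attach it to a swarm service; inside the container the value is
# available at /run/secrets/seq_api_key
docker service create --name my-app --secret seq_api_key my-image:latest
```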

Using Docker secrets to store the API key would make it possible to keep that sensitive information out of the stack file. Storing the API key in the stack file is a problem for people who want to version their stack file, which is common practice in GitOps organizations. It would require making the entrypoint (run.sh) able to read the API key from a file rather than from an environment variable. Image authors usually do this with another environment variable such as SEQ_API_KEY_FILE.
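A minimal sketch of what that entrypoint change could look like, following the *_FILE convention mentioned above (the demo file path and key value are made up for illustration):

```shell
#!/bin/sh
# Sketch: prefer a SEQ_API_KEY_FILE variable over the plain
# SEQ_API_KEY environment variable, as many official images do.

# Demo setup only: simulate a mounted secret file
mkdir -p /tmp/demo-secrets
printf 'demo-api-key' > /tmp/demo-secrets/seq_api_key
SEQ_API_KEY_FILE=/tmp/demo-secrets/seq_api_key

# The actual logic: if SEQ_API_KEY_FILE points at a readable file,
# load the key from it instead of taking it from the environment
if [ -n "$SEQ_API_KEY_FILE" ] && [ -r "$SEQ_API_KEY_FILE" ]; then
    SEQ_API_KEY="$(cat "$SEQ_API_KEY_FILE")"
    export SEQ_API_KEY
fi

echo "$SEQ_API_KEY"
```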

Official documentation on Docker secrets: https://docs.docker.com/engine/swarm/secrets/
Example Docker image supporting Docker secrets: https://hub.docker.com/_/mariadb/

@nblumhardt
Member

Thanks for the suggestion 👍

@nblumhardt nblumhardt added the enhancement New feature or request label Jul 31, 2020

ArwynFr commented Aug 7, 2020

Hey, I've done more testing on this topic and came across an important piece of security information.

When you configure a Docker container to use the gelf logging driver, communication with the GELF endpoint comes from the Docker engine (at host level), not from inside the container. The same applies to Docker Swarm: logging is done by the node rather than by the swarm. This has several consequences:

  • You can't reach a sqelf container over an overlay network or by container name, even if sqelf itself runs in Docker
  • You don't need to attach your source containers to a common network with the sqelf container to enable logging
  • Your sqelf container must be reachable at the host level, which you can achieve by binding port 12201 to the host
  • Since the GELF protocol has no authentication feature, the sqelf container must not be reachable from the network

You'll probably end up binding with -p "127.0.0.1:12201:12201/udp".
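A sketch of such a loopback-only deployment (this assumes the datalust/sqelf image and a SEQ_ADDRESS environment variable for the Seq server; the address used is a placeholder):

```shell
# Run sqelf bound only to the loopback interface, so it is reachable
# from the host's Docker engine but not from the network
docker run -d \
  --name sqelf \
  --restart unless-stopped \
  -e SEQ_ADDRESS="http://seq.example.local:5341" \
  -p 127.0.0.1:12201:12201/udp \
  datalust/sqelf
```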

This, by design, can't be done for Docker Swarm services.

Swarm services are spread across multiple nodes and communicate over a virtual (overlay) network. You can't bind a swarm service to the lo interface: if you could, the container would be unable to communicate with the rest of the swarm, since you can't tell which part of the swarm is hosted on the same node and which part has moved to another one. This would, in turn, break Docker Swarm's load-balancing features. Swarm services are therefore always bound to the overlay network, either internally with no access from the hosts, or publicly accessible to the world.

Solving this problem is actually very simple. The GELF endpoint has to be thought of as an infrastructure concern: sqelf must not run on the swarm, which should host only business services. Instead, deploy a non-swarm Docker stack to the local Docker engine on each node, with a loopback port binding on a sqelf container. Then configure all your containers, including swarm services, to log with gelf to udp://localhost:12201, which points to the node's local sqelf instance. Whenever a container is moved across the swarm to another node, there is still a locally accessible sqelf instance listening on the same relative endpoint. The node-local sqelf instance then forwards logs to your Seq ingestion endpoint, authenticating with its local API key. Each node could have a different API key, with custom filters, additional tags, etc.

On the other hand, whether or not you run Seq on the swarm is up to you; only sqelf has to be local.
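Wiring a source container to the node-local sqelf instance would then look like this, using Docker's built-in gelf log driver (the nginx image is just a stand-in for any workload):

```shell
# Any container on the node logs to the local sqelf listener.
# Because gelf logging happens at the engine level, "localhost"
# here refers to the node, not to the container.
docker run -d \
  --log-driver gelf \
  --log-opt gelf-address=udp://localhost:12201 \
  nginx:latest
```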

The conclusion of all this is: you can't use sqelf with Docker secrets (because secrets are a Swarm feature).
Loading the API key from a file is still a nice feature, I think.
