Security problem with mounted docker.sock? #20

Closed
pwFoo opened this issue Mar 16, 2018 · 15 comments · Fixed by #144

@pwFoo

pwFoo commented Mar 16, 2018

Because docker.sock is mounted into the caddy reverse proxy, couldn't it be a security issue? What do you think? Would it be possible to split it into a caddy reverse proxy and a second container with a shared Caddyfile / caddy config?

@lucaslorentz
Owner

lucaslorentz commented Mar 16, 2018

I'm not a security expert. But this is what I think:

I don't think it is a security issue. Of course you should avoid mounting docker.sock into other containers in general. Only mount it into containers you trust and that you know need a connection to Docker, like this project, which completely relies on the Docker host.

Currently we have two Docker image variants, one built from alpine and one from scratch. The scratch image doesn't even have a shell installed; I recommend using that one if you're concerned about security.

I also want to mention that although mounting docker.sock is the most convenient way to run it, there are alternatives. If your container is capable of reaching the Docker host by IP, you could use the connection environment variables (DOCKER_HOST...) to set up a connection.
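
For illustration, a minimal sketch (assuming the Docker Go SDK, github.com/docker/docker/client, and a hypothetical host address) of how a client built from those environment variables connects without any docker.sock mount:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// client.FromEnv reads DOCKER_HOST, DOCKER_API_VERSION,
	// DOCKER_CERT_PATH and DOCKER_TLS_VERIFY from the environment,
	// e.g. DOCKER_HOST=tcp://my-docker-host:2376 (hypothetical address).
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to Docker host:", info.Name)
}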

And last, can you please explain your suggestion a bit further? Do you want to separate the Caddyfile generation logic from the caddy web server? So, one container would generate the Caddyfile, and multiple others would use that Caddyfile to serve HTTP traffic?

@pwFoo
Author

pwFoo commented Mar 16, 2018

Hi @lucaslorentz and thanks for the quick reply. Separating the Caddyfile generation logic (with access to docker.sock) from the reverse proxy / caddy (which serves HTTP traffic) should be more secure, I think. But the image from scratch, without a shell or Linux root filesystem, should be fine! So I just have to change the tag from alpine to scratch? I'll try it.

@lucaslorentz
Owner

lucaslorentz commented Mar 16, 2018

Actually, just remove -alpine. Scratch is the default one.
And I recommend not using the CI version, use a released one:
0.1 to allow patch updates (it means 0.1.x),
or 0.1.1 to lock to a specific version.

https://hub.docker.com/r/lucaslorentz/caddy-docker-proxy/tags/

@lucaslorentz
Owner

lucaslorentz commented Mar 26, 2018

The scratch image seems to be a secure option for you.
Please reopen this issue if you think otherwise.

@deeky666

I still have headaches with the docker socket mounted into the Caddy container. Also, this means the containers can only run on manager nodes (in swarm), right? That is somewhat of a limitation if you need to publish ports in host mode (in case you need to inspect the real client IP from requests).
Wouldn't it be possible to separate the plugin into a dedicated service and then have a shared Caddy folder that is used by both the plugin and the server? The plugin service could then write the generated Caddyfile to the shared folder to make it available to all Caddy server instances.
What I don't know, though, is how to signal a reload of the configuration to all server instances. Would it maybe be possible to have all server instances share a single PID file and then have the plugin service do the signalling against that file?

@lucaslorentz lucaslorentz reopened this Jun 23, 2018
@deeky666

Okay, at least I figured out that we could use the Docker API cli.ContainerKill(ctx context.Context, containerID, signal string) with signal SIGUSR1 to reload the Caddy config after Caddyfile generation. Unfortunately there is no "kill" API for services in Swarm mode. Not sure if there is a way to accomplish the same in swarm...
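
As a minimal sketch, assuming the Docker Go SDK (github.com/docker/docker/client) and a placeholder container ID, the reload could look like this:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/client"
)

// reloadCaddy sends SIGUSR1 to a running caddy container so it re-reads
// the freshly generated Caddyfile.
func reloadCaddy(containerID string) error {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		return err
	}
	// ContainerKill sends an arbitrary signal; SIGUSR1 makes caddy reload its config.
	return cli.ContainerKill(context.Background(), containerID, "SIGUSR1")
}

func main() {
	// "caddy-container-id" is a placeholder; in practice it would come from
	// the container listing the generator already performs.
	if err := reloadCaddy("caddy-container-id"); err != nil {
		log.Fatal(err)
	}
}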

@lucaslorentz
Owner

@deeky666 I just faced the issue where I need to inspect the real client IP, and I switched to global replication and host port binding. But I still use a criterion to run only on manager nodes, and I only load balance traffic through manager nodes.

So, lately I've been thinking about how to split the file generation.

It would be nice to use this project to generate the Caddyfile and @abiosoft's caddy docker image to run caddy. His image is well maintained and much easier to build and add plugins to.

The option I prefer is to generate the Caddyfile into a Swarm config and map that config into the caddy containers. But I could also add an option to generate it into a file.

Every time the generator updates the config, it could trigger a service update --force to update all caddy containers. I believe that way it would even respect the update_config settings of the service: parallelism and delay.
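
A rough sketch of how that trigger could look, assuming the Docker Go SDK (the service name here is a placeholder); bumping TaskTemplate.ForceUpdate is what docker service update --force does under the hood:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// forceServiceUpdate mirrors `docker service update --force`: bumping
// TaskTemplate.ForceUpdate triggers a rolling restart of the service's tasks,
// which respects its update_config (parallelism, delay).
func forceServiceUpdate(serviceID string) error {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		return err
	}
	ctx := context.Background()
	svc, _, err := cli.ServiceInspectWithRaw(ctx, serviceID, types.ServiceInspectOptions{})
	if err != nil {
		return err
	}
	spec := svc.Spec
	spec.TaskTemplate.ForceUpdate++ // the counter the CLI bumps for --force
	_, err = cli.ServiceUpdate(ctx, svc.ID, svc.Version, spec, types.ServiceUpdateOptions{})
	return err
}

func main() {
	// "caddy" is a placeholder service name/ID.
	if err := forceServiceUpdate("caddy"); err != nil {
		log.Fatal(err)
	}
}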

@pwFoo
Author

pwFoo commented May 2, 2019

Would be nice to split the Caddyfile generation from the caddy instance that runs it.

@Dreamwalker666

@lucaslorentz that proposal sounds ok.

@mxrlkn

mxrlkn commented Jul 15, 2019

Another solution is to have a container on the manager nodes that mounts the docker socket and then exposes it via TCP. I've seen this project being used: https://github.com/Tecnativa/docker-socket-proxy

Then you'll have caddy-docker-proxy on the worker nodes listening to the socket proxy container for events on a separate network. The socket proxy only exposes read-only endpoints and would not be accessible from the outside.

Can caddy-docker-proxy use a TCP endpoint for DOCKER_HOST?
DOCKER_HOST=tcp://socketproxy:2375

@Dorsug

Dorsug commented Jul 16, 2019

@mxrlkn If the objective is just to have read-only events, shouldn't specifying the socket as ro be enough? /var/run/docker.sock:/var/run/docker.sock:ro

@pwFoo
Author

pwFoo commented Jul 16, 2019

I'm not sure, but I think that isn't enough. The socket file would be read-only, but the socket itself would still be usable?

@mxrlkn

mxrlkn commented Jul 16, 2019

@Dorsug :ro only means that the socket file itself is mounted read-only; all API calls will still go through.

@mxrlkn

mxrlkn commented Jul 16, 2019

I just tested it with https://github.com/Tecnativa/docker-socket-proxy and it works fine.

Here's my compose file:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy
    networks:
      - default
      - socket
    ports:
      - "80:80"
      - "443:443"
    environment:
      DOCKER_API_VERSION: 1.37
      DOCKER_HOST: tcp://socket:2375


  socket:
    image: tecnativa/docker-socket-proxy
    networks:
      - socket
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      CONTAINERS: 1
      SERVICES: 1
      NETWORKS: 1
      CONFIGS: 1
      TASKS: 1
      NODES: 1
      INFO: 1

networks:
  socket:

I needed to allow a few more API paths for caddy-docker-proxy to work.
The socket proxy only filters by API path, so it allows more paths than caddy-docker-proxy actually needs.

It's a lot better, but a custom proxy built for exactly what this project needs would be more secure.

@mxrlkn

mxrlkn commented Aug 19, 2019

I just want to point out that using tecnativa/docker-socket-proxy with the above settings only exposes read-only access. At worst, an attacker could see what you're running and information about your system, so I'd say this is quite secure. It's only when the POST environment variable is enabled that an attacker could change your system. Of course, you need to trust docker-socket-proxy to block that properly.
