
Proposal: Docker Engine Keys for Docker Remote API Authentication and Authorization #7667

Closed
jlhawn opened this issue Aug 21, 2014 · 18 comments
Labels
area/security kind/feature

Comments

@jlhawn
Contributor

jlhawn commented Aug 21, 2014

Motivation

Currently, client-to-daemon authentication over a TCP port can only be achieved by generating TLS certificates for both the client and the daemon. Each daemon instance then needs to be configured to use the generated TLS certificate, and the client must specify its own certificate as well. Production-critical, large-scale deployments should already be using this method to secure and control access to Docker daemons, but the extra setup required to generate your own keys, get them signed by a certificate authority, and distribute those certificates is too much overhead for small-scale deployments such as a Boot2Docker VM running on a developer's Mac. Software developers are already familiar with how SSH key distribution works: through a list of authorized_keys on the server and known_hosts keys on the client. Ideally, each instance of the Docker engine (client or daemon) would have a unique identity represented by its own public key. With a list of trusted public keys, two engines can authenticate to each other and the daemon can authorize the connection. This can be done at the TLS layer after initially loading a list of trusted public keys into a CA pool.

Proposal Summary

Every instance of Docker will have its own public key, which it either generates and saves on first run or loads from a file on subsequent runs. The public key will be distributed to other instances by a Docker user or system administrator to allow connections between two Docker engines. Each instance will have a list of public keys from which it trusts to accept connections (trusted clients) and a separate list of keys to which it trusts to make connections (trusted hosts). These public keys will be stored as JSON Web Keys and can be distributed as a JSON file or as a standard PEM file. For TLS connections, the Docker engine's key pair will be used to generate a self-signed TLS certificate, and the list of public keys will be used to generate a certificate pool with a certificate authority for each public key. For TLS servers, the list of public keys will be loaded from an authorization file (authorized_keys.json); for TLS clients, the list will be loaded from a known hosts file (trusted_hosts.json). A client must always provide its certificate if the daemon requires it. In addition, a certificate authority PEM file may be specified to maintain the existing TLS behavior. As another possible addition, upon connecting to a previously unknown server, a CLI user can be prompted to allow a public key now and in the future, leaving it up to the user's discretion.
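
As a rough illustration of that mechanism, the following Go sketch shows one way an engine's ECDSA key pair could be turned into the self-signed TLS certificate described above. It is a minimal sketch under assumptions, not the actual implementation; the package name, function name, and certificate fields are chosen only for illustration.

package enginekeys // hypothetical package, for illustration only

import (
    "crypto/ecdsa"
    "crypto/rand"
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
    "math/big"
    "net"
    "time"
)

// selfSignedCert builds a self-signed TLS certificate from the engine's key
// pair, valid for the given hostnames/IP addresses (e.g. the daemon's
// advertised addresses).
func selfSignedCert(key *ecdsa.PrivateKey, hosts []string) (tls.Certificate, error) {
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "Docker Engine"},
        NotBefore:    time.Now().Add(-time.Hour),
        NotAfter:     time.Now().Add(10 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
    }
    for _, h := range hosts {
        if ip := net.ParseIP(h); ip != nil {
            tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
        } else {
            tmpl.DNSNames = append(tmpl.DNSNames, h)
        }
    }
    // Self-signed: the template acts as its own parent and the engine's
    // private key signs it, so the certificate carries the engine's identity.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        return tls.Certificate{}, err
    }
    return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, nil
}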

Key Files

Docker will support key files in either JSON Web Key format or more traditional PEM format.

Private and Public Key files

Both the docker daemon and client will have a private key file and a public key file in either of these formats. The default location for a client's private key will be ~/.docker/key.(pem|json|jwk) and for its public key ~/.docker/pub_key.(pem|json|jwk), where ~ is the home directory of the user running the docker client. The default location for the daemon's private key will be /etc/docker/key.(pem|json|jwk) and for its public key /etc/docker/pub_key.(pem|json|jwk). Unix file permissions for these private keys MUST be set to 0600, or 'Read/Write only by User'. It is suggested that the public keys have permissions set to 0644, or 'Read/Write by User, Read by group/others'. Because these keys may have a variable file extension, Docker will load whichever file matches the glob key.* first, so it is NOT RECOMMENDED to have multiple key.* files, to avoid any ambiguity over which key file will be used. If the --tlskey=KEYFILE argument is used, that exact file will be used. Optionally, we may add a config file for the Docker client and daemon in which users may specify the file to use, but that possibility is up for discussion.
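
As a small sketch of the discovery rule above (not the actual implementation; the package and function names are hypothetical), a client or daemon could locate its private key roughly like this:

package enginekeys // hypothetical package, for illustration only

import (
    "fmt"
    "os"
    "path/filepath"
)

// findPrivateKey returns the first file matching key.* in dir (~/.docker for
// the client, /etc/docker for the daemon) and enforces 0600 permissions.
func findPrivateKey(dir string) (string, error) {
    matches, err := filepath.Glob(filepath.Join(dir, "key.*"))
    if err != nil {
        return "", err
    }
    if len(matches) == 0 {
        return "", fmt.Errorf("no key.* file found in %s", dir)
    }
    info, err := os.Stat(matches[0])
    if err != nil {
        return "", err
    }
    if info.Mode().Perm() != 0600 {
        return "", fmt.Errorf("%s must have permissions 0600, has %04o", matches[0], info.Mode().Perm())
    }
    return matches[0], nil
}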

Example Private Key using JSON Web Key format:

{
    "comment": "My Docker Key",
    "crv": "P-256",
    "d": "D00we1lvii5JRuD_FbunAsVxJoSurE3eMAyG-p1U_bo",
    "kid": "YM2K:C6TY:2V27:DRZO:LTFC:L5L3:A6GZ:KVOV:BLEN:6P72:2YMB:7LQJ",
    "kty": "EC",
    "x": "8qmksN-_VZuRMFdXhzc0kpCyOh3mnyulBFdsq0vMpUE",
    "y": "uNds3LDn05Y7UOUePfOS9qATKfXsCKUPep-pBn32aE4"
}

Example Private Key using PEM format:

-----BEGIN EC PRIVATE KEY-----
comment: My Docker Key
keyID: PI6I:3UVA:5FXA:KUS6:DJOV:A6X6:HVET:T5HR:7WKY:45ZL:JXOO:FLFU

MHcCAQEEICKzsxR5bPJOsONaXcIUvDfT5v56zA5f+Tnqxjute633oAoGCCqGSM49
AwEHoUQDQgAEhCsfa2wxQbYt+eIH2O0nEQ1+5fdz81wbnZc8r2UpBKqBQJQ1AGnD
WnlsuUy0rRrw1kSUwcW9WvhEoHGEGrTKnw==
-----END EC PRIVATE KEY-----

Example Public Key using JSON Web Key format:

{
    "comment": "My Docker Key",
    "crv": "P-256",
    "kid": "YM2K:C6TY:2V27:DRZO:LTFC:L5L3:A6GZ:KVOV:BLEN:6P72:2YMB:7LQJ",
    "kty": "EC",
    "x": "8qmksN-_VZuRMFdXhzc0kpCyOh3mnyulBFdsq0vMpUE",
    "y": "uNds3LDn05Y7UOUePfOS9qATKfXsCKUPep-pBn32aE4"
}

Example Public Key using PEM format:

-----BEGIN PUBLIC KEY-----
comment: My Docker Key
keyID: PI6I:3UVA:5FXA:KUS6:DJOV:A6X6:HVET:T5HR:7WKY:45ZL:JXOO:FLFU

MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEhCsfa2wxQbYt+eIH2O0nEQ1+5fdz
81wbnZc8r2UpBKqBQJQ1AGnDWnlsuUy0rRrw1kSUwcW9WvhEoHGEGrTKnw==
-----END PUBLIC KEY-----
Authorized Keys file

An instance of the Docker engine in daemon mode will need to know which clients are authorized to connect. We propose a file which contains a list of public keys which are authorized to access the Docker Remote API. This idea is borrowed from SSH's authorized_keys file. Any client which has the corresponding private key for any public key in this list will be able to connect. This is accomplished by generating a Certificate Authority Pool with a CA certificate automatically generated by the daemon for each key in this list. The server's TLS configuration will allow clients which present a self-signed certificate using one of these keys. Like today, the daemon can still be configured to use a traditional Certificate Authority (the --tlscacert=CACERTFILE option). The default location for this file will be /etc/docker/authorized_keys.(pem|json|jwk). Docker will also look for trusted client keys in individual files in a directory at /etc/docker/authorized_keys.d in either PEM or JWK format.

Example Authorized Keys file using JSON Web Key Set format:

{
    "keys": [
        {
            "comment": "Demo Client A",
            "crv": "P-256",
            "kid": "JGVF:PQA4:NC5N:KWQY:3E7I:BI5V:QH6L:ZM3W:IIF6:6WNQ:LS3Q:IOYC",
            "kty": "EC",
            "x": "NEVMqNRwBF6mPWITr7pFWN2vL1DBVQLwBYSrvL79Y2g",
            "y": "eufpD4nTxcZ2hp-sbyuLImQFQE9jjuZtsUnpdukKgAc"
        },
        {
            "comment": "Demo Client B",
            "crv": "P-256",
            "kid": "YM2K:C6TY:2V27:DRZO:LTFC:L5L3:A6GZ:KVOV:BLEN:6P72:2YMB:7LQJ",
            "kty": "EC",
            "x": "8qmksN-_VZuRMFdXhzc0kpCyOh3mnyulBFdsq0vMpUE",
            "y": "uNds3LDn05Y7UOUePfOS9qATKfXsCKUPep-pBn32aE4"
        }
    ]
}

Example Authorized Keys file using PEM Bundle format:

-----BEGIN PUBLIC KEY-----
comment: Demo Client A
keyID: 3YHT:AUOJ:KMSD:4VJ5:7XHT:I375:7KXA:LBTD:KSWW:HICE:AAMH:MRSU

MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEUPOCQPjwK5BmTYjjtQUuWBCpW5ER
p/kBNNxwA88qs/XlG3uppzm53HMTRBGZIZn3cv4C/OQItDkm8hFYzvZekw==
-----END PUBLIC KEY-----
-----BEGIN PUBLIC KEY-----
comment: Demo Client B
keyID: PI6I:3UVA:5FXA:KUS6:DJOV:A6X6:HVET:T5HR:7WKY:45ZL:JXOO:FLFU

MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEhCsfa2wxQbYt+eIH2O0nEQ1+5fdz
81wbnZc8r2UpBKqBQJQ1AGnDWnlsuUy0rRrw1kSUwcW9WvhEoHGEGrTKnw==
-----END PUBLIC KEY-----
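
To make the CA-pool mechanism described above concrete, here is a rough Go sketch of one way it could work; it is an illustration under assumptions (the package, function names, and subject naming are invented here), not the actual implementation. For each authorized client key, a CA certificate carrying that key is generated and added to the pool, so a client presenting a self-signed certificate made with the matching private key can verify against it.

package enginekeys // hypothetical package, for illustration only

import (
    "crypto/ecdsa"
    "crypto/rand"
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
    "math/big"
    "time"
)

// clientCAPool generates one CA certificate per authorized client key. For a
// root certificate placed directly in the pool, only its embedded public key
// matters during verification; the signature (made here with the daemon's own
// key) is a formality.
func clientCAPool(daemonKey *ecdsa.PrivateKey, clientKeys []*ecdsa.PublicKey) (*x509.CertPool, error) {
    pool := x509.NewCertPool()
    for i, pub := range clientKeys {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(int64(i) + 1),
            // Chain building matches the client certificate's issuer name
            // against this subject, so the client's self-signed certificate
            // is assumed to name the same subject as its issuer.
            Subject:               pkix.Name{CommonName: "Docker Client CA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, pub, daemonKey)
        if err != nil {
            return nil, err
        }
        cert, err := x509.ParseCertificate(der)
        if err != nil {
            return nil, err
        }
        pool.AddCert(cert)
    }
    return pool, nil
}

// serverTLSConfig requires and verifies a client certificate against the pool.
func serverTLSConfig(serverCert tls.Certificate, pool *x509.CertPool) *tls.Config {
    return &tls.Config{
        Certificates: []tls.Certificate{serverCert},
        ClientCAs:    pool,
        ClientAuth:   tls.RequireAndVerifyClientCert,
    }
}
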
Trusted Hosts file

An instance of the Docker engine in client mode will need to know which hosts it trusts to connect to. We propose a file which contains a list of public keys which the client trusts to be the key of the Docker Remote API server it wishes to connect to. This idea is borrowed from SSH's known_hosts file. Any daemon which has the corresponding private key for a public key in this list AND presents a self-signed server certificate in the TLS handshake with the desired server name (the hostname or IP address of $DOCKER_HOST) will be trusted. Like today, the client can still be configured to use a traditional Certificate Authority (the --tlscacert=CACERTFILE option). The TCP address (in the form of <hostname_or_ip>:<port>) will be specified for each key using extended attributes for the key, i.e., an address JSON field if in JWK format or an address header if in PEM format. The default location for this file will be ~/.docker/trusted_hosts.(pem|json|jwk). Docker will also look for trusted host keys in individual files in a directory at ~/.docker/trusted_hosts.d in either PEM or JWK format.

Example Trusted Hosts file using JSON Web Key Set format:

{
    "keys": [
        {
            "crv": "P-256",
            "hosts": [
                "localhost",
                "docker.example.com"
            ],
            "kid": "OQNF:PCXT:FAYS:SDTY:ZHXZ:SLDD:3PY3:V3GI:URTX:BDXQ:TQDW:CRQ6",
            "kty": "EC",
            "x": "mc1wSZWrrgBOsWBg3XXYiuL8vhBNdoMZANkk2hvj8-g",
            "y": "SsIVJn5VZzuCigKkuqIl7EPdFTnCU5TSR-gD6DkxSG8"
        },
        {
            "crv": "P-256",
            "hosts": [
                "localhost",
                "docker.example.com"
            ],
            "kid": "KUPF:4OZR:MZAE:GJAT:HOJR:PGE4:ORNX:AXEM:5OSL:7IC6:HSAD:EOPH",
            "kty": "EC",
            "x": "aDP4a11PJjtMuOZd9C2PIAOs37l1AMHMxIok5Ie3jhY",
            "y": "ARm9r8eGmc575viOZKU4hXzHKPzwnClDuiNcGlHdyrU"
        }
    ]
}

Example Trusted Hosts file using PEM Bundle format:

-----BEGIN PUBLIC KEY-----
hosts: localhost,docker.example.com
keyID: ZIIF:X7DE:LYA4:SCQH:XYRH:EM7X:SLED:IIUU:XVVP:FSRN:QZ4T:3VMQ

MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEhTTP5gZlLUETXGFvVaDrZSBilr6P
RDZ28kji+nbjQo+kh3c9rG4ath+fukTag2LaqnluxiJPUxCTsj3R9MfEVA==
-----END PUBLIC KEY-----
-----BEGIN PUBLIC KEY-----
hosts: localhost,docker.example.com
keyID: KM54:SCR6:NVJX:PKBB:A5YN:BWMO:P455:VCLD:KF22:TRAL:YSJC:JXOD

MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELANz7WzbZdK/foHvF8sR4S4IzCwz
WfgrXHD5a+eiu2uXC6RJj/durYTuFhJOdGOtBJJJpmLkLQpnlWv2YAW0ZQ==
-----END PUBLIC KEY-----
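
The client side is the mirror image: a sketch (again illustrative, under the same assumptions as the server-side sketch above) would build RootCAs from the trusted-host keys in the same way as the authorized-keys pool, present the client's own self-signed certificate, and keep the normal hostname check against $DOCKER_HOST.

package enginekeys // hypothetical package, for illustration only

import (
    "crypto/tls"
    "crypto/x509"
)

// clientTLSConfig trusts only servers whose keys appear in the trusted-hosts
// pool (built like the authorized-keys pool sketched earlier), presents the
// client's self-signed certificate, and verifies the server name taken from
// DOCKER_HOST as usual.
func clientTLSConfig(hostPool *x509.CertPool, clientCert tls.Certificate, serverName string) *tls.Config {
    return &tls.Config{
        RootCAs:      hostPool,
        Certificates: []tls.Certificate{clientCert},
        ServerName:   serverName, // hostname or IP from $DOCKER_HOST
    }
}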

Key Types

By default, a Docker engine will generate an ECDSA key, using the standard P-256 elliptic curve, if a private key file does not already exist. Supported elliptic curves are P-256, P-384, and P-521. RSA keys are also supported. Elliptic Curve Cryptography was chosen as the default due to its more efficient key generation and smaller key sizes for equivalent levels of security compared to RSA [reference].
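
For reference, generating and saving such a default key could look roughly like the Go sketch below (an illustration only; file names follow the PEM defaults described earlier, and the package and function names are hypothetical):

package enginekeys // hypothetical package, for illustration only

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "encoding/pem"
    "os"
    "path/filepath"
)

// generateKeyPair creates a P-256 ECDSA key on first run and writes it out in
// PEM form: the private key as key.pem (0600) and the public key as
// pub_key.pem (0644).
func generateKeyPair(dir string) (*ecdsa.PrivateKey, error) {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        return nil, err
    }
    privDER, err := x509.MarshalECPrivateKey(key)
    if err != nil {
        return nil, err
    }
    privPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: privDER})
    if err := os.WriteFile(filepath.Join(dir, "key.pem"), privPEM, 0600); err != nil {
        return nil, err
    }
    pubDER, err := x509.MarshalPKIXPublicKey(&key.PublicKey)
    if err != nil {
        return nil, err
    }
    pubPEM := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: pubDER})
    if err := os.WriteFile(filepath.Join(dir, "pub_key.pem"), pubPEM, 0644); err != nil {
        return nil, err
    }
    return key, nil
}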

User visible changes

  • TLS is always used when connecting over tcp:// (unix:// does not require it)
  • Client TLS verification is on by default (--insecure flag added to disable)
  • Server TLS verification is on by default (--insecure flag added to disable)
  • --tls and --tlsverify flags removed
  • -i/--identity flag to specify the identity (private key) file
  • User prompt added when connecting to unknown server

Backwards Compatibility

In order to maintain backwards compatibility, the existing TLS ca, cert, and key options for setting up TLS connections will still be allowed. Scripts using --tls and --tlsverify will need to remove these options since this behavior is now the default. To keep the existing insecure behavior, run scripts will need to be modified to use --insecure, though this is not recommended. These changes do not have any effect on servers using unix sockets.

  • Connecting from an older client: The client must generate a certificate which is distributed to the server. Optionally, the newer server can run with --insecure, which will require no changes to the client.
  • Connecting to an older server: If the server is not using TLS, the client will maintain the ability to connect to the endpoint using the --insecure flag. If TLS is manually configured, no changes should be required.

Usage Pattern

  • Single Machine - Setup using Unix socket, no changes
  • Single Machine (with non-B2D VM) -
    • Invoke docker on host to generate key.json
    • Invoke docker on guest to generate key.json
    • Copy /etc/docker/pub_key.json on guest to ~/.docker/trusted_hosts.d/guest.json on host (optionally use prompt)
    • Copy ~/.docker/pub_key.json on host to /etc/docker/authorized_keys.d/host.json on guest
  • Single Machine (B2D) - Boot2Docker installation generates and copies keys
  • Two Machines -
    • Invoke docker on client to generate key.json
    • Invoke docker on server to generate key.json
    • Copy ~/.docker/pub_key.json on client to /etc/docker/authorized_keys.d/client.json on server
    • Copy /etc/docker/pub_key.json on server to ~/.docker/trusted_hosts.d/server.json on client

Updated

Updated the location of the server's files from /var/lib/docker to /etc/docker.

@tianon
Member

tianon commented Aug 22, 2014

I like the general idea, but I strongly dislike the storage location, since it encourages users to muck about in /var/lib/docker when they really should never do so.

Why not something in /etc/docker instead like other daemons do?

@jlhawn
Contributor Author

jlhawn commented Aug 22, 2014

Why not something in /etc/docker instead like other daemons do?

I like it! We don't currently have any config in /etc/docker/, do we? If not, I almost guarantee that there is past discussion on the subject.

Edit:

Ah, there is use of it! https://github.com/docker/docker/blob/0d70706b4b6bf9d5a5daf46dd147ca71270d0ab7/registry/registry.go#L89

@thaJeztah
Member

Good idea and well-written. Pairing a client to a server looks quite straightforward with this proposal, which is always a good thing!

I'm not really sure if this is already included in your proposal, but to improve usability, it would be nice to be able to add keys without having to manually copy/move those files. (Not sure if this would be technically possible without being insecure.)

This may need to become a separate proposal, but I wanted to add it here as an initial idea; it's basically comparable to confirming a server's fingerprint when using SSH:

on the client machine:

sudo docker connect tcp://docker-host:2376
> Add host permanently to trusted hosts [Y/n]? Y

Host has been added to trusted hosts. 
A connection-request was sent to the server, please confirm 
request [unique identifier of the request] on the server
to complete the pairing.

On the docker-host / server (or using an already verified client?)

docker connect --approve [identifier]
> Add client xyz to trusted clients [Y/n]?

or, additionally, get an overview of outstanding pairing requests;

docker connect --list
.... outputs a list of client connection requests with unique identifiers

@dmcgowan
Member

@thaJeztah included in the proposal is a user prompt for the client: "User prompt added when connecting to unknown server" under user-visible changes. The docker host is a bit trickier since a user prompt won't be appropriate in daemon mode. We greatly welcome suggestions for improving this flow that don't require a separate connection to transfer keys, though. The connection list is an interesting feature that could be proposed after this one is approved.

@thaJeztah
Member

@dmcgowan I saw the "user prompt", but wasn't entirely sure what that implied :) And, yes, I was struggling a bit on the server/daemon side because the daemon itself doesn't have a CLI, so probably at least one client must be paired manually, but I'll think on this a bit further. Enabling users to pair clients and daemons without having to fiddle with the underlying mechanisms would be a great feature, and I see a lot of potential for that in this proposal.

I'll stop expanding on my suggestions for now to keep this discussion clean. I think my example contains enough information to get the "rough" idea of what I envision this could become and will create a separate proposal once this gets a thumbs up

Now let's hope this feature gets approved. Thanks!

@xiaods
Contributor

xiaods commented Aug 23, 2014

Another request: I hope this proposal can support a user-friendly command for generating engine keys.
The reference is here:
https://github.com/substack/peerca

@dmcgowan
Member

@xiaods this proposal is generating keys and TLS certificates automatically without the need for specification in the command line args. In the future we might add a command line option to produce a CSR instead of a self-signed cert, to add an alternative method of verification other than the authorized_keys.json file. What is important is that the key pair generated for the engine is used to generate the TLS configuration, such that we can uniquely identify an engine by its public key. An external tool should only be used if it is using the engine's key to generate the cert.

@SvenDowideit
Contributor

Is this biasing users' setups towards using the same client keys for all the servers they talk to? I wonder if it may be useful to copy more of ssh's workings, where it negotiates using the keys it has available in the .docker dir.

That way, boot2docker-cli doesn't risk using a key that may be for talking to the production server. Also - for the sake of securing production servers, perhaps having key passwords (again, like ssh & gpg keys), with b2d's being defaulted to empty.

@dmcgowan
Member

Is this biasing users' setups towards using the same client keys for all the servers they talk to?

Yes, this is assuming the key is attached to the identity of the client user and is useful for representing that identity on any server it connects to. Negotiating multiple keys will not be possible using TLS as far as I know. The best way to use multiple keys under this proposal is to specify the key through command line args, or to add support in the proposal for specifying which hosts a particular key will be used for.

for the sake of securing production servers, perhaps having key passwords (again, like ssh & gpg keys)

Passphrase protection is a planned feature, but it is tricky and less useful without key agent integration, and there are no plans to build our own agent. For servers, passphrase protection is even trickier: how would you (@SvenDowideit) propose going about that without a user manning the console?

@SvenDowideit
Contributor

I would hope that there are existing solutions to the 'securing the server-side key' problem - I wasn't trying to limit the solution set to my suggestion.

Fundamentally, we have a problem at the moment, where having access to the API means you can create a container that has access to the server side key, and to the list of authorised client keys.

I guess if I'm really interested in a higher level of security, I'll continue to use the existing tlsverify code, and keep the CA cert on a separate and secured system?

@dmcgowan
Member

dmcgowan commented Sep 9, 2014

I understand the problem but this proposal should not increase or decrease any risk associated with accessing the private key. Whenever TLS is used, whether the new method described here or through the existing tlsverify, the private key is stored on disk without passphrase protection. The CA cert can live safely on the machine since there is no sensitive material, but the CA private key should always be protected on a separate secure system.

If any private key is compromised using the existing tlsverify code, the only method for revocation is creating a new CA and redistributing keys (no revocation lists). The new method would require removing the key from allowed hosts and not re-accepting connections; it could be useful to have a banned list of keys as well in the future (for either method).

@xogeny

xogeny commented Sep 23, 2014

I was considering something much simpler. I'm currently opening an SSH tunnel to perform remote docker operations. It works quite well and is very simple. There is no need to establish any additional keys beyond those already associated with the user and the only server side configuration required is to make sure that the user can access the docker host via ssh. Using public key authentication, it works very nicely in conjunction with ssh-agent.

What I currently do is simply run an ssh session as:

ssh -f user@docker-server -L 12375:127.0.0.1:2375 -N

Then I can just set DOCKER_HOST to be tcp://localhost:12375 and everything works very nicely. But it is a little bit of a pain to have to run this ssh command in the background and keep it open all the time.

So to me, as a user, it would be very nice if (on the client side) I could simply specify a DOCKER_HOST value of ssh://username@dockerhost:XYZ where XYZ is the port on the server side. The client could then open an ssh tunnel (potentially using a control socket) on a local port ABC and run the docker client with an effective DOCKER_HOST value of tcp://localhost:ABC. When the docker client terminates, it can then close the tunnel (again, by using a control socket).
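
A rough Go sketch of such a wrapper (purely illustrative; the port numbers and remote address are placeholders, and this is not necessarily how any existing tool implements it) could spawn the tunnel, run the real docker client against its local end, and then tear the tunnel down:

package main

import (
    "os"
    "os/exec"
    "time"
)

// Open an ssh tunnel to the remote Docker API, run the docker client against
// the local end of the tunnel, then close the tunnel when the client exits.
func main() {
    tunnel := exec.Command("ssh", "-N", "-L", "12375:127.0.0.1:2375", "user@docker-server")
    if err := tunnel.Start(); err != nil {
        panic(err)
    }
    time.Sleep(2 * time.Second) // crude wait for the tunnel; a real tool would poll the port

    docker := exec.Command("docker", os.Args[1:]...)
    docker.Env = append(os.Environ(), "DOCKER_HOST=tcp://localhost:12375")
    docker.Stdin, docker.Stdout, docker.Stderr = os.Stdin, os.Stdout, os.Stderr
    err := docker.Run()

    tunnel.Process.Kill() // tear the tunnel down once the client exits
    if err != nil {
        os.Exit(1)
    }
}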

This approach wouldn't require any change to the server-side docker code, and it would simply piggyback cleanly on existing, well-understood ssh functionality. It also would mean that key management and deployment would be done in a very similar way to how it is handled with Git. And none of the docker code would have to implement any real security features (again, because it would be piggybacking on ssh).

I'm curious what people think of this kind of an approach.

@dmcgowan
Member

@xogeny I agree ssh is a simpler method and would require fewer code changes. I know there is some hacking already going on using the ssh protocol for the docker api; however, this still involves server-side changes. There is a larger objective this proposal is trying to accomplish around the identity of the connecting client. By running a TLS (or an SSH) server, the server process is able to establish the public key identity of the connecting client, which is not only checked against the authorized keys file, but can also be used in the future for authorization checks on api actions. You mentioned key management, and this is something we would like to make simpler, probably by integrating with existing agents. We are not supporting SHA-1, which we found limits the ability to use many of the agents out there today.

I would be curious to see a separate proposal or a simple shell script which could set up your environment and port forward. These kinds of port forwards are pretty useful, and I think many could benefit from integrating them into their shell environment.

@xogeny

xogeny commented Sep 24, 2014

For @dmcgowan or anybody who is interested, I created a very simple Go program called sdocker that works just like docker except that it can tunnel everything over ssh. It should be a drop-in replacement for the docker client (if I did everything right, which I'm sure I didn't).

You can find it at: https://github.com/xogeny/sdocker

You can install the sdocker binary by simply running:

$ go get github.com/xogeny/sdocker

See the README for a few details on how it is supposed to work (not much testing at this point).

Let me know what you think.

@ndeloof
Contributor

ndeloof commented Dec 9, 2014

The proposal looks good as it relies on the successful ssh model. I just wonder why you don't simply use ssh as a transport layer, just like git does, instead of re-implementing it.
Also, why not just require a secure transport layer, using ssh by default, and have docker rely on it? The "batteries included, but replaceable" principle would then offer opportunities to plug in third-party secure remote communication solutions, typically based on VPN solutions.

@xogeny

xogeny commented Dec 11, 2014

@ndeloof Is your comment a response to me (about sdocker) or @dmcgowan? I can't quite tell.

@ndeloof
Contributor

ndeloof commented Dec 11, 2014

It was a general comment on this proposal; sdocker actually seems to do what I suggest.

@thaJeztah
Member

This is a very old ticket, and current versions of the engine now allow using SSH for connecting to a remote daemon. I don't think there are plans to implement other mechanisms as part of the engine itself currently, so I'll go ahead and close this for now.

@thaJeztah closed this as not planned on Mar 4, 2024