Proposal: Docker Engine Keys for Docker Remote API Authentication and Authorization #7667
Comments
I like the general idea, but I strongly dislike the storage location. Why not something in /etc/docker instead, like other daemons do?
I like it! We don't currently have any config in /etc/docker. Edit: ah, there is a use of it! https://github.com/docker/docker/blob/0d70706b4b6bf9d5a5daf46dd147ca71270d0ab7/registry/registry.go#L89
Good idea and well-written. Pairing a client to a server looks quite straightforward with this proposal, which is always a good thing! I'm not really sure if this is already included in your proposal, but to improve usability it would be nice to be able to add keys without having to manually copy/move those files (not sure if this would be technically possible without being insecure). This may need to become a separate proposal, but I wanted to add it here as an initial idea. Basically comparable to confirming a server's fingerprint when using SSH: request the pairing on the client machine, then confirm it on the docker-host / server (or using an already verified client?), or, additionally, get an overview of outstanding pairing requests.
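The commenter's command examples were lost from this copy of the thread. A hypothetical sketch of the envisioned flow (every command and flag below is invented for illustration and is not part of the proposal):

```
# On the client: request pairing with a daemon (hypothetical command)
docker -H tcp://dockerhost:2376 pair

# On the docker-host / server: list outstanding pairing requests (hypothetical)
docker pair --list

# Accept a request by key fingerprint (hypothetical)
docker pair --accept 3f:a2:17:9c:...
```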
@thaJeztah included in the proposal is a user prompt for the client: "User prompt added when connecting to unknown server" under user visible changes. The docker host is a bit trickier, since a user prompt won't be appropriate in daemon mode. We greatly welcome suggestions for improving this flow that don't require a separate connection to transfer keys, though. The connection list is an interesting feature that could be proposed after this one is approved.
@dmcgowan I saw the "user prompt", but wasn't entirely sure what that implied :) And, yes, I was struggling a bit on the server/daemon side, because the daemon itself doesn't have a CLI, so probably at least one client must be paired manually, but I'll think on this a bit further. Enabling users to pair clients and daemons without having to fiddle with the underlying mechanisms would be a great feature, and I see a lot of potential for that in this proposal. I'll stop expanding on my suggestions for now to keep this discussion clean. I think my example contains enough information to get the "rough" idea of what I envision this could become, and I will create a separate proposal once this gets a thumbs up. Now let's hope this feature gets approved. Thanks!
Another request: I hope this proposal can support a user-friendly command for generating engine keys.
@xiaods with this proposal, keys and TLS certificates are generated automatically, without the need to specify them in command line args. In the future we might add a command line option to produce a CSR instead of a self-signed cert, to add an alternative method of verification other than the authorized_keys.json file. What is important is that the key pair generated for the engine is used to generate the TLS configuration, such that we can uniquely identify an engine by its public key. An external tool should only be used if it is using the engine's key to generate the cert.
Is this biasing users' setups towards using the same client key for all the servers they talk to? I wonder if it may be useful to copy more of the SSH model here. That way, boot2docker-cli doesn't risk using a key that may be for talking to the production server. Also, for the sake of securing production servers, perhaps having key passwords (again, like ssh & gpg keys), with b2d's being defaulted to empty.
Yes, this is assuming the key is attached to the identity of the client user and useful for representing that identity on any server it connects to. Negotiating multiple keys will not be possible using TLS as far as I know. The best way to use multiple keys based on the proposal is to specify the key through command line args, or to add support in the proposal for specifying which hosts a particular key will be used for.

Passphrase protection is a planned feature, but it is less useful, and trickier, without key agent integration, and there are no plans to build our own agent. For servers, passphrase protection is even more tricky: how would you ( @SvenDowideit ) propose going about that without a user manning the console?
I would hope that there are existing solutions to the 'securing the server-side key' problem; I wasn't trying to limit the solution set to my suggestion. Fundamentally, we have a problem at the moment where having access to the API means you can create a container that has access to the server-side key, and to the list of authorised client keys. I guess if I'm really interested in a higher level of security, I'll continue to use the existing tlsverify code, and keep the CA cert on a separate and secured system?
I understand the problem, but this proposal should not increase or decrease any risk associated with accessing the private key. Whenever TLS is used, whether via the new method described here or through the existing tlsverify, the private key is stored on disk without passphrase protection. The CA cert can live safely on the machine since there is no sensitive material, but the CA private key should always be protected on a separate secure system. If any private key is compromised using the existing tlsverify code, the only method for revocation is creating a new CA and redistributing keys (no revocation lists). The new method would require removing the key from allowed hosts and not re-accepting connections; it could be useful to have a banned list of keys as well in the future (for either method).
I was considering something much simpler. I'm currently opening an SSH tunnel to perform remote docker operations. It works quite well and is very simple. There is no need to establish any additional keys beyond those already associated with the user, and the only server-side configuration required is to make sure that the user can access the docker host via ssh. Using public key authentication, it works very nicely in conjunction with ssh-agent. What I currently do is simply run an ssh command that forwards a local port to the Docker socket on the remote host.

Then I can just set DOCKER_HOST to point at the forwarded local port. So to me, as a user, it would be very nice if, on the client side, I could simply specify an SSH-based DOCKER_HOST. This approach wouldn't require any change to the server-side docker code, and it would simply piggyback in a clean way on top of existing ssh infrastructure. I'm curious what people think of this kind of an approach.
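A minimal sketch of the tunnel approach described above (host names and ports are illustrative; forwarding a TCP port to a unix socket requires OpenSSH 6.7 or later):

```
# Forward a local TCP port to the remote Docker unix socket over SSH
ssh -nNT -L localhost:2375:/var/run/docker.sock user@dockerhost &

# Point the docker client at the local end of the tunnel
export DOCKER_HOST=tcp://localhost:2375
docker ps
```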
@xogeny I agree ssh is a simpler method and would require fewer code changes. I know there is some hacking already going on using the ssh protocol for the docker api; however, that still involves server-side changes. There is a larger objective this proposal is trying to accomplish around the identity of the connecting client. By running a TLS (or an SSH) server, the server process is able to establish the public key identity of the connecting client, which is not only checked against the authorized keys file, but can be used in the future for authorization checks on api actions. You mentioned key management, and this is something we would like to make simpler, probably by integrating with existing agents. We are not supporting sha1, which we found limits the ability to use many agents out there today. I would be curious to see a separate proposal or a simple shell script which could set up your environment and port forward. These kinds of port forwards are pretty useful, and I think many could benefit from integrating them into their shell environment.
For @dmcgowan or anybody who is interested, I created a very simple Go program called sdocker. You can find it at: https://github.com/xogeny/sdocker

See the README for installation instructions and a few details on how it is supposed to work (not much testing at this point). Let me know what you think.
The proposal looks good, as it relies on the successful ssh model. I just wonder why you don't use ssh as a transport layer, just like git does, instead of re-implementing it.
That was a general comment on this proposal; sdocker actually seems to do what I suggest.
This is a very old ticket, and current versions of the engine now allow using SSH for connecting to a remote daemon. I don't think there's plans to implement other mechanisms as part of the engine itself currently, so I'll go ahead and close this for now. |
Motivation
Currently, client to daemon authentication over a TCP port can only be achieved by generating TLS certificates for both the client and daemon. Each daemon instance then needs to be configured to use the generated TLS certificate, and the client must specify its own certificate as well. Production-critical, large-scale deployments should already be using this method to secure and control access to Docker daemons, but the extra setup required by generating your own keys, getting them signed by a certificate authority, and distributing those certificates is too much overhead for small-scale deployments such as a Boot2Docker VM running on a developer's Mac, for example.

Software developers are already familiar with how SSH key distribution works: through a list of authorized_keys on the server and known_hosts keys on the client. Ideally, each instance of the Docker engine (client or daemon) would have a unique identity represented by its own public key. With a list of trusted public keys, two engines can authenticate to each other and the daemon can authorize the connection. This can be done at the TLS layer after initially loading a list of trusted public keys into a CA Pool.
Proposal Summary
Every instance of Docker will have its own public key, which it either generates and saves on first run or loads from a file on subsequent runs. The public key will be distributed to other instances by a user of docker or a system administrator to allow connections between two docker engines. Each instance will have a list of public keys from which it is trusted to accept connections (trusted clients) and a separate list of keys it trusts to make connections to (trusted hosts). These public keys will be stored as JSON Web Keys and can be distributed as a JSON file or as a standard PEM file. For TLS connections, the Docker engine's key pair will be used to generate a self-signed TLS certificate, and the list of public keys will be used to generate a certificate pool with a certificate authority for each public key. For TLS servers, the list of public keys will be loaded from an authorization file (authorized_keys.json); for TLS clients, the list will be loaded from a known hosts file (allowed_hosts.json). A client must always provide its certificate if the daemon requires it. In addition, a certificate authority PEM file will be allowed to be specified to maintain the existing TLS behavior. As another possible addition, upon connecting to a previously unknown server, a CLI user can be prompted to allow a public key now and in the future, leaving it up to the user's discretion.
Key Files
Docker will support key files in either JSON Web Key format or more traditional PEM format.
Private and Public Key files
Both the docker daemon and client will have a private key file and a public key file in either of these formats. A client's private key default location will be `~/.docker/key.(pem|json|jwk)` and its public key `~/.docker/pub_key.(pem|json|jwk)`, where `~` is the home directory of the user running the `docker` client. The daemon's private key default location will be `/etc/docker/key.(pem|json|jwk)` and its public key `/etc/docker/pub_key.(pem|json|jwk)`. Unix file permissions for these private keys MUST be set to `0600`, or 'Read/Write only by User'. It is suggested that the public keys have permissions set to `0644`, or 'Read/Write by User, Read by group/others'. Because these keys may have a variable file extension, Docker will load whichever one matches the glob `key.*` first, so it is NOT RECOMMENDED that there be multiple `key.*` files, to avoid any ambiguity over which key file will be used. If the `--tlskey=KEYFILE` argument is used, that exact file will be used. Optionally, we may add a config file for the Docker client and daemon in which users may specify the file to use, but that possibility is up for discussion.

Example Private Key using JSON Web Key format:
Example Private Key using PEM format:
Example Public Key using JSON Web Key format:
Example Public Key using PEM format:
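The example key bodies did not survive in this copy of the proposal. The sketches below show the general shape such files would take (JWK field names per RFC 7517/7518; the angle-bracketed values are placeholders, not real key material):

```json
{
  "kty": "EC",
  "crv": "P-256",
  "x": "<base64url-encoded X coordinate>",
  "y": "<base64url-encoded Y coordinate>",
  "d": "<base64url-encoded private scalar; omitted in the public key file>"
}
```

```
-----BEGIN EC PRIVATE KEY-----
<base64-encoded SEC 1 key material>
-----END EC PRIVATE KEY-----

-----BEGIN PUBLIC KEY-----
<base64-encoded SubjectPublicKeyInfo>
-----END PUBLIC KEY-----
```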
Authorized Keys file
An instance of the Docker engine in daemon mode will need to know which clients are authorized to connect. We propose a file which contains a list of public keys that are authorized to access the Docker Remote API. This idea is borrowed from SSH's `authorized_keys` file. Any client which has the corresponding private key for any public key in this list will be able to connect. This is accomplished by generating a Certificate Authority Pool with a CA certificate automatically generated by the daemon for each key in this list. The server's TLS configuration will allow clients which present a self-signed certificate using one of these keys. Like today, the daemon can still be configured to use a traditional Certificate Authority (the `--tlscacert=CACERTFILE` option). The default location for this file will be `/etc/docker/authorized_keys.(pem|json|jwk)`. Docker will also look for trusted client keys in individual files in a directory at `/etc/docker/authorized_keys.d`, in either PEM or JWK format.

Example Authorized Keys file using JSON Web Key Set format:
Example Authorized Keys file using PEM Bundle format:
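The example bodies were lost here as well. An illustrative JWK Set and the equivalent PEM bundle shape (placeholder values, not real keys):

```json
{
  "keys": [
    { "kty": "EC", "crv": "P-256", "x": "<base64url X>", "y": "<base64url Y>" },
    { "kty": "RSA", "n": "<base64url modulus>", "e": "AQAB" }
  ]
}
```

```
-----BEGIN PUBLIC KEY-----
<client key 1>
-----END PUBLIC KEY-----
-----BEGIN PUBLIC KEY-----
<client key 2>
-----END PUBLIC KEY-----
```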
Trusted Hosts file
An instance of the Docker engine in client mode will need to know which hosts it trusts to connect to. We propose a file which contains a list of public keys that the client trusts to be the key of the Docker Remote API server it wishes to connect to. This idea is borrowed from SSH's `known_hosts` file. Any daemon which has the corresponding private key for a public key in this list AND presents a self-signed server certificate in the TLS handshake with the desired server name (hostname or IP address of `$DOCKER_HOST`) using one of these keys will be trusted. Like today, the client can still be configured to use a traditional Certificate Authority (the `--tlscacert=CACERTFILE` option). The TCP address (in the form of `<hostname_or_ip>:<port>`) will be specified for each key using extended attributes for the key, i.e., an `address` JSON field if in JWK format or an `address` header if in PEM format. The default location for this file will be `~/.docker/trusted_hosts.(pem|json|jwk)`. Docker will also look for trusted host keys in individual files in a directory at `~/.docker/trusted_hosts.d`, in either PEM or JWK format.

Example Trusted Hosts file using JSON Web Key Set format:
Example Trusted Hosts file using PEM Bundle format:
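The example bodies were lost in this copy. Illustrative sketches of both formats, showing the `address` extended attribute described above (placeholder values; the exact PEM header placement is an assumption):

```json
{
  "keys": [
    {
      "kty": "EC",
      "crv": "P-256",
      "x": "<base64url X>",
      "y": "<base64url Y>",
      "address": "dockerhost.example.com:2376"
    }
  ]
}
```

```
-----BEGIN PUBLIC KEY-----
address: dockerhost.example.com:2376

<base64-encoded SubjectPublicKeyInfo>
-----END PUBLIC KEY-----
```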
Key Types
By default, a Docker engine will generate an ECDSA key, using the standard P-256 elliptic curve, if a private key file does not already exist. Supported elliptic curves are P-256, P-384, and P-521. RSA keys are also supported. Elliptic Curve Cryptography was chosen as the default due to more efficient key generation and smaller key sizes for equivalent levels of security when compared to RSA [reference].
User visible changes
--insecure
flag added to disable)--insecure
flag added to disable)--tls
and--tlsverify
flags removed-i
/--identity
flag to specify the identity (private key) fileBackwards Compatibility
In order to maintain backwards compatibility, the existing TLS ca, cert, and key options for setting up TLS connections will still be allowed. Scripts using `--tls` and `--tlsverify` will need to remove these options, since these are now the default. To keep the existing insecure behavior, run scripts will need to be modified to use `--insecure`; this is not recommended. These changes do not have any effect on servers using unix sockets. Servers may be run with `--insecure`, which will require no changes to the client. Clients connecting to an insecure server will likewise need the `--insecure` flag. If TLS is manually configured, no changes should be required.

Usage Pattern
Updated: location of the server's files changed from /var/lib/docker to /etc/docker.