This repository has been archived by the owner on Oct 19, 2022. It is now read-only.

Add support for SSH keys #10

Closed
wants to merge 1 commit into from

Conversation

davidwilliamson

  • New `-o identity <filename>` command argument for `docker volume create`
  • Add "mounts" section to `config.json`
  • Update README

Signed-off-by: David Williamson <david.williamson@docker.com>
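
For illustration, creating and using a key-based volume could look like this (host, remote path, and key file name are hypothetical; the plugin reads keys from its configured key path, /tmp by default):

```
# Place the key where the plugin is configured to look (defaults to /tmp)
$ cp ~/.ssh/sshfs_key /tmp/

# Create a volume that authenticates to the remote host with that key
$ docker volume create -d vieux/sshfs -o sshcmd=user@example.com:/remote/path -o identity=sshfs_key sshvolume

# Use the volume from a container
$ docker run --rm -it -v sshvolume:/data busybox ls /data
```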

Testing

  • Tested with SSH keys to remote host on AWS

  • Tested with passwords to remote host on docker LAN

  • Negative test: trying to use both a key and a password

  • Tests run on:

    • macOS 10.12.2 (Sierra); Docker for Mac version 1.13.0-rc6-beta36
    • Ubuntu 16.04; docker version 1.13.0-rc4
  • Commands tested

    • `docker plugin install`
    • `docker plugin inspect`
    • `docker plugin ls`
    • `docker plugin disable`
    • `docker plugin set`
    • `docker plugin enable`
    • `docker plugin rm`
  • A build of the plugin for this PR is available on Docker Hub at davidw135/sshfs if reviewers want to test for themselves: `docker plugin install davidw135/sshfs`

cc/ @cyli


2 - Configure driver to point to your SSH keys

_NOTE_: The plugin defaults to looking for SSH keys in `/tmp`. You can copy your `<identity_file_name>` to `/tmp` and omit this step, e.g.

cyli commented Jan 18, 2017


Perhaps we should suggest they generate a new set of keys and add them to the authorized_keys file of the remote host, rather than copy? Mainly because if they wanted to do this on a remote daemon, they probably don't want to copy their local keys over? (Also, for clarity, we might want to specifically say "the plugin defaults to looking for SSH keys in /tmp on the daemon machine" - I know there's a note at the bottom, but it might be more useful up here.)
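
A sketch of that suggested flow (key file name and remote host are hypothetical):

```
# Generate a fresh key pair just for the plugin, rather than copying existing local keys
$ ssh-keygen -t ed25519 -f /tmp/sshfs_id -N ""

# Authorize the new public key on the remote host
$ ssh-copy-id -i /tmp/sshfs_id.pub user@remotehost
```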

```
$ docker run --rm -it -v sshvolume:<path> busybox ls <path>
```

### Alternatively, use with passwords instead of SSH keys


Non-blocking: Maybe instead of "Alternately", we should say "Deprecated: use password login", and mention that this is only provided for backwards compatibility?

As a note, I was the one that specifically requested backwards compatibility, just because I tend towards not breaking compatibility, but you are definitely correct in that eliminating this option is probably better if this were being written for the first time.

I'm fine with being overruled on breaking compatibility with the previous version of this plugin, if folks feel strongly. Now that I think about it, this being a proof of concept plugin implementation means that perhaps backwards compatibility is not as important?

@cyli

cyli commented Jan 27, 2017

cc @stevvooe @thaJeztah


2 - Configure driver to point to your SSH keys

_NOTE_: The plugin defaults to looking for SSH keys in `/tmp`. You can copy your `<identity_file_name>` to `/tmp` and omit this step, e.g.


Why not in the standard location? $HOME/.ssh.

Placing them in /tmp is completely unexpected for anyone familiar with ssh.



IIRC (please correct me if I'm wrong @davidwilliamson) the config.json may need absolute paths and cannot expand environment variables. If that's the case, /tmp seems a less risky default than /root. I think the directory also needs to already exist or the plugin will fail to install, and /tmp exists on all Linux distributions.


@cyli I wonder if we can find a nice way to get this to be the docker user, whatever that may be. /tmp may have all sorts of junk and mapping it there seems risky.



@stevvooe Hm... I don't know enough about what is provided to plugins to know whether we can find that out - is it possible from within a container to find the docker user, without mounting the docker directory in or some such?

Otherwise, the only thing I can think of is /home? :|

Also, the user can manually change the mounted directory later to their actual home directory.
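
For example, assuming the mount is named KeyPath as in the config.json snippet quoted below, repointing it later might look like this (a sketch; `docker plugin set` only works while the plugin is disabled):

```
$ docker plugin disable vieux/sshfs
$ docker plugin set vieux/sshfs KeyPath.source=/home/user/.ssh
$ docker plugin enable vieux/sshfs
```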

davidwilliamson (Author)


Ying is correct: environment variables like $HOME cannot be used in the plugin's config.json file (which defines the privileges the plugin will request).

Some things to bear in mind about how plugins work:

  1. The mount path defined by the plugin must exist on the user's machine prior to performing the plugin install, or else the install will fail.

  2. Installing a plugin causes the plugin to not only be downloaded, but also enabled. This means that granting the requested privileges during the install is the last chance anyone has to limit the scope of what the plugin can do before the plugin's code begins executing. So whatever path we set as the default path is readable by the plugin as soon as it is installed.

I chose /tmp as a compromise between "must exist on a wide collection of OS families" and "least likely to contain sensitive user or system files at the time the plugin is installed".
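
One partial mitigation for point 2, assuming this Docker release supports the `--disable` flag on `docker plugin install`: the plugin can be downloaded and its privileges reviewed before it is ever enabled.

```
# Download and grant privileges, but do not start the plugin yet
$ docker plugin install --disable davidw135/sshfs

# Review the requested mounts and privileges, then enable explicitly
$ docker plugin inspect davidw135/sshfs
$ docker plugin enable davidw135/sshfs
```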


"least likely to contain sensitive user or system files at the time the plugin is installed"

/tmp can be very sensitive. I really don't have a better suggestion but I'd like this to feel more like ssh, if we possibly can.

"name": "KeyPath",
"description": "Path to SSH keys",
"source": "/tmp",
"destination": "/tmp/sshfs",


This should only take the path to a single key.

davidwilliamson (Author)


The current design allows the user to create multiple volumes using different keys, so long as the keys are all in the same directory.

Changing the plugin to take the path to a single key would effectively limit the user to working with remote hosts that all share the same key at any one time.

This is because changing the setting to accept a different path requires stopping any containers that are using the volume created by the plugin, and removing those volumes.
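
For example (hypothetical hosts and key file names; both keys live in the single configured directory):

```
$ docker volume create -d vieux/sshfs -o sshcmd=alice@hostA:/srv/data -o identity=key_a volA
$ docker volume create -d vieux/sshfs -o sshcmd=bob@hostB:/srv/backup -o identity=key_b volB
```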


Don't we need stronger isolation than that? Can we not run multiple instances of a plugin with different configuration?

This is because changing the setting to accept a different path requires stopping any containers that are using the volume created by the plugin, and removing those volumes.

I see. Is there a way to parameterize it per mount?

It seems very dangerous to let each mount have access to all the remotes.

cc @vieux

davidwilliamson (Author)


Not sure what you are asking. The way the plugin works in this PR is:

  1. User configures the plugin to work with some directory path
  2. User creates a volume that defines a remote host + key:
    `docker volume create -d vieux/sshfs -o sshcmd=<user@host:path> -o identity=<identity_file_name> <volumename>`
  3. User bind-mounts that volume to a container

If the user wants to work with additional remote hosts (using the same or possibly a different key), they can repeat steps 2 and 3.


@davidwilliamson If I understand correctly, that identity file is accessible in all container instances that reference it. There doesn't seem to be a point where we check that a particular container is using a particular remote identity.

@cyli Do we have a way of accessing these keys from the secrets store? Like, can we mount the secret into the plugin?


@cyli Do you think parameterizing for each volume, rather than container, is sufficient here?

davidwilliamson (Author)


@cyli I agree that secrets don't seem to be visible to plugins. I tried creating a secret and accessing it as /run/secrets/foo from within the plugin and had no success.


cyli commented Jan 30, 2017


@stevvooe I'm not entirely sure if I understand the question, so please bear with me. By "container" in "...parameterizing for each volume, rather than container,...", do you mean the possible plugin containers? (e.g. the runc containers that are running the plugin code?) Or do you mean each container that requests a volume?

If the first, AFAIK (@davidwilliamson and @vieux please correct me if I'm wrong) only one instance of the plugin container runs on each node? So the multiple SSHFS volumes that can be created will all be mounted into that one container, and I guess provided to other containers that call it? (after that I have no idea how it gets attached to running containers that require the volume).

If you mean mapping each container that requests an sshfs volume to a volume that contains keys, that can't be done prior to the plugin install. I think the volume plugin also doesn't know anything about the containers that request it, other than the caller ID (is this the container ID, maybe?).

Even if we added the ssh identities via secrets though, since there is just one global plugin, I think all users would have access to all the ssh identities mounted in via secrets.

That said, the only other option I can think of to implement what I think you are suggesting (have the user mount the SSH key, or otherwise provide it, into the container that wants the sshfs volume, and have the volume driver use that) is to give the plugin permission to mount the docker socket and introspect containers (assuming the caller ID on the mount API is the container ID), and somehow exec into that container or otherwise copy the identity out of it?


@cyli I am just wondering if we can provide the credentials to the instance of the volume, rather than to the whole plugin. Whether we make that available to the plugin container, to the user container or elsewhere is an implementation detail.

I guess the main issue is that we pluck --identity from the plugin mount, rather than a location associated with the particular volume mount.

I think we are missing something here to make this work correctly.



@stevvooe: Hm - I suppose one solution is to mount all of / into the plugin container, and allow the user to specify the .ssh directory to use for the credentials. I think the goal was to try to not have the plugin container mount everything, though.

@stevvooe

I'm wondering if having a mount on the plugin is the wrong approach. We end up having a situation where keys are shared by all users of the plugin.

Perhaps it would make more sense for the plugin to take a common location for the SSH key, and one mounts that into the container via secrets or another mechanism.

@davidwilliamson (Author)

Since we can't reach a consensus, I am withdrawing this PR.

@stevvooe

@davidwilliamson I have no intention of blocking this PR. We just need to come up with a way to do this securely that integrates well with existing sshfs workflows.

To be honest, I think we might be missing something critical to be able to parameterize plugins with common filesystem data per plugin use.

@mattzuba

Just saw this after playing for a little bit with config.json - see my very minor diff and associated comments in #24 if you have a moment. While it works, is there anything wrong or bad about it?

@amit213

amit213 commented Aug 20, 2017

Could someone please guide me on how best to specify the SSH key file when creating a docker volume?

@blocknotes

Is there any news about this feature? It would be great to avoid clear-text passwords in a Dockerfile.

@vieux (Owner)

vieux commented Sep 4, 2017

@blocknotes a new release will be out soon, see #24

@vieux (Owner)

vieux commented Sep 4, 2017

Sorry, I meant #33.
