This repository has been archived by the owner on Oct 19, 2022. It is now read-only.

Error response from daemon: VolumeDriver.Mount: exit status 1%!(EXTRA []interface {}=[]). #45

Open
swvajanyatek opened this issue Jan 3, 2018 · 24 comments

Comments

@swvajanyatek

I'm able to create the volume successfully, but when I try to utilize it, I get the following error:

create:

docker volume create -d vieux/sshfs -o sshcmd=root@192.168.1.2:/mnt/docker/vieux_sshfs/nexus3/nexus-data \
    -o IdentityFile=/root/.ssh/id_rsa.pub \
    -o transform_symlinks                 \
    -o follow_symlinks                    \
    -o allow_other                        \
    -o reconnect                          \
    -o StrictHostKeyChecking=no           \
    -o kernel_cache                       \
    -o cache=yes                          \
    -o auto_cache                         \
    -o big_writes                         \
    -o compression=no                     \
  sshvolume_nexus3

sshvolume_nexus3
[root@docker-test ~]# docker run -d -p 8081:8081 --name nexus -v sshvolume_nexus3:/nexus-data sonatype/nexus3
docker: Error response from daemon: VolumeDriver.Mount: exit status 1%!(EXTRA []interface {}=[]).
See 'docker run --help'.

This is my first foray into vieux/sshfs, so I'm not entirely sure this is a bug.

@athurg
Contributor

athurg commented Jan 5, 2018

Maybe it's caused by the remote path not existing.

Try creating the remote path first, then mounting the volume.
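
For example, with the sshcmd from the original report, creating it from the Docker host would look something like:

ssh root@192.168.1.2 'mkdir -p /mnt/docker/vieux_sshfs/nexus3/nexus-data'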

@swvajanyatek
Author

swvajanyatek commented Jan 5, 2018 via email

@ghost

ghost commented Feb 1, 2018

@swvajanyatek - did you get this figured out? I am having exactly the same error. I am also new to this plugin and docker.

@swvajanyatek
Author

@jfinlins - unfortunately, no. I moved forward with sshfs from inside the container.

@hungrybirder

The same issue here.

@vincentracine

Getting the same issue.

@quwolt

quwolt commented Feb 28, 2018

Seems like the plugin doesn't work...
Got the same issue...

@adamelliotfields

@swvajanyatek, try changing id_rsa.pub to id_rsa. When SSHing into a server, you use the private key; the contents of the public key go into the authorized_keys file.
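
Applied to the create command from the original report, that would look roughly like this (a sketch, assuming the private key sits at /root/.ssh/id_rsa on the host):

docker volume create -d vieux/sshfs \
    -o sshcmd=root@192.168.1.2:/mnt/docker/vieux_sshfs/nexus3/nexus-data \
    -o IdentityFile=/root/.ssh/id_rsa \
  sshvolume_nexus3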

You can also update your settings to automatically include your key by running docker plugin set vieux/sshfs sshkey.source=/home/<user>/.ssh/
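
Note that docker plugin set only works while the plugin is disabled, so the full sequence would be something like:

docker plugin disable vieux/sshfs
docker plugin set vieux/sshfs sshkey.source=/home/<user>/.ssh/
docker plugin enable vieux/sshfs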

I just started playing around with sshfs today on a 3-node local cluster using docker-machine. Everything is working as advertised, although I haven't tried Swarm Mode yet.

@kamikat

kamikat commented Mar 5, 2018

Same issue here.

Using sshfs inside the container works with /dev/fuse mounted and the --privileged option.
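
A minimal sketch of that workaround (my-sshfs-image stands in for any image with sshfs installed):

docker run -it --privileged --device /dev/fuse my-sshfs-image sh
# then, inside the container:
sshfs user@192.168.1.2:/remote/path /mnt/data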

@aliaskovski

docker run -it -v sshvolume:/tmp busybox pwd
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d070b8ef96fc: Pull complete
Digest: sha256:2107a35b58593c58ec5f4e8f2c4a70d195321078aebfadfbfb223a2ff4a4ed21
Status: Downloaded newer image for busybox:latest
docker: Error response from daemon: VolumeDriver.Mount: exit status 1%!(EXTRA []interface {}=[]).

Same issue.

@kifeo

kifeo commented Mar 14, 2018

I have the same issue here:
docker run -it -v sshvolume:/mnt/here busybox ls /mnt/here

Run on latest Debian.

@dulom

dulom commented Mar 16, 2018

Same here with Docker version 18.02.0-ce, build fc4de44

@jnials

jnials commented Mar 20, 2018

Same problem here with 17.12.1-ce.

@chgarnier

Same issue for me with Docker version 17.12.1-ce, build 7390fc6

@Cyber1000

Same here (17.04.0-ce, build 4845c56).
Installing the plugin with DEBUG=1 doesn't change anything.
Paths are OK, and "normal" sshfs works.

Any solutions, or at least ideas on how to debug this?
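
One way to get more detail, assuming a systemd-based host (the plugin's output lands in the Docker daemon's journal):

docker plugin install vieux/sshfs DEBUG=1
journalctl -u docker.service | grep -i sshfs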

@Cyber1000

Found a solution:

  • Update 2018-05-02: the error message was better here; for my system it said "connection reset by peer", which on this specific system means the password wasn't found (or some other kind of login problem).
  • I normally use an SSH key.
  • The first time, I had forgotten to set the SSH key (as stated in the README here on GitHub).
  • I thought I could set it afterwards with "docker plugin set", but it seems that this didn't work.
  • I deleted the plugin and installed it with "docker plugin install vieux/sshfs sshkey.source=/home/<user>/.ssh/", as stated on the front page.
  • That solved my problem; perhaps you are hitting something similar.
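
In other words, roughly this sequence (<user> stands in for your user name):

docker plugin disable vieux/sshfs
docker plugin rm vieux/sshfs
docker plugin install vieux/sshfs sshkey.source=/home/<user>/.ssh/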

@marco-brandizi

For me it was a problem of understanding the required paths correctly. I could clarify this thanks to #58, but only partially:

  • sshkey.source, a plugin setting, is where the keys are taken from, to be copied into the plugin container. This directory should contain the keys for the hosts/filesystems that you want to mount (it is probably a good idea to put an authorized_keys file with the public keys of the same hosts under this directory too).
  • docker volume create will work with -o IdentityFile=/root/.ssh/sshserver_rsa if sshkey.source contains sshserver_rsa. I cannot understand why (I don't see any /root/.ssh when I connect to the container that mounts the sshfs volume correctly).
  • This is also relevant if you want to connect to the filesystem of your hosting OS (I'm testing this for the first time, so it's a good target): you need to set the server IP correctly (in the case of macOS, its IP works even though it isn't public but something like 192.168.*.*; your case, it seems, might be different).

I think this should be clarified in the README.
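
Putting the first two points together, a sketch (the user, host, and sshserver_rsa key name are placeholders):

docker plugin install vieux/sshfs sshkey.source=/home/<user>/.ssh/
docker volume create -d vieux/sshfs \
    -o sshcmd=user@sshserver:/some/path \
    -o IdentityFile=/root/.ssh/sshserver_rsa \
  myvolume

Presumably /root/.ssh here refers to the inside of the plugin's own container (where sshkey.source gets mounted), not to the containers that mount the volume, which would explain why the directory isn't visible from them.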

@bscheshirwork

I see:

docker run -it -v sshvolume:/testfolder busybox ls /testfolder
docker: Error response from daemon: error while mounting volume '/mnt/volumes/099665408449fffc8b87e51dd9f93d85': VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer

if I use a key from a non-root user. Docker runs the volume plugin as root, and I have a permission problem with the secret key.

$ ls -luha /home/dev/.ssh/id_docker_to_dev_service
-rw------- 1 dev dev 3,2K jun 13 12:04 /home/dev/.ssh/id_docker_to_dev_service

Solution (works for me)
Use SSH as the root user. Create a new key pair and use it:

docker plugin install vieux/sshfs DEBUG=1 sshkey.source=/root/.ssh/
sudo su
# ssh-keygen -t rsa -b 4096 -C "root@localmachine to dev@service"
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/id_root_to_dev_service
# ssh-copy-id -i /root/.ssh/id_root_to_dev_service dev@remote-ip-here-for-access-within-password
exit 
docker volume create -d vieux/sshfs --name sshvolume -o sshcmd=dev@remote-ip-here:/remote-folder-on-service -o IdentityFile=/root/.ssh/id_root_to_dev_service
docker run -it -v sshvolume:/testfolder busybox ls /testfolder

@houxiyao

@bscheshirwork Hello, I also encountered this problem, but in my case it happens under the root user. Is there any way to solve it? Or where is the problem?

@bscheshirwork

@houxiyao try checking the SSH connection from root to the destination without the docker sshfs plugin (just connect with plain ssh).
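
For example, with the key from my earlier comment:

sudo ssh -i /root/.ssh/id_root_to_dev_service dev@remote-ip-here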

@bscheshirwork

Also check the config sshkey.source=/root/.ssh/ and the corresponding folder /root/.ssh/.
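
You can verify what the plugin actually sees with something like:

docker plugin inspect vieux/sshfs --format '{{ json .Settings.Mounts }}'
sudo ls -la /root/.ssh/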

@chandu2035

chandu2035 commented Jan 17, 2020

Seems there is a bug in the vieux/sshfs volume driver plugin.
Tried the steps below as part of troubleshooting:

  1. Enabled SSH authentication between the two nodes (manager and worker), both running the same Docker version.
  2. Used the same user ID for SSH (docker).
  3. Created a directory (/mnt/data) on the worker node with full permissions.
  4. Created a volume with the sshfs volume driver using the above-mentioned path:
    $ docker volume ls (manager node)
    DRIVER VOLUME NAME
    vieux/sshfs:latest ssh-vol

$ docker volume ls (worker node)
DRIVER VOLUME NAME
local ssh-vol

  5. When I try to run a container using the created volume (e.g. ssh-vol), it throws an error on the manager node:
    Error response from daemon: error while mounting volume '/mnt/sda1/var/lib/docker/plugins/0834c105bb86b3d1f26d134cbfed823af1032fe77844a4ba6c8b294dd483a2c8/rootfs': VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer
  6. When I create the service with 2 replicas, both replicas run fine on the worker node, but 1 fails to start on the manager node, since the manager's availability is "active".

Is there any way to work around or solve this? Please suggest.
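
A possible stop-gap (an untested sketch; my-service and my-image are placeholders) would be to constrain the service to workers, so that no replica lands on the manager:

docker service create --name my-service --replicas 2 \
    --constraint node.role==worker \
    --mount type=volume,source=ssh-vol,target=/mnt/data,volume-driver=vieux/sshfs:latest \
  my-image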

@a-pashkov

I've added single quotes and it worked:

docker volume create -d vieux/sshfs \
    -o sshcmd='user@192.168.0.1:/pictures' \
    -o password='secret' \
    pictures
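
(Presumably the quotes keep the shell from eating special characters in the password and sshcmd before they reach docker.)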

@antontre1

Found something that can potentially be helpful.

Look at #19 (comment)
