
VolumeDriver.Mount: exit status 1 #12

Open
cron410 opened this issue Dec 8, 2017 · 12 comments

@cron410 commented Dec 8, 2017

I currently have 2 gluster servers, also running docker, with app1 in a container using this plugin.

I recently set up a 3rd docker host in the same datacenter as the other two. They are essentially on a public LAN with 1ms ping between them. The only difference is that the two gluster servers/docker hosts are running Debian Linux and the 3rd host is running RancherOS. I temporarily disabled the firewall on both gluster servers for testing and re-enabled it when finished.

Volume info from one Gluster server.

gluster volume info

Volume Name: docker-app1
Type: Replicate
Volume ID: 4bf3fb99-0ee7-4345-9f98-fc3039204749
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1.mydomain.com:/docker
Brick2: gluster2.mydomain.com:/docker
Options Reconfigured:
auth.allow: all
transport.address-family: inet
nfs.disable: on

on the RancherOS host:

[rancher@rancher ~]$ docker volume create --driver sapk/plugin-gluster --opt voluri="gluster.mydomain.com:docker-app1" --name docker-app1                  
docker-app1

[rancher@rancher ~]$ docker volume ls
DRIVER                       VOLUME NAME
local                        f9a1fe0d3dbaa575b0c3e17753a6931f727eb7e610f0173b574b7cac42419044
local                        fa0909ccc4200a6af2fe52d47e460514ccd0e80c06a23c4caca49215a073ae61
local                        test
sapk/plugin-gluster:latest   docker-app1

[rancher@rancher ~]$ docker plugin ls
ID                  NAME                         DESCRIPTION                   ENABLED
d1e1965d1ebe        sapk/plugin-gluster:latest   GlusterFS plugin for Docker   true

[rancher@rancher ~]$ docker run -v docker-app1:/mnt --rm -ti ubuntu
docker: Error response from daemon: VolumeDriver.Mount: exit status 1.
See 'docker run --help'.

@sapk (Owner) commented Dec 8, 2017

Does the host have fuse installed?
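
A minimal way to check, assuming a typical Linux host (the userspace helper name varies by distro):

# Device node the gluster fuse client needs:
ls -l /dev/fuse
# Kernel support for the fuse filesystem:
grep fuse /proc/filesystems
# Userspace helper, fusermount or fusermount3 depending on the distro:
command -v fusermount || command -v fusermount3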

@cron410 (Author) commented Dec 10, 2017

[rancher@rancher ~]$ ls /dev/fuse
/dev/fuse

I have an rclone container that is able to use /dev/fuse to mount a cloud storage drive.

@sapk (Owner) commented Dec 10, 2017

Can you debug the plugin to read the gluster client logs (https://docs.docker.com/engine/extend/#debugging-plugins)?
In most cases, it is the host failing to resolve one of the names of the gluster hosts.
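
One way to surface those logs, a sketch assuming a systemd host (a managed plugin's stdout is forwarded to the Docker daemon log):

# Settings can only be changed while the plugin is disabled:
docker plugin disable sapk/plugin-gluster
docker plugin set sapk/plugin-gluster DEBUG=1
docker plugin enable sapk/plugin-gluster
# Retry the failing mount, then look for the gluster client error:
journalctl -u docker.service | grep gluster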

@badele commented Dec 14, 2017

Hi,

I have the same issue.

I tried to test your plugin with this docker recipe.

Note: the glusterfs cluster itself seems to work with this recipe.

I am using an Arch Linux host, and I see no logs for the docker plugin, or I don't know how to activate the plugin's debug/verbose mode (I enabled it with the docker plugin set sapk/plugin-gluster DEBUG=1 command).

Thanks for your help

@sapk (Owner) commented Dec 14, 2017

@badele That's normal: the plugin has to resolve the node names of the volume (node-1, node-2, node-...), but those names are only resolved by docker inside the same network.

I recommend replacing the node names with their respective IPs at volume creation: gluster volume create dockerstore replica 3 IP-node-1:/data/glusterfs/store/dockerstore IP-node-2:/data/glusterfs/store/dockerstore IP-node-3:/data/glusterfs/store/dockerstore (Note: you can keep the node names for peer probe.)

This is how gluster works: when you mount a volume, the client retrieves the brick configuration (node-1:/data/glusterfs/store/dockerstore node-2:/data/glusterfs/store/dockerstore ...), but outside the docker network (on the host or in the plugin container) gluster can't resolve those names. Another solution is to add the node-X container IPs to the host's /etc/hosts file so those names resolve.

If you want, I set up the same type of configuration for integration testing. Just clone the repo and run make test-integration (code here: https://github.com/sapk/docker-volume-gluster/tree/master/gluster/integration)
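
A sketch of that /etc/hosts workaround, assuming the gluster containers are named node-1/node-2/node-3 as in the example above:

# Map each gluster container name to its IP on the host, so the plugin
# (which runs outside the docker network) can resolve the brick names:
for n in node-1 node-2 node-3; do
  ip=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$n")
  echo "$ip $n" | sudo tee -a /etc/hosts
done

Note that container IPs can change on restart, so pinning them in /etc/hosts is fragile; creating the volume against stable IPs (or real DNS names) is the more robust fix.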

@badele commented Dec 14, 2017

What an idiot! I forgot to replace the hostnames with IPs at volume creation :)

I think you're right; I will test this this evening.

Thanks for your help, and good job on your project.

@sapk (Owner) commented Dec 14, 2017

I did the same ^^ :

log.Print(cmd("docker-compose", "-f", pwd+"/docker/gluster-cluster/docker-compose.yml", "exec", "-T", "node-1", "gluster", "volume", "create", "test-replica", "replica", "3", "node-1:/brick/replica", "node-2:/brick/replica", "node-3:/brick/replica"))

We always learn from our mistakes ;-)

@badele commented Dec 14, 2017

It works fine for me. @sapk, thanks for your help :)

@cron410 (Author) commented Dec 14, 2017 via email

@sapk (Owner) commented Dec 14, 2017

If the host can resolve the domains of the gluster servers, it is fine.
In @badele's case, the container names were used as the gluster server names in the volume, but those only resolve inside the corresponding docker network.
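
A quick sketch to verify that from the mounting host (gluster1/gluster2.mydomain.com are the brick names from the volume info above; getent may be missing on minimal hosts like RancherOS, where ping is a crude substitute):

# Every brick hostname listed by 'gluster volume info' must resolve here:
getent hosts gluster1.mydomain.com gluster2.mydomain.com
# or, on a minimal host:
ping -c 1 gluster1.mydomain.com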

@badele commented Dec 15, 2017

I've completed my previous message with the sample code (it only works if we use the containers' IPs).

Edit: I used @sapk's tip for getting the IPs.

@sapk (Owner) commented Dec 15, 2017

@badele I recommend using docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' containerName to get the container IP more reliably (the format string needs quoting, since it contains spaces the shell would otherwise split on).
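
For example, a sketch tying that together with volume creation (node-1 and test-replica reuse the names from the integration test above):

# Resolve the first gluster node's IP, then create the docker volume against it:
IP=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' node-1)
docker volume create --driver sapk/plugin-gluster --opt voluri="${IP}:test-replica" --name test-replica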
