Can't use s3fs plugin with other user than root #907

Open
galindro opened this Issue Jun 29, 2017 · 27 comments

9 participants
galindro commented Jun 29, 2017

Summary

I need to start a container as a user other than root (graylog, for example) and use rexray/s3fs as a persistent volume, but it doesn't work: I can only access the persistent volume as the root user from inside the container.

Bug Reports

Version

docker plugin ls
ID                  NAME                DESCRIPTION               ENABLED
19a00812f486        rexray/s3fs:0.8.2   REX-Ray for Amazon S3FS   true

Expected Behavior

I expect to be able to use the rexray/s3fs plugin in containers that run as users other than root.

Actual Behavior

This is what happens if I try to access a volume from the rexray/s3fs plugin in a container running as a non-root user:

graylog@33098acdd9b2:/usr/share/graylog$ ls -lh data/
ls: cannot access data/config: Permission denied
total 12K
d????????? ? ?       ?          ?            ? config
drwxr-xr-x 2 graylog graylog 4.0K Apr  4 12:58 contentpacks
drwxr-xr-x 2 root    root    4.0K Apr  4 12:58 journal
drwxr-xr-x 2 root    root    4.0K Apr  4 12:58 log

Steps To Reproduce

  1. Create a s3 bucket
  2. Install rexray/s3fs v.0.8.2
  3. Run graylog container with graylog user:
docker run -ti --user=graylog --rm -v my.s3.volume:/usr/share/graylog/data/config graylog2/server:2.2.3-1 /bin/bash
  4. Try to access /usr/share/graylog/data/config from within the container
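
For completeness, the full reproduction would be something like the following (plugin install flags taken from the Settings section of the inspect output below; the credentials are placeholders):

```shell
docker plugin install rexray/s3fs:0.8.2 \
  S3FS_ACCESSKEY=myaccesskey \
  S3FS_SECRETKEY=mysecretkey \
  S3FS_REGION=sa-east-1
docker run -ti --user=graylog --rm \
  -v my.s3.volume:/usr/share/graylog/data/config \
  graylog2/server:2.2.3-1 /bin/bash
```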

Configuration Files

docker plugin inspect rexray/s3fs:0.8.2 
[
    {
        "Config": {
            "Args": {
                "Description": "",
                "Name": "",
                "Settable": null,
                "Value": null
            },
            "Description": "REX-Ray for Amazon S3FS",
            "Documentation": "https://github.com/codedellemc/rexray/.docker/plugin/s3fs",
            "Entrypoint": [
                "/rexray.sh",
                "rexray",
                "start",
                "-f",
                "--nopid"
            ],
            "Env": [
                {
                    "Description": "",
                    "Name": "REXRAY_FSTYPE",
                    "Settable": [
                        "value"
                    ],
                    "Value": "ext4"
                },
                {
                    "Description": "",
                    "Name": "REXRAY_LOGLEVEL",
                    "Settable": [
                        "value"
                    ],
                    "Value": "warn"
                },
                {
                    "Description": "",
                    "Name": "REXRAY_PREEMPT",
                    "Settable": [
                        "value"
                    ],
                    "Value": "false"
                },
                {
                    "Description": "",
                    "Name": "S3FS_ACCESSKEY",
                    "Settable": [
                        "value"
                    ],
                    "Value": ""
                },
                {
                    "Description": "",
                    "Name": "S3FS_REGION",
                    "Settable": [
                        "value"
                    ],
                    "Value": ""
                },
                {
                    "Description": "",
                    "Name": "S3FS_SECRETKEY",
                    "Settable": [
                        "value"
                    ],
                    "Value": ""
                }
            ],
            "Interface": {
                "Socket": "rexray.sock",
                "Types": [
                    "docker.volumedriver/1.0"
                ]
            },
            "IpcHost": false,
            "Linux": {
                "AllowAllDevices": true,
                "Capabilities": [
                    "CAP_SYS_ADMIN"
                ],
                "Devices": null
            },
            "Mounts": [
                {
                    "Description": "",
                    "Destination": "/dev",
                    "Name": "",
                    "Options": [
                        "rbind"
                    ],
                    "Settable": null,
                    "Source": "/dev",
                    "Type": "bind"
                }
            ],
            "Network": {
                "Type": "host"
            },
            "PidHost": false,
            "PropagatedMount": "/var/lib/libstorage/volumes",
            "User": {},
            "WorkDir": "",
            "rootfs": {
                "diff_ids": [
                    "sha256:a7f0d37906c7f57b73a838cee49fe6068628bd7613eea766a12ca914c0921aaf"
                ],
                "type": "layers"
            }
        },
        "Enabled": true,
        "Id": "19a00812f486d4965c1b7347ed7746b473bd48028b6142c05fbc487e2733bbaf",
        "Name": "rexray/s3fs:0.8.2",
        "PluginReference": "docker.io/rexray/s3fs:0.8.2",
        "Settings": {
            "Args": [],
            "Devices": [],
            "Env": [
                "REXRAY_FSTYPE=ext4",
                "REXRAY_LOGLEVEL=warn",
                "REXRAY_PREEMPT=false",
                "S3FS_ACCESSKEY=myaccesskey",
                "S3FS_REGION=sa-east-1",
                "S3FS_SECRETKEY=mysecretkey"
            ],
            "Mounts": [
                {
                    "Description": "",
                    "Destination": "/dev",
                    "Name": "",
                    "Options": [
                        "rbind"
                    ],
                    "Settable": null,
                    "Source": "/dev",
                    "Type": "bind"
                }
            ]
        }
    }
]

Logs

How can I get the Docker plugin logs?

clintkitson commented Jun 29, 2017

Hello @galindro. I believe the parameter that may help you out is linux.volume.filemode. Unfortunately, this parameter is not currently exposed by the plugin, but it is available when using REX-Ray as a standalone process with S3FS. It would be a minimal amount of work to update the configuration file and create a new plugin with this option exposed.

Can you try REX-Ray without the plugin and set this parameter? If set correctly, running rexray env | grep filemode should show it set to the value you configured. It could be 777 in this case.

Open to ideas here.

clintkitson commented Jun 29, 2017

The logs should be displayed in the Docker log. You can also pass the plugin the REXRAY_DEBUG=true environment variable to get more verbose output, or look for rexray.log in the container OS filesystem.
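
A sketch of how that might look with the managed plugin (REXRAY_LOGLEVEL is listed as settable in the inspect output above; the plugin must be disabled before docker plugin set will work, and the journalctl line assumes a systemd host):

```shell
docker plugin disable rexray/s3fs:0.8.2
docker plugin set rexray/s3fs:0.8.2 REXRAY_LOGLEVEL=debug
docker plugin enable rexray/s3fs:0.8.2
# then watch the Docker daemon log for plugin output:
journalctl -fu docker.service | grep -i rexray
```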

galindro commented Jun 29, 2017

@clintkitson, is the linux.volume.filemode parameter used in the rexray/ebs plugin? I'm asking because this issue doesn't occur with the rexray/ebs plugin, only with the S3FS plugin. IMHO, this is related to s3fs, not to REX-Ray...

I've checked the s3fs container tags on Docker Hub and noticed that there are new s3fs releases. The latest of them (0.9.2) has the S3FS_OPTIONS environment variable. With it, I could try to set the allow_other s3fs option, as described here: #809 . I think that using this option could solve this issue. I'll give it a try...
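
If S3FS_OPTIONS pans out, a hypothetical install along these lines might do it (the uid/gid values are placeholders for the container user's IDs, and S3FS_OPTIONS is assumed to be passed straight through to s3fs as mount options):

```shell
docker plugin install rexray/s3fs:0.9.2 \
  S3FS_ACCESSKEY=myaccesskey \
  S3FS_SECRETKEY=mysecretkey \
  S3FS_REGION=sa-east-1 \
  S3FS_OPTIONS="allow_other,uid=1100,gid=1100"
```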

galindro commented Jun 29, 2017

@clintkitson, please check this: #908 . Version 0.9.2 isn't working... :(

clintkitson commented Jun 29, 2017

Sounds good, thanks for checking that out and the info.

galindro commented Jul 4, 2017

@clintkitson, just for the record: I've tried rexray s3fs version 0.8.2 with the latest docker-ce release (17.06.0-ce) and I've hit another bug: if I try to use the same volume in two services of the same stack, only one container mounts the volume and the other fails with this error:

ybmfe0pcp7rytem7oea00o9cq   backup_s3.1         galindro/galintools:latest@sha256:3771400755eaa36685e06c68781f8101dbc6c3bf4b806566e56b34eb28afaa98   ip-10-0-3-246       Shutdown            Failed 21 seconds ago   "starting container failed: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/var/lib/docker/plugins/0ce6acc31c2d2d2a69107063f003a1c2ad834e95ff8c02324c044428c65b3fa6/rootfs/var/lib/libstorage/volumes/mybucket/data\\\" to rootfs \\\"/var/lib/docker/overlay2/1e21012fe5fa4c0175ede28b304f49bf1f5018357eeaa4779704198ecc46428f/merged\\\" at \\\"/opt/galintools/scripts\\\" caused \\\"stat /var/lib/docker/plugins/0ce6acc31c2d2d2a69107063f003a1c2ad834e95ff8c02324c044428c65b3fa6/rootfs/var/lib/libstorage/volumes/mybucket/data: no such file or directory\\\"\""

clintkitson commented Jul 5, 2017

If you manually use the docker run command are you able to mount the volume twice?
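
For example (a minimal sketch, assuming the volume name from the original report):

```shell
# Start two containers sharing the same volume, outside of swarm/stack:
docker run -d --name mount-test-1 -v my.s3.volume:/data busybox sleep 3600
docker run -d --name mount-test-2 -v my.s3.volume:/data busybox sleep 3600
# If the second mount is broken, this ls should fail with the same stat error:
docker exec mount-test-2 ls /data
```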


robertgartman commented Aug 13, 2017

Based on the previous entries in this issue, I've tried creating a volume like this:

docker volume create --driver rexray/s3fs:0.9.2 --opt linux.volume.fileMode=0777 --opt allow_other=true myrandoms3bucketname876756234

I ran the above on Docker version 17.06.0-ce, build 02c1d87

My compose.yml file:

version: '3.3'

services:
  test:
    image: docker.elastic.co/logstash/logstash:5.5.1
    command: 'bash -c "whoami && mount | grep s3fs && ls -la /usr/share/logstash/pipeline/test.txt"'
    volumes:
      - rexray-logstash:/usr/share/logstash/pipeline
    deploy:
      replicas: 1
      placement:
        constraints:
         - node.labels.datacenter==grottan

volumes:
  rexray-logstash:
    external:
      name: 'myrandoms3bucketname876756234'

Running

docker stack deploy -c "compose.yml" test && docker service logs -f test_test
results in:

test_test.1.4og7a8d6xb7q@R520    | logstash
test_test.1.4og7a8d6xb7q@R520    | s3fs on /usr/share/logstash/pipeline type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
test_test.1.4og7a8d6xb7q@R520    | ls: cannot access /usr/share/logstash/pipeline/test.txt: Permission denied

Conclusion:

  • logstash image runs under the user logstash
  • the mount looks healthy
  • there is (still) a permission issue even though --opt linux.volume.fileMode=0777 --opt allow_other=true was used when creating the volume.

Mounting the volume on a busybox image works as expected and files are visible (commands being executed as root).

@clintkitson Is my usage of docker volume create supported? It's not clear to me whether I've hit a bug or am running the command with bad syntax.

clintkitson commented Aug 14, 2017

@robertgartman The linux.volume.fileMode is a rexray service option and not one that gets passed as an option from Docker. Can you try running REX-Ray as a standalone service (not managed plugin) and configuring this option in the config.yml file and then restart the rexray service?
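
A minimal standalone config for that test might look like this (sketch only; the key layout mirrors the configs elsewhere in this thread, and the s3fs values are placeholders):

```yaml
# /etc/rexray/config.yml
libstorage:
  service: s3fs
linux:
  volume:
    fileMode: 0777
s3fs:
  accessKey: myaccesskey
  secretKey: mysecretkey
  region: sa-east-1
```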

robertgartman commented Aug 14, 2017

Before checking out the standalone service setup - what is the roadmap and intended usage of the plugin model? I'm already using it successfully with containers running as root and the setup is quite smooth. Are there plans to support non-root user for the plugin as well?

clintkitson commented Aug 14, 2017

It isn't specifically on the roadmap right now, but we are open for contributions in the area if it is a useful update.

There is really only one file you would need to add additional options to. See the plugin config file below for s3fs.

https://github.com/codedellemc/rexray/blob/master/.docker/plugins/s3fs/config.json
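
Judging from the Env entries in the docker plugin inspect output earlier in this thread, exposing the option would presumably mean adding an entry like this to that config.json (untested sketch; the name and default here are guesses based on the LINUX_VOLUME_FILEMODE variable that rexray env prints):

```json
{
    "Description": "",
    "Name": "LINUX_VOLUME_FILEMODE",
    "Settable": [
        "value"
    ],
    "Value": "0700"
}
```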


robertgartman commented Aug 15, 2017

I've spent some hours trying to get REX-Ray running as a standalone service. Something seems broken, since I get an error no matter what I put inside config.yml:

root@ip-192-168-105-134:/etc/systemd/system# cat /etc/libstorage/config.yml
# Generator: http://rexrayconfig.emccode.com/
rexray:
  logLevel: debug
linux:
  volume:
    rootPath: /
    fileMode: 0777
libstorage:
  logging:
    level: debug
  service: s3fs
  integration:
    volume:
      operations:
        mount:
          rootPath: /
        remove:
          disable: true
s3fs:
  accessKey: AKI...VA
  region: eu-west-1
  secretKey: <secret>
  disablePathStyle: true
root@ip-192-168-105-134:/etc/systemd/system# systemctl restart rexray
root@ip-192-168-105-134:/etc/systemd/system# systemctl status rexray
● rexray.service - rexray
   Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Tue 2017-08-15 17:09:47 UTC; 2s ago
  Process: 13713 ExecStart=/usr/bin/rexray start -f (code=exited, status=0/SUCCESS)
 Main PID: 13713 (code=exited, status=0/SUCCESS)

Aug 15 17:09:46 ip-192-168-105-134 rexray[13713]: ----------
Aug 15 17:09:46 ip-192-168-105-134 rexray[13713]: SemVer: 0.6.2
Aug 15 17:09:46 ip-192-168-105-134 rexray[13713]: OsArch: Linux-x86_64
Aug 15 17:09:46 ip-192-168-105-134 rexray[13713]: Branch: v0.9.2
Aug 15 17:09:46 ip-192-168-105-134 rexray[13713]: Commit: 7368564cc71be9a0dbe03ddfbaeda281232eee72
Aug 15 17:09:46 ip-192-168-105-134 rexray[13713]: Formed: Wed, 28 Jun 2017 20:08:31 UTC
Aug 15 17:09:47 ip-192-168-105-134 rexray[13713]: time="2017-08-15T17:09:47Z" level=error msg="error starting libStorage server" error.configKey=libstorage.server.services error.obj=<nil> time=1502816987057
Aug 15 17:09:47 ip-192-168-105-134 rexray[13713]: time="2017-08-15T17:09:47Z" level=error msg="default module(s) failed to initialize" error.configKey=libstorage.server.services error.obj=<nil> time=1502816987057
Aug 15 17:09:47 ip-192-168-105-134 rexray[13713]: time="2017-08-15T17:09:47Z" level=error msg="daemon failed to initialize" error.configKey=libstorage.server.services error.obj=<nil> time=1502816987057
Aug 15 17:09:47 ip-192-168-105-134 rexray[13713]: time="2017-08-15T17:09:47Z" level=error msg="error starting rex-ray" error.configKey=libstorage.server.services error.obj=<nil> time=1502816987057
root@ip-192-168-105-134:/etc/systemd/system# uname -a
Linux ip-192-168-105-134 4.4.0-1030-aws #39-Ubuntu SMP Wed Aug 9 09:43:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Any clues to why the conf is not being picked up?

clintkitson commented Aug 15, 2017

It should be in /etc/rexray/config.yml. You can run rexray env to see whether your config files populated the expected vars.
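
One wrinkle to watch for when grepping: fileMode is written in octal in the YAML, but rexray env prints it in decimal, so 0777 shows up as 511 (as in the LINUX_VOLUME_FILEMODE line later in this thread). A quick sanity check of the conversion:

```shell
# A leading 0 marks the value as octal; 0777 octal is 511 decimal,
# which is the value rexray env reports when fileMode: 0777 is set.
printf '%d\n' 0777
```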

robertgartman commented Aug 16, 2017

Thanks for putting me on track with the config path. REX-Ray is now up and running, and a command like...

docker volume create --driver rexray --name "works123321123"

... does indeed create the bucket works123321123 on my S3 account. So far so good.

But running something like...

root#     docker run -tid --volume-driver=rexray \
>                      -v abucketname-143245:/tmp/test \
>                      --name temp01 busybox  \
>                      'bash -c "whoami && mount  && ls -la /tmp/test"'
82588d82524a612dcc2f5c2ec5f63e72ab30d30af34aba47120c59ac7ed54727
docker: Error response from daemon: error while mounting volume '/': VolumeDriver.Mount: 
{"Error":"missing instance ID"}.

... fails all the time with the complaint about missing instance ID

root# rexray env
INFO[0000] updated log level                             logLevel=debug
DEBU[0000] os.args                                       time=1502865607064 val=[rexray env]
AZUREUD_RESOURCEGROUP=
S3FS_MAXRETRIES=10
GCEPD_ZONE=
S3FS_REGION=eu-west-1
EFS_STATUSMAXATTEMPTS=6
LIBSTORAGE_LOGGING_HTTPREQUESTS=false
VIRTUALBOX_VOLUMEPATH=
AWS_KMSKEYID=
SCALEIO_USERNAME=
CINDER_REGIONNAME=
CINDER_AUTHURL=
S3FS_HOSTNAME=R520
LIBSTORAGE_SERVER_TASKS_LOGTIMEOUT=0s
LIBSTORAGE_LOGGING_STDERR=
EBS_STATUSTIMEOUT=2m
SCALEIO_THINORTHICK=
SCALEIO_STORAGEPOOLID=
SCALEIO_INSECURE=false
AZUREUD_CLIENTSECRET=
ISILON_SHAREDMOUNTS=false
S3FS_CMD=s3fs
EFS_DISABLESESSIONCACHE=false
LIBSTORAGE_SERVER_AUTH_KEY=
LIBSTORAGE_CLIENT_AUTH_TOKEN=
LIBSTORAGE_DEVICE_SCANTYPE=0
AWS_REGION=
ISILON_ENDPOINT=
S3FS_OPTIONS=
REXRAY_MODULES_DEFAULT-DOCKER_TYPE=docker
REXRAY_HOST=
EFS_TAG=
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DISABLE=false
SCALEIO_USECERTS=false
EC2_SECRETKEY=
AZUREUD_CLIENTID=
ISILON_QUOTAS=false
CINDER_DELETETIMEOUT=10m
SCALEIO_ENDPOINT=
EFS_STATUSTIMEOUT=2m
LIBSTORAGE_SERVER_PARSEREQUESTOPTS=false
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_UNMOUNT_IGNOREUSEDCOUNT=false
LIBSTORAGE_INTEGRATION_DRIVER=linux
VIRTUALBOX_CONTROLLERNAME=SATA
CINDER_DOMAINNAME=
CINDER_CREATETIMEOUT=10m
GCEPD_TAG=
LIBSTORAGE_TLS_SERVERNAME=
SCALEIO_USERID=
ISILON_VOLUMEPATH=
LIBSTORAGE_SERVER_AUTH_DISABLED=false
LIBSTORAGE_EXECUTOR_PATH=/var/lib/libstorage/lsx-linux
LIBSTORAGE_EXECUTOR_DISABLEDOWNLOAD=false
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_REMOVE_DISABLE=true
VIRTUALBOX_DISKIDPATH=/dev/disk/by-id
CINDER_DOMAINID=
GCEPD_KEYFILE=
AWS_MAXRETRIES=10
CINDER_USERNAME=
REXRAY_MODULE_STARTTIMEOUT=10s
LIBSTORAGE_LOGGING_LEVEL=debug
LIBSTORAGE_OS_DRIVER=linux
EBS_TAG=
AWS_ENDPOINT=
EBS_STATUSINITIALDELAY=100ms
AZUREUD_TENANTID=
EFS_ENDPOINT=
LIBSTORAGE_EMBEDDED=false
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_REMOVE_FORCE=false
VIRTUALBOX_TLS=false
VIRTUALBOX_PASSWORD=
AWS_TAG=
DOBS_STATUSMAXATTEMPTS=10
CINDER_TRUSTID=
LIBSTORAGE_SERVER_AUTOENDPOINTMODE=unix
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_ROOTPATH=/data
LIBSTORAGE_TLS_INSECURE=
VIRTUALBOX_ENDPOINT=http://10.0.2.2:18083
VIRTUALBOX_LOCALMACHINENAMEORID=
DOBS_CONVERTUNDERSCORES=false
REXRAY_LOGLEVEL=debug
EFS_REGION=
LIBSTORAGE_TLS_KEYFILE=/etc/libstorage/tls/libstorage.key
AZUREUD_CERTPATH=
AZUREUD_STORAGEACCOUNT=
CINDER_TENANTID=
RBD_TESTMODULE=true
RBD_DEFAULTPOOL=rbd
EFS_SECURITYGROUPS=
EBS_ENDPOINT=
SCALEIO_PROTECTIONDOMAINID=
EC2_TAG=
ISILON_PASSWORD=
REXRAY_MODULES_DEFAULT-DOCKER_DISABLED=false
GCEPD_CONVERTUNDERSCORES=false
LIBSTORAGE_TLS_DISABLED=
VIRTUALBOX_SCSIHOSTPATH=/sys/class/scsi_host/
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_AVAILABILITYZONE=
EBS_STATUSMAXATTEMPTS=10
AZUREUD_SUBSCRIPTIONID=
ISILON_NFSHOST=
S3FS_TAG=
REXRAY_LIBSTORAGE_LOGGING_LEVEL=debug
EFS_STATUSINITIALDELAY=1s
LIBSTORAGE_SERVER_AUTH_ALLOW=
EBS_KMSKEYID=
SCALEIO_DRVCFG=/opt/emc/scaleio/sdc/bin/drv_cfg
DOBS_REGION=
S3FS_SECRETKEY=<cut>
REXRAY_MODULES_DEFAULT-DOCKER_SPEC=/etc/docker/plugins/rexray.spec
LIBSTORAGE_LOGGING_STDOUT=
LIBSTORAGE_LOGGING_HTTPRESPONSES=false
EBS_ACCESSKEY=
SCALEIO_SYSTEMNAME=
AZUREUD_STORAGEACCESSKEY=
CINDER_TOKENID=
CINDER_TENANTNAME=
DOBS_STATUSINITIALDELAY=100ms
EFS_SECRETKEY=
LIBSTORAGE_SERVER_AUTH_DENY=
EBS_REGION=
SCALEIO_PROTECTIONDOMAINNAME=
REXRAY_SERVICE=
GCEPD_STATUSMAXATTEMPTS=10
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_FSTYPE=ext4
LIBSTORAGE_HTTP_READTIMEOUT=300
VIRTUALBOX_USERNAME=
SCALEIO_VERSION=
LIBSTORAGE_TLS_CLIENTCERTREQUIRED=
SCALEIO_SYSTEMID=
EC2_MAXRETRIES=10
CINDER_ATTACHTIMEOUT=1m
EFS_ENDPOINTFORMAT=elasticfilesystem.%s.amazonaws.com
GCEPD_DEFAULTDISKTYPE=pd-ssd
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_IOPS=
LIBSTORAGE_TLS_CERTFILE=/etc/libstorage/tls/libstorage.crt
EC2_REGION=
S3FS_ACCESSKEY=<cut>
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_PATH_CACHE_ENABLED=true
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_PATH_CACHE_ASYNC=true
SCALEIO_GUID=
LIBSTORAGE_HTTP_DISABLEKEEPALIVE=false
LIBSTORAGE_HOST=
EC2_ACCESSKEY=
ISILON_GROUP=
CINDER_AVAILABILITYZONENAME=
LIBSTORAGE_SERVER_AUTH_ALG=HS256
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_RETRYCOUNT=0
LIBSTORAGE_TLS_KNOWNHOSTS=/etc/libstorage/tls/known_hosts
AWS_SECRETKEY=
AWS_ACCESSKEY=
AZUREUD_CONTAINER=vhds
AZUREUD_USEHTTPS=true
ISILON_USERNAME=
S3FS_DISABLEPATHSTYLE=true
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_TYPE=
LIBSTORAGE_CLIENT_CACHE_INSTANCEID=30m
LINUX_VOLUME_ROOTPATH=/data
LIBSTORAGE_SERVER_TASKS_EXETIMEOUT=1m
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_PREEMPT=false
REXRAY_MODULES_DEFAULT-DOCKER_HOST=unix:///run/docker/plugins/rexray.sock
REXRAY_MODULES_DEFAULT-DOCKER_DESC=The default docker module.
EFS_ACCESSKEY=
GCEPD_STATUSINITIALDELAY=100ms
LIBSTORAGE_STORAGE_DRIVER=libstorage
EBS_MAXRETRIES=10
AZUREUD_TAG=
CINDER_USERID=
LINUX_VOLUME_FILEMODE=511
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_PATH=/var/lib/libstorage/volumes
LIBSTORAGE_CLIENT_TYPE=integration
DOBS_STATUSTIMEOUT=2m
EC2_KMSKEYID=
GCEPD_STATUSTIMEOUT=2m
LIBSTORAGE_SERVICE=s3fs
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_RETRYWAIT=5s
LIBSTORAGE_HTTP_WRITETIMEOUT=300
SCALEIO_PASSWORD=
EC2_ENDPOINT=
REXRAY_LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_PATH_CACHE_ENABLED=false
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_SIZE=16
VFS_ROOT=/var/lib/libstorage/vfs
DOBS_TOKEN=
LIBSTORAGE_DEVICE_ATTACHTIMEOUT=30s
EBS_SECRETKEY=
ISILON_INSECURE=false
ISILON_DATASUBNET=
CINDER_SNAPSHOTTIMEOUT=10m
CINDER_PASSWORD=
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_IMPLICIT=true
LIBSTORAGE_TLS_TRUSTEDCERTSFILE=/etc/libstorage/tls/cacerts
SCALEIO_STORAGEPOOLNAME=
DEBU[0000] completed cli execution                       time=1502865607178
INFO[0000] exiting process                               time=1502865607178
DEBU[0000] completed onExit at end of program            time=1502865607178

AZUREUD_CERTPATH=
AZUREUD_STORAGEACCOUNT=
CINDER_TENANTID=
RBD_TESTMODULE=true
RBD_DEFAULTPOOL=rbd
EFS_SECURITYGROUPS=
EBS_ENDPOINT=
SCALEIO_PROTECTIONDOMAINID=
EC2_TAG=
ISILON_PASSWORD=
REXRAY_MODULES_DEFAULT-DOCKER_DISABLED=false
GCEPD_CONVERTUNDERSCORES=false
LIBSTORAGE_TLS_DISABLED=
VIRTUALBOX_SCSIHOSTPATH=/sys/class/scsi_host/
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_AVAILABILITYZONE=
EBS_STATUSMAXATTEMPTS=10
AZUREUD_SUBSCRIPTIONID=
ISILON_NFSHOST=
S3FS_TAG=
REXRAY_LIBSTORAGE_LOGGING_LEVEL=debug
EFS_STATUSINITIALDELAY=1s
LIBSTORAGE_SERVER_AUTH_ALLOW=
EBS_KMSKEYID=
SCALEIO_DRVCFG=/opt/emc/scaleio/sdc/bin/drv_cfg
DOBS_REGION=
S3FS_SECRETKEY=<cut>
REXRAY_MODULES_DEFAULT-DOCKER_SPEC=/etc/docker/plugins/rexray.spec
LIBSTORAGE_LOGGING_STDOUT=
LIBSTORAGE_LOGGING_HTTPRESPONSES=false
EBS_ACCESSKEY=
SCALEIO_SYSTEMNAME=
AZUREUD_STORAGEACCESSKEY=
CINDER_TOKENID=
CINDER_TENANTNAME=
DOBS_STATUSINITIALDELAY=100ms
EFS_SECRETKEY=
LIBSTORAGE_SERVER_AUTH_DENY=
EBS_REGION=
SCALEIO_PROTECTIONDOMAINNAME=
REXRAY_SERVICE=
GCEPD_STATUSMAXATTEMPTS=10
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_FSTYPE=ext4
LIBSTORAGE_HTTP_READTIMEOUT=300
VIRTUALBOX_USERNAME=
SCALEIO_VERSION=
LIBSTORAGE_TLS_CLIENTCERTREQUIRED=
SCALEIO_SYSTEMID=
EC2_MAXRETRIES=10
CINDER_ATTACHTIMEOUT=1m
EFS_ENDPOINTFORMAT=elasticfilesystem.%s.amazonaws.com
GCEPD_DEFAULTDISKTYPE=pd-ssd
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_IOPS=
LIBSTORAGE_TLS_CERTFILE=/etc/libstorage/tls/libstorage.crt
EC2_REGION=
S3FS_ACCESSKEY=<cut>
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_PATH_CACHE_ENABLED=true
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_PATH_CACHE_ASYNC=true
SCALEIO_GUID=
LIBSTORAGE_HTTP_DISABLEKEEPALIVE=false
LIBSTORAGE_HOST=
EC2_ACCESSKEY=
ISILON_GROUP=
CINDER_AVAILABILITYZONENAME=
LIBSTORAGE_SERVER_AUTH_ALG=HS256
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_RETRYCOUNT=0
LIBSTORAGE_TLS_KNOWNHOSTS=/etc/libstorage/tls/known_hosts
AWS_SECRETKEY=
AWS_ACCESSKEY=
AZUREUD_CONTAINER=vhds
AZUREUD_USEHTTPS=true
ISILON_USERNAME=
S3FS_DISABLEPATHSTYLE=true
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_TYPE=
LIBSTORAGE_CLIENT_CACHE_INSTANCEID=30m
LINUX_VOLUME_ROOTPATH=/data
LIBSTORAGE_SERVER_TASKS_EXETIMEOUT=1m
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_PREEMPT=false
REXRAY_MODULES_DEFAULT-DOCKER_HOST=unix:///run/docker/plugins/rexray.sock
REXRAY_MODULES_DEFAULT-DOCKER_DESC=The default docker module.
EFS_ACCESSKEY=
GCEPD_STATUSINITIALDELAY=100ms
LIBSTORAGE_STORAGE_DRIVER=libstorage
EBS_MAXRETRIES=10
AZUREUD_TAG=
CINDER_USERID=
LINUX_VOLUME_FILEMODE=511
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_PATH=/var/lib/libstorage/volumes
LIBSTORAGE_CLIENT_TYPE=integration
DOBS_STATUSTIMEOUT=2m
EC2_KMSKEYID=
GCEPD_STATUSTIMEOUT=2m
LIBSTORAGE_SERVICE=s3fs
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_RETRYWAIT=5s
LIBSTORAGE_HTTP_WRITETIMEOUT=300
SCALEIO_PASSWORD=
EC2_ENDPOINT=
REXRAY_LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_PATH_CACHE_ENABLED=false
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_SIZE=16
VFS_ROOT=/var/lib/libstorage/vfs
DOBS_TOKEN=
LIBSTORAGE_DEVICE_ATTACHTIMEOUT=30s
EBS_SECRETKEY=
ISILON_INSECURE=false
ISILON_DATASUBNET=
CINDER_SNAPSHOTTIMEOUT=10m
CINDER_PASSWORD=
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_IMPLICIT=true
LIBSTORAGE_TLS_TRUSTEDCERTSFILE=/etc/libstorage/tls/cacerts
SCALEIO_STORAGEPOOLNAME=
DEBU[0000] completed cli execution                       time=1502865607178
INFO[0000] exiting process                               time=1502865607178
DEBU[0000] completed onExit at end of program            time=1502865607178
Member

clintkitson commented Aug 16, 2017

That error is likely caused by REX not having access to tools that are expected when using S3FS.

Check the docker file that is used for planting dependencies in the S3FS docker plugin.
https://github.com/codedellemc/rexray/blob/master/.docker/plugins/s3fs/.Dockerfile#L4-L7

robertgartman commented Aug 21, 2017

I've spent quite some time getting the REX-Ray service install working, but in vain. I'm not sure if I got lost in the docs while trying to figure out the right config, but from an end-user perspective this Docker plugin approach is preferable: it works with S3 (which I could not accomplish with the REX-Ray service install) and it's a very smooth setup.

It would be really great if the REX-Ray maintainers would put "non-root user access for the Docker S3 plugin" higher up on the agenda. The plugin fills a great purpose, covering for current weaknesses in Docker Swarm storage scheduling.

rreinurm commented Sep 15, 2017

It would be nice to have fileMode as an option for the Docker plugin; it was not easy to track down why file permissions were limited to the root user only. In addition to fileMode, it would be nice to be able to set owner IDs as well.

rreinurm commented Sep 15, 2017

After playing around with REX-Ray as a host service, all I needed was to set this in /etc/rexray/config.yml:

  volume:
    fileMode: 0755

Could we have fileMode set to 0755 by default? https://github.com/codedellemc/rexray/blob/78e95e6d7a0ecfd53d167d0142aa65788bb10948/libstorage/drivers/os/linux/linux.go#L43
That way it would act the same as the local volume driver, and we'd have less hassle for containers whose processes run as non-root users.
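For anyone trying this: the change only takes effect after the service is restarted; on a systemd host that would be something like (service name assumed from the standard host install):

  sudo systemctl restart rexray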

rreinurm commented Sep 21, 2017

Any opinions regarding my suggestion, @clintkitson or anyone?

Member

clintkitson commented Sep 24, 2017

@rreinurm The modification to the default permissions that you are requesting yields -rwxr-xr-x. I see this as adding read access for non-root users; is that generally what we would want as a default?

Is it possible that what you're really going after is more fine-grained control of the user that owns the files?

rreinurm commented Sep 25, 2017

Fine-grained access control would be the perfect solution; can it be put on the REX-Ray roadmap?

But I would like to fix this with the low-hanging fruit. I don't have statistics, but if people follow the container philosophy of one process per container, there is already a reduced risk that some other process could access data on a mounted volume.

If I just compare with the Docker local volume driver, then -rwxr-xr-x is exactly what happens now. The parent folder /var/lib/docker/volumes has rights restricted to the root user (700), so outside of containers non-root users don't have read access. Inside a container, the volume has to be mounted into the corresponding container before a process can access the data.

Another piece of security advice is to drop root user privileges inside the container. With images that already implement that best practice, it can be quite hard to use the rexray plugin directly from the store. Even if I set file permissions or ownership on the mounted folder, those settings are lost every time I remount the volume. But ownership and access rights for the data inside the mounted folder can still be restricted to the correct user.

I don't see big risks in adjusting the default setting; in fact, it supports other security practices for non-root processes, even userns-remap.

andrewnazarov commented Jan 17, 2018

Any news on this?
I'm facing the same permissions issue when mounting a volume into a custom Tomcat container with the rexray Docker plugin. And yes, I'm running the container as a non-root user.

Is it a problem with s3fs-fuse?

Member

akutz commented Jan 18, 2018

It could be the permissions given to the container when it’s launched. The user that runs the container must be able to mount filesystems.

dradux commented Mar 17, 2018

This is a relatively old issue, but I thought I should update it with the info I found, as I do have a solution for my problem (I could not write to an S3FS-backed rexray/s3fs volume as a user other than root from within a container).

There is now a LINUX_VOLUME_FILEMODE=0777 flag which can be set for the s3fs plugin. After setting this, I was able to mount a volume in a container and read/write to it as a user other than root (apache in my case).

My plugin install command:

docker plugin install rexray/s3fs \
  LINUX_VOLUME_FILEMODE=0777 \
  S3FS_ACCESSKEY=<aws-access-key> \
  S3FS_SECRETKEY=<aws-secret-key> \
  S3FS_REGION=<aws-region> \
  S3FS_OPTIONS=use_cache=/tmp,allow_other,use_rrs

I am using this with docker-compose, my relevant compose info:

version: '2'
services:
    app:
      build:
        context: ./app/
      env_file:
        - .env   # pass full .env file into container.
      volumes:
        - mybucket:/opt/share
      ports:
        - "${APP_PORT}:80"
      depends_on:
        - db
        - redis-sentinel
      # NOTICE: the sys_admin capability is needed to bind mount.
      cap_add:
        - sys_admin
        
volumes:
    mybucket:
      external: true
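If you're not using compose, the equivalent docker run sketch would be (image and bucket names here are placeholders):

  docker run -d --cap-add sys_admin -v mybucket:/opt/share my-app-image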
@clintkitson

This comment has been minimized.

Show comment
Hide comment
@clintkitson

clintkitson Mar 20, 2018

Member

Excellent @dradux thank you for this.

Member

clintkitson commented Mar 20, 2018

Excellent @dradux thank you for this.

ConstantinElse commented Jun 26, 2018

I've tried to follow @dradux's advice as below with version rexray/s3fs:0.11.3, and it doesn't work.
The file permissions are still ---------- inside the container.

docker plugin install rexray/s3fs \
  LINUX_VOLUME_FILEMODE=0755 \
  S3FS_ACCESSKEY=<aws-access-key> \
  S3FS_SECRETKEY=<aws-secret-key> \
  S3FS_REGION=<aws-region> \
  S3FS_OPTIONS=use_cache=/tmp,allow_other
zironycho commented Sep 3, 2018

@ConstantinElse 0.11.1 worked for me.
