# gcsfs



An easy-to-use, cross-platform, and highly optimized Docker Volume Plugin for mounting Google Cloud Storage buckets.

## Table of Contents

- [Installation](#installation)
- [Usage](#usage)
- [Permission](#permission)
- [License](#license)
- [Credits](#credits)
- [Future](#future)

## Installation

gcsfs is distributed on Docker Hub, allowing a seamless install:

```
$ docker plugin install ofekmeister/gcsfs
```

You will also need at least one service account key.

## Usage

### Standard

Create a volume with the key contents:

```
$ docker volume create -d ofekmeister/gcsfs -o key=$(cat service_account_key_file) <BUCKET_NAME>
```

or via docker-compose:

```yaml
version: '3.4'

volumes:
  mybucket:
    name: <BUCKET_NAME>
    driver: ofekmeister/gcsfs
    driver_opts:
      key: ${KEY_CONTENTS_IN_ENV_VAR}
```

Then create a container that uses the volume:

```
$ docker run -v <BUCKET_NAME>:/data --rm -d --name gcsfs-test alpine tail -f /dev/null
```

or via docker-compose:

```yaml
services:
  test:
    container_name: gcsfs-test
    image: alpine
    entrypoint: ['tail', '-f', '/dev/null']
    volumes:
      - mybucket:/data
```

At this point you should be able to access your bucket:

```
$ docker exec gcsfs-test ls /data
```
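If your service account has write access, a quick round trip through the mount confirms everything works end to end (a sketch: `gcsfs-test` is the container started above, and the file name is arbitrary):

```
$ docker exec gcsfs-test sh -c 'echo hello > /data/hello.txt'
$ docker exec gcsfs-test cat /data/hello.txt
hello
```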

### Key mounting

Alternatively, you can mount a directory of service account keys and reference the file name.

First disable the plugin:

```
$ docker plugin disable ofekmeister/gcsfs
```

then set the `keys.source` option:

```
$ docker plugin set ofekmeister/gcsfs keys.source=/path/to/keys
```

If you don't yet have the plugin, this can also be done during the installation:

```
$ docker plugin install ofekmeister/gcsfs keys.source=/path/to/keys
```

**Note:** On Windows you'll need to use `host_mnt` paths, e.g. `C:\path\to\keys` would become `/host_mnt/c/path/to/keys`.

Assuming there is a file named `credentials.json` in `/path/to/keys`, you can now create a volume by doing:

```
$ docker volume create -d ofekmeister/gcsfs -o key=credentials.json <BUCKET_NAME>
```

or via docker-compose:

```yaml
version: '3.4'

volumes:
  mybucket:
    name: <BUCKET_NAME>
    driver: ofekmeister/gcsfs
    driver_opts:
      key: credentials.json
```

### Driver options

- `key` - The file name of a key in the `keys.source` directory; if no such file exists, the value is treated as the raw key contents.
- `bucket` - The Google Cloud Storage bucket to use. If not specified, the volume name is assumed to be the desired bucket.
- `flags` - Extra flags to pass to gcsfuse, e.g. `-o flags="--limit-ops-per-sec=10 --only-dir=some/nested/folder"`.
- `debug` - A timeout (in seconds) used only for testing. This will attempt to mount the bucket, wait for logs, then unmount and print debug info.
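As a sketch, these options can also be combined in a Compose file via `driver_opts` (the bucket name, key file, and flag values below are placeholders):

```yaml
version: '3.4'

volumes:
  logs:
    driver: ofekmeister/gcsfs
    driver_opts:
      key: credentials.json
      bucket: <BUCKET_NAME>
      flags: '--limit-ops-per-sec=10 --only-dir=some/nested/folder'
```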

## Permission

In order to access anything stored in Google Cloud Storage, you will need service accounts with appropriate IAM roles. Read more about them in the Google Cloud Storage IAM documentation. If writes are needed, you will usually select `roles/storage.admin` scoped to the desired buckets.
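For example, a role can be granted at the bucket level with `gsutil` (the service account email and bucket name below are placeholders):

```
$ gsutil iam ch serviceAccount:<EMAIL>:roles/storage.admin gs://<BUCKET_NAME>
```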

The easiest way to create service account keys, if you don't yet have any, is to run:

```
$ gcloud iam service-accounts list
```

to find the email of a desired service account, then run:

```
$ gcloud iam service-accounts keys create <FILE_NAME>.json --iam-account <EMAIL>
```

to create a key file.

**Tip:** If you have a service account with write access that you want to share with containers that should only be able to read, you can append the standard `:ro` suffix to the mount instead of creating a new read-only service account.
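A minimal sketch, mirroring the earlier run command (the container name is arbitrary): the same volume is attached read-only by suffixing the mount with `:ro`.

```
$ docker run -v <BUCKET_NAME>:/data:ro --rm -d --name gcsfs-readonly alpine tail -f /dev/null
```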

## License

gcsfs is distributed under the terms of both

- [Apache License, Version 2.0](LICENSE-APACHE)
- [MIT License](LICENSE-MIT)

at your option.

## Credits

## Future

I also want to make a Kubernetes CSI driver. However, that won't happen for a while as it appears to me I'll need to learn everything about everything.
