Docker container with a data volume from S3.


Creates a Docker container whose data is restored from, and backed up to, a directory on S3. You can use it to run short-lived processes that read data from S3 and persist their results back to it.


For the simplest usage, you can just start the data container:

```shell
docker run -d --name my-data-container \
           elementar/s3-volume /data s3://mybucket/someprefix
```

This will download the data from the S3 location you specify into the container's `/data` directory. When the container shuts down, the data will be synced back to S3.

To use the data from another container, you can use the `--volumes-from` option:

```shell
docker run -it --rm --volumes-from=my-data-container busybox ls -l /data
```

## Configuring a sync interval

When the `BACKUP_INTERVAL` environment variable is set, a watcher process will sync the `/data` directory to S3 on the interval you specify. The interval can be specified in seconds, minutes, hours or days (adding `s`, `m`, `h` or `d` as the suffix):

```shell
docker run -d --name my-data-container -e BACKUP_INTERVAL=2m \
           elementar/s3-volume /data s3://mybucket/someprefix
```

## Configuring credentials

If you are running on EC2, IAM role credentials should just work. Otherwise, you can supply credential information using environment variables:

```shell
docker run -d --name my-data-container \
           -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... \
           elementar/s3-volume /data s3://mybucket/someprefix
```

Any environment variable available to the aws-cli command can be used; see the aws-cli documentation for more information.
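For instance, aws-cli also honors `AWS_SESSION_TOKEN` (for temporary credentials) and `AWS_DEFAULT_REGION`, so they can be passed the same way. A sketch, with placeholder bucket and region values:

```shell
# Temporary (STS) credentials and an explicit region, passed as
# environment variables that aws-cli picks up inside the container.
docker run -d --name my-data-container \
           -e AWS_ACCESS_KEY_ID=... \
           -e AWS_SECRET_ACCESS_KEY=... \
           -e AWS_SESSION_TOKEN=... \
           -e AWS_DEFAULT_REGION=us-east-1 \
           elementar/s3-volume /data s3://mybucket/someprefix
```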

## Forcing a sync

A final sync will always be performed on container shutdown. A sync can be forced by sending the container the `USR1` signal:

```shell
docker kill --signal=USR1 my-data-container
```
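Putting this together with `--volumes-from`, a typical round trip looks like the following sketch (the file name is a placeholder, and verifying from the host assumes aws-cli is configured locally):

```shell
# Write a file into the shared volume from a throwaway container...
docker run --rm --volumes-from=my-data-container busybox \
       sh -c 'echo hello > /data/hello.txt'

# ...then force an immediate sync to S3 instead of waiting for shutdown.
docker kill --signal=USR1 my-data-container

# Verify the object arrived (run on the host, with aws-cli configured).
aws s3 ls s3://mybucket/someprefix/
```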

## Forcing a restoration

The first time the container is run, it will fetch the contents of the S3 location to initialize the `/data` directory. If you want to force an initial sync again, you can run the container again with the `--force-restore` option:

```shell
docker run -d --name my-data-container \
           elementar/s3-volume --force-restore /data s3://mybucket/someprefix
```

## Using Compose and named volumes

Most of the time, you will use this image to sync data for another container. You can use `docker-compose` for that:

```yaml
# docker-compose.yaml
version: "2"

volumes:
  s3data:
    driver: local

services:
  s3vol:
    image: elementar/s3-volume
    command: /data s3://mybucket/someprefix
    volumes:
      - s3data:/data

  db:
    image: postgres
    volumes:
      - s3data:/var/lib/postgresql/data
```
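With a file like the one above (service names `s3vol` and `db` are illustrative), the stack can be brought up and the shared volume inspected from the consuming service:

```shell
# Start both services; the s3-volume service populates the named
# volume from S3 before the data is used by postgres.
docker-compose up -d

# Inspect the shared volume from inside the db service.
docker-compose exec db ls /var/lib/postgresql/data
```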


## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D


## Authors

- Original Developer: Dave Newman (@whatupdave)
- Current Maintainer: Fábio Batista (@fabiob)


## License

This repository is released under the MIT license; see the LICENSE file for the full text.
