Opinionated Neos CMS Docker image based on Alpine Linux with nginx, php-fpm 7.2 and the s6 process manager, packing everything needed for development and production usage of Neos into under 100 MB.
The image does a few things:
- Automatically installs and provisions a Neos website, based on the environment variables documented below
- Packs a few useful things like Xdebug integration, git, beard etc.
- Is ready to be used in production and to serve as a rolling deployment target with this Ansible script: https://github.com/psmb/ansible-deploy (I gave up on this deployment method; use the 2.x version of this image if you need it, but rather consider full container deployment)
Check out this shell script to see what exactly this image can do for you.
This image supports the following environment variables for automatically configuring Neos at container startup:
| Docker env variable | Description |
|---|---|
| `REPOSITORY_URL` | Link to the Neos website distribution |
| `VERSION` | Git repository branch, commit SHA or release tag, defaults to |
| `SITE_PACKAGE` | Neos website package with exported website data to be imported; optional |
| `ADMIN_PASSWORD` | If set, would create a Neos |
| | 2.x image only. If set, set the |
| `DONT_PUBLISH_PERSISTENT` | Don't publish persistent assets on init; needed e.g. for cloud resources |
| `AWS_BACKUP_ARN` | Automatically import the database from |
| `DB_AUTO_BACKUP` | Automatically back up the database at a given interval, possible values: |
| `XDEBUG_CONFIG` | Pass an xdebug config string, e.g. |
| `IMPORT_GITHUB_PUB_KEYS` | Pulls the authorized keys allowed to connect to this image from your GitHub account(s) |
| `DB_DATABASE` | Database name, defaults to |
| `DB_HOST` | Database host, defaults to |
| `DB_PASS` | Database password, defaults to |
| `DB_USER` | Database user, defaults to |
In addition to these settings, if you place a database SQL dump at `Data/Persistent/db.sql`, it will automatically be imported on the first container launch. See above for options to automatically download the data from AWS S3.
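The first-launch import described above can be sketched in shell. Everything here is illustrative: the marker-file name and the default values are assumptions, not taken from the image's actual provisioning script, and the real `mysql` import is left as a comment so the sketch stays self-contained.

```shell
#!/bin/sh
# Sketch of the one-time db.sql import done at container startup.
# Marker name and defaults are hypothetical; check the image's own
# provisioning script for the real behaviour.
set -eu

# Apply defaults only when the variables are unset (placeholder values):
: "${DB_HOST:=db}"
: "${DB_DATABASE:=db}"

# Simulate a distribution that ships a dump:
mkdir -p Data/Persistent
echo "CREATE TABLE demo (id INT);" > Data/Persistent/db.sql

DUMP="Data/Persistent/db.sql"
MARKER="Data/Persistent/.db-imported"   # hypothetical first-launch marker

if [ -f "$DUMP" ] && [ ! -f "$MARKER" ]; then
    # In the container this step would pipe the dump into MySQL, e.g.:
    #   mysql -h "$DB_HOST" "$DB_DATABASE" < "$DUMP"
    echo "importing $DUMP into $DB_DATABASE on $DB_HOST"
    touch "$MARKER"   # ensures the import only runs once
fi
```

On every later start the marker file short-circuits the `if`, so the dump is imported exactly once.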
If a `beard.json` file is present, your distribution will get bearded.
The container also runs the `crond` daemon; put your scripts to
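As an illustration, a custom cron script might look like the following. Note that `/etc/periodic/daily/` is the usual busybox `crond` convention on Alpine, not something this README confirms; the script writes into the current directory just to stay self-contained.

```shell
#!/bin/sh
# Example daily cron script. On Alpine, busybox crond typically runs
# executables placed in /etc/periodic/daily/ -- verify the path this
# image actually uses before relying on it.
set -eu

STAMP=$(date +%Y-%m-%d)
# Append a dated entry so repeated runs are visible in one log file:
echo "cron ran on ${STAMP}" >> cron-demo.log
cat cron-demo.log
```

Remember to make the script executable (`chmod +x`), otherwise `crond` will skip it.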
Example docker-compose.yml configuration:
```yaml
web:
  image: dimaip/docker-neos-alpine:latest
  ports:
    - '80'
    - '22'
  links:
    - db:db
  volumes:
    - /data
  environment:
    REPOSITORY_URL: 'https://github.com/neos/neos-development-distribution'
    SITE_PACKAGE: 'Neos.Demo'
    VERSION: '3.3'
    ADMIN_PASSWORD: 'password'
    IMPORT_GITHUB_PUB_KEYS: 'your-github-user-name'
    AWS_RESOURCES_ARN: 's3://some-bucket/sites/demo/'
db:
  image: mariadb:latest
  expose:
    - 3306
  volumes:
    - /var/lib/data
  environment:
    MYSQL_DATABASE: 'db'
    MYSQL_USER: 'admin'
    MYSQL_PASSWORD: 'pass'
    MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
```
This container also provides a couple of utility scripts, located in the

| Script | Description |
|---|---|
| `backupDb.sh` | Dumps the database into |
| `syncCode.sh` | For development purposes only! Pulls the latest code from git, runs `composer install` and a few other things; see the code |
| `syncAll.sh` | Runs both syncDb and syncCode |
Each container automatically backs itself up daily by running the `/data/backupDb.sh` script, which dumps the DB and optionally uploads it to AWS S3. So if you store persistent resources on AWS S3, you are good to go (you should probably additionally back up the contents of S3 to some offline storage, but that's a different story).
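A rough sketch of what such a dump-and-upload step involves. All names here are hypothetical (`/data/backupDb.sh` in the image is the authoritative implementation), and the `mysqldump`/`aws` calls are left as comments so the sketch runs anywhere:

```shell
#!/bin/sh
# Illustrative dump, upload and rotation sketch -- not the image's
# real backupDb.sh. Bucket path and naming scheme are placeholders.
set -eu

DB_DATABASE="${DB_DATABASE:-db}"
AWS_BACKUP_ARN="${AWS_BACKUP_ARN:-s3://some-bucket/backups/}"

STAMP=$(date +%Y-%m-%d-%H%M)
DUMP="${DB_DATABASE}-${STAMP}.sql.gz"

# In the container:  mysqldump "$DB_DATABASE" | gzip > "$DUMP"
touch "$DUMP"

# In the container:  aws s3 cp "$DUMP" "${AWS_BACKUP_ARN}${DUMP}"
echo "would upload ${DUMP} to ${AWS_BACKUP_ARN}"

# Simple local rotation: keep only the 7 most recent dumps.
ls -1t "${DB_DATABASE}"-*.sql.gz 2>/dev/null | tail -n +8 | xargs -r rm -f
```

The rotation line is worth having in any variant of this script: without it, daily gzipped dumps slowly fill the data volume.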
You may build pre-provisioned images dedicated to your project from a Dockerfile like this:
```dockerfile
FROM dimaip/docker-neos-alpine:latest
ENV PHP_TIMEZONE=Europe/Moscow
ENV REPOSITORY_URL=https://github.com/sfi-ru/ErmDistr
RUN /provision-neos.sh
```
This pre-installs your project at build time by running `composer install`, so the remaining startup takes about 10 seconds instead of minutes.