Is a Volume Needed? #3

Open
jeden25 opened this Issue May 27, 2015 · 58 comments

@jeden25

jeden25 commented May 27, 2015

Drupal would require some local storage for images and the like, right? So in order for the container to be persistent wouldn't I need to include a volume as well as the database?

If one is needed, I'm confused as to why the official WordPress Dockerfile specifies a volume while the official Drupal Dockerfile does not.

@jonpugh

jonpugh commented Jul 8, 2015

Yes, this is the main problem with this image. There should be a volume for the source code files as well so we can add our own Drupal codebase.

Drupal core by itself isn't going to cut it for most people.

@yosifkit

Member

yosifkit commented Jul 8, 2015

A VOLUME line is not necessary in the Dockerfile for --volumes-from or -v to work (yes, you would need a -v on the first container for --volumes-from to work on a second container), so docker run -v /local/images/path/:/drupal/path/to/images/ drupal will work fine. But if you have a place where you think a VOLUME should be defined, a PR is welcome.
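To make the distinction concrete, the two invocation styles look like this. This is only a sketch: the host path and the container paths below are illustrative, not paths baked into the image.

```shell
# Bind-mount a host directory into the container (no VOLUME line needed):
docker run -d --name drupal \
  -v /local/images/path/:/var/www/html/sites/default/files/ \
  drupal

# Or create an anonymous volume on a first container, which a second
# container can then share via --volumes-from:
docker run -d --name drupal-data -v /var/www/html/sites/default/files/ drupal
docker run -d --name drupal-app --volumes-from drupal-data drupal
```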

@tianon

Member

tianon commented Jul 8, 2015

@skyred

Contributor

skyred commented Aug 5, 2015

D8 uses a different structure now. For example, contributed modules are downloaded to /modules/, whereas in D7 contributed modules are located in /sites/*/modules/.

I was asking myself the same question. But there are different ways people would use this image. For example, I could potentially use a Docker D8 image to:

  1. Spin up sandboxes for testing. Then I don't need a volume.
  2. Start a new project. Then I want the whole Drupal directory to be managed by Git (except user files). Whether it's D7 or D8, it's best practice to put the whole Drupal directory under Git. And if we put the whole Drupal directory in a volume, then why don't we just use a PHP container and a data container to start with?
@alexanderjulo

alexanderjulo commented Nov 4, 2015

In my opinion we do need volumes so that this image is actually usable. We should define mount points for the following directories (in D8):

  • /var/www/html/modules
  • /var/www/html/profiles
  • /var/www/html/themes

These three are no-brainers: they are empty by default (besides a README.txt), and defining them will ensure that the container can be upgraded by just bumping the image version.

We should also include /var/www/html/sites/default/files, which will make sure user content survives.

Much more difficult is the question of how to deal with settings.php and default.settings.php. From what I understand, we cannot just define volumes on these files, as Docker would overwrite them with directories, and the Drupal installer expects them to be files in a certain state with certain content.

I'd be interested in any ideas. If we just mount /var/www/html/sites/ or a subdirectory, we also have the problem that the structure is lost and Drupal will either not install at all or refuse installation.


@alexanderjulo

alexanderjulo commented Nov 4, 2015

Actually, scratch that, you don't need it. You can run this fine without any volumes by using drupal:8 as the image for the storage container, too. See the following docker-compose.yml for an example. This also fixes the issues with settings.php and so on. We can bump the image for drupal without bumping storage-drupal:

drupal:
    image: drupal:8
    volumes_from:
        - storage-drupal
    links:
        - "db:mysql"
    ports:
        - "80:80"
db:
    image: mysql
    volumes_from:
        - storage-mysql
    environment:
        - MYSQL_USER=someuser
        - MYSQL_PASSWORD=thispasswordsucks
        - MYSQL_DATABASE=somedb
        - MYSQL_ROOT_PASSWORD=thispasswordsuckstoo
storage-drupal:
    image: drupal:8
    volumes:
        - /var/www/html/modules
        - /var/www/html/profiles
        - /var/www/html/themes
        - /var/www/html/sites
storage-mysql:
    image: mysql
    volumes:
        - /var/lib/mysql

@iamfrntdv

iamfrntdv commented Nov 14, 2015

Volume for Drupal's root folder would be great!

@alexanderjulo

alexanderjulo commented Nov 15, 2015

Then you could not upgrade anymore by upgrading the container, because the Drupal core folder would be in the volume.

On 14 Nov 2015 21:46 +0100, admdhnotifications@github.com, wrote:

Volume for Drupal's root folder would be great!




@iamfrntdv

iamfrntdv commented Nov 15, 2015

Oh, I see. Then these folders would be great:
/var/www/html/modules
/var/www/html/profiles
/var/www/html/themes
/var/www/html/sites/default/files
as @alexex mentioned before.


@aborilov

aborilov commented Nov 27, 2015

Can't use this image with Kubernetes, because after every restart the installation process starts again.

@juliencarnot

juliencarnot commented Dec 14, 2015

Interesting issue & thread! As a newcomer to Docker, I've been struggling to find definitive answers on data/volume/volume-container management for Drupal, let alone any kind of benchmark of the different options, so it's quite difficult to determine which one would best fit my use case... If there's some consensus, adding some pointers to the description page would be great!

@aborilov

aborilov commented Dec 15, 2015

The problem is that there is no docker-entrypoint.sh that copies /var/www/html to the volume if it is empty, like the one in the WordPress Dockerfile does.

@YvanDaSilva

YvanDaSilva commented Feb 18, 2016

@alexex Your solution does fix the issue of upgrading to a new version.
However, what it does not fix is the ability to sync your data with the host.
/var/www/html/sites still can't be mounted from the host, as it contains default settings that Drupal wants.

So you are still prone to losing your data by removing the data container, or, in the case of k8s users like in the previous comments, losing your pod and thereby losing your data.

Something still needs to be done here so that Drupal becomes a real containerized application that can survive stop & destroy.

Update: I just noticed that in your example you are using the mariadb and drupal images for your storage ambassador containers and not passing a command to execute. This has the effect of keeping these containers up. It is a good idea to use the same images, though, as those are already pulled and don't need extra space.
You can change this by adding command: bash, or by using another image (busybox, for example).

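For compose users, that is a one-line change to the storage service from the earlier example. A sketch (service name and volume path follow that example; /bin/true, suggested later in the thread, works the same way as bash here):

```yaml
storage-drupal:
    image: drupal:8
    # Exit immediately instead of starting Apache; the container only
    # needs to exist so its volumes can be shared via volumes_from.
    command: /bin/true
    volumes:
        - /var/www/html/sites
```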

@ahuffman

ahuffman commented Feb 22, 2016

I'd move the Drupal core download-and-extract steps into an entrypoint.sh (instead of the Dockerfile), with a VOLUME at /var/www/html; as part of the entrypoint, check whether core exists and, if not, pull down core and extract it into place.

@ahuffman

ahuffman commented Feb 22, 2016

You could also use environment variables to pull in all the config that might get lost on restarting a container. Check out the WordPress or Joomla containers. There should be something like DRUPAL_DB_HOST, DRUPAL_DB_PASSWORD, and DRUPAL_DATABASE at a minimum.

@ahuffman

ahuffman commented Feb 24, 2016

@aborilov The copy doesn't have to be in entrypoint.sh. They need a VOLUME entry (or multiple VOLUME entries) in the Dockerfile. A VOLUME entry populates the volume with the container's data at runtime. I think this would solve the image-change concerns as well. Please see the reference here, as it explains it perfectly: https://docs.docker.com/engine/reference/builder/#volume

@aborilov

aborilov commented Feb 24, 2016

@ahuffman As you said here in #3 (comment), there must be a VOLUME plus an extract (or copy) in entrypoint.sh. Usually you can download and extract in the Dockerfile, but extract into some src dir and copy to /var/www/html only if it is empty. There is nothing new here; just see how it works in the WordPress docker-entrypoint.sh:

if ! [ -e index.php -a -e wp-includes/version.php ]; then
        echo >&2 "WordPress not found in $(pwd) - copying now..."
        if [ "$(ls -A)" ]; then
            echo >&2 "WARNING: $(pwd) is not empty - press Ctrl+C now if this is an error!"
            ( set -x; ls -A; sleep 10 )
        fi
        tar cf - --one-file-system -C /usr/src/wordpress . | tar xf -
        echo >&2 "Complete! WordPress has been successfully copied to $(pwd)"
fi

This is it, and it will work everywhere, with any storage.

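A Drupal analogue of that WordPress logic could look something like this. This is only a sketch: the copy_drupal_if_empty helper and the /usr/src-style source directory are assumptions, not part of the official image.

```shell
#!/bin/bash
set -e

# copy_drupal_if_empty SRC DEST: populate DEST from SRC only when DEST
# does not already contain an index.php (the same guard the WordPress
# entrypoint uses before copying into a freshly created volume).
copy_drupal_if_empty() {
    local src="$1" dest="$2"
    if ! [ -e "$dest/index.php" ]; then
        echo >&2 "Drupal not found in $dest - copying now..."
        tar cf - --one-file-system -C "$src" . | tar xf - -C "$dest"
        echo >&2 "Complete! Drupal has been copied to $dest"
    fi
}

# Demonstration with throwaway directories standing in for
# /usr/src/drupal and /var/www/html:
src=$(mktemp -d)
dest=$(mktemp -d)
echo '<?php' > "$src/index.php"
copy_drupal_if_empty "$src" "$dest"
ls "$dest"
```

In a real entrypoint you would call the helper with the image's source tree and the web root, then exec apache2-foreground (or php-fpm) as the main process.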

@ahuffman

ahuffman commented Feb 25, 2016

I've taken a crack at fixing this in my fork. I've almost got it, but I could use some assistance troubleshooting, as I'm not too familiar with how the php-fpm image works/serves.

You can check out my fork here: https://github.com/ahuffman/drupal/tree/master/8/fpm

I made some changes to the Dockerfile build and created a one-third-complete drupal_entrypoint.sh (it seems to build out settings.php properly from the provided environment variables).

I've only created one VOLUME at /var/www/html for now during testing.

Environment Variables to provide for MySQL:
MYSQL_DB_PASS
MYSQL_DB_HOST
MYSQL_DB_USER (falls back to root if not provided)
MYSQL_DB_PORT (falls back to 3306 if not provided)
MYSQL_DB_NAME (falls back to drupal if not provided)
DRUPAL_TBL_PREFIX (falls back to blank if not provided)
DRUPAL_DB_TYPE (falls back to 'mysql' if not provided; choices are mysql, postgres, and sqlite; I've only written the MySQL part so far)


@yosifkit

Member

yosifkit commented Mar 1, 2016

The docker run command initializes the newly created volume with any data that exists at the specified location within the base image.
https://docs.docker.com/engine/reference/builder/#volume (emphasis added)

Just to ensure that it is understood: Docker only copies files from within a container's directory to new volumes created at docker run, and this never happens with a bind mount.

  • -v /container/dir/ or VOLUME /container/dir/ will copy
  • -v /my/local/dir/:/container/dir/ will only contain what was in /my/local/dir/

It does seem like we need to define a VOLUME, but you should be able to get around it by defining the volumes when the container is run (using the folders suggested previously):

$ docker run -d -v /var/www/html/modules -v /var/www/html/profiles -v /var/www/html/themes -v /var/www/html/sites/default/files drupal
@ahuffman

ahuffman commented Mar 2, 2016

https://github.com/ahuffman/drupal/tree/master/8/apache

Check out my fork there. I now have a working Apache and MySQL setup with an entrypoint.sh and a VOLUME at /var/www/html.

It can also do auto-upgrades if the container's Drupal source changes.

For Kubernetes support we need to add some code to my entrypoint script to check the DB for tables and, if they're not there, kick off the schema install. I need help on that piece, because I'm not a PHP guy, so I don't really know what that would look like.

The entrypoint.sh builds settings.php off of the provided environment variables.
This seems to be working so far for MySQL; I haven't written the Postgres stuff yet.

Let me know what you think.

@karelbemelmans

karelbemelmans commented Mar 6, 2016

@alexex How do you solve the permission issue with running two MySQL containers that use the same database files? When I start my compose setup I get this:

[ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
[Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.

@karelbemelmans

karelbemelmans commented Mar 6, 2016

@ahuffman Nice work there! Make a pull request on the main repo to get this into the hub image.

@alexanderjulo

alexanderjulo commented Mar 7, 2016

@kbemelmans That's not a permissions issue; that would be two MySQL servers trying to store their databases in the same folder, which will definitely lead to conflicts and not work. What are you trying to achieve?

@karelbemelmans

karelbemelmans commented Mar 7, 2016

@alexex I literally copy/pasted your docker-compose.yml from this thread, where you use the mysql image for both the db and the storage-mysql container.

I've been reading up on it since yesterday, and what you need is some kind of "do not actually start this container, just use it for data" option. But I wonder whether that file actually worked for you like this?

@alexanderjulo

alexanderjulo commented Mar 7, 2016

@kbemelmans Ah, now I see what you mean: just set /bin/true as the command for the storage container. Also read up on the docker volume command and docker-compose v2 files; this is probably not the preferred method of storing data anymore. :-)

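For reference, the docker-compose v2 shape of this replaces the storage container with named volumes managed by docker volume. A sketch (the service and volume names are assumptions):

```yaml
version: '2'
services:
  drupal:
    image: drupal:8
    ports:
      - "80:80"
    volumes:
      # Named volume instead of a data container:
      - drupal-sites:/var/www/html/sites
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: thispasswordsucks
    volumes:
      - db-data:/var/lib/mysql

volumes:
  drupal-sites:
  db-data:
```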
@ahuffman

ahuffman commented Mar 7, 2016

@kbemelmans There's still a little bit of work that needs to be done here, which I need help with.

First, we need some PHP code written to check the Postgres/MySQL DB connections to see if the tables exist and, if not, run through the table install procedure (similar to what the WordPress entrypoint does).

The second piece is that we need to see if there's a better (more Drupal-native) way to check the Drupal version and, if an upgrade is being performed, automatically run the PHP code to upgrade the table schemas. I'm not familiar enough with the Drupal code to know the answers to these questions. Other than that, my entrypoint script takes care of building settings.php pretty well, and the Drupal container can be scaled in a Kubernetes environment.

I'm able to kill the running Drupal containers (or their settings.php) and they return to normal after restarting.

@rjbrown99

rjbrown99 commented Mar 15, 2016

@ahuffman if you haven't explored drush that would be a good place to start.
https://github.com/drush-ops/drush

Assuming you package drush with your Drupal container, it can already check the database health, check the Drupal version, or even perform full upgrades of core or packages.

'drush core-status' will dump a response that looks like this:

Drupal version : 7.43
Site URI : http://default
Database driver : mysql
Database username : dbusername
Database name : dbname
Database : Connected
Drupal bootstrap : Successful
Drupal user : Anonymous
Default theme : themename
Administration theme : seven
PHP executable : /usr/bin/php
PHP configuration : /etc/php.ini
PHP OS : Linux
Drush version : 6.2.0
Drush configuration :
Drush alias files :
Drupal root : /path/to/drupal/root
Site path : sites/default
File directory path : sites/default/files
Temporary file directory path : sites/default/tmp

Newer versions of drush support a '--format json' or '--format yaml' option to dump the above results in a format that is friendlier for parsing by other scripts or tools.

In nearly all cases you will want to package drush with drupal anyway, so this might kill a few birds with one stone. For example, drush can also be used to kick off the Drupal cron jobs that need to run periodically.
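As a sketch of how that machine-readable output could drive a container health check (drush being present in the image and the exact JSON field names are assumptions; check the core-status output of your drush version):

```shell
# Fail unless Drupal can bootstrap and reach its database.
if drush core-status --format=json | grep -q '"db-status": *"Connected"'; then
    echo "drupal healthy"
else
    echo "drupal unhealthy" >&2
    exit 1
fi
```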


@ahuffman

ahuffman commented Mar 17, 2016

@rjbrown99 Thanks, that looks like the best way to get this done. I'll start digging into it when I have more free time. If anyone else has experience with drush and wants to add to my fork, please do. I have the script working up to recreating a missing settings.php from environment variables. The next step is to check the DB and, if it's not there, initialize it from the provided env variables/settings.php.

@rjbrown99

rjbrown99 commented Mar 17, 2016

@ahuffman I'll look into it, but likely for Drupal 7 as opposed to 8, just based on my own current needs.

Ultimately, even if settings.php is auto-populated, in a large number of use cases there will be a need to otherwise modify that file beyond database settings. I'm not sure yet how to approach that, possibly through another set of environment variables.

One thing the folks at Pantheon also did that I really like is to enable setting variables that show whether you are in a dev/test/prod environment. This link has details: https://pantheon.io/docs/settings-php. It's super useful, as you can then configure if statements in settings.php to, for example, enable/disable memcache or redis depending on which environment the app is running in.

Perhaps @davidstrauss would care to chime in with input as well, as most of Pantheon is built on top of Docker if I'm not mistaken.


@pirog


pirog commented Mar 20, 2016

AFAIK pantheon uses LXC containers directly.

Re: volumes, I think this really should be up to the user. Using a Compose file to also add a DB and relevant data containers is ideal (and easy). This is adapted from the stock Drupal 7 Kalabox app, so it's built primarily for a local dev use case. You will obviously want stronger SQL passwords.

data:
  image: busybox
  volumes:
    - /var/lib/mysql
    - /var/www/html

appserver:
  image: drupal:7
  volumes_from:
    - data
  links:
    - db:database
    - db:mysql
  ports:
    - "80"

db:
  image: mysql
  volumes_from:
    - data
  ports:
    - "3306"
  environment:
    MYSQL_USER: drupal
    MYSQL_PASSWORD: drupal
    MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    MYSQL_DATABASE: drupal

Re: settings, it would be a worthwhile effort to try to get Drupal core to actually use ENV-based DB config a la Pressflow (which is what Pantheon uses). This is a fairly common model that is also used in WordPress.

@weitzman


weitzman commented Mar 23, 2016

For Composer-managed sites, the recommended packaging is to list Drush as a dependency of your site, so that it is available at vendor/bin/drush. See github.com/drupal-composer/drupal-project for an excellent way to package Drupal and Drush.


@mstenta


mstenta commented Sep 30, 2016

I'm very interested in seeing this move forward - I'm very new to Docker, but I would gladly lend a hand if there is consensus on first steps.

I'm exploring the use of this image with a distribution I'm working on (http://drupal.org/project/farm), and I outlined some of the requirements as I see them in this comment: farmOS/farmOS#15 (comment)

I think they are general to Drupal + Docker, though, so they might be helpful to this discussion.

@mstenta


mstenta commented Sep 30, 2016

It sounds like one of the first things we need is an entrypoint script that will auto-generate settings.php if it doesn't exist. Do you agree?

In the current state, the settings.php file is lost if you destroy and rebuild the container, which means the Drupal site no longer has its connection information for the database. In Drupal 8 at least (maybe D7 too?), you can resolve this simply by going through the first two steps of installation; Drupal will recreate the settings.php file and detect that the database already exists, providing you with a link to the existing site.

Ideally, though, settings.php would be automatically generated when you create a container.

@rjbrown99 - You also mentioned the fact that settings.php is often used for additional config. We could follow the approach of Aegir to solve this, by allowing an optional "local.settings.php" to be included if it exists. In this way, settings.php would be a standard template generated for the container, and local.settings.php could be used for any additional stuff that people need for their particular deployment.

@mstenta


mstenta commented Sep 30, 2016

Does it make sense to create a separate issue for the entrypoint script? It is not strictly related to this "Is a Volume Needed?" question.

@mstenta


mstenta commented Sep 30, 2016

As for the volume question specifically: I don't think we need to do anything. As @yosifkit, @alexanderjulo, and @pirog pointed out - volumes can be defined when a container is created, or in a docker-compose.yml file - in whatever way makes sense for the necessary use-case.

I don't think we should impose any default volumes at all in this Dockerfile, but I do propose that we include some example docker run commands and docker-compose.yml files that can be used for common use cases.

Ultimately, Drupal can be used in a lot of ways, so I'm not sure it makes sense to try to solve all those use cases in code... but rather improve the documentation around best-practices for common patterns.
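As a sketch of the kind of docker-compose.yml documentation example being proposed here (service names, host paths, and credentials are purely illustrative, in the Compose v1 style used elsewhere in this thread):

```yaml
drupal:
  image: drupal:8
  ports:
    - "8080:80"
  volumes:
    # persist the whole codebase on the host
    - ./drupal-html:/var/www/html
  links:
    - db:mysql

db:
  image: mysql:5.7
  volumes:
    # persist the database on the host
    - ./drupal-db:/var/lib/mysql
  environment:
    MYSQL_USER: drupal
    MYSQL_PASSWORD: drupal
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: drupal
```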

@mstenta


mstenta commented Sep 30, 2016

FYI @ahuffman I created a pull request that adds Drush: #62

@pirog


pirog commented Sep 30, 2016

FWIW

You can use the "official" image/drush container and volume-mount your Drupal container onto it. This should allow you to run drush commands on your codebase without needing to add Drush to the Drupal container itself.

That said, I do think there are a lot of good reasons to add Drush directly to containers that are serving Drupal, but I'm not sure Drush should live in the OFFICIAL Drupal image.

@mstenta


mstenta commented Sep 30, 2016

@pirog Fair point. If we end up using Drush as part of the entrypoint script, then perhaps it makes sense to include it - otherwise leaving it up to users of the image is better, I agree.

@mstenta mstenta referenced this issue Oct 3, 2016

Closed

Volume #63

@mstenta


mstenta commented Oct 3, 2016

I spent some time working on this, and put together a new pull request for review: #63

It does the following:

  • Adds an entrypoint script.
  • Defines a VOLUME at /var/www/html
  • Installs Drush (this branch includes the commit from #62)
  • Unpacks the desired Drupal version in the entrypoint script (uses Drush for version comparison).
  • Preserves the "sites", "modules", and "themes" folders (for Drupal 8.1 and 8.2 - and only "sites" for 7) so that any modifications in them are not overwritten when a new version is downloaded.
  • Implemented for Drupal 7, 8.1, and 8.2

@ahuffman - I started by rebasing and reviewing your branch, but ultimately decided to take a slightly different approach. The rebased copy is available at https://github.com/mstenta/drupal-docker/tree/ahuffman, in case you need it. I had to refactor it a bit because the repo has an "8.1" and "8.2" folder now, instead of just "8".

I think that branch was a great start, and it helped me to understand some of the requirements a bit better. Ultimately, I decided that a simpler approach could be taken. We don't need to generate settings.php in the entrypoint script, because Drupal's install.php will generate it for us the first time you go to the site. I think this is preferable over generating one automatically for a number of reasons. The biggest reason for me is that I WANT to go through the installation steps in install.php - and I'm sure others will as well. Also, by deferring the settings.php generation to Drupal itself, we have less code to maintain here.

With that in mind, we also don't need to worry about installing Drupal in the entrypoint. That will happen during the normal (manual) installation.

So the only thing that the entrypoint script needs to do is ensure that the codebase for the desired Drupal version exists in /var/www/html. And it needs to be able to update that version when a new Docker image is available. To do this, I'm using Drush to find the "current" version of Drupal, based on the files that exist in /var/www/html. This version is then compared against the "desired" version which is defined in the DRUPAL_VERSION environment variable in the Dockerfile. If they do not match, then the version defined in the Dockerfile is downloaded and used to replace the version in /var/www/html.

Before an upgrade occurs, the entrypoint script creates an archive of a couple of folders, and then restores them after the codebase is updated. In Drupal 8.1 and 8.2, the "sites", "modules", and "themes" folders are preserved. In Drupal 7, just the "sites" folder is preserved. This ensures that Drupal's files, settings.php, and any custom modules and themes are not lost when upgrading.

To use this, I recommend using the -v flag of docker run to mount /var/www/html as a volume. Then you have the full Drupal codebase stored outside of the container, which makes it easy to use for development/debugging.

Please review the pull request! Let me know if I missed anything!
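Roughly, the upgrade flow described above could be sketched like this (a simplification for illustration, not the PR's actual code; in the PR the current version is detected with Drush, here it is simply a parameter):

```shell
#!/bin/sh
set -e
# Sketch of the entrypoint upgrade logic (simplified; paths are assumptions).

upgrade_needed() {
  current="$1"   # version detected in /var/www/html (via Drush in the PR)
  desired="$2"   # DRUPAL_VERSION baked into the Dockerfile
  [ "$current" != "$desired" ]
}

preserve_and_replace() {
  docroot="$1"   # e.g. /var/www/html
  newsrc="$2"    # unpacked new Drupal release
  # Save the user-modified folders before wiping the docroot
  tar -C "$docroot" -cf /tmp/preserve.tar sites modules themes
  rm -rf "$docroot"/*
  # Unpack the new core codebase, then restore the preserved folders
  cp -a "$newsrc"/. "$docroot"/
  tar -C "$docroot" -xf /tmp/preserve.tar
}
```

The tar round-trip is what guarantees that sites/, modules/, and themes/ survive the core swap even though everything else in the docroot is replaced.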

@mstenta


mstenta commented Oct 3, 2016

With a little more work, we may be able to enable Drupal distributions that inherit from these images: #64

Let's review #63 first, though, and possibly look at that as a followup...

@skyred


Contributor

skyred commented Oct 3, 2016

Installs Drush (this branch includes the commit from #62)

Although I use Drush for every site, it is not a requirement to run Drupal. Therefore, it won't be included in this official image.

Preserves the "sites", "modules", and "themes" folders (for Drupal 8.1 and 8.2 - and only "sites" for 7) so that any modifications in them are not overwritten when a new version is downloaded.

I thought this was already done. If not, then this should be an independent bug fix.

As I understand it, your proposed changes are just one opinionated way to do this. Another approach to installing and managing Drupal and add-ons is to use Composer. This image tries to stay minimal and therefore follows the official Drupal installation documentation on drupal.org closely, so users can use it as a common base and add their own tools and workflow.

@mstenta


mstenta commented Oct 3, 2016

@skyred - I agree we should keep the images as general as possible. Currently, however, they seem to be more of a proof-of-concept than something that can be built upon or used for a real website. All they do is download and unpack the Drupal tar.gz file from drupal.org currently - so if you don't set up a volume everything is lost when you update the image.

Although, I use Drush for every site, it is not a requirement to run Drupal. Therefore, it won't be included in this official image.

Agreed. Drush is not strictly a requirement for Drupal. My branch uses it to check the currently installed version of Drupal, which is why I included it. See my point from above:

  • Unpacks the desired Drupal version in the entrypoint script (uses Drush for version comparison).

And more specifically, here is the commit that utilizes it: a46dc00

If we don't take the approach that I proposed, or if we can find another way to do the same thing without Drush, then we don't need it.

Regarding preservation of "sites", "modules", and "themes" folders...

I thought this is already done. If not, then this should be an independent bug fix.

Currently these images do not do anything to preserve data when a container is destroyed and rebuilt. It is left completely up to the user to implement their own persistence with volumes. So it is not already done.

Another opinion to do install and manage Drupal and add-ons is to use Composer.

Yes I like the idea of using composer, but that is somewhat unrelated to these images currently. They are downloading and unpacking the pre-packaged release from drupal.org - they are not using composer. If someone wants to use composer to build Drupal, they should not use these images as a base. Maybe we should change that, but that isn't the topic of this issue.

The changes I propose provide a very basic and general ability to persist Drupal's data in the container, while still allowing Drupal core to be updated by destroying and rebuilding the container with a new version of the image. Like @ahuffman's branch, it takes an approach similar to that of the official WordPress and Joomla images: an entrypoint script decides whether or not to update core, and preserves certain folders at the same time. I tried to do it as a simple iteration on top of what we have already, which is simply an image that downloads and unpacks a tar.gz file.

My changes DO add an assumption that the "sites", "modules", and "themes" folders are the ONLY folders that should be preserved across core updates - so they are not completely un-opinionated. But I would argue that those assumptions are in line with the official installation and upgrade instructions of Drupal. If someone wants to make changes to other files/folders outside of those, then they should be aware of the way in which these images work, and perhaps they should create a more custom Dockerfile for their purposes. If there are other standard practices we could adopt, we can also discuss those as additional changes later.

What do you think? Are there other specific issues you can see in the commits I'm proposing? Or things I may not be foreseeing? I'm eager to understand all the considerations and come to a joint conclusion on how to proceed.

@mstenta


mstenta commented Oct 3, 2016

Idea: perhaps we could include an environment variable that turns on/off the automatic updating of Drupal core. So if that is too intrusive, it could be disabled in a Dockerfile that inherits from this one. Then core updates would always be left up to the user if they prefer.
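For example, the entrypoint could gate its update step on a hypothetical DRUPAL_AUTO_UPDATE variable (the name and its default are assumptions, not an existing feature):

```shell
#!/bin/sh
# Sketch: a hypothetical DRUPAL_AUTO_UPDATE toggle for the entrypoint.
# Defaults to enabled; an inheriting Dockerfile could set it to "false".
auto_update_enabled() {
  case "${DRUPAL_AUTO_UPDATE:-true}" in
    true|1|yes) return 0 ;;
    *)          return 1 ;;
  esac
}

# In the entrypoint, the (hypothetical) update step would then be wrapped as:
# if auto_update_enabled; then maybe_upgrade_core; fi
```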

@bluefoxicy


bluefoxicy commented Oct 19, 2016

The point of Docker is to make the application and its contents a packed-up image. That means if the user destroys and re-creates the container, nothing breaks.

For example, this postgres in docker-compose.yml:

db:
  image:  postgres:9.5
  environment:
    POSTGRES_USER: "database"
    POSTGRES_PASSWORD: "password"
  expose:
    - 5432
  volumes:
    - /var/lib/docker/opt/gitlab/data/db:/var/lib/postgresql/data
    - /etc/localtime:/etc/localtime:ro
  restart: always

If I docker pull postgres:9.5, and then docker-compose stop; docker-compose rm ; docker-compose up -d, it deletes this container and recreates it from scratch using postgres 9.5, with current patches. The actual database contents remain.

I'm trying to do similar with Drupal:

drupal:
  image: drupal:8-fpm
  log_opt:
    max-size: "20M"
    max-file: "4"
  volumes:
    - /var/lib/docker/opt/universal25/data/sites:/var/www/html/sites
    # non-site universal data stuff
    - /var/lib/docker/opt/universal25/data/modules:/var/www/html/modules:ro
    - /var/lib/docker/opt/universal25/data/themes:/var/www/html/themes:ro
    - /var/lib/docker/opt/universal25/data/profiles:/var/www/html/profiles:ro
  restart: always

Thus sites/, modules/, themes/, and profiles/ are all local directories. Destroying and re-creating the container leaves these in place.

In the example above, I set the last three read-only to prevent Drupal from putting files there. Not sure Drupal will like that; I can always change it in the compose file, then destroy and recreate the container.


@mstenta


mstenta commented Oct 19, 2016

@bluefoxicy Agreed, the goal is to be able to destroy and recreate the Drupal container without losing data. The approach you're taking of mounting /var/www/html/sites as a volume (as well as modules, themes, and profiles) is one way of doing this, yes.

The main drawback to this approach is that when you mount /var/www/html/sites as a volume, it is empty to start with. This means there is no default.settings.php file available for Drupal to copy during the install.php procedure. You then need to create either default.settings.php or settings.php manually. So while that is a possible approach, it is not ideal, in my opinion, because it involves extra steps.

It also makes it harder for this image to be used in some environments. For example: maybe a hosting company provides an easy way to spin up Docker containers from Docker Hub images. Currently this image does not provide ANY data persistence by default, which means it is not usable in that scenario without extra configuration, as opposed to MANY other images on Docker Hub that do provide a default volume in the Dockerfile.
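One possible mitigation (a sketch only, not something the image currently does) would be an entrypoint step that seeds an empty mounted sites/ directory from a pristine copy kept elsewhere in the image; the paths in the comments are assumptions:

```shell
#!/bin/sh
set -e
# Sketch: if the mounted sites/ directory is empty, seed it from a
# pristine copy baked into the image (both paths are assumptions).
seed_sites() {
  mounted="$1"    # e.g. /var/www/html/sites (a host-mounted volume)
  pristine="$2"   # e.g. /usr/src/drupal/sites (kept inside the image)
  if [ -z "$(ls -A "$mounted" 2>/dev/null)" ]; then
    cp -a "$pristine"/. "$mounted"/
  fi
}
```

Because the copy only happens when the mount is empty, a user's existing settings.php is never overwritten on subsequent starts.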

@bluefoxicy


bluefoxicy commented Jan 26, 2017

Dispute.

docker-library/docs#811 resolves this by blaming the user.

The correct resolution is for Drupal to respond to a missing default.settings.php by creating one from a non-volatile location (i.e. a location within Drupal to which the application should never write).

Seriously, who closes a bug by simply telling the user the application is written too poorly to do the right thing in an expected condition?


tianon commented Jan 26, 2017

Member

Please keep the tone of the conversation positive.

I'm happy to reopen this to track the "further enhancements in the image" the additional docs text references. I made the docs PR because that's a simple concrete change we can add now without agreement on what additional behavior the image should include to make this issue simpler to handle.


@tianon tianon reopened this Jan 26, 2017

skyred commented Jan 26, 2017

Contributor

Keep in mind, Drupal is more like a framework than something that just works out of the box. Drupal has its own best practices and tools for DevOps, and they keep changing (in D8 there is a trend toward letting Composer manage dependencies, including core, which means a new file structure). Docker comes in as an additional, or new, approach to managing DevOps for Drupal. So far, though, it's really hard to standardize Drupal-related DevOps on a single image.

If you are a developer who knows Drupal, then this image can already offer a lot of efficiency.

If you are a system admin who would treat Drupal as a black box, then you need to work with your dev team to figure out the file structure and what's considered "data", then extend this image, or use its parent image, php.


geerlingguy commented Jan 26, 2017

However, by far, it's really hard to standardize Drupal related DevOps on a single Image.

Very true; this image's main focus, I should hope, is to make it easy to build a new Drupal [current version] site locally, quickly.

Secondarily, it can be made flexible enough to support more modes of local development. But there's almost no chance there will be 'one Drupal Docker environment to rule them all', as Drupal is way more complex in the real world than any simple node, go, java, python, etc. app.


pirog commented Jan 26, 2017

bluefoxicy commented Jan 26, 2017

Actually, you can always docker exec [drupal container] [command], so it is in fact possible to use drush or Composer or whatever is actually installed, so long as it's installed inside the Docker container.

As long as there's a separation between the system (the version-controlled code, installed packages, etc.) and the user files (paths to which users write data), you can build whatever you want. Replacing a Docker image is analogous to upgrading the deployed software version or upgrading the operating system: the files you change should be files the user hasn't changed.

Think of how Linux has configuration files in /etc and state data in /var, and expects the user not to modify /usr/share. The user might install some custom administration scripts or compiled software into /usr/local, which we expect.

You seem to be thinking about a Drupal Docker image as a fancy command to make Drupal happen. It's actually a way to supply a piece of software with its entire supporting system.

Consider Node. A Node container will pull down node modules to provide new commands like watchify or browserify, and uses tools like git and curl; Node containers often volume-mount the node module cache, although that can be reinstalled readily, so clearing it just means the container takes 2-3 minutes to run npm install and restore everything you wiped out. Many Node applications use advanced commands made available by first installing required node modules, and the same container acts as a build system, a unit testing environment, and even the server for a Node app.

Rather than claiming the problem is unapproachable, I suggest first defining the problem in terms of what use cases you expect to encounter. What do you expect administrators to do when managing Drupal?

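As a concrete sketch of the docker exec pattern described above (the container name my-drupal is hypothetical, and the presence of drush or Composer in the image is an assumption; the official image ships neither):

```shell
# Run a tool that is installed *inside* the container; nothing needs to be
# installed on the host. "my-drupal" is a hypothetical container name.
docker exec -it my-drupal drush status      # works only if drush is in the image
docker exec -it my-drupal composer install  # likewise for Composer
docker exec -it my-drupal ls /var/www/html  # plain shell commands always work
```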

raymond880824 commented Mar 1, 2017

May I know how to set up Drupal high availability with Docker? Is it possible using Docker Swarm?

Thanks


marxenegls commented Jul 24, 2017

docker run --name mydroopal -h host.local:3306 -e MYSQL_USER=drupal -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=drupal -v /home/minime/VOL/Drupal:/var/www/html -p 8080:80 -d drupal:latest

I'm getting access denied on / across all ip address:8080. I'm guessing the drupal user, once linked to the volume, should be added to the local users/sudoers?

I had access without the volume, but failed to access MySQL via both root and a privileged user. I manually created the drupaldb user using phpMyAdmin on localhost and gave it GRANT privileges.
....
No, it didn't do it. I created a drupal user and appended it to the sudo group; still getting connection refused and permission denied.
It's not the firewall either. Maybe permissions or ownership of the volume? I'm going to check it.
"You don't have permission to access / on this server." I see the Drupal icon though, so it's something; must be close.
No, changed ownership of the volume dir and file permissions; still no go.
Managed the WP one directly, linked to another MySQL container, but it did work with this image.

I got it: I changed file permissions to 755 from 777, added the volume var/www/html/*, and added the drupal user to the docker group.
Thanks for nothing, guys!
P.S. The DB I created didn't take; I had to let the installer make one on its own, so I added the db to Drupal, gave it my host IP, and voila.

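Permission problems like the ones above usually trace back to the bind-mounted directory not being writable by Apache inside the container, which runs as www-data (UID/GID 33 in the official image); adding users to sudo or the docker group on the host has no effect on that. A sketch, assuming the host path from the command above:

```shell
# Give the in-container Apache user (www-data, UID/GID 33) ownership of the
# bind-mounted tree; 755-style permissions are then sufficient, no 777 needed.
sudo chown -R 33:33 /home/minime/VOL/Drupal
sudo chmod -R u+rwX,go+rX /home/minime/VOL/Drupal
```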

jayachandralingam commented Sep 14, 2017

I'm getting the errors below when using mount points. I'm declaring them as follows, where /jaya is my mount point. Please help as soon as possible.
"docker run -v /jaya:/var/www/html/modules -v /jaya:/var/www/html/profiles -v /jaya:/var/www/html/sites -v /jaya:/var/www/html/themes -v /jaya:/var/www/html/sites/default/files -p 8080:80 drupal:8.2-apache"

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[Thu Sep 14 18:03:58.753281 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.1.5 configured -- resuming normal operations
[Thu Sep 14 18:03:58.753318 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

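For what it's worth, the AH00558 lines above are only warnings, not errors. The likely problem with that command is that the same host directory /jaya is mounted at five different container paths, so they all point at one directory (and the sites/default/files mount is redundant, since it sits inside the sites mount). A sketch with separate host subdirectories (the host paths are assumptions):

```shell
# One host subdirectory per container path; sites/default/files is already
# covered by the sites mount, so it gets no separate -v.
docker run \
  -v /jaya/modules:/var/www/html/modules \
  -v /jaya/profiles:/var/www/html/profiles \
  -v /jaya/sites:/var/www/html/sites \
  -v /jaya/themes:/var/www/html/themes \
  -p 8080:80 -d drupal:8.2-apache
```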

gateway commented Apr 24, 2018

I'm a bit late to this discussion, and since the last post was back in Sept 2017, I thought I would ask whether anyone has tried mounting an AWS EFS filesystem, which is perfect for keeping things like /sites/default/files intact, especially if you're scaling your containers; each one can mount the user content, CSS, and JS. See https://aws.amazon.com/efs/.

any thoughts?

gateway commented Apr 24, 2018

Im a bit late to this discussion and since the last post was back in Sept 2017 I thought I would see if anyone has tried to mount a aws EFS system, which is perfect for keeping anything like /sites/default/files intact esp if your scaling your containers, each one can mount the user content, css and js stuff. See https://aws.amazon.com/efs/ ..

any thoughts?

geerlingguy commented Apr 24, 2018

@gateway - I am using EFS for a shared volume mount for Drupal files and Magento media directories and it works pretty well. One caveat—if you mount a shared folder that does a lot of small file writes, or flocks (file locks), then it can cause some performance problems.

But for serving up images, CSS, JS, etc., EFS is a pretty good solution. Note that I would recommend using some sort of CDN in front of your site if it gets a lot of traffic, as EFS reads can be a lot slower than local filesystems when you have a lot of load—and you can run into limits with EFS unless you drop some giant files inside your filesystem (see Getting the best performance out of Amazon EFS).

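To make the EFS approach concrete, a minimal sketch (the filesystem ID, region, and host/container paths are all placeholders):

```shell
# Mount EFS on the Docker host (fs-12345678 and us-east-1 are placeholders),
# then bind-mount only the shared files directory into each Drupal container.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
docker run -d -p 80:80 \
  -v /mnt/efs/drupal-files:/var/www/html/sites/default/files drupal
```

Each host in the cluster mounts the same EFS filesystem, so every container sees the same sites/default/files contents.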

@wglambert wglambert added the question label Apr 24, 2018

gateway commented Apr 24, 2018

Thanks @geerlingguy. It seems silly to not really have a viable solution for Drupal's /default/files section. When you need to scale horizontally, each instance needs to connect to some sort of shared filesystem for user content, CSS, JS, etc. I'm somewhat new to Docker and just struggling with this part of allowing for proper scaling. Thanks for the link and your hard work on it!

