Docker PHP image with most extensions enabled
This repository only contains php-fpm images.
The idea behind this project is to create a php image that has as many extensions enabled as possible.
The case for this is having to migrate 20 or more legacy PHP projects from bare-metal installations to Docker. The projects differ from each other and make use of different extensions. For example, most use `imagemagick`, but there may be that single one using GD which forces you to install `gd` as well.
The right way to do it would be to audit every project and create a specific Docker image for each one. That, though, would require a huge amount of time.
The cost of analyzing each project far outweighs the cost of having unused extensions enabled in the Docker image.
The compromise is to enable the most commonly used extensions so that the image is ready for the majority of the projects; if a project requires extra components, they can be added on top. For example, OCI8 is used in a couple of projects in our case.
The first step was to determine which extensions were enabled on the legacy servers by executing
The Dockerfile doesn't copy the code into the image yet. You should not work on the Dockerfile provided; rather, work on your own Dockerfile extending this one.
An example Dockerfile would have two build stages: the first installs dependencies with Composer, and the second copies the source code into the image and adds extra configuration or missing extensions, like the OCI8 mentioned above.
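A minimal sketch of such a two-stage Dockerfile could look like the following. The base image name `example/php-most-extensions` and the paths are hypothetical placeholders, not the actual published names:

```dockerfile
# Stage 1: install dependencies with Composer
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --prefer-dist

# Stage 2: copy the source code on top of the prebuilt PHP image
# NOTE: "example/php-most-extensions" is a placeholder; use the real
# image name and tag from Docker Hub.
FROM example/php-most-extensions:7.2-fpm-alpine
WORKDIR /init
COPY src/ ./
COPY --from=vendor /app/vendor/ ./vendor/
# Any extensions the base image doesn't ship (e.g. OCI8) would be
# installed here with extra RUN steps.
```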
To avoid wasting time, you shouldn't rebuild the provided Dockerfile; instead, use the prebuilt image from Docker Hub.
You can put the code in the `src` folder. Otherwise, if you put it elsewhere, you should update
By its nature, PHP needs to be coupled with a web server. In our case, we always use nginx. That means we only need `php-fpm` (hence the disclaimer above), and the source code must be available to nginx.
Installing nginx in the same container as PHP, though, is WRONG. It means you would have two "main" processes in the container. That is hard to debug, and it forces the use of process managers such as `supervisord`, which in turn prevents Docker from managing the state of its main process and restarting the container when PHP is not working. It also doesn't respect a microservices architecture.
Our solution is to use two containers, nginx and php. To share the code though, there are a few options.
- Copying the code into both containers at the same path. Personally, I'm not a fan of this approach, as it means the code must be copied twice, which wastes resources and invites bugs related to `.dockerignore` or environment variables.
- Copying the source code to a temporary directory, `/init` in my case, and on container start-up moving everything to the WORKDIR (using `mv` because it's faster than `cp`). Meanwhile, the WORKDIR should be declared as a volume for both containers.
Using the second approach, the responsibility for the source code remains with the PHP container.
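The start-up step of the second option could be sketched as an entrypoint script like the one below. The paths `/init` and `/var/www/html`, and the script itself, are assumptions for illustration:

```shell
#!/bin/sh
# Entrypoint sketch: move the baked-in code from /init into the shared
# volume (the WORKDIR), then hand control to php-fpm as PID 1.
set -e
mv /init/* /var/www/html/
exec php-fpm
```

For this to work, both the nginx and PHP services must mount the same volume at the WORKDIR path, e.g. via a shared `volumes:` entry in a Compose file.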
The image is available on Docker Hub.
Versioning and tags will mirror PHP's own tagging: if the image is based on 7.2-fpm-alpine, it will be tagged 7.2-fpm-alpine.
Not all tags will be maintained.