
Use a single Dockerfile for the API images #666

Merged
merged 1 commit into from Jun 22, 2018

Conversation

@teohhanhui (Member) commented May 3, 2018

Q              A
Bug fix?       no
New feature?   no
BC breaks?     no
Deprecations?  no
Tests pass?    yes
Fixed tickets  N/A
License        MIT
Doc PR         N/A?

@teohhanhui teohhanhui requested review from dunglas , sroze and Simperfit May 3, 2018

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch from 7ea2efe to 03950a4 May 3, 2018

@@ -0,0 +1,19 @@
#!/bin/sh


@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch from 03950a4 to 27f356a May 3, 2018


WORKDIR /srv/api

COPY --from=api_platform_php /srv/api/public public/

@teohhanhui (Author, Member) commented May 3, 2018

This means we copy installed assets too. So the nginx image is now more production ready. Yay! 😄
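For context, COPY --from=<stage> is the Docker multi-stage build mechanism being discussed: it copies files out of an earlier named stage into the image being built. A minimal sketch of the pattern (the api_platform_php stage name and the /srv/api/public path appear in this PR's diff; the base image tags and the elided build steps are illustrative):

```dockerfile
# Stage 1: PHP application stage; installing vendors also dumps
# the web-served assets into public/.
FROM php:7.2-fpm-alpine AS api_platform_php
WORKDIR /srv/api
# ... copy sources, install Composer dependencies, install assets ...

# Stage 2: nginx stage; it serves only the document root, copied
# straight out of the PHP stage, so the image ships the built assets.
FROM nginx:1.15-alpine
WORKDIR /srv/api
COPY --from=api_platform_php /srv/api/public public/
```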

@Simperfit (Member) commented May 4, 2018

I don't think we should use --from, because it does not work with Google Cloud at the moment :/ (so it won't work with managed Kubernetes and our Helm chart).

@teohhanhui (Author, Member) commented May 4, 2018

What do you mean? The images are built by Docker. Kubernetes has nothing to do with it?

@Simperfit (Member) commented May 4, 2018

I mean that if you are using Google Container Builder, or building directly on Google infrastructure, it won't work.

@teohhanhui (Author, Member) commented May 4, 2018

Uhh... But I don't think we should let that stop us from using the new features.

@dunglas (Member) commented May 5, 2018

Having full GCP support is a nice feature of API Platform (we at @coopTilleuls use it on some projects). I think we shouldn't switch to the new feature until it is supported by at least GCP.

WDYT @api-platform/core-team

@teohhanhui (Author, Member) commented May 5, 2018

But we already use it for composer?

@dunglas (Member) commented May 5, 2018

Indeed, so if it wasn't working already, then 👍

if [ "$1" = 'varnishd' ] && [ $# -eq 1 ]; then
set -- varnishd \
-F \
-a :"${VARNISH_PORT:-6081}" \

@dunglas (Member) commented May 3, 2018

Why not 80?

@dunglas (Member) commented May 3, 2018

According to the Varnish docs:

Until now we've been running with Varnish on a high port which is great for testing purposes.

80 looks like a better fit, right? And it will be more production-ready too.

@teohhanhui (Author, Member) commented May 3, 2018

Because the default is 6081. There's no need for us to change it.

@dunglas (Member) commented May 3, 2018

Please keep 80. It makes everything more obvious, and it will allow this container to be reused easily with Kubernetes (for instance, this change will break our current Kubernetes setup).

@teohhanhui (Author, Member) commented May 3, 2018

No, it doesn't make any difference in a Docker context. And they only suggest using 80 because that's the standard HTTP port. It's only relevant when you're using Varnish in front.

@dunglas (Member) commented May 3, 2018

I don't understand why we need to change that. It's a container with a single process, using the default HTTP port:

  • looks obvious
  • makes the .env easier to understand
  • avoids the need for port mapping in our Kubernetes setup

And anyway, we cannot merge it as is because it will break this: https://github.com/api-platform/api-platform/blob/master/api/helm/api/templates/varnish-service.yaml#L13-L16
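For what it's worth, a Kubernetes Service can bridge the two positions by exposing the standard port while targeting a non-default container port. A hypothetical fragment (not the repository's actual manifest) illustrating the mapping the linked varnish-service.yaml would need:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: varnish
spec:
  selector:
    app: varnish
  ports:
    - name: http
      port: 80          # port exposed inside the cluster
      targetPort: 6081  # port the Varnish container listens on
```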

@teohhanhui (Author, Member) commented May 3, 2018

I don't see the point of changing it to port 80 when the upstream default is 6081, and that's what people are familiar with and expect, just like port 3000 for an Express.js app. We could of course use port 80, but that would go against convention.

But anyway, if it breaks things and our build is still green, then we need to add more tests to the build? :)

@dunglas (Member) commented May 3, 2018

Unlike client and admin, this isn't (only) a development container: it can be used in production. The Varnish documentation I quoted explicitly states that 6081 is for testing purposes; I've actually never seen this port used for Varnish in production.

This isn't against the conventions:

HTTP communication usually takes place over TCP/IP connections. The default port is TCP 80

(HTTP 1.1 RFC)

Adding smoke tests for Kubernetes as we did for Docker Compose would be great. We'll have to install Minikube in Travis for that.

@dunglas (Member) commented May 3, 2018

However, it would make sense to expose Varnish on the host's 6081 port (because 80 isn't free). But that would be a BC break and would require us to update the documentation, so I suggest keeping this as is.
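As an aside, the ${VARNISH_PORT:-6081} syntax in the entrypoint is POSIX default-value parameter expansion: the variable's value is used when it is set, and 6081 otherwise. A quick illustration (the variable name comes from the diff; the values are examples):

```shell
# ${VAR:-default} expands to $VAR when set and non-empty, else to "default".
unset VARNISH_PORT
echo "listen address: :${VARNISH_PORT:-6081}"   # prints listen address: :6081

VARNISH_PORT=80
echo "listen address: :${VARNISH_PORT:-6081}"   # prints listen address: :80
```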

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch 13 times, most recently from f2b9659 to d9d482d May 4, 2018

postgresUser: api-platform
postgresPassword: ChangeMe
postgresDatabase: api
# Persistent Volume Storage configuration.
# ref: https://kubernetes.io/docs/user-guide/persistent-volumes
persistence:
enabled: false
enabled: true

@teohhanhui (Author, Member) commented May 4, 2018

Isn't this a sensible default, since the postgresql chart itself also defaults to persistence enabled?

@dunglas (Member) commented May 5, 2018

The rationale was to strongly encourage using the managed RDBMS provided by the cloud platform (Google Cloud SQL, Amazon RDS, Heroku Postgres...) instead. Please keep it as is (maintaining an RDBMS in Kubernetes is very hard).

@teohhanhui (Author, Member) commented May 24, 2018

If we want to encourage using a managed RDBMS, then the right way is to set

postgresql:
  enabled: false

instead of having the postgresql service deployed by default without data persistence (really bad!).
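A hypothetical values.yaml fragment along those lines, disabling the bundled chart and pointing the API at a managed database instead (the databaseUrl key is illustrative, not necessarily the chart's real key):

```yaml
# Don't deploy the in-cluster PostgreSQL dependency by default...
postgresql:
  enabled: false

# ...and supply a connection string for a managed RDBMS
# (Cloud SQL, RDS, Heroku Postgres, ...) instead.
api:
  databaseUrl: pgsql://api:ChangeMe@managed-db.example.com:5432/api
```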

@@ -8,21 +8,21 @@ secret: ChangeMe
corsAllowOrigin: http://example.com

php:
repository: gcr.io/test-api-platform/php
repository: quay.io/api-platform/php

@teohhanhui (Author, Member) commented May 4, 2018

This should make it easier to deploy since we have the built images in the repositories now.

@@ -1,5 +1,5 @@
dependencies:
- name: postgresql
version: 0.8.1
version: 0.11.0

@teohhanhui (Author, Member) commented May 4, 2018

@dunglas Is it a good idea to bump the version here?

@dunglas (Member) commented May 5, 2018

Unfortunately we cannot: Postgres 11 is buggy with Doctrine (especially the doctrine:schema:update command).

@teohhanhui (Author, Member) commented May 5, 2018

No, this is the version of the Helm chart, not the version of PostgreSQL. 😄

@dunglas (Member) commented May 5, 2018

Oops 🤣. It works then 👍 !

@@ -0,0 +1,6 @@
dependencies:

@teohhanhui (Author, Member) commented May 4, 2018

@dunglas Is it a good idea to check this in? Why or why not?

@dunglas (Member) commented May 5, 2018

I don't know what the good practice is regarding this. I guess it's a good idea, but I'm not sure.

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch 2 times, most recently from 9070e24 to ba033c9 May 4, 2018

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch 2 times, most recently from 1eb5dd9 to 3313bd3 May 24, 2018

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch from 3313bd3 to 5e0377f May 24, 2018

ARG ALPINE_VERSION=3.7
FROM php:${PHP_VERSION}-fpm-alpine${ALPINE_VERSION}

FROM php:${PHP_VERSION}-fpm-alpine AS api_platform_php

@teohhanhui (Author, Member) commented May 24, 2018

We no longer need the ALPINE_VERSION here since docker-library/php#650 🎉
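For context, ARG before the first FROM parameterizes the base image tag, and each ARG can carry a default that docker build --build-arg overrides. A sketch of the simplified header (the version values are examples):

```dockerfile
# Overridable at build time: docker build --build-arg PHP_VERSION=7.2.6 .
ARG PHP_VERSION=7.2

# The plain -fpm-alpine tag now tracks a current Alpine release
# (docker-library/php#650), so no ALPINE_VERSION argument is needed.
FROM php:${PHP_VERSION}-fpm-alpine AS api_platform_php
```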

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch from 5e0377f to b81fde5 Jun 4, 2018

@@ -16,7 +16,7 @@ backend default {
}

# Hosts allowed to send BAN requests
acl ban {
acl invalidators {

@teohhanhui (Author, Member) commented Jun 4, 2018

It seems that using the same name as a built-in action has thrown an error since Varnish 6.0.
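A minimal sketch of how the renamed ACL is typically wired into the invalidation logic, assuming a simple URL-based ban; the actual VCL in this repository may differ:

```vcl
vcl 4.0;

# Renamed from "ban": since Varnish 6.0, an ACL sharing the name of the
# built-in ban() action causes an error.
acl invalidators {
    "localhost";
}

sub vcl_recv {
    if (req.method == "BAN") {
        if (!client.ip ~ invalidators) {
            return (synth(405, "Not allowed"));
        }
        # Illustrative: invalidate every cached object matching the request URL
        ban("req.url ~ " + req.url);
        return (synth(200, "Ban added"));
    }
}
```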

@@ -7,15 +7,18 @@ if [ "${1#-}" != "$1" ]; then
fi

if [ "$1" = 'php-fpm' ] || [ "$1" = 'bin/console' ]; then
mkdir -p var/cache var/log var/sessions
mkdir -p var/cache var/log
setfacl -R -m u:www-data:rwX -m u:"$(whoami)":rwX var

@teohhanhui (Author, Member) commented Jun 4, 2018

Are we finally ready for setfacl? I hope we are. Testers needed for Mac OS and Windows.
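A runnable sketch of what the entrypoint step above does, with a guard added for systems lacking ACL tools (the guard is an illustrative addition for portability; the diff itself calls setfacl unconditionally):

```shell
# Create the writable runtime directories, then grant both the web server
# user (www-data) and the current user rwX on them via POSIX ACLs.
mkdir -p var/cache var/log

if command -v setfacl >/dev/null 2>&1; then
    setfacl -R -m u:www-data:rwX -m u:"$(whoami)":rwX var || true
else
    echo "setfacl not available; skipping ACL setup"
fi
```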

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch 2 times, most recently from ec5d17a to cbdb579 Jun 4, 2018

@dunglas (Member) commented Jun 5, 2018

😈 I'm afraid to merge this evil PR (#666).

@soyuka (Member) approved these changes Jun 6, 2018

I'm not merging 666 but feel free to do so 😈

@teohhanhui (Author, Member) commented Jun 6, 2018

I think we should have confirmation from Mac OS and Windows users that the setfacl works as expected.

@bendavies commented Jun 6, 2018

Not in favour of this. It goes against the one-process-per-container Docker best-practice philosophy.

@teohhanhui (Author, Member) commented Jun 6, 2018

@bendavies This builds 3 images, so 3 containers.

@bendavies commented Jun 6, 2018

Apologies I misread the intention!

@soyuka (Member) commented Jun 6, 2018

I think we should have confirmation from Mac OS and Windows users that the setfacl works as expected.

I'm not using any of these anymore :|.

@teohhanhui teohhanhui force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch from cbdb579 to ac3524d Jun 6, 2018

@teohhanhui (Author, Member) commented Jun 6, 2018

@bendavies No worries! I think that's helpful as it might not be immediately obvious. I've added a note at the top of the Dockerfile. 😄

@Simperfit (Member) commented Jun 7, 2018

I'm going to test it on Mac.

@dunglas dunglas force-pushed the teohhanhui:feature/all-in-one-Dockerfile branch from ac3524d to a87f074 Jun 20, 2018

@dunglas (Member) commented Jun 20, 2018

I can confirm it works on Mac.
Anyone tested with Windows?

@maxhelias (Contributor) commented Jun 22, 2018

When we build the services, Docker runs the steps from the beginning of the file. For example, with the nginx service, it builds the steps of the php service first, not just the steps under its alias.

@teohhanhui (Author, Member) commented Jun 22, 2018

@maxhelias That is correct, and it's intended because the nginx image needs to copy files from the php container anyway. 😄

@teohhanhui (Author, Member) commented Jun 22, 2018

The build steps are cached so there should be no overhead.

@maxhelias (Contributor) commented Jun 22, 2018

Yes indeed, but the public directory is already mounted as a volume in the docker-compose.yml. And doesn't that increase the size of the nginx image too much, just for this relationship?

@teohhanhui (Author, Member) commented Jun 22, 2018

We only mount host directories in development; that's not a feasible solution in production.

The earlier stages do not in any way leak into the stage being built: the resulting image contains only what's in the target stage.
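The per-service stage selection being described here is controlled by the build target. A hypothetical docker-compose excerpt (the api_platform_php stage name is in the diff; the nginx stage name here is illustrative, and target requires Compose file format 3.4+):

```yaml
version: "3.4"

services:
  php:
    build:
      context: ./api
      target: api_platform_php    # build only up to the PHP stage

  api:
    build:
      context: ./api
      target: api_platform_nginx  # later stage; earlier stages are built
                                  # (or reused from cache) but only this
                                  # stage's filesystem ends up in the image
```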

@teohhanhui (Author, Member) commented Jun 22, 2018

Tested working on Windows. 🎉

@teohhanhui teohhanhui merged commit ea5a374 into api-platform:master Jun 22, 2018

1 check passed

continuous-integration/travis-ci/pr: The Travis CI build passed

@teohhanhui teohhanhui deleted the teohhanhui:feature/all-in-one-Dockerfile branch Jun 22, 2018

@soyuka (Member) commented Jun 22, 2018

Thanks @teohhanhui !
