Production Hardening #241

Merged 14 commits into mattermost:master on Apr 15, 2018

Conversation

abarbare
Contributor

Hello,

Running a containerized application as the root user is not a best practice: if the container is breached, the attacker gets root access on your Docker host.

I made sure that the app runs as a non-root user. The user's UID and GID are configurable via ARG during the build process. Since a non-root user cannot bind ports below 1024, the app now listens on port 8000 instead of 80.
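
For illustration, the build-time user setup could look roughly like this in the app Dockerfile (a minimal sketch; the PUID/PGID names come from the commit below, but the exact user name, defaults, and Alpine base are assumptions):

# Hypothetical excerpt of an Alpine-based app Dockerfile.
# Override the IDs at build time, e.g.:
#   docker build --build-arg PUID=2000 --build-arg PGID=2000 app
ARG PUID=2000
ARG PGID=2000
RUN addgroup -g ${PGID} mattermost \
    && adduser -D -u ${PUID} -G mattermost -h /mattermost mattermost \
    && chown -R mattermost:mattermost /mattermost
USER mattermost
EXPOSE 8000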

For the web and db containers, it is not possible to switch to a non-root user:

  • web has to bind ports 80 and 443, which are below 1024
  • db is started as root, and the postgresql process is then started as the postgres user

I made sure those processes start with a read-only filesystem, so even if an attacker manages to get inside a container, it is not possible to modify it except for the volumes declared in the Dockerfiles.
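
In docker-compose terms this is the read_only flag, with only the declared volumes left writable (a sketch; the exact volume paths are assumptions, not necessarily the ones in this repository):

app:
  build: app
  read_only: true
  restart: always
  volumes:
    # these mounted paths stay writable; the rest of the filesystem is read-only
    - ./volumes/app/mattermost/config:/mattermost/config:rw
    - ./volumes/app/mattermost/data:/mattermost/data:rw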

For every container, I also added a health check. It is used by the Docker daemon to track the status of the process running in the container.
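
A Dockerfile HEALTHCHECK along these lines would do it (a sketch: the discussion further down suggests the real check curls port 8000, but the exact endpoint and timings here are assumptions):

# The Docker daemon runs this periodically and marks the container
# healthy or unhealthy based on the exit code.
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD curl -f http://localhost:8000/api/v4/system/ping || exit 1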

Regards,

@pichouk pichouk (Contributor) left a comment

Thanks for this great PR!

Your changes introduce features (read_only in docker-compose, HEALTHCHECK in Dockerfile) that are only available with Docker 1.12+ and docker-compose file format 3.0+.

I'm OK with raising our current requirements, but you should update the Requirements section of the README file.

@@ -24,6 +25,7 @@ services:
# comment out 2 following lines for team edition
# args:
# - edition=team
#read_only: true
pichouk (Contributor):

I guess you can remove this commented line

abarbare (Contributor, Author):

Indeed, I tried to make it work with read-only, but volumes that were not mounted (e.g. the plugin folder) were created with a random UID/GID inside the container.

@pichouk pichouk added the Enhancement, 2: Dev Review (Requires review by a core committer) and need tests in production labels on Feb 27, 2018
pichouk (Contributor) commented Feb 27, 2018

I also added @xcompass as a reviewer; I think it is better to have a double check on this kind of change.

Add PUID/PGID in docker-compose args
abarbare (Contributor, Author) commented Feb 28, 2018

Thanks for the feedback.

Indeed, those changes depend on newer versions of the Docker products.
I saw in issue #203 that this subject is quite sensitive. As those products have no support policy or even a defined end of life, it sounds hard to force users to upgrade to a newer version.
I don't know if any one solution is better than another for this kind of thing. Having a compose-v3 file with the latest features (named volumes, networks, read-only) sounds good, but it can leave users a bit lost. Maybe the best answer is to put those changes in the contrib folder, in a file labeled for hardened deployment.

Let me know what you think about that.

pichouk (Contributor) commented Mar 1, 2018

In fact, there is already a stale PR (#195) about moving to docker-compose v3. We have had a lot of discussion on this subject, but I think we cannot keep maintaining "legacy" Docker versions. We need to move on, and users who need old Docker versions should modify the images to match their needs themselves.

So I think this PR is fine as it is.

abarbare (Contributor, Author):

@xcompass Hello, any news on this PR?

pichouk (Contributor) commented Mar 27, 2018

I ran some tests on new and existing installations. Everything works fine as long as you remember to change the file permissions on the app's mounted volumes (as explained in the README).
I'll merge this PR at the next Mattermost release (April 16th), so that a specific warning about this can be added to the changelog.
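
For anyone upgrading an existing installation, that permission change is along these lines (a sketch: UID/GID 2000 matches the default PUID/PGID discussed above, and the volume path is an assumption; adjust both to your setup):

# give the non-root app user ownership of the host directories
# mounted into the app container
chown -R 2000:2000 ./volumes/app/mattermost/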

Thanks for your contribution :)

@pichouk pichouk merged commit f79bbea into mattermost:master Apr 15, 2018
Kiina commented Apr 16, 2018

Is it possible that this breaks deployments behind reverse proxies? I always get "connection refused", no matter which ports I try.

pichouk (Contributor) commented Apr 16, 2018

@Kiina As explained in the changelog, the port used by the Mattermost app changed from 80 to 8000. You need to change this value in the config/config.json file of your Mattermost app and in your reverse proxy configuration.
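
Concretely, that is the ListenAddress setting in config/config.json (minimal excerpt; the rest of the file stays unchanged):

{
  "ServiceSettings": {
    "ListenAddress": ":8000"
  }
}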

Kiina commented Apr 16, 2018

Yeah, my config is changed to port 8000. The app container runs fine:

docker logs -f mattermost_app_1
Using existing config file /mattermost/config/config.json
Configure database connection...OK
Wait until database db:5432 is ready...
Starting platform

docker logs -f mattermost_web_1
linking plain config
ln: /etc/nginx/conf.d/mattermost.conf: File exists
2018/04/16 13:11:08 [error] 11#11: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.27, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.6:8000/", host: "chat.*************"
172.17.0.27 - - [16/Apr/2018:13:11:08 +0200] "GET / HTTP/1.1" 502 568 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36" "**************"

docker-compose:

db:
  build: db
  restart: always
  volumes:
    - /srv/docker/mattermost/db/var/lib/postgresql/data:/var/lib/postgresql/data
    - /etc/localtime:/etc/localtime:ro
  environment:
    - MM_USERNAME=mmuser
    - MM_PASSWORD=mmuser_password
    - MM_DBNAME=mattermost
app:
  build: app
  links:
    - db:db
  restart: always
  volumes:
    - /srv/docker/mattermost/config:/mattermost/config:rw
    - /srv/docker/mattermost/data:/mattermost/data:rw
    - /etc/localtime:/etc/localtime:ro
web:
  build: web
  expose:
    - "80"
  links:
    - app:app
  restart: always
  volumes:
    - /etc/localtime:/etc/localtime:ro
  environment:
    - MATTERMOST_ENABLE_SSL=false
    - VIRTUAL_HOST
    - VIRTUAL_PORT=80
    - VIRTUAL_PROTO=http

It was working fine before, and the only thing that should have changed is the connection between app and web, if I understand the PR correctly. So why is it failing?

pichouk (Contributor) commented Apr 16, 2018

Weird... I have the same configuration but without trouble; you might have something different in your web or app container configuration. Is your application container restarting in a loop?

Kiina commented Apr 16, 2018

Oh, you are right. The health check is failing for some reason, so it keeps rebooting.

Last output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to localhost port 8000: Connection refused

pichouk (Contributor) commented Apr 16, 2018

Yes, in fact the health check is failing, probably for the same reason (the check tries to curl port 8000). But I don't think the health check itself restarts the container, so if the container keeps restarting, it should be for another reason.
Do you have error logs in your database container?
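
A quick way to separate the two symptoms with standard Docker commands (assuming the container name from the logs above):

# is the container merely marked unhealthy, or actually exiting?
docker inspect --format '{{.State.Health.Status}}' mattermost_app_1
docker inspect --format '{{.RestartCount}} {{.State.ExitCode}}' mattermost_app_1
# recent health check attempts and their output
docker inspect --format '{{json .State.Health.Log}}' mattermost_app_1

Note that a plain Docker engine (without an orchestrator) only reports health status; restart: always reacts to the process exiting, not to a failed health check.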

Labels
2: Dev Review (Requires review by a core committer), Enhancement

6 participants