
nginx: [emerg] invalid number of arguments in "upstream" directive in /etc/nginx/conf.d/default.conf:71 #2144

Closed
thomasmerz opened this issue Jan 20, 2023 · 9 comments · Fixed by #2145


@thomasmerz

Bugs

I'm using https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy/mariadb/fpm, which uses FROM nginxproxy/nginx-proxy:alpine via this Dockerfile. I've set VIRTUAL_HOST, removed all images, and rebuilt them all. But it still says:

nginx.1 | nginx: [emerg] invalid number of arguments in "upstream" directive in /etc/nginx/conf.d/default.conf:72

version: '3'
services:
  db:
    image: mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --log-bin
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD="secret"
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env
  redis:
    image: redis:alpine
    restart: always
  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis
  web:
    build: ./web
    restart: always
    volumes:
      - nextcloud:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=myserver.com
      - LETSENCRYPT_HOST=myserver.com
      - LETSENCRYPT_EMAIL=nextcloud@myserver.com
    depends_on:
      - app
    networks:
      - proxy-tier
      - default
  cron:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # 2023-01-19 added due to https://github.com/nextcloud/docker/issues/1902
    environment:
      - VIRTUAL_HOST=myserver.com
    networks:
      - proxy-tier
  letsencrypt-companion:
    image: nginxproxy/acme-companion
    restart: always
    volumes:
      - certs:/etc/nginx/certs
      - acme:/etc/acme.sh
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy
volumes:
  db:
  nextcloud:
  certs:
  acme:
  vhost.d:
  html:
networks:
  proxy-tier:

Everything was fine until Jan 18th; before that, my docker-compose.yml worked fine. What did you change that I might have overlooked, and what am I missing?

  • Generated nginx configuration, obtained with docker exec nameofyournginxproxycontainer nginx -T:
# nginx-proxy version : 1.1.0-7-g1775420
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}
map $http_x_forwarded_host $proxy_x_forwarded_host {
    default $http_x_forwarded_host;
    '' $http_host;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, preserve
# NGINX's default behavior ("Connection: close").
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
    default off;
    https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 '"$upstream_addr"';
access_log off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers off;
error_log /dev/stderr;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $proxy_x_forwarded_host;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    server_tokens off;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
    listen 443 ssl http2;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/default.crt;
    ssl_certificate_key /etc/nginx/certs/default.key;
}
# /
upstream {
    # Cannot connect to network 'dockerpihole_default' of this container
    # Fallback entry
    server 127.0.0.1 down;
}
server {
    server_name ;
    access_log /var/log/nginx/access.log vhost;
    listen 80 default_server;
    include /etc/nginx/vhost.d/;
    location / {
        proxy_pass http://;
    }
}
server {
    server_name ;
    listen 443 ssl http2 default_server;
    access_log /var/log/nginx/access.log vhost;
    return 500;
    ssl_certificate /etc/nginx/certs/default.crt;
    ssl_certificate_key /etc/nginx/certs/default.key;
}
# myserver.com/
upstream myserver.com {
    # Cannot connect to network 'fpm_default' of this container
    ## Can be connected with "fpm_proxy-tier" network
    # fpm_web_1
    server 172.25.0.4:80;
}
server {
    server_name myserver.com;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    # Do not HTTPS redirect Let's Encrypt ACME challenge
    location ^~ /.well-known/acme-challenge/ {
        auth_basic off;
        auth_request off;
        allow all;
        root /usr/share/nginx/html;
        try_files $uri =404;
        break;
    }
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    server_name myserver.com;
    access_log /var/log/nginx/access.log vhost;
    listen 443 ssl http2 ;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/myserver.com.crt;
    ssl_certificate_key /etc/nginx/certs/myserver.com.key;
    ssl_dhparam /etc/nginx/certs/myserver.com.dhparam.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/certs/myserver.com.chain.pem;
    set $sts_header "";
    if ($https) {
        set $sts_header "max-age=31536000";
    }
    add_header Strict-Transport-Security $sts_header always;
    include /etc/nginx/vhost.d/default;
    location / {
        proxy_pass http://myserver.com;
    }
}
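The nameless upstream block above is what nginx chokes on: the upstream directive expects exactly one argument (the upstream name), and the empty VIRTUAL_HOST produced none. For comparison, a valid block looks like this (name and address illustrative):

upstream backend {
    server 172.25.0.4:80;
}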

My workaround

I "fixed" it and get my NC up and running again by removing these lines within my docker container fpm_proxy_1 from /etc/nginx/conf.d/default.conf:

# /
#upstream {
#    # Cannot connect to network 'dockerpihole_default' of this container
#    # Fallback entry
#    server 127.0.0.1 down;
#}
#server {
#    server_name ;
#    access_log /var/log/nginx/access.log vhost;
#    listen 80 default_server;
#    include /etc/nginx/vhost.d/;
#    location / {
#        proxy_pass http://;
#    }
#}
#server {
#    server_name ;
#    listen 443 ssl http2 default_server;
#    access_log /var/log/nginx/access.log vhost;
#    return 500;
#    ssl_certificate /etc/nginx/certs/default.crt;
#    ssl_certificate_key /etc/nginx/certs/default.key;
#}
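Roughly, the steps inside the container were (a sketch; fpm_proxy_1 is the name of my proxy container, and nginx-proxy regenerates default.conf whenever containers change, so the edit only lasts until the next regeneration):

# edit the generated config, then verify and reload
docker exec -it fpm_proxy_1 vi /etc/nginx/conf.d/default.conf
docker exec fpm_proxy_1 nginx -t
docker exec fpm_proxy_1 nginx -s reload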

There's also #1674, still unanswered; hopefully someone will help via an issue.

@rhansen
Collaborator

rhansen commented Jan 21, 2023

  app:
    image: nextcloud:fpm-alpine

It's unclear to me whether your app container is based on nginx-proxy...

  proxy:
    build: ./proxy

...or your proxy container, or both. If both, then the two are likely conflicting with each other.

If your proxy container is based on nginx-proxy, what are the contents of its Dockerfile?

    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # 2023-01-19 added due to https://github.com/nextcloud/docker/issues/1902
    environment:
      - VIRTUAL_HOST=myserver.com

If proxy is your nginx-proxy container, then this environment variable should not be here; otherwise nginx-proxy will try to proxy itself.
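For illustration, here is your proxy service with that variable removed (a sketch, trimmed to the keys from your compose file):

  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # no VIRTUAL_HOST environment entry: nginx-proxy should not proxy itself
    networks:
      - proxy-tier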

@rhansen
Collaborator

rhansen commented Jan 21, 2023

# /
upstream {
    # Cannot connect to network 'dockerpihole_default' of this container

This makes me think that you have a dockerpihole container with VIRTUAL_HOST set to the empty string.

@thomasmerz
Author

# /
upstream {
    # Cannot connect to network 'dockerpihole_default' of this container

This makes me think that you have a dockerpihole container with VIRTUAL_HOST set to the empty string.

@rhansen, that's right: there's also a dockerpihole container on the same Linux server without `VIRTUAL_HOST` set. 👍🏼

@thomasmerz
Author

  app:
    image: nextcloud:fpm-alpine

It's unclear to me whether your app container is based on nginx-proxy...

No, it's not.

@thomasmerz
Author

  proxy:
    build: ./proxy

...or your proxy container, or both. If both, then the two are likely conflicting with each other.

If your proxy container is based on nginx-proxy, what are the contents of its Dockerfile?

Yes, it is the proxy container. Here are the contents:

# cat proxy/Dockerfile
FROM nginxproxy/nginx-proxy:alpine
COPY uploadsize.conf /etc/nginx/conf.d/uploadsize.conf
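uploadsize.conf isn't quoted in this thread; it's the Nextcloud example's nginx override for large uploads, along these illustrative lines:

# illustrative only; see the Nextcloud example repo for the actual file
client_max_body_size 10g;
proxy_request_buffering off;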

@thomasmerz
Author

    # 2023-01-19 added due to https://github.com/nextcloud/docker/issues/1902
    environment:
      - VIRTUAL_HOST=myserver.com

If proxy is your nginx-proxy container then this environment variable should not be here, otherwise nginx-proxy will try to proxy itself.

Ah, thanks, I removed it again; I had thought it might be missing there. 👍🏼

@rhansen
Collaborator

rhansen commented Jan 23, 2023

there's also a dockerpihole container on the same Linux server without `VIRTUAL_HOST` set.

There's a difference between unset and set to the empty string. nginx-proxy already ignores containers where VIRTUAL_HOST is unset; it looks like that variable is set to the empty string in your dockerpihole container. You can use docker inspect to check.
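A sketch of such a check (container name assumed):

docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' dockerpihole | grep '^VIRTUAL_HOST='

If this prints VIRTUAL_HOST= (nothing after the =), the variable is set to the empty string; if it prints nothing at all, the variable is unset.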

@thomasmerz
Author

# docker inspect pihole|grep VIRTUAL_HOST -C1
                "FTLCONF_LOCAL_IPV4=0.0.0.0",
                "VIRTUAL_HOST=",
                "FTL_CMD=no-daemon",

This is set by this Dockerfile:

/srv/docker-pi-hole/src/Dockerfile
33-ENV FTLCONF_LOCAL_IPV4 0.0.0.0
34:ENV VIRTUAL_HOST ""
35-ENV FTL_CMD no-daemon

And pihole uses an already-built image via docker-compose:

services:
  pihole:
    hostname: 'pihole-nbg1-dc3'
    container_name: pihole
    image: pihole/pihole:latest
…

But I don't need to run Pi-hole behind nginx-proxy… 🤷🏼‍♂️ Everything is fine with my pihole docker container, and everything was fine with my Nextcloud docker-compose containers, which rely heavily on nginx-proxy…

Should we also ask the Pi-hole folks to drop line 34 from their Dockerfile, or do you want to fix this in your template?

@rhansen
Collaborator

rhansen commented Jan 23, 2023

34:ENV VIRTUAL_HOST ""

This is another instance of a container using an environment variable that has special meaning to nginx-proxy. (See #2141 for another recent example.) If pihole doesn't actually need that environment variable, it would probably be a good idea to ask them to remove that line. But I'm guessing they do use it, so we should accelerate our plans to migrate nginx-proxy from environment variables to container labels (#2148). In the meantime, #2145 should fix your particular issue.
