worker_connections are not enough when VIRTUAL_HOST is used #1149

Closed
gauravcanon opened this issue Jul 17, 2018 · 3 comments

gauravcanon commented Jul 17, 2018

Everything was working, then this problem suddenly appeared; I don't know why.
I tried removing all the containers and starting from scratch.

Error: 85#85: 1024 worker_connections are not enough
Expected behaviour: should run fine, as it did previously

Docker Compose

version: '2'
services:

  nginx-proxy:
    restart: always
    container_name: nginx-proxy
    image: jwilder/nginx-proxy
    networks:
      - intranet
    ports:
      - "80:80"
      - "443:443"
    # logging:
    #   driver: gcplogs
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ~/certs:/etc/nginx/certs:ro
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"

  nginx-letsencrypt:
    container_name: nginx-letsencrypt
    image: jrcs/letsencrypt-nginx-proxy-companion
    networks:
      - intranet
    volumes_from:
      - nginx-proxy
    # logging:
    #   driver: gcplogs
    volumes:
      - ~/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro

  crawling:
    restart: always
    image: crawling
    networks:
      - intranet
    # logging:
    #   driver: gcplogs
    container_name: crawling
    ports:
      - 8181:80
    volumes:
      - /home/gaurav/crawling/crawling/:/var/www/html
    depends_on: 
      - nginx-proxy
      - nginx-letsencrypt
    environment:    # If I remove this section, it works
      - VIRTUAL_HOST=foo.bar.com
      - LETSENCRYPT_HOST=foo.bar.com
      - LETSENCRYPT_EMAIL=user@foo.bar.com

networks:
  intranet:

I am able to access the site directly via IP and port.

Logs:

WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
is being generated in the background.  Once the new dhparam.pem is in place, nginx will be reloaded.
forego     | starting dockergen.1 on port 5000
forego     | starting nginx.1 on port 5100
dockergen.1 | 2018/07/17 09:51:42 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
dockergen.1 | 2018/07/17 09:51:42 Running 'nginx -s reload'
dockergen.1 | 2018/07/17 09:51:43 Watching docker events
dockergen.1 | 2018/07/17 09:51:43 Generated '/etc/nginx/conf.d/default.conf' from 3 containers
dockergen.1 | 2018/07/17 09:51:43 Running 'nginx -s reload'
dockergen.1 | 2018/07/17 09:51:43 Received event start for container 273c50dba07d
dockergen.1 | 2018/07/17 09:51:44 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1    | 104.xx.xx.xx 35.198.221.49 - - [17/Jul/2018:09:51:48 +0000] "GET / HTTP/1.1" 503 214 "-" "GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring)"
nginx.1    | 2018/07/17 09:51:49 [alert] 64#64: 1024 worker_connections are not enough

I don't know how to debug further or what exactly the problem is; everything was working and then it suddenly stopped.
More logs:

nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
daemon off;

/etc/nginx/conf.d/default.conf

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
	server_name _; # This is just an invalid value which will never trigger on a real hostname.
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	return 503;
}
# foo.bar.com
upstream foo.bar.com {
				## Can be connected with "docker_default" network
			# bandbaaja-web
			server 172.18.0.7:80;
}
server {
	server_name foo.bar.com;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	return 301 https://$host$request_uri;
}
server {
	server_name foo.bar.com;
	listen 443 ssl http2 ;
	access_log /var/log/nginx/access.log vhost;
	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
	ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
	ssl_prefer_server_ciphers on;
	ssl_session_timeout 5m;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_certificate /etc/nginx/certs/foo.bar.com.crt;
	ssl_certificate_key /etc/nginx/certs/foo.bar.com.key;
	ssl_dhparam /etc/nginx/certs/foo.bar.com.dhparam.pem;
	ssl_stapling on;
	ssl_stapling_verify on;
	ssl_trusted_certificate /etc/nginx/certs/foo.bar.com.chain.pem;
	add_header Strict-Transport-Security "max-age=31536000" always;
	include /etc/nginx/vhost.d/default;
	location / {
		proxy_pass http://foo.bar.com;
	}
}
# www.foo.bar.com
upstream www.foo.bar.com {
				## Can be connected with "docker_default" network
			# bandbaaja-web
			server 172.18.0.7:80;
}
server {
	server_name www.foo.bar.com;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	return 301 https://$host$request_uri;
}
server {
	server_name www.foo.bar.com;
	listen 443 ssl http2 ;
	access_log /var/log/nginx/access.log vhost;
	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
	ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
	ssl_prefer_server_ciphers on;
	ssl_session_timeout 5m;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_certificate /etc/nginx/certs/www.foo.bar.com.crt;
	ssl_certificate_key /etc/nginx/certs/www.foo.bar.com.key;
	ssl_dhparam /etc/nginx/certs/www.foo.bar.com.dhparam.pem;
	ssl_stapling on;
	ssl_stapling_verify on;
	ssl_trusted_certificate /etc/nginx/certs/www.foo.bar.com.chain.pem;
	add_header Strict-Transport-Security "max-age=31536000" always;
	include /etc/nginx/vhost.d/default;
	location / {
		proxy_pass http://www.foo.bar.com;
	}
}

Verbose curl request to the website:

curl https://foo.bar.com --verbose

* Rebuilt URL to: https://foo.bar.com/
*   Trying 104.xx.xx.xx...
* TCP_NODELAY set
* Connected to foo.bar.com (104.xx.xx.xx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to foo.bar.com:443 
* stopped the pause stream!
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to foo.bar.com:443 

I don't know how I can debug further.
Meanwhile, I tried running Docker Compose with the same configuration and the same Docker image on a fresh installation on a new server, and it runs fine there.


gauravcanon commented Jul 18, 2018

The problem is solved.
I built a custom image that raises worker_processes and worker_connections:

RUN { \
    # Raise worker_processes (from any numeric value) to 4
    sed -i 's/\(worker_processes\s*\)[0-9]*;/\14;/' /etc/nginx/nginx.conf; \
    # Raise worker_connections (from any numeric value) to 19000
    sed -i 's/\(worker_connections\s*\)[0-9]*;/\119000;/' /etc/nginx/nginx.conf; \
    }
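Those substitutions can be sanity-checked outside an image build on a throwaway copy of a stock-style config (a sketch; it assumes GNU sed, since `\s` is a GNU extension). Note that the worker_processes pattern only matches numeric values, so it is a no-op on the stock `worker_processes auto;` default:

```shell
# Write a throwaway config with stock-style defaults
printf 'worker_processes  auto;\nevents {\n    worker_connections  1024;\n}\n' > /tmp/nginx-sample.conf

# Raise worker_connections from any numeric value to 19000
sed -i 's/\(worker_connections\s*\)[0-9]*;/\119000;/' /tmp/nginx-sample.conf

# This one leaves "auto" untouched: [0-9]*; only matches numeric values,
# so a broader pattern would be needed to rewrite the stock default
sed -i 's/\(worker_processes\s*\)[0-9]*;/\14;/' /tmp/nginx-sample.conf

cat /tmp/nginx-sample.conf
```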

But can anyone tell me what the actual problem was?

@stefanullinger

I had the very same issue just now. Removing all nginx-proxy containers and the image, then starting with a fresh container, solved the issue for me.

buchdag (Member) commented Apr 28, 2021

worker_connections was upped to 10240 by #973, closing
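For deployments pinned to an image that predates that change, one possible workaround (a sketch, not an official nginx-proxy option; the path assumes the standard nginx image layout) is to bind-mount a customised nginx.conf with a larger worker_connections value over the stock one:

```yaml
  nginx-proxy:
    image: jwilder/nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # local copy of nginx.conf whose events block sets
      # a larger worker_connections value
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```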

buchdag closed this as completed Apr 28, 2021

3 participants