Can't proxy to containers running in host network mode #1059

Closed
bjornicus opened this issue Feb 5, 2018 · 36 comments · Fixed by #2222

@bjornicus

When using nginx-proxy to proxy to a container running in host networking mode, I assume I also have to run nginx-proxy in host network mode (though I've tried both ways without success), but I can't get it to work. Here's a sample compose file using the "web" image from the test suite:

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:test
    network_mode: "host"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro

  web1:
    image: web
    expose:
      - "81"
    environment:
      WEB_PORTS: 81
      VIRTUAL_HOST: web1.nginx-proxy.local

  web2:
    image: web
    expose:
      - "82"
    network_mode: "host"
    environment:
      WEB_PORTS: 82
      VIRTUAL_HOST: web2.nginx-proxy.local

After running this with docker-compose -f test_network_mode_host.yml up -d, I try to curl each vhost:

$ curl localhost:80/port -H "Host: web1.nginx-proxy.local"
answer from port 81

$ curl localhost:80/port -H "Host: web2.nginx-proxy.local"
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.13.8</center>
</body>
</html>

I can, however, get to web2 directly via localhost:

curl 127.0.0.1:82/port
answer from port 82

The problem seems to be in the upstream section for web2, which just has server 127.0.0.1 down;
Here's the full /etc/nginx/conf.d/default.conf:

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver 10.0.2.3;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
	server_name _; # This is just an invalid value which will never trigger on a real hostname.
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	return 503;
}
# web1.nginx-proxy.local
upstream web1.nginx-proxy.local {
				## Can be connect with "test_sneakernet" network
			# test_web1_1
			server 172.18.0.3:81;
}
server {
	server_name web1.nginx-proxy.local;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	location / {
		proxy_pass http://web1.nginx-proxy.local;
	}
}
# web2.nginx-proxy.local
upstream web2.nginx-proxy.local {
				## Can be connect with "host" network
		# test_web2_1
			server 127.0.0.1 down;
}
server {
	server_name web2.nginx-proxy.local;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	location / {
		proxy_pass http://web2.nginx-proxy.local;
	}
}

Am I missing something in setting this up or is it just not working like it's supposed to?

@TroelsL commented Feb 13, 2018

I can run my nginx in bridge mode and have it proxy a container in host mode. However, I've had to alter the template as I describe here:

#832

I reported it quite a while ago, but I haven't heard anything yet on a native solution.

@yujiangshui

I have the same issue.

@dozd commented Sep 9, 2018

+1 for me

@mattsnowboard

Same issue... having trouble getting this to work with Home Assistant (which needs network_mode host to do some UPnP discovery).

@jordandrako

This issue seems to have gone stale, but I am running into this as well trying to get Home Assistant working properly. Without host networking mode, Hass can't find things like my Plex server or Google Homes.

@AxxlFoley

+1 for me

@Pro commented Feb 3, 2019

Same here. I have an OpenHAB container which has to be on the host network, but I still want to have a proxy in front of it for authentication.

@mmarsalko

I'm going to chime in as yet another person trying to use Home Assistant with this container. Some services (HomeKit, in my case) don't work unless the Home Assistant container is running in host networking mode -- but doing that completely breaks the reverse proxy.

@blackandred commented May 20, 2019

I also got stuck here. I think it is a serious problem for a lot of security, proxying, authentication, and statistics services.

@sl45sms commented May 28, 2019

+1

@rickw2001

Not sure if this helps, but this:

docker network connect my-other-network my-nginx-proxy

did it for me. I'm not doing anything fancy, but I ran into the same issue.

@SimplySynced

I'm also having the same issue. @rickw2001, would you mind giving a little more insight into what you did? I'm using a docker-compose file and tried connecting to both the bridge and host networks, but without success.

@rickw2001

Hey @SimplySynced.

I create my nginx-proxy container manually with docker run. I have a compose file which starts my app containers and creates another network for my app. Once I have created / started the nginx-proxy container and my app, I run the above command on the CLI (I'm on OS X). It really was that simple for my configuration. Not sure how it would translate to docker compose. Basically:

docker network connect app-net nginx-proxy-net

did it for me.
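
A rough docker-compose translation of that setup (a sketch only; the network and service names here are illustrative, not from the comment above) would be to declare the app's network as external in the proxy's compose file:

# Sketch: join the proxy to the app's existing network, mirroring
# "docker network connect app-net nginx-proxy". Names are illustrative.
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - app-net

networks:
  app-net:
    external: true  # created elsewhere, e.g. by the app's compose project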

@sandervandegeijn

Yep, same problem here with Home Assistant. Broke my brain all morning trying to fix it :)

@d46 commented Sep 28, 2019

docker/for-mac#2716

@nick-fytros commented Dec 17, 2019

> Yep, same problem here with Home Assistant. Broke my brain all morning trying to fix it :)

@neographikal
Were you able to find a solution for Home Assistant?

@sandervandegeijn

Nope...sorry.

@d46 commented Dec 17, 2019

> Yep, same problem here with Home Assistant. Broke my brain all morning trying to fix it :)
>
> @neographikal
> Were you able to find a solution for Home Assistant?

Inside a container, you can reach your local network via the host.docker.internal:someport domain. I don't know how things are set up in Home Assistant, but try this tip.

@andergmartins commented Jan 23, 2020

I had a similar issue and fixed it using this configuration:

nginx-proxy/docker-compose.yml:


services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

wp1.local/docker-compose.yml:

version: '3.7'

services:

  wordpress:
    build: ./
    restart: always
    links:
      - db:mysql
    ports:
      - "80"
    networks:
      - nginx-proxy_default
    environment:
      VIRTUAL_HOST: wp1.local
      VIRTUAL_PORT: 80
    working_dir: /var/www/html

  db:
    image: mysql:5.7
    restart: always
    ports:
      - "33067:3306"
    networks:
      - nginx-proxy_default

networks:
  nginx-proxy_default:
    external: true

Note the network name: I didn't create it manually, it is based on the nginx-proxy default network.
I'm on a Mac, and after setting up the containers I added the following line to the end of my hosts file:

127.0.0.1 wp1.local

@stieler-it

OP was asking about running the container in host networking, which is different from creating an isolated network inside Docker. AFAIK docker network connect would not help here. It could work if nginx-proxy were also started in host networking, but I'm not sure if this has been implemented.

@YouDontGitMe

Had the same issue with Home Assistant running in network_mode: host and getting the upstream variable to point at the correct IP and port.

I ended up creating a configuration file in /etc/nginx/conf.d/your.domain.com.conf specific to the host (your.domain.com:8123). Inside my docker-compose file, I did not include VIRTUAL_HOST. I mounted the /conf.d volume from outside the container as well (see the sketch after the nginx config below).

version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:stable
    privileged: true
    volumes:
      - /path/to/configs/hass:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - LETSENCRYPT_HOST=your.domain.com
      - LETSENCRYPT_EMAIL=your@email.com
    network_mode: host

/etc/nginx/conf.d/your.domain.com.conf

# your.domain.com

upstream your.domain.com {
	# Cannot connect to network of this container
	server 10.0.0.4:8123; # Host IP address and port
}
server {
	server_name your.domain.com;
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	# Do not HTTPS redirect Let'sEncrypt ACME challenge
	location /.well-known/acme-challenge/ {
		auth_basic off;
		allow all;
		root /usr/share/nginx/html;
		try_files $uri =404;
		break;
	}
	location / {
		return 301 https://$host$request_uri;
	}
}
server {
	server_name your.domain.com;
	listen 443 ssl http2;
	access_log /var/log/nginx/access.log vhost;
	ssl_session_timeout 5m;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_certificate /etc/nginx/certs/your.domain.com.crt;
	ssl_certificate_key /etc/nginx/certs/your.domain.com.key;
	ssl_dhparam /etc/nginx/certs/your.domain.com.dhparam.pem;
	ssl_stapling on;
	ssl_stapling_verify on;
	ssl_trusted_certificate /etc/nginx/certs/your.domain.com.chain.pem;
	add_header Strict-Transport-Security "max-age=31536000" always;
	include /etc/nginx/vhost.d/default;
	location / {
		proxy_pass http://your.domain.com;
	}
}
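
For reference, the nginx-proxy side of that setup needs /etc/nginx/conf.d bind-mounted from the host so the hand-written vhost file persists across restarts; a minimal sketch, with illustrative paths (not my exact file):

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # host directory holding the hand-written your.domain.com.conf;
      # docker-gen still writes default.conf into this same mounted directory
      - ./conf.d:/etc/nginx/conf.d
      - ./certs:/etc/nginx/certs:ro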

@Kami commented Jan 3, 2021

@YouDontGitMe I don't think this will work correctly due to how included config files are handled in nginx (I also tested a similar approach / workaround first).

This will only work if you have a single vhost served by nginx (aka your.domain.com).

If you have multiple vhosts, nginx will serve the certificate for your.domain.com for all other vhosts as well, and it won't work because the server block for your.domain.com will take precedence over the server blocks in default.conf, which is generated from the template in this repo.


By default, nginx.tmpl will generate an entry like this for a container which is using host networking:

upstream sub.domain.com {
				# Cannot connect to network of this container
				server 127.0.0.1 down;
}

But we want something like this:

upstream sub.domain.com {
				# home_assistant
				# Keep in mind that this needs to be the internal IP of the
				# server where the containers are running
				server <internal server ip>:8123;
				# Cannot connect to network of this container
				server 127.0.0.1 down;
}

Right now, my workaround involves a custom Dockerfile + Procfile for the nginx-proxy image, which uses sed to manipulate the default.conf entry for the vhost where Docker host networking is used.

This approach is definitely on the hacky side and nicer workarounds are possible (e.g. adding some if statements to the template file itself, or adding support for environment variables for managing more complex setups), but it works.

Here is my Dockerfile:

FROM jwilder/nginx-proxy:alpine

# Copy over custom config
COPY nginx.conf /etc/nginx/nginx.conf

COPY uploadsize.conf /etc/nginx/conf.d/uploadsize.conf

COPY fix-ha-vhost.sh /app/fix-ha-vhost.sh
COPY Procfile /app/Procfile

RUN chmod +x /app/fix-ha-vhost.sh

Procfile:

dockergen: docker-gen -watch -notify "/app/fix-ha-vhost.sh ; nginx -s reload" /app/nginx.tmpl /etc/nginx/conf.d/default.conf
nginx: nginx

fix-ha-vhost.sh:

#!/usr/bin/env bash
# Errors should not be fatal
set +e
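# Add the host-mode server line to the vhost's upstream block, unless it is already present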
grep '<internal ip>:8123' /etc/nginx/conf.d/default.conf || sed -i 's#upstream sub.domain.com {#upstream sub.domain.com {\n\t\t\t\tserver <internal ip>:8123;#g' /etc/nginx/conf.d/default.conf

EDIT: For completeness' sake, here is also a slightly nicer hack which relies only on a small change to the upstream nginx.tmpl.

diff --git a/nginx.tmpl b/nginx.tmpl
index 07e2b50..5284aa9 100644
--- a/nginx.tmpl
+++ b/nginx.tmpl
@@ -196,6 +196,10 @@ upstream {{ $upstream_name }} {
                                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                                {{ end }}
                        {{ else }}
+                               {{ if eq $host "sub.domain.com" }}
+                               # Hack
+                               server 10.0.0.1:8123;
+                               {{ end }}
                                # Cannot connect to network of this container
                                server 127.0.0.1 down;
                        {{ end }}

@Lif3line commented Feb 9, 2021

Thanks @Kami, your suggested solution was excellent. I had the same use case as others: wanting to run Home Assistant with network_mode: host. In case anyone wants to replicate it:

I found nginx fell over with the originally suggested patch since it led to two server entries, but that's only a minor change:

diff --git a/nginx.tmpl b/nginx.tmpl
index 07e2b50..4c9c851 100644
--- a/nginx.tmpl
+++ b/nginx.tmpl
@@ -196,8 +196,12 @@ upstream {{ $upstream_name }} {
 					{{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
 				{{ end }}
 			{{ else }}
-				# Cannot connect to network of this container
-				server 127.0.0.1 down;
+				{{ if eq $host "sub.domain.com" }}
+					server <docker internal ip>:8123;  
+				{{ else }}
+					# Cannot connect to network of this container
+					server 127.0.0.1 down;
+				{{ end }}
 			{{ end }}
 		{{ end }}
 	{{ end }}

The <docker internal ip> needs to be what ifconfig shows for Docker; normally that's under the heading docker0: or similar (typically 172.17.0.1).

I had difficulty rebuilding the reverse proxy image from source, as well as using the docker-compose command keyword to insert the patch at start-up, so I settled on building on top of the reverse proxy image:

FROM jwilder/nginx-proxy:alpine

# COPY nginx.tmpl nginx.tmpl # Alternative if you don't want to mess with patches

RUN apk --update add git
COPY hass_fix.patch hass_fix.patch
RUN git apply hass_fix.patch

where hass_fix.patch is the above patch file and must reside in the same directory as this Dockerfile.

The process was then:

  • docker build . in the folder with hass_fix.patch and the Dockerfile
  • docker tag <hash> host_mode_jwilder
    • Just to make it easier to reference later
  • Update my reverse_proxy image to run the new local host_mode_jwilder image
  • Update the Home Assistant image to run with network_mode: host
  • Everything else remains the same
    • e.g. the Home Assistant VIRTUAL_HOST and LETSENCRYPT_HOST environment variables

@Kami commented Feb 9, 2021

@Lif3line You are welcome. Glad to hear you got it working and thanks for the additional details :)

@Bugzero71 commented Mar 5, 2021

Thanks @Lif3line and @Kami for the first revision. This works like a charm for OpenHAB as well.

I've modified nginx.tmpl as you described:

  {{ if eq $host "fully.qualified.domain.name" }}
        server <server ip>:8443;
  {{ else }}
        # Cannot connect to network of this container
        server 127.0.0.1 down;
  {{ end }}

In my case:

  • "fully.qualified.domain.name" is "openhab.domain.tld", the same as specified in docker-compose.yml for openhab
  • "server ip" is the 172.17.0.1 as you commented, ifconfig returns it as docker0: ip address

The docker-compose.yml for openhab is:

version: "3.8"

services:
  openhab:
    image: "openhab/openhab:3.0.1-debian"
    container_name: "openhab"
    network_mode: host
    restart: unless-stopped
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./addons:/openhab/addons"
      - "./conf:/openhab/conf"
      - "./userdata:/openhab/userdata"
    environment:
      OPENHAB_HTTP_PORT: "8080"
      OPENHAB_HTTPS_PORT: "8443"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Andorra"
      USER_ID: "997"                                   # value returned by 'id -u openhab'
      GROUP_ID: "997"                                  # value returned by 'id -g openhab'
      # My language is Catalan
      LC_ALL: "ca_ES.UTF-8"
      LANG: "ca_ES.UTF-8"
      LANGUAGE: "ca_ES.UTF-8"
      # NGINX-PROXY ENVIRONMENT VARIABLES: UPDATE ME
      VIRTUAL_HOST: "openhab.domain.tld"
      VIRTUAL_PORT: "8080"
      LETSENCRYPT_HOST: "openhab.domain.tld"
      LETSENCRYPT_EMAIL: "user@domain.com"
      # /END NGINX-PROXY ENVIRONMENT VARIABLES

That's all. Maybe this can help other OpenHAB users.

@kariudo commented Jun 11, 2021

I'm currently skirting around this "bug"/"limitation" by using socat. I was previously using socat to handle redirection to a Raspberry Pi with Home Assistant on it, but since I have been consolidating a few things, I decided to move hass to a container. Similar to the above OpenHAB cases, it's beneficial to use host mode networking. Anyway, the short version of my solution is to use the following in one of my docker-compose.yml files:

  hass-socat:
    image: alpine/socat:latest
    container_name: hass-socat
    entrypoint: "socat tcp-listen:8122,fork,reuseaddr tcp-connect:192.168.1.110:8123"
    depends_on:
      - nginx-proxy
    environment:
      - LETSENCRYPT_HOST=home.example.com
      - LETSENCRYPT_EMAIL=email@example.com
      - VIRTUAL_PORT=8122
      - VIRTUAL_HOST=home.example.com
    network_mode: bridge
    ports:
      - 8122:8122
    restart: always

Here homeassistant is listening on the host in another stack on 8123. The socat container handles the nginx/letsencrypt binding with this project (more or less how I had it working when it was external); now it points to the host IP of the Docker instance and just uses a different port for its nginx virtual host. Works like a charm.

@Lif3line

@kariudo, that sounds like an excellent drop-in solution. Definitely easier to maintain/manage than the .tmpl edit. I guess the only downside is doubling internal network traffic for the target container?

@buchdag self-assigned this Jun 14, 2021
@anLizard

@kariudo This is exactly what I was looking for, thank you.
I can now access HA through the socat port, but nginx-proxy can't see it for some reason. I keep getting 502 Bad Gateway when I try to access it through my domain.

nginx docker-compose.yml:

version: "3"

networks:
  internal:
  external:

services:

  reverse-proxy:
    image: jwilder/nginx-proxy:alpine
    networks:
      - internal
      - external
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./reverse-proxy/certs:/etc/nginx/certs
      - ./reverse-proxy/vhost.d:/etc/nginx/vhost.d:ro
      - ./reverse-proxy/dhparam:/etc/nginx/dhparam:ro
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    environment:
      - DOCKER_HOST=tcp://socket-proxy:2375

socat and ha docker-compose.yml:

version: "3"

services:

  ha:
    image: homeassistant/home-assistant:stable
    network_mode: host
    restart: always
    volumes:
      - ./ha:/config
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    devices:
      - /dev/ttyACM0

  socat:
    image: alpine/socat:latest
    entrypoint: "socat tcp-listen:8122,fork,reuseaddr tcp-connect:<MY_INTERNAL_IP>:8123"
    network_mode: bridge
    ports:
      - 8122:8122
    restart: always
    env_file:
      - .env
    environment:
      - VIRTUAL_HOST=sub.$MY_DOMAIN
      - VIRTUAL_PORT=8122

@kariudo commented Jun 21, 2021

> I keep getting 502 Bad Gateway when I try to access it through my domain.

@anLizard Try using network_mode: bridge with your reverse-proxy container instead of the internal and external networks you have currently.
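
That is, something along these lines (a sketch only; volumes abbreviated, and it assumes the proxy reads the Docker socket directly rather than via your socket-proxy):

services:
  reverse-proxy:
    image: jwilder/nginx-proxy:alpine
    network_mode: bridge  # the same default bridge the socat container sits on
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro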

@kariudo commented Jun 21, 2021

> @kariudo, that sounds like an excellent drop-in solution. Definitely easier to maintain/manage than the .tmpl edit. I guess the only downside is doubling internal network traffic for the target container?

@Lif3line It does end up writing the traffic from one socket to another, so it's a little redundant, yes. I haven't done any testing to see if there's some minuscule load impact from that; however, even with the countless things I have running through Home Assistant, it doesn't seem to produce any noticeable impact, even with everything just running on my QNAP NAS. Socat is incredibly efficient at what it does, in my experience with this and other applications.

@anLizard

> I keep getting 502 Bad Gateway when I try to access it through my domain.

> @anLizard Try using network_mode: bridge with your reverse-proxy container instead of the internal and external networks you have currently.

Tried this as well. I can't connect to nginx at all anymore in bridged mode.

Firefox can’t establish a connection to the server at $MY_DOMAIN.

@kariudo commented Jun 29, 2021

@anLizard, rather than clogging up this issue, here is a more complete example of a docker-compose.yml showing how I am using nginx-proxy to resolve this:

https://gist.github.com/kariudo/0e2531ef8165a6f8650cc81df56083a7

I can't attest to other environments, but I can confirm this works for me quite well.

@punkyard commented Apr 4, 2023

Hi,
for me, network_mode: "host" worked fine until I updated my containers (Nextcloud AIO). After that, the Nginx Proxy Manager (NPM) container doesn't get an IP!
Now I can't reach nginx.domain.com or nextcloud.domain.com, and I can't load my_public_ip:88 (the port for NPM) either.

I had to use host mode, otherwise the Nextcloud AIO sub-containers (such as the video conferencing service) wouldn't work.

I've restarted, and I've tried bridge mode, but that won't fit my situation.

Could anyone say why NPM doesn't get an IP anymore?

@buchdag linked a pull request Apr 30, 2023 that will close this issue
@EricReiche

Thanks @kariudo for the example.
I modified it to work with non-bridge networks; maybe that's interesting for @anLizard as well. The secret ingredient is host.docker.internal:

    socat:
      image: alpine/socat:latest
      entrypoint: "socat tcp-listen:8122,fork,reuseaddr tcp-connect:host.docker.internal:8123"
      ports:
        - 8122:8122
      expose:
        - 8122
      restart: always
      extra_hosts:
        - "host.docker.internal:host-gateway"
      environment:
        - "VIRTUAL_HOST=hass.example.com"
        - "LETSENCRYPT_HOST=hass.example.com"
        - "VIRTUAL_PORT=8122"
      networks:
        internalbr:
          ipv4_address: 10.123.0.16
        default:

See also https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host

@SimonLammer

To add to the others' answers, since I had to adapt them a bit, here's how I managed to get my nginx proxy working with netdata (which uses network_mode: host) using the socat approach on my Raspberry Pi 4:

netdata/docker-compose.yml:

version: '3'
# https://learn.netdata.cloud/docs/installing/docker
services:
  netdata:
    image: netdata/netdata:stable
    container_name: netdata
    restart: unless-stopped
    pid: host
    network_mode: host
    expose:
      - 19999
    cap_add:
      - SYS_PTRACE
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    volumes:
      - ./volumes/netdataconfig:/etc/netdata
      - ./volumes/netdatalib:/var/lib/netdata
      - ./volumes/netdatacache:/var/cache/netdata
      - /etc/group:/host/etc/group:ro
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/os-release:/host/etc/os-release:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

  socat:
    image: alpine/socat:latest
    entrypoint: "socat tcp-listen:19998,fork,reuseaddr tcp-connect:host.docker.internal:19999"
    ports:
      - 19998:19998
    expose:
      - 19998
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - VIRTUAL_HOST=<redacted>
      - VIRTUAL_PORT=19998
      - LETSENCRYPT_HOST=<redacted>
      - LETSENCRYPT_EMAIL=<redacted>
    networks:
      - proxy-net

nginx-proxy/docker-compose.yml:

version: '3.7'
services:
  proxy:
    image: alexanderkrause/rpi-nginx-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./data/certs:/etc/nginx/certs:ro
      - ./data/htpasswd:/etc/nginx/htpasswd
      - &NGINX_VHOSTD ./data/nginx_vhostd:/etc/nginx/vhost.d
      - &CHALLENGE ./data/challenge:/usr/share/nginx/html
    networks:
      - proxy-net
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true

  companion:
    image: alexanderkrause/rpi-letsencrypt-nginx-proxy-companion
    restart: always
    depends_on:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/certs:/etc/nginx/certs:rw
      - *NGINX_VHOSTD
      - *CHALLENGE
    networks:
      - proxy-net

networks:
   proxy-net:
     name: proxy-net
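
(The &NGINX_VHOSTD / *NGINX_VHOSTD and &CHALLENGE / *CHALLENGE pairs above are standard YAML anchors and aliases: they reuse the same bind-mount strings in both services so the proxy and the companion share identical host paths.)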
