
Bug: Docker containers sometimes become inaccessible when using "network_mode:" via gluetun VPN container #405

Closed
redtripleAAA opened this issue Mar 14, 2021 · 23 comments
@redtripleAAA

Is this urgent?:
Yes

#################################################################################

Host OS

Synology NAS DSM
Docker via Portainer

root@Synology:~# uname -a
Linux Synology 4.4.59+ #25426 SMP PREEMPT Mon Dec 14 18:48:50 CST 2020 x86_64 GNU/Linux synology_apollolake_718+


Portainer version: 2.1.1

root@Synology:~# docker-compose --version
docker-compose version 1.24.0, build 0aa59064

root@Synology:/var/packages/Docker/target/usr/bin# docker-compose --version
docker-compose version 1.28.5, build c4eb3a1f

CPU arch or device name:
Intel amd64

What VPN provider are you using:
PIA

What are you using to run your container?:
Docker Compose

What is the version of the program:

Netdata:
v1.29.3-129-nightly

qBittorrent:
version-14.3.3.99202101191832-7248-da0b276d5ubuntu20.04.1 | 01 Mar 2021 at 04:16:07

#################################################################################

What's the problem 🤔
Docker containers become inaccessible when using "network_mode:" via gluetun VPN container

Note: maybe the RFE I created a while ago would fix this issue?
#386

The only way to resolve the issue is to manually restart the affected container.

#################################################################################

### Logs

For example, last logs for netdata:

2021-03-14 01:36:19: tc-qos-helper.sh: WARNING: Cannot find file '/etc/netdata/tc-qos-helper.conf'.,
2021-03-14 01:36:19: tc-qos-helper.sh: WARNING: Cannot find file '/usr/lib/netdata/conf.d/tc-qos-helper.conf'.,
2021-03-14 01:36:19: tc-qos-helper.sh: WARNING: FireQoS is not installed on this system. Use FireQoS to apply traffic QoS and expose the class names to netdata. Check https://github.com/netdata/netdata/tree/master/collectors/tc.plugin#tcplugin,
2021-03-14 01:36:02: 20125: 266 '[localhost]:52060' 'DATA' (sent/all = 3664/3664 bytes -0%, prep/sent/total = 3.66/0.19/3.85 ms) 200 '/api/v1/info',
2021-03-14 01:36:02: 20125: 266 '[localhost]:52060' 'DISCONNECTED',
2021-03-14 01:36:02: 20125: 266 '[localhost]:52060' 'CONNECTED',
2021-03-14 01:35:02: 20124: 254 '[localhost]:50938' 'DATA' (sent/all = 3664/3664 bytes -0%, prep/sent/total = 5.44/0.64/6.08 ms) 200 '/api/v1/info',
2021-03-14 01:35:02: 20124: 254 '[localhost]:50938' 'CONNECTED',
2021-03-14 01:35:02: 20124: 254 '[localhost]:50938' 'DISCONNECTED',
2021-03-14 01:34:02: 20123: 254 '[localhost]:49944' 'DATA' (sent/all = 3664/3664 bytes -0%, prep/sent/total = 3.89/0.29/4.18 ms) 200 '/api/v1/info',
2021-03-14 01:34:02: 20123: 254 '[localhost]:49944' 'DISCONNECTED',
2021-03-14 01:34:02: 20123: 254 '[localhost]:49944' 'CONNECTED',
2021-03-14 01:33:02: 20122: 264 '[localhost]:49020' 'DATA' (sent/all = 3664/3664 bytes -0%, prep/sent/total = 4.79/0.20/4.99 ms) 200 '/api/v1/info'

For example, last logs for qBittorrent:

qt.network.ssl: QSslSocket::startClientEncryption: cannot start handshake on non-plain connection

#################################################################################

### Notes:
I already created GitHub bugs for those two products, but the problem only happens when I use network_mode with the gluetun container, so I thought it would be better to file the bug here.

netdata
netdata/netdata#10764

qBittorrent
linuxserver/docker-qbittorrent#105

In both issues you can find the Docker Compose stacks I used for each container and the troubleshooting steps taken.

Thanks

@qdm12 (Owner) commented Mar 14, 2021

  1. Can you confirm that gluetun still has connectivity when this happens? You can check with docker exec gluetun wget -qO- https://ipinfo.io
  2. By restarting, do you mean restarting just netdata/qbittorrent, or gluetun as well?
  3. Have you tried running them all in the same docker-compose file using network_mode: service:gluetun? (A sketch follows below.)
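
For reference, a minimal sketch of what that single-file setup could look like (the qbittorrent image and the service layout here are illustrative assumptions, not taken from this thread):

version: '3'
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    # ...VPN environment variables and published ports go here...
  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: "service:gluetun" # share gluetun's network namespace
    depends_on:
      - gluetun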

@redtripleAAA (Author) commented Mar 14, 2021

Thanks for the update @qdm12

Answers:

  1. Yes, the gluetun container is still working and there are no errors in the logs. (Do you want me to run that command when the issue happens again?)
  2. Yes, restarting the containers that use the gluetun container as their network. gluetun already gets restarted automatically by itself.
  3. I haven't tried running them all in the same docker-compose file. Do you think having them all in the same docker-compose would resolve the issue?

I just upgraded the gluetun container to the latest release and both containers are running:

https://github.com/qdm12/gluetun/releases/tag/v3.15.0

Ran the command

/ # wget -qO- https://ipinfo.io
{
  "ip": "172.98.92.85",
  "city": "Toronto",
  "region": "Ontario",
  "country": "CA",
  "loc": "43.7001,-79.4163",
  "org": "AS46562 Performive LLC",
  "postal": "M5N",
  "timezone": "America/Toronto",
  "readme": "https://ipinfo.io/missingauth"
}/ # 

I will update when the issue happens again, soon.

@qdm12 (Owner) commented Mar 14, 2021

  1. Yes, when the issue happens. Although I think gluetun should log something about being unhealthy if the connection drops.
  2. "gluetun already gets restarted automatically by itself": what do you mean? Try restarting only netdata without restarting gluetun to see if that works (example commands below). If it does, then it has something to do with Docker / iptables / the host kernel.
  3. Not sure, but it's worth a try.
  4. Also, one more question: does the gluetun container restart before netdata etc. lose connection? Because restarting gluetun will make connected containers lose their connection permanently (which sucks, but that's how Docker networking works).
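
For example, a quick check (a sketch using the container names from this thread):

# restart only the connected container, leaving gluetun running
docker restart netdata
# and confirm gluetun itself still has connectivity
docker exec gluetun wget -qO- https://ipinfo.io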

@redtripleAAA (Author) commented Mar 14, 2021

  1. Here is the output from Gluetun
/ # wget -qO- https://ipinfo.io
{
  "ip": "172.98.80.185",
  "city": "Virginia Beach",
  "region": "Virginia",
  "country": "US",
  "loc": "36.8529,-75.9780",
  "org": "AS46562 Performive LLC",
  "postal": "23458",
  "timezone": "America/New_York",
  "readme": "https://ipinfo.io/missingauth"
}/ # 

Note: none of the containers connected to gluetun are accessible; I restarted one of them manually and it's working now.

  2. "gluetun already gets restarted automatically by itself": I meant that when the connection is lost, it restarts, which is a good thing for re-connecting to the VPN.

For example, here are the logs when the ISP is down:

today at 1:09 PM  2021/03/14 13:09:01 WARN Caught OS signal terminated, shutting down
today at 1:09 PM  2021/03/14 13:09:01 INFO Clearing forwarded port status file / /volume1/docker/gluetun/config/port-forwarding/port.conf
today at 1:09 PM  2021/03/14 13:09:01 ERROR remove / /volume1/docker/gluetun/config/port-forwarding/port.conf: no such file or directory
today at 1:09 PM  2021/03/14 13:09:01 WARN openvpn: context canceled: exiting loop
today at 1:09 PM  2021/03/14 13:09:01 WARN healthcheck: context canceled: shutting down server
today at 1:09 PM  2021/03/14 13:09:01 WARN http server: context canceled: shutting down
today at 1:09 PM  2021/03/14 13:09:01 WARN http server: shut down
today at 1:09 PM  2021/03/14 13:09:01 WARN openvpn: loop exited
today at 1:09 PM  2021/03/14 13:09:01 WARN healthcheck: server shut down
today at 1:09 PM  2021/03/14 13:09:01 INFO Shutdown successful
today at 1:09 PM  =========================================
today at 1:09 PM  ================ Gluetun ================
today at 1:09 PM  =========================================
today at 1:09 PM  ==== A mix of OpenVPN, DNS over TLS, ====
today at 1:09 PM  ======= Shadowsocks and HTTP proxy ======
today at 1:09 PM  ========= all glued up with Go ==========
today at 1:09 PM  =========================================
today at 1:09 PM  =========== For tunneling to ============
today at 1:09 PM  ======== your favorite VPN server =======
today at 1:09 PM  =========================================
today at 1:09 PM  === Made with ❤️  by github.com/qdm12 ====
today at 1:09 PM  =========================================
today at 1:09 PM  
today at 1:09 PM  Running version latest built on 2021-03-13T13:54:28Z (commit fa220f9)
today at 1:09 PM  
today at 1:09 PM  
today at 1:09 PM  🔧  Need help? https://github.com/qdm12/gluetun/issues/new
today at 1:09 PM  💻  Email? quentin.mcgaw@gmail.com
today at 1:09 PM  ☕  Slack? Join from the Slack button on Github
today at 1:09 PM  💸  Help me? https://github.com/sponsors/qdm12
today at 1:09 PM  2021/03/14 13:09:03 INFO OpenVPN version: 2.4.10
today at 1:09 PM  2021/03/14 13:09:03 INFO Unbound version: 1.10.1
today at 1:09 PM  2021/03/14 13:09:03 INFO IPtables version: v1.8.4
today at 1:09 PM  2021/03/14 13:09:03 INFO Settings summary below:
today at 1:09 PM  |--OpenVPN:
today at 1:09 PM     |--Verbosity level: 1
today at 1:09 PM     |--Run as root: enabled
today at 1:09 PM     |--Provider:
today at 1:09 PM        |--Private Internet Access settings:
today at 1:09 PM           |--Network protocol: udp
today at 1:09 PM           |--Regions: ca ontario
today at 1:09 PM           |--Encryption preset: strong
today at 1:09 PM           |--Custom port: 0
today at 1:09 PM           |--Port forwarding:
today at 1:09 PM              |--File path: / /volume1/docker/gluetun/config/port-forwarding/port.conf
today at 1:09 PM  |--DNS:
today at 1:09 PM     |--Plaintext address: 1.1.1.1
today at 1:09 PM     |--DNS over TLS:
today at 1:09 PM        |--Unbound:
today at 1:09 PM            |--DNS over TLS providers:
today at 1:09 PM                |--cloudflare
today at 1:09 PM            |--Listening port: 53
today at 1:09 PM            |--Access control:
today at 1:09 PM                |--Allowed:
today at 1:09 PM                    |--0.0.0.0/0
today at 1:09 PM                    |--::/0
today at 1:09 PM            |--Caching: enabled
today at 1:09 PM            |--IPv4 resolution: enabled
today at 1:09 PM            |--IPv6 resolution: disabled
today at 1:09 PM            |--Verbosity level: 1/5
today at 1:09 PM            |--Verbosity details level: 0/4
today at 1:09 PM            |--Validation log level: 0/2
today at 1:09 PM            |--Blocked hostnames:
today at 1:09 PM            |--Blocked IP addresses:
today at 1:09 PM                |--127.0.0.1/8
today at 1:09 PM                |--10.0.0.0/8
today at 1:09 PM                |--172.16.0.0/12
today at 1:09 PM                |--192.168.0.0/16
today at 1:09 PM                |--169.254.0.0/16
today at 1:09 PM                |--::1/128
today at 1:09 PM                |--fc00::/7
today at 1:09 PM                |--fe80::/10
today at 1:09 PM                |--::ffff:0:0/96
today at 1:09 PM            |--Allowed hostnames:
today at 1:09 PM        |--Block malicious: enabled
today at 1:09 PM        |--Update: every 24h0m0s
today at 1:09 PM  |--Firewall:
today at 1:09 PM  |--System:
today at 1:09 PM     |--Process user ID: 1029
today at 1:09 PM     |--Process group ID: 100
today at 1:09 PM     |--Timezone: america/toronto
today at 1:09 PM  |--HTTP control server:
today at 1:09 PM     |--Listening port: 8000
today at 1:09 PM     |--Logging: enabled
today at 1:09 PM  |--Public IP getter:
today at 1:09 PM     |--Fetch period: 12h0m0s
today at 1:09 PM     |--IP file: /tmp/gluetun/ip
today at 1:09 PM  |--Github version information: enabled
  3. I will try the setup.

  4. I think this is the issue here, as this only happens when gluetun does its health check (as mentioned in point 2 above) and restarts, and then the other containers lose their connection.

Question:
Do you have any recommendation on how to force-restart the other containers after the gluetun container restarts?

@qdm12 (Owner) commented Mar 14, 2021

Gluetun does not restart even if it loses connection; it only changes to an unhealthy state. You can see it received a termination signal before restarting:

Caught OS signal terminated, shutting down

Do you maybe have it configured to restart when unhealthy? If so, that will break things. You could have a script restart gluetun and then the other containers connected to it when it's unhealthy, until I address #386
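
A rough sketch of what such a script could look like (the container names and the polling loop are assumptions, not an official tool):

#!/bin/sh
# Watch gluetun's Docker health status; when it turns unhealthy,
# restart gluetun first, then the containers using its network.
while true; do
  status="$(docker inspect -f '{{.State.Health.Status}}' gluetun 2>/dev/null)"
  if [ "$status" = "unhealthy" ]; then
    docker restart gluetun
    docker restart qbittorrent netdata # containers on gluetun's network
  fi
  sleep 30
done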

@redtripleAAA (Author)

You are right about the termination check, I misworded that. Thanks for correcting that.

Here is the stack I have for gluetun, FYI:

---
version: '2.4'
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    environment:
      - PUID=1029
      - PGID=100
      - TZ=America/Toronto
      - VPNSP=private internet access
      - REGION=CA Ontario
      - PORT_FORWARDING=on #Complete https://github.com/qdm12/gluetun/wiki/Environment-variables
      - PORT_FORWARDING_STATUS_FILE= /volume1/docker/gluetun/config/port-forwarding/port.conf
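      # NB: the stray space after '=' on the line above is what produces the
      # doubled "/ /volume1/..." path seen in the log messages earlier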
      - OPENVPN_USER=##################  #Change to YOUR Username
      - OPENVPN_PASSWORD=##############  #Change to YOUR Password
    volumes:
      - /volume1/docker/gluetun/config:/gluetun
    ports:
      - 8000:8000 #HTTP Server https://github.com/qdm12/gluetun/wiki/HTTP-Control-server#OpenVPN
      - 19999:19999 #Netdata
      - 666:80 #heimdal-VPN
      - 4466:443 #heimdal-VPN
      - 9080:9080 #QBitTorrent Web-UI
      - 6881:6881 #QBitTorrent
      - 6881:6881/udp #QBitTorrent
      #- 9117:9117 #Jackett
      #- 7878:7878 #Radarr
      #- 8989:8989 #Sonarr
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

I still need to research how to restart the container and the other connected containers on a health check.

@qdm12 (Owner) commented Mar 15, 2021

You are right about the termination check, I misworded that.

So to be clear, the gluetun container (not the OpenVPN process inside it) never restarts by itself, right? It only restarts when it's told to, right?

Also, maybe try docker-compose file version '3'; maybe that helps with the networking between containers.

I still need to research how to restart the container and the other connected containers on a health check.

I would advise you not to; I'll code that auto-healing in the coming days / two weeks, it shouldn't be too hard to develop.

@redtripleAAA (Author)

So to be clear, the gluetun container (not the OpenVPN process inside it) never restarts by itself, right? It only restarts when it's told to, right?

I think so, since the Docker logs don't show it restarting (unless I do it manually). Based on the Telegram notifier, the following gluetun events happen from time to time when the ISP drops, which is the termination part you mentioned above, and this is normal.

Status Unhealthy for gluetun (qmcgaw/gluetun) {90de6496f080}
Started gluetun (qmcgaw/gluetun) {90de6496f080}
Status Healthy for gluetun (qmcgaw/gluetun) {90de6496f080}

I would advise you not to; I'll code that auto-healing in the coming days / two weeks, it shouldn't be too hard to develop.

That would be great!! 🥇

@redtripleAAA (Author)

@qdm12 I have been monitoring the behavior of the gluetun container and noticed something.

It is actually being restarted, per the Portainer runtime, and here are the logs.

Telegram Notifier API logs:

Hamra Services ALERT, [16.03.21 06:21]
Stopped gluetun (qmcgaw/gluetun) {90de6496f080}
Exit Code: 1

Hamra Services ALERT, [16.03.21 06:21]
Started gluetun (qmcgaw/gluetun) {90de6496f080}

Hamra Services ALERT, [16.03.21 06:21]
Status Healthy for gluetun (qmcgaw/gluetun) {90de6496f080}

gluetun container logs

today at 3:09 AM  2021/03/16 03:09:06 INFO dns over tls: generate keytag query _ta-4a5c-4f66. NULL IN
today at 5:09 AM  2021/03/16 05:09:06 INFO dns over tls: generate keytag query _ta-4a5c-4f66. NULL IN
today at 5:30 AM  2021/03/16 05:30:38 INFO http server: 404 GET  wrote 41B to 172.20.0.1:53698 in 23.849µs
today at 6:20 AM  2021/03/16 06:20:56 WARN Caught OS signal terminated, shutting down
today at 6:20 AM  2021/03/16 06:20:56 INFO Clearing forwarded port status file / /volume1/docker/gluetun/config/port-forwarding/port.conf
today at 6:20 AM  2021/03/16 06:20:56 ERROR remove / /volume1/docker/gluetun/config/port-forwarding/port.conf: no such file or directory
today at 6:20 AM  2021/03/16 06:20:56 WARN http server: context canceled: shutting down
today at 6:20 AM  2021/03/16 06:20:56 WARN openvpn: context canceled: exiting loop
today at 6:20 AM  2021/03/16 06:20:56 WARN dns over tls: context canceled: exiting loop
today at 6:20 AM  2021/03/16 06:20:56 WARN http server: shut down
today at 6:20 AM  2021/03/16 06:20:56 WARN healthcheck: context canceled: shutting down server
today at 6:20 AM  2021/03/16 06:20:56 WARN healthcheck: server shut down
today at 6:20 AM  2021/03/16 06:20:56 WARN dns over tls: loop exited
today at 6:20 AM  2021/03/16 06:20:56 WARN openvpn: loop exited
today at 6:21 AM  2021/03/16 06:21:01 WARN Shutdown timed out

today at 6:21 AM  =========================================
today at 6:21 AM  ================ Gluetun ================
today at 6:21 AM  =========================================
today at 6:21 AM  ==== A mix of OpenVPN, DNS over TLS, ====
today at 6:21 AM  ======= Shadowsocks and HTTP proxy ======
today at 6:21 AM  ========= all glued up with Go ==========
today at 6:21 AM  =========================================
today at 6:21 AM  =========== For tunneling to ============
today at 6:21 AM  ======== your favorite VPN server =======
today at 6:21 AM  =========================================
today at 6:21 AM  === Made with ❤️  by github.com/qdm12 ====
today at 6:21 AM  =========================================
today at 6:21 AM  
today at 6:21 AM  Running version latest built on 2021-03-13T13:54:28Z (commit fa220f9)

And then the connected containers lose their connection for sure, since their network_mode container was rebooting.

And the ISP wasn't down; the network, internet, and PIA were all working fine.

Is that expected behavior?

@qdm12 (Owner) commented Mar 16, 2021

`Caught OS signal terminated, shutting down` indicates it's receiving a signal from the Docker daemon / Portainer to terminate. So it's an external thing shutting it down; it cannot really receive this signal from within. Maybe you are running low on memory?
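
A couple of standard Docker commands that could help confirm an external cause (a sketch):

# was the container OOM-killed, and what was its exit code?
docker inspect -f 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' gluetun
# what did the daemon record for this container recently?
docker events --since 2h --filter container=gluetun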

@redtripleAAA (Author) commented Mar 17, 2021

@qdm12 makes sense; I will try to reach out to Portainer to troubleshoot the root cause of that issue.

Although I am not sure Portainer is who I need to talk to, as this issue happens only between gluetun and the Docker daemon. All the other containers are fine.

Do you recommend any logs to check/grab?

@qdm12 (Owner) commented Mar 17, 2021

Maybe there is something interesting in `docker inspect gluetun` about why it restarted, although maybe not.

You could try running gluetun outside Portainer using docker-compose (file version 3) from the CLI, and see if it still gets restarted over time?
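
Something along these lines (a sketch; adjust the compose file path to yours):

# bring the stack up from the CLI instead of Portainer
docker-compose -f docker-compose.yml up -d
# then check periodically whether Docker has restarted gluetun
docker inspect -f 'RestartCount={{.RestartCount}} StartedAt={{.State.StartedAt}}' gluetun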

@qdm12 (Owner) commented Apr 5, 2021

Hi there, did you find the root cause in the end? Or did you get some logs / healthcheck logs? Cheers

@redtripleAAA (Author)

Hey @qdm12, no luck. I have been digging around without success, and have been manually restarting qBittorrent whenever gluetun breaks and restarts by itself.

@qdm12 (Owner) commented Apr 24, 2021

Maybe off topic, but gluetun will now restart OpenVPN from within if it becomes unhealthy. You can try it by pulling the latest image. You should thus disable the auto-healing now.

@redtripleAAA (Author) commented Apr 24, 2021

@qdm12 I did update both the official image and the testing one.

Since then, I haven't had any restarts at the container level.

My docker-compose is the same:

version: '2.4'
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    environment:
      - PUID=1029
      - PGID=100
      - TZ=America/Toronto
      - VPNSP=private internet access
      - REGION=CA Ontario
      - PORT_FORWARDING=on #Complete https://github.com/qdm12/gluetun/wiki/Environment-variables
      - PORT_FORWARDING_STATUS_FILE=/gluetun/port-forwarding/port.conf
      - OPENVPN_USER=################# #Change to YOUR Username
      - OPENVPN_PASSWORD=################ #Change to YOUR Password
    volumes:
      - /volume1/docker/gluetun/config:/gluetun
    ports:
      - 8000:8000 #HTTP Server https://github.com/qdm12/gluetun/wiki/HTTP-Control-server#OpenVPN
      #- 19999:19999 #Netdata
      - 666:80 #heimdal-VPN
      - 4466:443 #heimdal-VPN
      - 9080:9080 #QBitTorrent Web-UI
      - 6881:6881 #QBitTorrent
      - 6881:6881/udp #QBitTorrent
      #- 9117:9117 #Jackett
      #- 7878:7878 #Radarr
      #- 8989:8989 #Sonarr
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

Do I need to add or remove anything from the environment variables above to disable the auto-heal anywhere?
I don't see anything here: https://github.com/qdm12/gluetun/wiki/Environment-variables

I think the change you made makes a huge difference. It's a great workaround for the Docker limitation where a container using another container's network loses connectivity, since the network container no longer restarts at the host level.

Another note: do you think this is related? Or should it maybe be a separate issue?

last Monday at 4:24:48 PM  Running version latest built on 2021-04-19T19:54:17Z (commit fb8279f)

last Monday at 4:24:46 PM  2021/04/19 16:24:46 ERROR remove /gluetun/port-forwarding/port.conf: no such file or directory
last Monday at 5:06:03 PM  2021/04/19 17:06:03 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=<payload>&signature=8ItKfjFfHlBE3%2FYc%2FiUfaCospLVJZzqG5adRGvkHNHbRBI%2FpbQuny0AZmz24Qe8yUO0Axkdr0ncp6PE2xb2zAg%3D%3D": dial tcp 10.48.110.1:19999: i/o timeout (Client.Timeout exceeded while awaiting headers)
last Monday at 8:06:34 PM  2021/04/19 20:06:34 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=<payload>&signature=8ItKfjFfHlBE3%2FYc%2FiUfaCospLVJZzqG5adRGvkHNHbRBI%2FpbQuny0AZmz24Qe8yUO0Axkdr0ncp6PE2xb2zAg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
last Wednesday at 9:40:52 PM  2021/04/21 21:40:52 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=

yesterday at 6:54:12 PM  2021/04/23 18:54:12 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=<payload>&signature=8ItKfjFfHlBE3%2FYc%2FiUfaCospLVJZzqG5adRGvkHNHbRBI%2FpbQuny0AZmz24Qe8yUO0Axkdr0ncp6PE2xb2zAg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

I see in Portainer that it has a healthy label, though:

[screenshot: Portainer showing the gluetun container with a healthy status label]

@qdm12 (Owner) commented Apr 25, 2021

No, that's just the VPN-server-side port forwarding not answering within 30 seconds; probably a problem on PIA's server side, I'd say. Closing the issue then, thanks!

qdm12 closed this as completed on Apr 25, 2021
@JStar73 commented Aug 4, 2022

This sounds like a known issue that is identified in this video:
https://youtu.be/IWj1-j2QWvo?t=398
The gluetun container gets a new ID on restart, and the other containers using the gluetun container as their network interface need to be re-associated with it.
Would be easy to fix if you can automate...
Get the new container ID for gluetun, re-associate it with the dependent network interface(s), and restart the containers (a sketch follows below).
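
For instance, once gluetun is back up, something like this could be scripted (a sketch, not an existing gluetun feature; the service names are placeholders):

# force-recreate the dependent services so they re-attach to the new gluetun container
docker-compose up -d --force-recreate qbittorrent netdata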

@qdm12 (Owner) commented Aug 4, 2022

Would be easy to fix if you can automate...

Not that easy, but it's possible, I believe. I'm working on it through github.com/qdm12/deunhealth

@doit4dalolz

I've had this problem, but it seemed to be because the ports (either 80 or 443) were port-forwarded to another IP address.

@BRNKR commented Dec 15, 2022

This is not fixed yet. Every couple of days I have to manually redeploy my *arr stacks, which are connected to the gluetun network.

@imbehind commented May 13, 2023

Interestingly, this problem caught up with me once I started to use deunhealth.

Before, I used some other autoheal tool which did not react instantly to the "unhealthy" status the way deunhealth does using Docker events, and I never had this problem. I guess gluetun restarts quickly, between two polls, and the old autoheal never caught it in an unhealthy state.

In fact, in my case (just checked), gluetun restarts and reaches "INFO [healthcheck] healthy!" in less than 4 seconds, while the default polling interval for the old autoheal was 5 seconds. 😊

So, @qdm12 congrats! You made both gluetun and deunhealth function a little bit too well. 😂

Now, the question is how to delay the restart with deunhealth, to allow the container time to autoheal.

@qdm12, would you be so kind as to introduce a small variation to the label triggering deunhealth? The vast majority of users have ISPs that change the IP periodically by dropping the connection. It would be nice if it worked like this:

deunhealth.restart.on.unhealthy=5000

where 5000 is the optional desired delay in milliseconds.
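
In compose terms, the proposal could look like this (a sketch of the suggestion above; the millisecond value is not an existing deunhealth feature, whose current label is deunhealth.restart.on.unhealthy=true):

services:
  gluetun:
    labels:
      # proposed: optional restart delay in milliseconds instead of "true"
      - deunhealth.restart.on.unhealthy=5000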

@aleksilassila

@qdm12 this still seems to be an issue; it happens to me pretty regularly, too. Let me know if I can provide additional info that could be of use.
