
Bug: qBittorrent stops listening to the open port after the gluetun VPN restarts internally #1407

Open
Gylesie opened this issue Feb 21, 2023 · 65 comments

Comments

@Gylesie

Gylesie commented Feb 21, 2023

Is this urgent?

No

Host OS

Ubuntu 22.04

CPU arch

x86_64

VPN service provider

Custom

What are you using to run the container

docker-compose

What is the version of Gluetun

Running version latest built on 2022-12-31T17:50:58.654Z (commit ea40b84)

What's the problem 🤔

Everything works as expected when the qBittorrent and gluetun containers are freshly started: qBittorrent is listening on the open port and is reachable via the internet. However, when gluetun runs for a longer period of time and the VPN briefly stops working for some reason, triggering gluetun's internal VPN restart, the open port in qBittorrent is no longer reachable.

What I found out was that by changing the listening port in the qBittorrent WebUI settings to some random port, saving the configuration and then immediately after that reverting the change to the original port, it starts listening and it is now once again reachable. Just restarting the qBittorrent container without changing anything also worked.

Is there anything gluetun can do to prevent this? Is this solely qBittorrent's bug? Unfortunately, I have no idea.

Thanks!

Share your logs

INFO [healthcheck] program has been unhealthy for 36s: restarting VPN
INFO [vpn] stopping
INFO [firewall] removing allowed port xxxxxx...
INFO [vpn] starting
INFO [firewall] allowing VPN connection...
INFO [wireguard] Using available kernelspace implementation
INFO [wireguard] Connecting to yyyyyyyyy:yyyyy
INFO [wireguard] Wireguard is up
INFO [firewall] setting allowed input port xxxxxx through interface tun0...
INFO [healthcheck] healthy!

Share your configuration

No response

@undated4410

undated4410 commented Feb 25, 2023

Exactly the same is happening to me as well. The workaround @Gylesie mentioned works for me too, but unfortunately it is not great when you want to rely on the Raspberry Pi just working without needing any input.

Maybe my docker-compose.yml will help with debugging/reproducing the error:

version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<redacted>
      - WIREGUARD_ADDRESSES=<redacted>
      - SERVER_CITIES=<redacted>
      - FIREWALL_VPN_INPUT_PORTS=<redacted>  # mullvad forwarded port
      - PUID=1000
      - PGID=1000
    ports:
      - 8080:8080       # qbittorrent webgui
      - <redacted>:<redacted>     # mullvad forwarded port
      - <redacted>:<redacted>/udp # mullvad forwarded port
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - WEBUI_PORT=8080
    volumes:
      - <redacted>:/config
      - <redacted>:/downloads
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped

EDIT:
The same is happening with Deluge too.

EDIT 2:
It doesn't seem to happen with Transmission.

@deepakvinod

Chiming in that I have the same issue with qbittorrent and gluetun with the hotio image for qbittorrent. @Gylesie's workaround is okay but troublesome when it happens at night.

@qdm12
Owner

qdm12 commented Mar 1, 2023

It might be because there is a listener going through the tunnel, but gluetun destroys that tunnel on an internal vpn restart and re-creates it.

I had the same issue with the http client fetching version info/public ip info from within gluetun, and the fix was to close 'idle connections' for the http client when the tunnel is up again

https://github.com/qdm12/private-internet-access-docker/blob/ab5dbdca9744defe3afbb68d5c0a029a29b0a6a0/internal/vpn/tunnelup.go#L20

A bit weird though, since a server (listener) should still work across VPN restarts (it does work with e.g. the Shadowsocks server).
Also strange that it works with Transmission. But from what you said

saving the configuration and then immediately after that reverting the change to the original port, it starts listening and it is now once again reachable

Doing this restarts the listener, which is why it works again, I would say.

I don't think I can really do something from within Gluetun. You could perhaps have some script read the logs of Gluetun and restart qbittorrent when a VPN restart occurs. Not ideal, but I cannot think of anything better for now.

@Gylesie
Author

Gylesie commented Mar 1, 2023

Hmm, that's unfortunate. Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.

@Gylesie
Author

Gylesie commented Mar 1, 2023

@qdm12 When the tunnel gets destroyed, does that mean that also the network interface gets destroyed and recreated afterwards?

@qdm12
Owner

qdm12 commented Mar 2, 2023

Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.

Yes and no, because this script would likely have to run on the host, outside the gluetun container. We could eventually, as an option, add capabilities for Gluetun to do Docker host operations by bind-mounting the Docker socket, but that's kinda risky security-wise (although it already runs as root + NET_ADMIN capabilities, so maybe why not). Anyway, the backlog of more pressing issues is already thick, but let's keep this open; it would be interesting to explore this more.

@Gylesie
Author

Gylesie commented Mar 5, 2023

In the meantime, feel free to use this script I made, it's not perfect but good enough. Keep it running the whole time on the host system.

#!/bin/bash
# Gluetun monitoring script by Gylesie. Requires docker and jq. More info:
# https://github.com/qdm12/gluetun/issues/1407

######### Config:
gluetun_container_id="gluetun"
qbittorrent_container_id="qbittorrent"
timeout="60"

docker="/usr/bin/docker"
#################################################

log() {
   echo "$(date) [INFO] $1"
}

# Wait for the container to be running
while ! "$docker" inspect "$gluetun_container_id" | jq -e '.[0].State.Running' > /dev/null; do
   log "Waiting for the container($gluetun_container_id) to be up and running! Sleeping for $timeout seconds..."
   sleep "$timeout"
done


# store the start time of the script
start_time=$(date +%s)
# stream the logs and process new lines only
"$docker" logs -t -f "$gluetun_container_id" 2>&1 | while read line; do
    # get the timestamp of the log line
    log_time=$(date -d "$(echo "$line" | cut -d ' ' -f1)" +%s)
    # check if the log line was generated after the script started
    if [[ "$log_time" -ge "$start_time" ]]; then
        # Check if vpn was restarted
        if [[ "$line" =~ "[wireguard] Wireguard is up" ]]; then
           # Check if qbittorrent container is running
           if "$docker" inspect "$qbittorrent_container_id" | jq -e '.[0].State.Running' > /dev/null; then
               log "Restarting qbittorrent!"
               "$docker" restart "$qbittorrent_container_id"
           else
               log "qBittorrent container($qbittorrent_container_id) is not running! Passing..."
           fi
        fi
    fi
done

@eiqnepm
Contributor

eiqnepm commented Mar 7, 2023

Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.

Yes and no, because this script would likely have to run on the host, outside the gluetun container. We could eventually, as an option, add capabilities for Gluetun to do Docker host operations by bind-mounting the Docker socket, but that's kinda risky security-wise (although it already runs as root + NET_ADMIN capabilities, so maybe why not). Anyway, the backlog of more pressing issues is already thick, but let's keep this open; it would be interesting to explore this more.

I'd imagine it would be possible to have some environment variables for Gluetun which specify the address, port, username and password of your qBittorrent instance; Gluetun could then use the qBittorrent web API to change the port and back whenever the tunnel is restarted. This wouldn't require any special Docker permissions. Obviously not the cleanest solution, but a solution nonetheless.
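A rough sketch of what that could look like as a shell script (untested; the host, credentials and port values are placeholders to adjust for your setup, and `prefs_payload`/`bounce_port` are just names I made up). It uses the qBittorrent Web API v2 login and setPreferences endpoints:

```shell
#!/bin/sh
# Untested sketch: log in to the qBittorrent WebUI, switch the listen port
# to a throwaway value, then restore the real one, forcing a rebind.
# All values below are placeholders -- adjust to your setup.
QBT_URL="${QBT_URL:-http://127.0.0.1:8080}"
QBT_USER="${QBT_USER:-admin}"
QBT_PASS="${QBT_PASS:-adminadmin}"
QBT_PORT="${QBT_PORT:-6881}"

# Body for /api/v2/app/setPreferences: a "json=" form field with the prefs.
prefs_payload() {
    printf 'json={"listen_port": %s}' "$1"
}

bounce_port() {
    cookiejar="$(mktemp)"
    # /api/v2/auth/login sets an SID cookie on success.
    curl -s -c "$cookiejar" \
        --data "username=$QBT_USER&password=$QBT_PASS" \
        "$QBT_URL/api/v2/auth/login" > /dev/null
    # Move to a throwaway port, then back to the real one.
    curl -s -b "$cookiejar" --data "$(prefs_payload 1)" \
        "$QBT_URL/api/v2/app/setPreferences"
    curl -s -b "$cookiejar" --data "$(prefs_payload "$QBT_PORT")" \
        "$QBT_URL/api/v2/app/setPreferences"
    rm -f "$cookiejar"
}
```

Something like `bounce_port` could be called whenever the tunnel comes back up.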

@qdm12
Owner

qdm12 commented Mar 7, 2023

@eiqnepm I wasn't aware of such a web API. Can you create a separate issue for this? Definitely something doable!

@eiqnepm
Contributor

eiqnepm commented Mar 7, 2023

@eiqnepm I wasn't aware of such a web API. Can you create a separate issue for this? Definitely something doable!

The API is documented here, I went ahead and created the new issue #1441 (comment), thanks a bunch for the quick response!

@eiqnepm
Contributor

eiqnepm commented Mar 9, 2023

I've gone ahead and made a container portcheck purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

Docker Compose example
version: "3"

services:
  gluetun:
    cap_add:
      - NET_ADMIN
    container_name: gluetun
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - FIREWALL_VPN_INPUT_PORTS=6881
      - OWNED_ONLY=yes
      - SERVER_CITIES=Amsterdam
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_ADDRESSES=👀
      - WIREGUARD_PRIVATE_KEY=👀
    image: qmcgaw/gluetun
    ports:
      - 8080:8080 # qBittorrent
    restart: unless-stopped
    volumes:
      - ./gluetun:/gluetun

  portcheck:
    container_name: portcheck
    depends_on:
      - qbittorrent
    environment:
      - DIAL_TIMEOUT=5
      - QBITTORRENT_PASSWORD=adminadmin
      - QBITTORRENT_PORT=6881
      - QBITTORRENT_USERNAME=admin
      - QBITTORRENT_WEBUI_PORT=8080
      - QBITTORRENT_WEBUI_SCHEME=http
      - TIMEOUT=300
    image: eiqnepm/portcheck
    network_mode: service:gluetun
    restart: unless-stopped

  qbittorrent:
    container_name: qbittorrent
    environment:
      - PGID=1000
      - PUID=1000
      - TZ=Etc/UTC
      - WEBUI_PORT=8080
    image: lscr.io/linuxserver/qbittorrent
    network_mode: service:gluetun
    restart: unless-stopped
    volumes:
      - ./qbittorrent/config:/config
      - ./qbittorrent/downloads:/downloads

Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| QBITTORRENT_PORT | 6881 | qBittorrent incoming connection port |
| QBITTORRENT_WEBUI_PORT | 8080 | Port of the qBittorrent WebUI |
| QBITTORRENT_WEBUI_SCHEME | http | Scheme of the qBittorrent WebUI |
| QBITTORRENT_USERNAME | admin | qBittorrent WebUI username |
| QBITTORRENT_PASSWORD | adminadmin | qBittorrent WebUI password |
| TIMEOUT | 300 | Time in seconds between each port check |
| DIAL_TIMEOUT | 5 | Time in seconds before the port check is considered incomplete |

I've just updated the container so it no longer relies on the Gluetun HTTP control server for the public IP address of the VPN connection. It now uses the outbound address from within the Gluetun service network to check the qBittorrent incoming port, which also has the added benefit of not needing to query the qBittorrent incoming port from the public IP address of your server.

For anyone that was using this before I made the change, make sure to run the container inside of the Gluetun service network and update the environment variables which have changed.

@garret

garret commented Mar 13, 2023

I recently switched from linuxserver/transmission to linuxserver/qbittorrent and noticed that qbittorrent (running inside the gluetun Docker network) stops working after some time. I suspected this was because gluetun restarts itself internally for some reason. I am glad to see I am not the only one who has noticed this issue.

The extra-container solution is nice but not ideal. I think I will revert to transmission until a proper solution is found, but I really appreciate all your efforts. I will stay subscribed for updates.

@stvsu

stvsu commented Mar 13, 2023

I've gone ahead and made a container portcheck purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

Thank you for writing this - works great!

For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high.

Is the default setting of 6 seconds too sensitive?

@eiqnepm
Contributor

eiqnepm commented Mar 13, 2023

I've gone ahead and made a container portcheck purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

Thank you for writing this - works great!

For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high.

Is the default setting of 6 seconds too sensitive?

My pleasure!

After reading the wiki, it seems the healthcheck was primarily created due to the unreliability of OpenVPN connections. Considering I'm using WireGuard, which is stateless, I've decided to completely disable the healthcheck feature and see how that goes. With my current knowledge, barring my VPN provider itself going offline, I can't think of a reason why my connection would be interrupted (I guess we'll find out).

While the healthcheck feature cannot be disabled per se, you can just set HEALTH_TARGET_ADDRESS to the HEALTH_SERVER_ADDRESS, which defaults to 127.0.0.1:9999.
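For reference, both tunings discussed here expressed as Gluetun environment variables in a compose file (values are illustrative; pick one or the other):

```yaml
services:
  gluetun:
    environment:
      # Wait longer before the first "unhealthy" VPN restart (default is 6s)
      - HEALTH_VPN_DURATION_INITIAL=120s
      # Or effectively neutralise the healthcheck by pointing the target
      # at Gluetun's own health server (its default listen address):
      #- HEALTH_TARGET_ADDRESS=127.0.0.1:9999
```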

@kjwill555

I've gone ahead and made a container portcheck purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

Thank you for writing this - works great!

For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high.

Is the default setting of 6 seconds too sensitive?

I can confirm that this fixed it for me. I set HEALTH_VPN_DURATION_INITIAL=120s about two weeks ago and haven't had this problem since.

Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me

@alaskanrabb

alaskanrabb commented Apr 19, 2023

In qBittorrent you can go into Options and, under Advanced, lock the network interface to tun0. This fixed the healthcheck disconnect/reconnect issue for me months ago, as it's an issue with qBittorrent not handling reconnects correctly. I will still probably set HEALTH_VPN_DURATION_INITIAL=120s just because I hate seeing a bunch of reconnects in the logs.

Also, someone just posted a bug report saying tun0 disappeared after the last update, but it hasn't been verified yet.

@afladmark

I can also confirm this. I was having this problem regularly, but locking the network interface to tun0 in qBittorrent has also solved it for me.

@alaskanrabb

Any chance you are on the latest version and not hitting the missing-tun0 bug? Someone pulled yesterday and said they lost it, but they are also having OpenVPN cert issues, so it's possibly not a valid bug but a symptom of a different one.

@afladmark

I was running 3.32. I've updated to 3.33 and do not have any issues with tun0. Or are you referring to later git commits? I'm on a Synology NAS (DSM7) as well, using WireGuard to Mullvad. So far everything is fine. I'll keep an eye on the public port issue as ever, but so far tun0 is present and still bound in qBittorrent as expected.

@ksurl
Contributor

ksurl commented Apr 22, 2023

In the meantime, feel free to use this script I made, it's not perfect but good enough. Keep it running the whole time on the host system.


I tested this script with an echo instead of restart before actually enabling it, and if your gluetun has been running a while and has already restarted a few times, it will restart qBittorrent just as many times in rapid sequence. I think I will try the longer timeout for the gluetun healthcheck first to avoid the internal reconnects.

@argonan0

Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.

AirVPN Wireguard here. Same solutions seem to work (restarting container) however I would like to avoid having to do that.

Is an official solution possible? @qdm12

@ksurl
Contributor

ksurl commented May 22, 2023

Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.

AirVPN Wireguard here. Same solutions seem to work (restarting container) however I would like to avoid having to do that.

Is an official solution possible? @qdm12

The best workaround for now is to use the libtorrent v1 version of qbittorrent, or switch to transmission. It's an issue with libtorrent v2.

@eiqnepm
Contributor

eiqnepm commented May 22, 2023

Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.

AirVPN Wireguard here. Same solutions seem to work (restarting container) however I would like to avoid having to do that.

Is an official solution possible? @qdm12

If restarting the container is undesirable, you should use #1407 (comment).

@argonan0

@ksurl Sounds like a downgrade best avoided. Is there a bug reference for the libtorrent v2 issue?

@eiqnepm Nifty, but it requires another container and isn't on the UNRAID app portal. I'm looking for an official solution within this container. Can you merge the solution with a pull request here?

@ksurl
Contributor

ksurl commented May 23, 2023

@ksurl Sounds like a downgrade best avoided. Is there a bug reference for the libtorrent v2 issue?

@eiqnepm Nifty, but it requires another container and isn't on the UNRAID app portal. I'm looking for an official solution within this container. Can you merge the solution with a pull request here?

I found no other functionality changes with v1. Does Unraid not let you use any image from Docker Hub? You could accomplish the same thing with a cron script to poke the API.

@eiqnepm
Contributor

eiqnepm commented May 23, 2023

and isn't on the UNRAID app portal

Under Apps and then Settings, enable additional search results from Docker Hub.

The container is very lightweight. It could be implemented into Gluetun; I even made an issue upon request #1441 (comment), however I don't currently understand the inner workings of Gluetun and don't have the ability to implement the feature myself at this time.

If the maintainer decides this is an issue that Gluetun should resolve firsthand, it should not be a very daunting task, considering I managed to get it done with just over two hundred lines of Go.

@jathek

jathek commented May 23, 2023

If this is a libtorrent issue then a bug should be opened there. I don't think gluetun should add a fix for a third-party issue that already has a simple container workaround.

@argonan0

argonan0 commented May 29, 2023

and isn't on the UNRAID app portal

Under Apps and then Settings, enable additional search results from Docker Hub.

Cool that there is that option, however I do not see it.

As it happens, the issue sort of just went away on its own. There were several days when I needed to restart the container, but after a recent Gluetun update the issue seems to have gone away.

@Snuffy2

Snuffy2 commented Jun 20, 2023

Here's how I handle restarting dependent dockers when Gluetun restarts:
https://gist.github.com/Snuffy2/1d49250df3a5c8fdb3a24d486df92015

@AbbieDoobie

I'm still having issues with qbittorrent + gluetun, and portcheck sorta kinda works around it, but sometimes things still go awry and I haven't had the time to figure out why.

I double checked my containers are up to date, and still saw this issue occur when portcheck was off/stopped. So I don't think the changes mentioned above fix this specific problem. Still need portcheck.

@xoxfaby

xoxfaby commented Nov 5, 2023

Is there a good solution for deluge?

@xoxfaby

xoxfaby commented Dec 5, 2023

Is there a good solution for deluge?

@qdm12 Anything? Deluge is still not aware when Gluetun reconnects to AirVPN and I lose the forwarded port until I restart Deluge.

@eiqnepm
Contributor

eiqnepm commented Jan 5, 2024

Is there a good solution for deluge?

It would most likely be possible for me to add support for Deluge to portcheck. After a quick network inspection it does seem Deluge does things in a slightly more complicated way.

Unfortunately I haven't used port forwarding since it was removed from Mullvad, so I would be unable to test if it actually works with Deluge.

@xoxfaby

xoxfaby commented Jan 5, 2024

I think I could set you up to connect with my AirVPN if it would help with this?

The last week or two my stack hasn't lost connection (at least I haven't noticed; I was waiting for it to do so to figure out the best way to set up a health check), but it would be good to solve this reliably.

@eiqnepm
Contributor

eiqnepm commented Jan 6, 2024

I've created a dev branch to add Deluge support, it is completely untested portcheck:dev.

I think I could set you up to connect with my AirVPN if it would help with this?

It would be nice to test it myself to fix any issues if you're willing to let me borrow one of your connections.

@xoxfaby

xoxfaby commented Jan 6, 2024

Ah, I was out and about when I saw your comment and didn't realize it was simply for portcheck. I figure it would be better to implement this as a healthcheck on the container? That was my plan for when I lose the port again in my setup.

@eiqnepm
Contributor

eiqnepm commented Jan 6, 2024

I figure it would be better to implement this as a healthcheck to the container?

The consensus seems to be that because this is not necessarily an issue with Gluetun, but rather with libtorrent, it should not be directly tackled by Gluetun.

portcheck is written in Go and runs on Alpine, so it has a very low footprint. It is currently the only way I know of to open the ports back up automatically without restarting the container itself.

@AbbieDoobie

Could Gluetun just get an option to fully restart whenever the connection goes down? That would resolve the problem in a roundabout way. When Gluetun restarts, docker restarts all containers that use its network.

@eiqnepm
Contributor

eiqnepm commented Jan 12, 2024

Could Gluetun just get an option to fully restart whenever the connection goes down? That would resolve the problem in a roundabout way. When Gluetun restarts, docker restarts all containers that use its network.

That would be a good solution for those who don't mind the service containers restarting.

I'd imagine Gluetun would need access to /var/run/docker.sock.

@xoxfaby

xoxfaby commented Feb 1, 2024

I'd imagine Gluetun would need access to /var/run/docker.sock.

Based on what the other person said, it would just need to end its own process, no?

@eiqnepm
Contributor

eiqnepm commented Feb 1, 2024

I'd imagine Gluetun would need access to /var/run/docker.sock.

Based on what the other person said, it would just need to end its own process, no?

Gluetun would need to restart the container it is running in to restart the service network; otherwise the service network would remain the same.

I am not sure whether a Gluetun process restart would fix the torrent issue, as it doesn't affect the torrent client containers directly.

@xoxfaby

xoxfaby commented Feb 1, 2024

When Gluetun restarts, docker restarts all containers that use its network.

@eiqnepm
Contributor

eiqnepm commented Feb 1, 2024

When Gluetun restarts, docker restarts all containers that use its network.

When the Gluetun Docker container restarts, all of the Docker containers using it as a service network will restart. However, if Gluetun were to have a persistent entrypoint process which merely restarted the main Gluetun process, all within the Gluetun Docker container, it would not affect the other Docker containers, as the Gluetun Docker network would remain the same.

Processes inside Docker containers don't have the ability to manipulate the state of the container itself out of the box.

@xoxfaby

xoxfaby commented Feb 1, 2024

Docker containers live and die by their main process.

persistent entry point process

This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).

Processes inside Docker containers don't have the ability to manipulate the state of the container itself OOTB.

They absolutely do by necessity simply by the fact that the container only runs as long as the main process is running.

@eiqnepm
Contributor

eiqnepm commented Feb 1, 2024

Docker containers live and die by their main process.

persistent entry point process

This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).

You are correct, however this would break workflows for those who do not want the container to restart on actual failures.

@xoxfaby

xoxfaby commented Feb 1, 2024

Docker containers live and die by their main process.

persistent entry point process

This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).

You are correct, however this would break workflows for those who do not want the container to restart on actual failures.

it would simply need to be optional

@eiqnepm
Contributor

eiqnepm commented Feb 1, 2024

Docker containers live and die by their main process.

persistent entry point process

This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).

You are correct, however this would break workflows for those who do not want the container to restart on actual failures.

it would simply need to be optional

Giving the Gluetun container access to /var/run/docker.sock could be optional and would also not break the aforementioned workflows.

Two ways to achieve the same thing, but I think having the Gluetun container restart itself, instead of relying on a restart policy, is the more ideal solution if Gluetun is going to go the container-restart route to address this issue.

@xoxfaby

xoxfaby commented Feb 1, 2024

One complicated solution that needs gluetun to get extra unnecessary access to then implement more complex logic to go out and restart other containers, vs a dead simple solution that takes 2 lines of code to implement.

@eiqnepm
Contributor

eiqnepm commented Feb 1, 2024

One complicated solution that needs gluetun to get extra unnecessary access to then implement more complex logic to go out and restart other containers, vs a dead simple solution that takes 2 lines of code to implement.

What I suggested was for Gluetun to restart itself, say when an environment variable is enabled and the Gluetun container has access to the Docker socket. This way you get the benefit of the service network restarting, which indirectly restarts all of the dependent containers, and you don't have to use the always-restart policy, which is undesirable for some.

I wouldn't call it complex, obviously in comparison to exiting the process it would be more "logic", however neither is challenging to implement and maintain.

Both viable suggestions. Like I said, I still believe it would be better to not break the no restart policy workflow, but that's subjective.

I don't think there's more for me to add.

@eiqnepm
Contributor

eiqnepm commented Feb 3, 2024

@eiqnepm I am having this issue with qbittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qbittorrent?

I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).

@Jhutjens92

Jhutjens92 commented Feb 22, 2024

@eiqnepm I am having this issue with qbittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qbittorrent?

I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).

Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.

@eiqnepm
Contributor

eiqnepm commented Feb 22, 2024

@eiqnepm I am having this issue with qbittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qbittorrent?

I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).

Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.

It could be implemented.

You can restart multiple containers. Are you not able to just check the one port and have both containers restart? I assume that if one service is unreachable the other will be too.

@Jhutjens92

@eiqnepm I am having this issue with qbittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qbittorrent?

I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).

Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.

It could be implemented.

You can restart multiple containers. Are you not able to just check the one port and have both containers restart? I assume that if one service is unreachable the other will be too.

That's how I currently have it set up: when the qBittorrent port is unreachable, both containers restart. I'll see how it works.

@fabiengagne

I've gone ahead and made a container portcheck purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

Thank you for writing this - works great!
For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high.
Is the default setting of 6 seconds too sensitive?

I can confirm that this fixed it for me. I set HEALTH_VPN_DURATION_INITIAL=120s about two weeks ago and haven't had this problem since.

Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me

Setting HEALTH_VPN_DURATION_INITIAL=120s is what solved it for me.

@Snoras

Snoras commented Apr 5, 2024

Setting HEALTH_VPN_DURATION_INITIAL=120s solved it for me as well.

@giraffeingreen

I was searching the internet for a solution and found https://portcheck.transmissionbt.com/4330, which returns 1 if the port is open and 0 if it's closed.

Meaning you can add a healthcheck to the gluetun container:

healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://portcheck.transmissionbt.com/4330 | grep -q 1 || exit 1"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s

@aidan-gibson

I believe I fixed it by manually setting HEALTH_SERVER_ADDRESS=127.0.0.1:5921 and HTTP_CONTROL_SERVER_ADDRESS=:8456 (these are just random unused ports on my machine) as the default ports were in use. You can check if a port is in use via nc -zv localhost <port>.

@aidan-gibson

Nevermind, unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout is back 😔
