Bug: ProtonVPN port forwarding loses connection #1882

Closed
clemone210 opened this issue Sep 25, 2023 · 58 comments

Comments

@clemone210

Is this urgent?

No

Host OS

Ubuntu

CPU arch

x86_64

VPN service provider

ProtonVPN

What are you using to run the container

docker-compose

What is the version of Gluetun

latest docker image

What's the problem 🤔

I use Gluetun to connect Plex to ProtonVPN with OpenVPN + port forwarding.

When starting the container everything works: the container gets an open port and uses it to allow remote access.

After a few minutes (10-15 min), connections to the port are no longer possible and remote access within Plex stops working.
After restarting Gluetun and Plex, a new port is assigned and everything works again.

Anything I can provide in order to resolve this?

Share your logs (at least 10 lines)

========================================
========================================
=============== gluetun ================
========================================
=========== Made with ❤️ by ============
======= https://github.com/qdm12 =======
========================================
========================================

Running version latest built on 2023-09-23T13:31:26.334Z (commit aa6dc78)

🔧 Need help? https://github.com/qdm12/gluetun/discussions/new
🐛 Bug? https://github.com/qdm12/gluetun/issues/new
✨ New feature? https://github.com/qdm12/gluetun/issues/new
☕ Discussion? https://github.com/qdm12/gluetun/discussions/new
💻 Email? quentin.mcgaw@gmail.com
💰 Help me? https://www.paypal.me/qmcgaw https://github.com/sponsors/qdm12
2023-09-25T14:30:39+02:00 INFO [routing] default route found: interface eth0, gateway 172.20.0.1, assigned IP 172.20.0.4 and family v4
2023-09-25T14:30:39+02:00 INFO [routing] local ethernet link found: eth0
2023-09-25T14:30:39+02:00 INFO [routing] local ipnet found: 172.20.0.0/16
2023-09-25T14:30:40+02:00 INFO [storage] creating /gluetun/servers.json with 17689 hardcoded servers
2023-09-25T14:30:40+02:00 INFO Alpine version: 3.18.3
2023-09-25T14:30:40+02:00 INFO OpenVPN 2.5 version: 2.5.8
2023-09-25T14:30:40+02:00 INFO OpenVPN 2.6 version: 2.6.5
2023-09-25T14:30:40+02:00 INFO Unbound version: 1.17.1
2023-09-25T14:30:40+02:00 INFO IPtables version: v1.8.9
2023-09-25T14:30:40+02:00 INFO Settings summary:
├── VPN settings:
|   ├── VPN provider settings:
|   |   ├── Name: protonvpn
|   |   ├── Server selection settings:
|   |   |   ├── VPN type: openvpn
|   |   |   ├── Countries: germany
|   |   |   ├── Cities: frankfurt
|   |   |   └── OpenVPN server selection settings:
|   |   |       └── Protocol: TCP
|   |   └── Automatic port forwarding settings:
|   |       ├── Use port forwarding code for current provider
|   |       └── Forwarded port file path: /tmp/gluetun/forwarded_port
|   └── OpenVPN settings:
|       ├── OpenVPN version: 2.5
|       ├── User: [set]
|       ├── Password: s5...KML
|       ├── Network interface: tun0
|       ├── Run OpenVPN as: root
|       └── Verbosity level: 1
├── DNS settings:
|   ├── Keep existing nameserver(s): no
|   ├── DNS server address to use: 127.0.0.1
|   └── DNS over TLS settings:
|       └── Enabled: no
├── Firewall settings:
|   └── Enabled: no
├── Log settings:
|   └── Log level: INFO
├── Health settings:
|   ├── Server listening address: 127.0.0.1:9999
|   ├── Target address: cloudflare.com:443
|   ├── Duration to wait after success: 5s
|   ├── Read header timeout: 100ms
|   ├── Read timeout: 500ms
|   └── VPN wait durations:
|       ├── Initial duration: 6s
|       └── Additional duration: 5s
├── Shadowsocks server settings:
|   └── Enabled: no
├── HTTP proxy settings:
|   └── Enabled: no
├── Control server settings:
|   ├── Listening address: :8000
|   └── Logging: yes
├── OS Alpine settings:
|   ├── Process UID: 1000
|   ├── Process GID: 1000
|   └── Timezone: europe/berlin
├── Public IP settings:
|   ├── Fetching: every 12h0m0s
|   └── IP file path: /tmp/gluetun/ip
└── Version settings:
    └── Enabled: yes
2023-09-25T14:30:40+02:00 INFO [routing] default route found: interface eth0, gateway 172.20.0.1, assigned IP 172.20.0.4 and family v4
2023-09-25T14:30:40+02:00 INFO [routing] adding route for 0.0.0.0/0
2023-09-25T14:30:40+02:00 INFO [firewall] firewall disabled, only updating allowed subnets internal list
2023-09-25T14:30:40+02:00 INFO [routing] default route found: interface eth0, gateway 172.20.0.1, assigned IP 172.20.0.4 and family v4
2023-09-25T14:30:40+02:00 INFO TUN device is not available: open /dev/net/tun: no such file or directory; creating it...
2023-09-25T14:30:40+02:00 INFO [dns] using plaintext DNS at address 1.1.1.1
2023-09-25T14:30:40+02:00 INFO [http server] http server listening on [::]:8000
2023-09-25T14:30:40+02:00 INFO [healthcheck] listening on 127.0.0.1:9999
2023-09-25T14:30:40+02:00 INFO [firewall] firewall disabled, only updating internal VPN connection
2023-09-25T14:30:40+02:00 INFO [openvpn] OpenVPN 2.5.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov  2 2022
2023-09-25T14:30:40+02:00 INFO [openvpn] library versions: OpenSSL 3.1.3 19 Sep 2023, LZO 2.10
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP/UDP: Preserving recently used remote address: [AF_INET]194.126.177.14:443
2023-09-25T14:30:40+02:00 INFO [openvpn] Attempting to establish TCP connection with [AF_INET]194.126.177.14:443 [nonblock]
2023-09-25T14:30:40+02:00 INFO [healthcheck] healthy!
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP connection established with [AF_INET]194.126.177.14:443
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP_CLIENT link local: (not bound)
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP_CLIENT link remote: [AF_INET]194.126.177.14:443
2023-09-25T14:30:40+02:00 WARN [openvpn] 'link-mtu' is used inconsistently, local='link-mtu 1635', remote='link-mtu 1636'
2023-09-25T14:30:40+02:00 WARN [openvpn] 'comp-lzo' is present in remote config but missing in local config, remote='comp-lzo'
2023-09-25T14:30:40+02:00 INFO [openvpn] [node-de-17.protonvpn.net] Peer Connection Initiated with [AF_INET]194.126.177.14:443
2023-09-25T14:30:41+02:00 INFO [openvpn] TUN/TAP device tun0 opened
2023-09-25T14:30:41+02:00 INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2023-09-25T14:30:41+02:00 INFO [openvpn] /sbin/ip link set dev tun0 up
2023-09-25T14:30:41+02:00 INFO [openvpn] /sbin/ip addr add dev tun0 10.81.0.7/16
2023-09-25T14:30:41+02:00 INFO [openvpn] UID set to nonrootuser
2023-09-25T14:30:41+02:00 INFO [openvpn] Initialization Sequence Completed
2023-09-25T14:30:41+02:00 INFO [firewall] firewall disabled, only updating allowed ports internal state
2023-09-25T14:30:41+02:00 INFO [vpn] You are running 6 commits behind the most recent latest
2023-09-25T14:30:41+02:00 INFO [port forwarding] starting
2023-09-25T14:30:41+02:00 INFO [port forwarding] gateway external IPv4 address is 194.126.177.84
2023-09-25T14:30:41+02:00 INFO [port forwarding] port forwarded is 36736
2023-09-25T14:30:41+02:00 INFO [firewall] firewall disabled, only updating allowed ports internal state
2023-09-25T14:30:41+02:00 INFO [port forwarding] writing port file /tmp/gluetun/forwarded_port
2023-09-25T14:30:41+02:00 INFO [ip getter] Public IP address is 194.126.177.84 (Germany, Hesse, Frankfurt am Main)

Share your configuration

gluetun:
    image: qmcgaw/gluetun:${GLUETUN_VERSION}
    container_name: gluetun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - OPENVPN_USER=myuser+pmp
      - OPENVPN_PASSWORD=mypassword
      - FIREWALL_VPN_INPUT_PORTS=32400
      - VPN_PORT_FORWARDING=ON
      - SERVER_COUNTRIES=GERMANY
      - FIREWALL=OFF
      - DOT=OFF
      - OPENVPN_PROTOCOL=TCP
      - SERVER_CITIES=FRANKFURT
      - TZ=${TIMEZONE}
    ports:
      - 32400:32400
@clemone210 (Author)

It seems that the port is not persistent on ProtonVPN when not in use.
Do we have any information about how long the port stays mapped and published with ProtonVPN?

@qdm12 (Owner)

qdm12 commented Sep 25, 2023

Technically speaking, they use the NAT-PMP protocol: Gluetun requests a port with a 60-second lifetime, then re-requests it every 45 seconds for another 60-second lifetime. In other words, it renews the mapping 15 seconds before it expires to maintain it.
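
For reference, a manual equivalent of that renewal loop (a sketch using the natpmpc client from libnatpmp against ProtonVPN's usual 10.2.0.1 gateway; this is an illustration, not the code Gluetun runs) looks like:

# request UDP and TCP mappings with a 60-second lifetime, renew every 45 seconds
while true; do
  natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || break
  sleep 45
done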

Does it behave the same with image :v3.35.0? 🤔

@akutruff

Yeah, this sounds like expected behavior. It's annoying, but that's how ProtonVPN works. You need to automate some other script or program to update the port forward settings.

@qdm12 (Owner)

qdm12 commented Sep 25, 2023

@akutruff so even though Gluetun does re-request correctly on time, and their gateway answers correctly, the forwarded port gets disconnected silently after a few minutes!? I guess I could add an option to try to reach publicip:forwardedport every N seconds to check that the forwarded port works, but ideally not, since my time resources are a bit limited 😄
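
Such a check could be as small as this sketch, reusing the IP and forwarded-port files Gluetun already writes (paths from the settings summary above; nc is assumed available):

#!/bin/sh
# probe the public IP on the forwarded port and report reachability
ip="$(cat /tmp/gluetun/ip)"
port="$(cat /tmp/gluetun/forwarded_port)"
if nc -z -w 5 "$ip" "$port"; then
  echo "forwarded port $port is reachable"
else
  echo "forwarded port $port is NOT reachable"
fi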

@akutruff

@qdm12 It sounds like you're already doing the right thing. You just need to continually poll them for a port with natpmpc, as far as I understand. I don't think you'd need to do any more of a check than that.

However, I just checked the container I had set up to test your port forwarding PR, and the port is no longer open. : ( I don't see any port forwarding messages in the log after the reconnect.

gluetun                   | 2023-09-25T19:54:51Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T19:54:56Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52716 in 12.27µs
gluetun                   | 2023-09-25T19:54:59Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun                   | 2023-09-25T19:54:59Z INFO [vpn] stopping
gluetun                   | 2023-09-25T19:54:59Z INFO [vpn] starting
gluetun                   | 2023-09-25T19:54:59Z INFO [firewall] allowing VPN connection...
gluetun                   | 2023-09-25T19:54:59Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun                   | 2023-09-25T19:55:00Z INFO [wireguard] Connecting to ***
gluetun                   | 2023-09-25T19:55:00Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun                   | 2023-09-25T19:55:00Z INFO [vpn] VPN gateway IP address: 10.2.0.1
gluetun                   | 2023-09-25T19:55:00Z INFO [healthcheck] healthy!
gluetun                   | 2023-09-25T19:55:01Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52726 in 15.03µs
gluetun                   | 2023-09-25T19:55:06Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52734 in 12.66µs
gluetun                   | 2023-09-25T19:55:08Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T19:55:09Z INFO [healthcheck] healthy!
gluetun                   | 2023-09-25T19:55:10Z INFO [ip getter] Public IP address is *** (United States, New York, New York City)
gluetun                   | 2023-09-25T19:55:12Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52746 in 16.72µs
gluetun                   | 2023-09-25T19:55:17Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52756 in 12.48µs
gluetun                   | 2023-09-25T19:55:17Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T19:55:22Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52762 in 12.48µs
gluetun                   | 2023-09-25T19:55:23Z INFO [healthcheck] healthy!

@akutruff

Are you restarting the port forward process after the VPN restarts?

@akutruff

@qdm12 I just verified that the port is now being reported as 0 when there's a healthcheck failure. The lines with port-mapper in the logs below show the output of the control server for the port.

gluetun                   | 2023-09-25T20:24:31Z INFO [http server] 200 GET /portforwarded wrote 15B to 127.0.0.1:56848 in 20.17µs
port-mapper           | 53986
gluetun                   | 2023-09-25T20:24:51Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T20:24:59Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun                   | 2023-09-25T20:24:59Z INFO [vpn] stopping
gluetun                   | 2023-09-25T20:24:59Z INFO [port forwarding] stopping
gluetun                   | 2023-09-25T20:24:59Z INFO [firewall] removing allowed port 53986...
gluetun                   | 2023-09-25T20:24:59Z INFO [vpn] starting
gluetun                   | 2023-09-25T20:25:00Z INFO [port forwarding] removing port file /tmp/gluetun/forwarded_port
gluetun                   | 2023-09-25T20:25:00Z INFO [firewall] allowing VPN connection...
gluetun                   | 2023-09-25T20:25:00Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun                   | 2023-09-25T20:25:00Z INFO [wireguard] Connecting to ***
gluetun                   | 2023-09-25T20:25:00Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun                   | 2023-09-25T20:25:00Z INFO [vpn] VPN gateway IP address: 10.2.0.1
gluetun                   | 2023-09-25T20:25:04Z INFO [healthcheck] healthy!
gluetun                   | 2023-09-25T20:25:05Z INFO [ip getter] Public IP address is ***
gluetun                   | 2023-09-25T20:25:32Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:56896 in 15.89µs
port-mapper           | 0

@clemone210 (Author)

clemone210 commented Sep 26, 2023

With the version built on 2023-09-24T16:54:36.207Z (commit 9b00763) there seems to be a change.
A few commits before, the connection got lost and the port was not reachable after 1 minute.
Now the connection somehow updates, but the actual container will still lose its connection.

This is my Plex container's log when I do a fresh restart:

Sep 26, 2023 10:03:44.948 [140519956605584] DEBUG - PublicAddressManager: Starting.
Sep 26, 2023 10:03:44.948 [140519956605584] DEBUG - PublicAddressManager: Obtaining public address and mapping port.
Sep 26, 2023 10:03:44.949 [140519956605584] DEBUG - NetworkInterface: Starting watch thread.
Sep 26, 2023 10:03:44.949 [140519895755576] DEBUG - PublicAddressManager: Obtaining public IP.
Sep 26, 2023 10:03:44.949 [140519895755576] DEBUG - [HCl#d] HTTP requesting GET https://v4.plex.tv/pms/:/ip
Sep 26, 2023 10:03:44.949 [140519889427256] DEBUG - NAT: UPnP, attempting port mapping.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkInterface: Notified of network changed (force=0)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Detected primary interface: 10.80.0.2
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Network interfaces:
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG -  * 1 lo (127.0.0.1) (00-00-00-00-00-00) (loopback: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG -  * 365 eth0 (172.22.0.4) (02-42-AC-16-00-04) (loopback: 0)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Creating NetworkServices singleton.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkServices: Initializing...
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - Network change for advertiser.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32414
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - Network change for advertiser.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32410
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - Network change for advertiser.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32412
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Network change for browser (polled=0), closing 0 browse sockets.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32413
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 127.0.0.1 on broadcast address 127.255.255.255 (index: 0)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 172.22.0.4 on broadcast address 172.22.255.255 (index: 1)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Network change for browser (polled=1), closing 0 browse sockets.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 127.0.0.1 on broadcast address 127.255.255.255 (index: 0)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 172.22.0.4 on broadcast address 172.22.255.255 (index: 1)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Network change for browser (polled=0), closing 0 browse sockets.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:1901
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 172.22.0.4 on broadcast address 239.255.255.250 (index: 0)

Here is some log output from when I noticed the connection was not possible anymore:

Sep 26, 2023 10:19:41.001 [140519832599352] DEBUG - MyPlex: sendMapping resetting state - previous mapping state: 'Mapped'.
Sep 26, 2023 10:19:41.001 [140519832599352] DEBUG - MyPlex: mapping state set to 'Unknown'.
Sep 26, 2023 10:19:41.002 [140519855803192] DEBUG - Push: Processing new content in section 2 for 18 users.
Sep 26, 2023 10:19:41.005 [140519832599352] DEBUG - MyPlex: Sending Server Info to myPlex (user=XXXXXXXX, ip=194.126.177.37, port=50842)
Sep 26, 2023 10:19:41.005 [140519832599352] DEBUG - [HCl#52] HTTP requesting POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx
Sep 26, 2023 10:19:41.262 [140519913012024] DEBUG - [HttpClient/HCl#52] HTTP/2.0 (0.3s) 201 response from POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: Published Mapping State response was 201
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: Got response for d9ec52012XXXXc107851d56XXX45acXXX124033 ~ registered 194.126.177.37:50842
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: updating mapped state - current state: 'Mapped'
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: mapping state set to 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: async reachability check - current mapped state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: Requesting reachability check.
Sep 26, 2023 10:19:41.263 [140519832599352] DEBUG - [HCl#53] HTTP requesting PUT https://plex.tv/api/servers/d9ec5XXXXXXXXXX1d56e2e645acd6e124033/connectivity?X-Plex-Token=xxxxxxxxxxxxxxxxxxxx&asyncIdentifier=9d83ceb4-6XXX-4f31-aXXXc-36de736b3952
Sep 26, 2023 10:19:41.383 [140519913012024] DEBUG - [HttpClient/HCl#53] HTTP/2.0 (0.1s) 200 response from PUT https://plex.tv/api/servers/d9ec5XXXX062dXXXXX51dXXX2e64XXXXXe124033/connectivity?X-Plex-Token=xxxxxxxxxxxxxxxxxxxx&asyncIdentifier=9d83ceb4-XXXX-XXXX-XXXX-36de736b3952 (reused)
Sep 26, 2023 10:19:41.383 [140519830489912] DEBUG - MyPlex: sendMapping resetting state - previous mapping state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.383 [140519830489912] DEBUG - MyPlex: mapping state set to 'Unknown'.
Sep 26, 2023 10:19:41.385 [140519830489912] DEBUG - MyPlex: Sending Server Info to myPlex (user=XXXXXX, ip=194.126.177.37, port=50842)
Sep 26, 2023 10:19:41.385 [140519830489912] DEBUG - [HCl#54] HTTP requesting POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx
Sep 26, 2023 10:19:41.559 [140519913012024] DEBUG - [HttpClient/HCl#54] HTTP/2.0 (0.2s) 201 response from POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx (reused)
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: Published Mapping State response was 201
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: Got response for d9ec5XXXXXXdc1078XXXXXXXXXXXXXXXX33 ~ registered 194.126.177.37:50842
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: updating mapped state - current state: 'Mapped - Publishing'
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: mapping state set to 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: async reachability check - current mapped state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: we already have requested a connectivity refresh for async identifier 9d83ceb4XXXXXXXXXXXXXXXXX36b3952 which has not yet expired.
Sep 26, 2023 10:19:46.375 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] EventSource: Got event [data] '<Message address="194.126.177.37" port="50842" asyncIdentifier="9XXXXXXXXXXXXXXXXXXXX952" connectivity="0" command="notifyConnectivity"/>'
Sep 26, 2023 10:19:46.376 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] PubSub: Got notified of reachability for async identifier 9d83ceb4-643c-4f31-af5c-36de736b3952: 0 for 194.126.177.37:50842 (responded in 4992 ms)
Sep 26, 2023 10:19:46.376 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] MyPlex: reachability check - current mapping state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:46.376 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] MyPlex: mapping state set to 'Mapped - Not Published (Not Reachable)'.

Within Gluetun there is no log output past the initial start.

@qdm12 (Owner)

qdm12 commented Sep 26, 2023

@clemone210 Please pull the latest image and run it with LOG_LEVEL=debug; I've added debug logs to the 'keep port forward' part in commit 53cbd83 (built today, 2023-09-26). Let's see what the logs say.

but the actual container will still lose its connection.

You are talking about internet --> forwarded port through Gluetun --> Plex container, correct?

If so, did you maybe have any internal VPN restarts (due to being unhealthy)?

@akutruff as @clemone210 mentioned, what you experience is likely the bug in Gluetun that was fixed only 3 days ago; are you sure you are running the latest image? I also answered on the closed issue. If you are running an image built on or after 2023-09-24 and still experience the problem, let me know!

@akutruff

@qdm12 I pulled the latest tagged image just now and will try. I also see you have an image tagged pr-1742. In general, will the latest tag have any of these PRs in it? Thanks.

@Friday13th87

Friday13th87 commented Sep 26, 2023

I pulled pr-1742 and it is a very old build; on startup it says it's over 90 days old.

I was answering in the already-closed issue. I am not using ProtonVPN but PureVPN with "FIREWALL_VPN_INPUT_PORTS", and I have the same issue: after a while I end up in an unhealthy/healthy loop where the connection is stable and working but port forwarding is lost.

I re-pulled the latest image just now but it is still
Running version latest built on 2023-09-24T16:54:36.207Z (commit 9b00763)

@clemone210 (Author)

@qdm12 when I pull the latest Docker image with the tag :latest, I am still 1 commit behind according to the log.

@akutruff

akutruff commented Sep 26, 2023

@qdm12 For the latest tagged image, I still see the behavior, but I don't think your debug statement is in this image.

gluetun | Running version latest built on 2023-09-24T16:54:36.207Z (commit 9b00763)

The port forwarding does not happen again, and the control server still returns 0.

gluetun  | 2023-09-26T14:27:55Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun  | 2023-09-26T14:28:03Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun  | 2023-09-26T14:28:03Z INFO [vpn] stopping
gluetun  | 2023-09-26T14:28:03Z INFO [port forwarding] stopping
gluetun  | 2023-09-26T14:28:03Z INFO [firewall] removing allowed port 65103...
gluetun  | 2023-09-26T14:28:03Z INFO [port forwarding] removing port file /tmp/gluetun/forwarded_port
gluetun  | 2023-09-26T14:28:03Z INFO [vpn] starting
gluetun  | 2023-09-26T14:28:03Z INFO [firewall] allowing VPN connection...
gluetun  | 2023-09-26T14:28:03Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun  | 2023-09-26T14:28:03Z INFO [wireguard] Connecting to ***
gluetun  | 2023-09-26T14:28:03Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun  | 2023-09-26T14:28:13Z ERROR [ip getter] Get "https://ipinfo.io/": dial tcp: lookup ipinfo.io on 10.2.0.1:53: read udp 10.2.0.2:58403->10.2.0.1:53: i/o timeout - retrying in 5s
gluetun  | 2023-09-26T14:28:14Z INFO [healthcheck] program has been unhealthy for 11s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun  | 2023-09-26T14:28:14Z INFO [vpn] stopping
gluetun  | 2023-09-26T14:28:14Z INFO [vpn] starting
gluetun  | 2023-09-26T14:28:14Z INFO [firewall] allowing VPN connection...
gluetun  | 2023-09-26T14:28:14Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun  | 2023-09-26T14:28:14Z INFO [wireguard] Connecting to ***
gluetun  | 2023-09-26T14:28:14Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun  | 2023-09-26T14:28:18Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36074 in 12.86µs
gluetun  | 2023-09-26T14:28:23Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36082 in 12.431µs
gluetun  | 2023-09-26T14:28:28Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36090 in 13.53µs
gluetun  | 2023-09-26T14:28:28Z ERROR [ip getter] Get "https://ipinfo.io/": dial tcp: lookup ipinfo.io on 10.2.0.1:53: read udp 10.2.0.2:34539->10.2.0.1:53: i/o timeout - retrying in 10s
gluetun  | 2023-09-26T14:28:29Z INFO [healthcheck] healthy!
gluetun  | 2023-09-26T14:28:33Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36100 in 14.87µs

@clemone210 (Author)

but the actual container will still lose its connection.

You are talking about internet --> forwarded port through Gluetun --> Plex container, correct?

If so, did you maybe have any internal VPN restarts (due to being unhealthy)?

So, for example, in Gluetun the forwarded port is 37706 while in Plex the auto-allocated port is 56146.
I am honestly not sure why the ports are different, but locally Plex always listens on 32400, which is also what my Cloudflare tunnel connects to.
At the moment the port also remains the same within the Plex container, but as Plex checks whether the port mapping is healthy, it fails at some point. After the failure it works again with the same port for a short time, but then it keeps looping between working and not working in Plex.

Maybe the debug logging will bring some light into the dark.

@Stetsed

Stetsed commented Sep 27, 2023

I am currently experiencing the same issue and have not found a workaround, including stopping and starting the VPN via the API. The only thing that fixes it is stopping and restarting the container. It does seem to happen AFTER a healthcheck fails and the VPN restarts, so as long as there is no interruption in the connection, it works.

@qdm12 (Owner)

qdm12 commented Sep 28, 2023

My apologies everyone:

  • (more importantly) the port forwarding issue is indeed still there; I'm working on a final fix... it should be done today (tldr: it stays off after being stopped, so it doesn't start again afterwards.. it's a lot more complicated than that, but I'll spare you the details 😄)
  • the previous debug logs commit was bad; I re-pushed it as 7793495

@qdm12 (Owner)

qdm12 commented Sep 28, 2023

d4df872 should finally fix it for good.
Previously I only tested the case where it would be unhealthy from the start (never port forwarded); now I tested that it does re-trigger port forwarding after a successful port forward -> unhealthy VPN restart (by disconnecting my ethernet cable lol, I didn't find a fancier way to do it).

Let me know if this is fixed please 🙏 Thanks!!!!

@qdm12 qdm12 pinned this issue Sep 28, 2023
@clemone210 (Author)

So for the Gluetun image it seems to be okay.
The debug output shows that the port is maintained, and that it is the same port being maintained.

My problem still exists and I am not sure what is causing it. Furthermore, the forwarded port within Gluetun never matches the one which is (automatically) exposed in the Plex container.

@ZekuX

ZekuX commented Sep 29, 2023

Thank you for your hard work. I can confirm that with the latest version the problem sadly still exists: after a while the container isn't responsive anymore and only a restart fixes it.

@fizzxed

fizzxed commented Sep 30, 2023

So for the Gluetun image it seems to be okay. The debug output shows that the port is maintained, and that it is the same port being maintained.

My problem still exists and I am not sure what is causing it. Furthermore, the forwarded port within Gluetun never matches the one which is (automatically) exposed in the Plex container.

I don't think Gluetun supports UPnP, and as such you will have to manually specify the port to forward in Plex using the one Gluetun gets from ProtonVPN. Maybe you can forward 32400 through Gluetun for LAN access and hope and pray the WAN port you manually set never changes. See this. I don't think they allow setting the WAN port through their web API, but maybe it's undocumented somewhere.

Edit: Perhaps it is possible to update the Plex public/WAN port through the web API, since this Python API apparently can do it, but I admit I spent all of 2 minutes looking at it and have not verified. You could then add a cron job that periodically updates the port with the one Gluetun reports, or maybe, if Gluetun someday supported webhooks, we could spin up something to do it on port change?
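
As a rough sketch of that cron-job idea (entirely unverified: it assumes Plex honors the ManualPortMappingMode/ManualPortMappingPort preferences through its /:/prefs endpoint, and PLEX_TOKEN is a placeholder for your token):

#!/bin/sh
# push Gluetun's forwarded port into Plex's manual port mapping preference
port="$(cat /tmp/gluetun/forwarded_port)"
curl -X PUT "http://localhost:32400/:/prefs?ManualPortMappingMode=1&ManualPortMappingPort=${port}&X-Plex-Token=${PLEX_TOKEN}"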

@gmillerd

Make sure that the IP address you are using from Proton (which is definitely not dedicated) doesn't already have someone else using that port; otherwise, server-hop to a new one and try again. Even if you yourself connect, port forward, disconnect the VPN and try again, that port will still be bound for a considerable amount of time and you will not be able to rebind to it, as Proton's endpoint still has it in use.

@Friday13th87

For me it was similar: after 4 days (with the current version) port forwarding stopped working. I am using a cron script which checks whether port forwarding is still working and restarts the container if not, so I don't care anymore; but the script did trigger last night, so port forwarding had indeed stopped working.

Before the last update it stopped working at least once a day, so it has gotten much better, but it is not totally solved.

@CplPwnies

For me it was similar: after 4 days (with the current version) port forwarding stopped working. I am using a cron script which checks whether port forwarding is still working and restarts the container if not, so I don't care anymore; but the script did trigger last night, so port forwarding had indeed stopped working.

Before the last update it stopped working at least once a day, so it has gotten much better, but it is not totally solved.

I'm glad to know it's not just me. Since this seems like a slightly different issue from what is being discussed in this thread (though very adjacent), I deleted my original comment and opened issue #1891

@AlbyGNinja

AlbyGNinja commented Oct 3, 2023

I want to add a problem to this.
Whenever it happens and the service is restarted, the containers passing through Gluetun stop working properly unless a docker restart xyz is issued, just like in this issue: #405

@N47H4N

N47H4N commented Oct 3, 2023

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container.

Any ideas, please?

@qdm12 (Owner)

qdm12 commented Oct 7, 2023

@akutruff indeed, sorry, I completely forgot. You can do

docker exec gluetun iptables --append INPUT -i tun0 -p tcp --dport 38229 -j ACCEPT

@syss

syss commented Oct 12, 2023

I found that the forwarded port becomes defunct after a while.
After starting, I get full up/down speeds.
While it works, there are small interruptions of the upload of ~2 seconds before it goes back to full speed.
This works for, say, 30 minutes, and then the upload speed plummets.
Sometimes it helped to restart the container, sometimes it helped to change the server.
But it stopped working overall after a time, even though I get "port OK" messages from natpmpc.
I had this issue with rtorrent in a container and WireGuard on the host, and now with qBittorrent and Gluetun both in containers.

In the failing state I see DHT has 0 hosts; Kali Linux torrents download and upload, but Raspberry Pi torrents don't start. Other magnet links also do not start downloading metadata.

To me, ProtonVPN with WireGuard and port forwarding is just not working. I have a strong suspicion that the flaw is on their side.

I'll try the OpenVPN option or get my money back, because it just doesn't work for me.

A pity that Mullvad closed their ports (which I used before).

@qdm12 (Owner)

qdm12 commented Oct 16, 2023

@syss Thanks for clarifying! Let me know how it goes with OpenVPN, and others, feel free to chime in with what you find out. I'll keep this issue open, but won't mark it as urgent/blocker for the next release anymore.

@syss

syss commented Oct 16, 2023

The OpenVPN connection seems to stay intact.
However, the down/upload speeds are very flaky, going from 100 Mbit down to a few Kbit and up again.
I'm giving up on it; for me the provider does not deliver. I am getting my money back.

Edit: Port forwarding works on OpenVPN and stays open, but with said quality it is not usable for me.

Edit 2: after a while, lots of peers but no upload with OpenVPN.

@syss

syss commented Oct 28, 2023

After tweaking a lot of settings I can now finally say that ProtonVPN is working nicely with WireGuard.

qBittorrent:
The biggest issue I had was that in qBittorrent the option Enable local peer discovery was enabled and caused lots and lots of network issues. After disabling it, things worked fine for me.
Additionally, I needed to reduce the number of connections.
I have a 100/20 Mbit connection and use the following settings:

  • global connections: 750
  • max con. per torrent: 50
  • global max number of upload slots: 50
  • max number of upload slots per torrent: 10

VPN settings:

VPN_SERVICE_PROVIDER=custom
VPN_TYPE=wireguard
VPN_PORT_FORWARDING=on
VPN_PORT_FORWARDING_PROVIDER=protonvpn
VPN_ENDPOINT_IP=<your ip here>
VPN_ENDPOINT_PORT=51820
WIREGUARD_PRIVATE_KEY=<your priv key here>
WIREGUARD_PUBLIC_KEY=<your pub key here>
WIREGUARD_ADDRESSES=10.2.0.2/32
VPN_DNS_ADDRESS=10.2.0.1

I was missing the VPN_PORT_* options before.

When it comes to port forwarding and updating the port, each program has its own method.

#!/bin/bash

GLUETUN_URL=http://127.0.0.1:8000
QBITTORRENT_URL=https://myurl/qbittorrent

#get the port from gluetun control server and modify it a bit
json="$(curl -L "${GLUETUN_URL}/v1/openvpn/portforwarded" 2>/dev/null | sed 's/port/listen_port/g')"
#set the port in qbittorrent
curl -i -X POST -d "json=${json}" "${QBITTORRENT_URL}/api/v2/app/setPreferences"

but I use this container here: https://hub.docker.com/r/technosam/qbittorrent-gluetun-port-update

So, to forward the exposed port from ProtonVPN, you could somehow tell your firewall/router to do a port trigger from the ProtonVPN port to your Plex port.

@alcroito

So, to forward the exposed port from ProtonVPN, you could somehow tell your firewall/router to do a port trigger from the ProtonVPN port to your Plex port.

I feel like this is something Gluetun should be able to do automatically when using ProtonVPN or any other VPN that returns a dynamic port, by forwarding it to some static port that the user provides as configuration.

Basically: establish the VPN connection, extract the dynamic port from "${GLUETUN_URL}/v1/openvpn/portforwarded", and then use something like socat to forward it to the given static port.
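
A minimal sketch of that socat idea, run inside Gluetun's network namespace (32400 is an example static port where the app would listen; the control server endpoint is the one used above):

#!/bin/sh
# fetch the dynamic forwarded port from the control server, then relay it to the static port
port="$(wget -qO- http://127.0.0.1:8000/v1/openvpn/portforwarded | tr -dc '0-9')"
socat TCP4-LISTEN:"${port}",fork,reuseaddr TCP4:127.0.0.1:32400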

@qdm12 (Owner)

qdm12 commented Nov 10, 2023

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT in commit 6122911, let me know if it works 😉 (it uses iptables PREROUTING REDIRECT instructions).
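
For anyone curious, a hand-written rule of that kind would look roughly like this (example port numbers; not necessarily the exact rule Gluetun generates):

# redirect traffic arriving on the forwarded port (43785 here) to the configured
# listening port (32400 here); PREROUTING matches the input interface with -i
iptables -t nat --append PREROUTING -i tun0 -p tcp --dport 43785 -j REDIRECT --to-ports 32400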

@alcroito

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT in commit 6122911, let me know if it works 😉 (it uses iptables PREROUTING REDIRECT instructions).

Thanks a lot! I hope it works.
Unfortunately I can't test it yet, because the Docker image has not been updated yet.

@KptCheeseWhiz

This might be unrelated, but whenever the ProtonVPN WireGuard connection restarts, the forwarded port dies and you need to re-listen on that port; or maybe this is just an issue with Deluge. Here's a script I am using to fix the issue using inotifyd; it also updates the forwarded port if it changes for some reason (it should be straightforward to modify it to work with qBittorrent):

#!/bin/bash
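# bash is required here: the script relies on process substitution (< <(...)) below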

FORWARDED_PORT_FILE=/gluetun/forwarded_port

while [ ! -f "$FORWARDED_PORT_FILE" ] || [ -z "$(cat "$FORWARDED_PORT_FILE")" ]; do
  echo "info: waiting for forwarded port file.."
  sleep 5
done

{
  FORWARDED_PORT=$(cat "$FORWARDED_PORT_FILE")
  echo "info: forwarded port is $FORWARDED_PORT"

  while ! nc -z 0.0.0.0 8112 &>/dev/null; do
    echo "info: waiting for deluge to wake up.."
    sleep 5
  done

  deluge-console -c /config "config -s listen_ports [$FORWARDED_PORT,$FORWARDED_PORT]"

  echo "info: watching if the forwarded port has been changed.."
  while :; do
    while read EVENT FILE; do
      if [ "$EVENT" == "x" ]; then
        while [ ! -f "$FORWARDED_PORT_FILE" ] || [ -z "$(cat "$FORWARDED_PORT_FILE")" ]; do
          echo "info: waiting for forwarded port file to be recreated.."
          sleep 5
        done
      fi

      NEW_PORT=$(cat "$FILE")
      if [ "$NEW_PORT" -ne "$FORWARDED_PORT" ]; then
        echo "info: forwarded port has been changed to $NEW_PORT (was $FORWARDED_PORT)"
        FORWARDED_PORT=$NEW_PORT
      else
        echo "info: forwarded port unchanged (is $FORWARDED_PORT)"
        # We need to reset the port since it might be dead and deluge is not aware
        deluge-console -c /config "config -s listen_ports [$((FORWARDED_PORT+1)),$((FORWARDED_PORT+1))]"
        sleep 1
      fi
      deluge-console -c /config "config -s listen_ports [$FORWARDED_PORT,$FORWARDED_PORT]"
    done < <(inotifyd - "$FORWARDED_PORT_FILE:wx")
  done
} &

@JeremyGuinn

JeremyGuinn commented Nov 21, 2023

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT in commit 6122911, let me know if it works 😉 (it uses iptables PREROUTING REDIRECT instructions).

@qdm12, I built the Dockerfile from commit 6122911.
The container starts and successfully connects using the basic config without any port forwarding, but as soon as VPN_PORT_FORWARDING=on is set, I get the following crash:

$ docker build -t gmcgaw/gluetun .
$ docker run -it --rm --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=protonvpn \
  -e VPN_TYPE=openvpn -e VPN_PORT_FORWARDING=on \
  -e OPENVPN_USER=test -e OPENVPN_PASSWORD=test \
  -p 8000:8000/tcp \
  qmcgaw/gluetun

gluetun  | panic: runtime error: invalid memory address or nil pointer dereference
gluetun  | [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x75b3a6]
gluetun  |
gluetun  | goroutine 6 [running]:
gluetun  | github.com/qdm12/gotree.(*Node).Appendf(...)
gluetun  |      github.com/qdm12/gotree@v0.2.0/node.go:37
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.PortForwarding.toLinesNode({0xc00029c7d3?, 0xc0001eaf30?, 0xc0001eae70?, 0xc00029c7d4?})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/portforward.go:109 +0x146
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.Provider.toLinesNode({0xc0001eae40, {{0xc000012039, 0x7}, {{0x0, 0xffff00000000}, 0xc00012a000}, {0xc0001eae50, 0x1, 0x1}, {0x0, ...}, ...}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/provider.go:94 +0x2ca
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.VPN.toLinesNode({{0xc000012039, 0x7}, {0xc0001eae40, {{0xc000012039, 0x7}, {{...}, 0xc00012a000}, {0xc0001eae50, 0x1, 0x1}, ...}, ...}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/vpn.go:87 +0xb8
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.Settings.toLinesNode({{0xc0001eae00, 0xc00029c738}, {{{0x0, 0xffff7f000001}, 0xc00012a000}, 0xc00029c739, {0xc00029c73a, 0xc00029c770, {{...}, 0xc00029c778, ...}, ...}}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/settings.go:147 +0xb8
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.Settings.String({{0xc0001eae00, 0xc00029c738}, {{{0x0, 0xffff7f000001}, 0xc00012a000}, 0xc00029c739, {0xc00029c73a, 0xc00029c770, {{...}, 0xc00029c778, ...}, ...}}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/settings.go:141 +0x31
gluetun  | main._main({0x108da80, 0xc000111540}, {{0x1086f58, 0x7}, {0x1086f60, 0x7}, {0x10885f0, 0xf}}, {0xc000114050, 0x1, ...}, ...)
gluetun  |      ./main.go:278 +0x16b0
gluetun  | main.main.func1()
gluetun  |      ./main.go:92 +0x12c
gluetun  | created by main.main in goroutine 1
gluetun  |      ./main.go:91 +0x5e5

Looks like node is being defined after the new log you've added
6122911#diff-6a711fc9088a325002bd9769a59d04cd3dfb31e7c658f5e51b596f6cf9ea0168R109-L97

After a little switcheroo I've got it running, but it then fails when trying to create the NAT redirect; is it supposed to be -i tun0?

ERROR [vpn] redirecting port in firewall: 
  redirecting port: redirecting IPv4 source port 46742 to destination port 55660 on interface tun0: 
  command failed: "iptables -t nat --append PREROUTING -o tun0 -d 127.0.0.1 -p tcp --dport 46742 -j REDIRECT --to-ports 55660":
    iptables v1.8.9 (legacy): Can't use -o with PREROUTING

@qdm12 (Owner)

qdm12 commented Nov 23, 2023

@alcroito my bad, the automated build failed because of a linter error.

@KptCheeseWhiz indeed, what a disastrous commit 😄 I re-pushed the commit as 4105f74; it should fix both issues you successfully spotted! 😉

@Michsior14

Michsior14 commented Nov 26, 2023

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT in commit 6122911, let me know if it works 😉 (it uses iptables PREROUTING REDIRECT instructions).

For me this didn't work with Transmission (commit 4105f74; the port was marked as closed). I've created a Docker mod for the LinuxServer container instead. If someone is interested, it can be found here.

@SnoringDragon

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT in commit 6122911, let me know if it works 😉 (it uses iptables PREROUTING REDIRECT instructions).

For me this didn't work with Transmission (commit 4105f74; the port was marked as closed). I've created a Docker mod for the LinuxServer container instead. If someone is interested, it can be found here.

I would assume the issue is that when Transmission announces to trackers, it includes the callback port set in its config, not the port it is actually reachable on. As a result, you still need some intermediary code, such as what you are working on or the Linux container I have (link). The only way I could imagine to handle this natively within Gluetun/your torrent software would be to get NAT-PMP working properly, which to my knowledge it is not (at least with qBittorrent).

@SnoringDragon

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container.
Any ideas, please?

I actually have a container I built to solve specifically this problem, which I posted under another issue. I hope this helps; I plan to update it with the listed suggestions when I get a chance, but I have been busy as I am a student.

Would you mind sharing your qBittorrent connection config? Every time I manually set the port in my qBittorrent config to the port supplied by Gluetun, it shows Disconnected

Apologies for taking forever to get back to this, but if you're still looking for an answer, here's what I have:

gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    restart: unless-stopped
    labels:
      #Domain routing 

      com.centurylinklabs.watchtower.monitor-only: true
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - ${DIR}/config/gluetun:/gluetun
      - ${DIR}/tmp/gluetun:/tmp/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - VPN_TYPE=wireguard
      - VPN_ENDPOINT_IP=[IP]
      - VPN_ENDPOINT_PORT=[PORT]
      - WIREGUARD_PUBLIC_KEY="[Public Key]"
      - WIREGUARD_PRIVATE_KEY="[Private Key]"
      - WIREGUARD_ADDRESSES="10.2.0.2/32"
      - VPN_PORT_FORWARDING=on
      - VPN_PORT_FORWARDING_PROVIDER=protonvpn
    networks:
      - external-network
      - qbittorrent-proxy

  qbittorrent:
    image: qbittorrentofficial/qbittorrent-nox:latest
    container_name: qbittorrent
    restart: unless-stopped
    labels:
      com.centurylinklabs.watchtower.monitor-only: true
    volumes:
      - ${DIR}/config:/config
      - ${DOWNLOADS}:/downloads
      - /media/{user}/Media/torrent:/downloads2
    network_mode: "service:gluetun"

  qmap:
    image: snoringdragon/gluetun-qbittorrent-port-manager:latest
    container_name: qmap
    restart: unless-stopped
    labels:
      com.centurylinklabs.watchtower.monitor-only: true
    volumes:
      - ${DIR}/tmp/gluetun:/tmp/gluetun
    environment:
      QBITTORRENT_SERVER: localhost
      QBITTORRENT_PORT: 8080
      QBITTORRENT_USER: "[username]"
      QBITTORRENT_PASS: "[password]"
      PORT_FORWARDED: /tmp/gluetun/forwarded_port
      HTTP_S: http
    network_mode: "service:gluetun"

I have recently also done a bunch of updates for improved compatibility.

@Stetsed

Stetsed commented Dec 17, 2023

So after some investigating, it seems like the problem isn't with Gluetun not port forwarding, but with qBittorrent losing the port binding when the tunnel restarts. A really hacky way around this that I have found is to toggle the listening address (to the tunnel address and then back to all addresses), which forces it to rebind to the port and seems to fix the issue. The script below just netcats the IP and port, and if the port is closed it forces the rebind. I have pasted it below and will report if I have any issues (it would be easy to integrate this with the other qbittorrent-port-manager scripts).

#!/bin/bash

cd /root/docker/arr

while true; do
	while [[ ! -f tmp/ip || ! -f tmp/forwarded_port ]]; do
		echo "Waiting for gluetun to connect..."
		sleep 1
		FILE_NO_EXIST=1
	done

	if [[ $FILE_NO_EXIST -eq 1 ]]; then
		FILE_NO_EXIST=0
		echo "gluetun connected"
		sleep 240
	fi

	nc -v -z -w 3 $(cat tmp/ip) $(cat tmp/forwarded_port)

	if [[ $? -eq 0 ]]; then
		sleep 60
	else
		echo "$(date -u +%Y-%m-%d-%H:%M) Port is closed, forcing qbittorent to relisten" | tee -a tmp/port_checker.log

		curl -s -c tmp/qbittorrent-cookies.txt --data "username=$USER&password=$PASSWORD" https://$HOST/api/v2/auth/login >/dev/null

		curl -b tmp/qbittorrent-cookies.txt -X POST https://$HOST/api/v2/app/setPreferences --data 'json={"current_interface_address":"10.2.0.2"}'
		curl -b tmp/qbittorrent-cookies.txt -X POST https://$HOST/api/v2/app/setPreferences --data 'json={"current_interface_address":"0.0.0.0"}'

		sleep 240
	fi
done

@jakesmorrison

jakesmorrison commented Jan 4, 2024

EDIT: I changed "container:vpn" to "service:vpn" and restarted the containers. Instead of resuming all the qBittorrent torrents at once, I did them in batches. It has been 5 hours, I have not had a Gluetun crash, and my port is still open.

EDIT2: Overnight Gluetun crashed and the ports closed.

I am also using ProtonVPN (WireGuard) and Gluetun.

I recently had to restart my Gluetun container, which had been active for ~2 months (working great). After restarting the container I am now experiencing some port forwarding problems.

I start the gluetun, qbittorrent and qbittorrent-natmap containers. For the first 10 minutes or so everything works and the port is open. Then Gluetun crashes, and the port stays closed after Gluetun reconnects. If I manually restart all containers, the port is open again.

Crash log:

2024-01-04T22:39:38Z DEBUG [port forwarding] refreshing port forward since 45 seconds have elapsed
2024-01-04T22:39:38Z DEBUG [port forwarding] port forwarded 43785 maintained
2024-01-04T22:39:53Z INFO [healthcheck] unhealthy: dialing: dial tcp4 104.16.132.229:443: i/o timeout
2024-01-04T22:40:01Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
2024-01-04T22:40:01Z INFO [vpn] stopping
2024-01-04T22:40:01Z INFO [port forwarding] stopping
2024-01-04T22:40:01Z INFO [firewall] removing allowed port *******
2024-01-04T22:40:01Z DEBUG [firewall] iptables --delete INPUT -i tun0 -p tcp --dport 43785 -j ACCEPT
2024-01-04T22:40:01Z DEBUG [firewall] ip6tables-nft --delete INPUT -i tun0 -p tcp --dport 43785 -j ACCEPT
2024-01-04T22:40:01Z DEBUG [firewall] iptables --delete INPUT -i tun0 -p udp --dport 43785 -j ACCEPT
2024-01-04T22:40:01Z DEBUG [firewall] ip6tables-nft --delete INPUT -i tun0 -p udp --dport 43785 -j ACCEPT
2024-01-04T22:40:01Z INFO [port forwarding] removing port file /tmp/gluetun/forwarded_port
2024-01-04T22:40:01Z DEBUG [wireguard] closing controller client...
2024-01-04T22:40:01Z DEBUG [wireguard] removing IPv6 rule...
2024-01-04T22:40:01Z DEBUG [wireguard] removing IPv4 rule...
2024-01-04T22:40:01Z DEBUG [wireguard] shutting down link...
2024-01-04T22:40:01Z DEBUG [wireguard] deleting link...
2024-01-04T22:40:01Z INFO [vpn] starting
2024-01-04T22:40:01Z DEBUG [wireguard] Wireguard server public key: *******
2024-01-04T22:40:01Z DEBUG [wireguard] Wireguard client private key: ICu...1c=
2024-01-04T22:40:01Z DEBUG [wireguard] Wireguard pre-shared key: [not set]
2024-01-04T22:40:01Z INFO [firewall] allowing VPN connection...
2024-01-04T22:40:01Z INFO [wireguard] Using available kernelspace implementation
2024-01-04T22:40:01Z INFO [wireguard] Connecting to ****
2024-01-04T22:40:01Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
2024-01-04T22:40:01Z INFO [port forwarding] starting
2024-01-04T22:40:01Z INFO [healthcheck] healthy!
2024-01-04T22:40:01Z INFO [port forwarding] gateway external IPv4 address is *******
2024-01-04T22:40:02Z INFO [port forwarding] port forwarded is 
2024-01-04T22:40:02Z INFO [firewall] setting allowed input port 43785 through interface tun0...
2024-01-04T22:40:02Z DEBUG [firewall] iptables --append INPUT -i tun0 -p tcp --dport 43785 -j ACCEPT
2024-01-04T22:40:02Z DEBUG [firewall] ip6tables-nft --append INPUT -i tun0 -p tcp --dport 43785 -j ACCEPT
2024-01-04T22:40:02Z DEBUG [firewall] iptables --append INPUT -i tun0 -p udp --dport 43785 -j ACCEPT
2024-01-04T22:40:02Z DEBUG [firewall] ip6tables-nft --append INPUT -i tun0 -p udp --dport 43785 -j ACCEPT
2024-01-04T22:40:02Z INFO [port forwarding] writing port file /tmp/gluetun/forwarded_port
2024-01-04T22:40:02Z INFO [ip getter] Public IP address is *******
2024-01-04T22:40:47Z DEBUG [port forwarding] refreshing port forward since 45 seconds have elapsed
2024-01-04T22:40:47Z DEBUG [port forwarding] port forwarded 43785 maintained

Docker Compose

  vpn:
    container_name: vpn
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - /config/gluetun/config:/gluetun
      - /config/gluetun/tmp:/tmp/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - VPN_TYPE=wireguard
      - VPN_ENDPOINT_IP=
      - VPN_ENDPOINT_PORT=
      - WIREGUARD_PUBLIC_KEY=
      - WIREGUARD_PRIVATE_KEY=
      - WIREGUARD_ADDRESSES=
      - WIREGUARD_ALLOWED_IPS=
      - FIREWALL_OUTBOUND_SUBNETS=
      - VPN_PORT_FORWARDING=on
      - VPN_PORT_FORWARDING_PROVIDER=protonvpn
      - LOG_LEVEL=debug
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=3000
      - PGID=568
      - TZ=US/Boise
      - WEBUI_PORT=10095
    restart: unless-stopped
    network_mode: container:vpn
  qbittorrent-natmap:
    image: ghcr.io/soxfor/qbittorrent-natmap:latest
    container_name: qbittorrent-natmap
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=America/Boise
      - QBITTORRENT_SERVER=192.168.100.5
      - QBITTORRENT_PORT=10095
      - QBITTORRENT_USER=admin
      - QBITTORRENT_PASS=adminadmin
      - VPN_CT_NAME=vpn
      - VPN_IF_NAME=tun0
      - CHECK_INTERVAL=300
      - NAT_LEASE_LIFETIME=300
    depends_on:
      vpn:
        condition: service_healthy
      qbittorrent:
        condition: service_started
    network_mode: container:vpn

@jakesmorrison

@Stetsed I have been testing the port using a simple Python script. I wasn't convinced that toggling qBittorrent's binding IP address would matter, because I thought the Python script, which only uses the VPN's port and IP address, would be independent of qBittorrent.

Turns out I was wrong. I manually toggled the IP address binding and my port is now open.

@clemone210 (Author)

Fixed on my end.

@levouh

levouh commented Feb 13, 2024

What was the solution here @clemone210?

@clemone210 (Author)

What was the solution here @clemone210?

With the latest version, I do not have any error or interruption.

@kainzilla

kainzilla commented Feb 27, 2024

A hopefully helpful note for anyone looking over this issue report in the future:

  • With the current versions (2024-02-27) of the LinuxServer.io Deluge and qBittorrent containers attached to Gluetun's network,
  • With Gluetun connecting to ProtonVPN via the WireGuard protocol,
  • When Gluetun needed to restart the VPN,
  • the Deluge and qBittorrent clients would stop listening on the forwarded port specifically only on the tun0 adapter.

Using nc -zvw10 <ip> <port> to test, you can confirm that even within the torrent client container itself, the torrent client listening port on the tun0 adapter would not respond, even though Gluetun was forwarding and working as usual.

As soon as the configured IP in the torrent client is changed (whether to the tun0 address or to 0.0.0.0, or to something else and back to its original setting), the torrent client would start responding on that port, though sometimes not for long. Restarting the torrent client container would resolve the issue 100% until a VPN disconnection occurs.

A final note is that LinuxServer.io's Transmission torrent client container doesn't seem to have this issue; it survives VPN restarts for me and keeps listening without needing restarts. As far as I can tell, Gluetun and Proton VPN appear to be working great, and the port listening issue lies with the torrent clients.

@qdm12 qdm12 unpinned this issue May 1, 2024
@beastlybeast

beastlybeast commented Jun 13, 2024

As soon as the configured IP in the torrent client is changed (whether to the tun0 address or to 0.0.0.0, or to something else and back to its original setting), the torrent client would start responding on that port, though sometimes not for long. Restarting the torrent client container would resolve the issue 100% until a VPN disconnection occurs.

@kainzilla, thanks for recapping -- is there a fix for the quoted part above? I notice that when it loses connection, I too can get it to resume if I restart the containers, or if I change the network interface in qBittorrent (like from "any" to "tun") and click "save".

[Screenshot: qBittorrent network interface setting]

The port updating script I use simply updates the port:

https://codeberg.org/TechnoSam/qbittorrent-gluetun-port-update

However, I see another that fully restarts the qbittorrent container when the port changes:

https://github.com/royborgen/qbt_port_update

That said, the issue I'm having is not when the port changes. The port from ProtonVPN might not change for many days, but I still have this issue where qBittorrent becomes unconnectable until restarting the container or toggling the network interface in the qBittorrent advanced menu.

It seems like the ideal situation would be something that checked for connectability (nc -zvw10 <ip> <port>) and restarted if it wasn't connectable.
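
A minimal sketch of that watchdog (assuming Gluetun's ip and forwarded_port files are mounted at /tmp/gluetun and the torrent container is named qbittorrent; both names are placeholders):

#!/bin/bash
# restart the torrent client when its forwarded port stops answering
ip="$(cat /tmp/gluetun/ip)"
port="$(cat /tmp/gluetun/forwarded_port)"
if ! nc -zvw10 "$ip" "$port"; then
  echo "$(date -u) port $port closed, restarting torrent client"
  docker restart qbittorrent
fi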

@kainzilla

kainzilla commented Jun 13, 2024

It seems like the ideal situation would be something that checked for connectability (nc -zvw10 <ip> <port>) and restarted if it wasn't connectable.

I have two solutions for you:

  • Solution 1: I actually made a script that was intended for the specific purpose of updating the port and also checking that the port is working - and it will toggle the qBittorrent or Deluge network adapter for you if it detects that the port isn't receiving. It currently doesn't force a restart of the entire torrent container... but someone could probably add this feature in.
  • Solution 2: In my testing, the Transmission client outright didn't experience this portion of the issue - Gluetun UDP disconnections would still happen, but the client recovered gracefully from them without fail, which was a significant reliability improvement. I actually prefer this over the script. I combined Transmission with the Flood Torrent Client UI running in another container and I like it.

I think the bug in question is in the torrent clients, as opposed to anything Gluetun is doing - and I suspect it might be a libtorrent bug, as both qBittorrent and Deluge use it and both experience the issue.

Regarding the script above: it's meant to be used with LinuxServer.io's containers (see the page for links), because their containers have a super-easy way to drop in add-on scripts like that. I also have a gluetun-delay script that delays the torrent client startup, which works with the same LinuxServer.io containers.
