OMR not aggregating #3068
Comments
The MPTCP support check is now disabled for the 6.1-based release.
I seem to be experiencing this as well. Understanding that the "Support Check" page is broken, I still do not achieve any aggregated traffic. The bandwidth page shows essentially 0 traffic via the other two WANs, with the only traffic being on the master interface. The status page is fine (all green checkmarks). The "Established Connections" page shows only connections using the master interface IP (740 connections currently listed, which seems... high; essentially all are listed as ESTAB). Is the fullmesh output broken too?
I experienced this lack of aggregation as well on the previous beta of 0.60 using the 6.1 kernel, on a different OMR guest & VPS. As noted in #3067, my local OMR hardware/network/WANs aggregate using 5.4-kernel-based MPTCP on a different VPS, but I have yet to get the 6.1-MPTCP-based system to aggregate. Current environment:
"MPTCP fullmesh" tab is not broken, it doesn't have exactly the same output as 5.4 based release. |
For me it's exactly the same as for @xzjq. The dashboard shows all green, but traffic goes only through the master interface. I only have VLANs on the LAN side: Guest, IoT and LAN. The proxy is currently V2Ray, but the problem persists regardless of the proxy used.
The OMR and VPS referenced in this ticket are completely fresh, created specifically to test the 6.1-kernel-based implementation. This is "fresh out of the box": OMR is using Shadowsocks and is set to OpenVPN for the VPN traffic. I use VLANs locally, though as mentioned in the other ticket, this 6.1-based OMR guest VM is on the same hardware node as the 5.4-based OMR guest VM that aggregates just fine (i.e. using these VLANs). The prior beta installation used an OMR VM ... For this current snapshot installation, there are now 1840 connections listed in the "Established Connections" page, all using only the master WAN. This seems like a connection leak.
There are 3570 connections now on this 6.1-based system (all solely using the master WAN IPv4 address), and this router is not being used for traffic. The status page still shows all green checkmarks.
This really seems like a connection leak. Is there an intended limit? I see my 5.4-based system has 57 connections and it has been up for 24 hrs.
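If it helps narrow down the suspected leak, a rough way to count established connections per local address from the router's shell (a sketch assuming IPv4 addresses and a full `ss` binary):

```sh
# Group established TCP connections by local address; if everything sits on the
# master WAN IP, all the counts will pile up under that one address.
ss -tn state established | awk 'NR>1 {split($3, a, ":"); print a[1]}' | sort | uniq -c | sort -rn
```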
What is the result of …?
I will provide the data from two separate 6.1-based VPS installs (neither of which aggregates). First, the one mentioned so far in this ticket, created in the last day or so from the test version of the script; the VPS script was run on a fresh install of Debian 12:
net.ipv4.tcp_available_ulp = mptcp tls
ip mptcp endpoint

Second, the first 6.1-based VPS that I tried, which didn't aggregate either. This is based on the script from the beta announcement. The VPS started with Debian 11, which the VPS script upgraded:
net.ipv4.tcp_available_ulp = mptcp tls
ip mptcp endpoint
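Alongside the sysctl output above, one more VPS-side value that may be worth posting is the MPTCP limits, since stock upstream kernels accept very few additional addresses/subflows per connection unless the install script raises them. A hedged suggestion using plain iproute2, not OMR-specific tooling:

```sh
# Show how many ADD_ADDR announcements and extra subflows the VPS kernel accepts
# per connection; on a stock kernel add_addr_accepted often defaults to 0.
ip mptcp limits show

# Example of raising the limits manually for a test (values are illustrative).
ip mptcp limits set subflow 8 add_addr_accepted 8
```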
Yes. Essentially 0 traffic on the other two WANs (e.g. ~100 Kbps on each while the master WAN has 50 Mbps).
It has currently pared down the list to "only" 1100 connections, all using the master WAN device IPv4. The destination port is …
Good news: I pointed the new OMR guest VM (based on snapshot) at the older of the two 6.1-based VPS installations (referenced above) and achieved aggregation after I rebooted the OMR (not the VPS). The Established Connections page showed 4418 connections after a speed test, all using the master WAN IPv4 (only), and almost all were in ESTAB state (4393 / 4418). All but one was to port … Maybe 4,000+ active connections is normal via just one of the WANs? It seems like a lot.
Port 65101 is the Shadowsocks port; you shouldn't have that many connections to it. Are you using P2P, or really nothing?
The Established Connections page shows 6371 connections at the moment, all using the master WAN interface IPv4. When I checked last night I confirmed that they were all using unique local ports. I changed the master WAN designation last night and can confirm this follows the master WAN, not something about the other underlying device/interface (i.e. the connections all changed to the new master WAN IPv4 address, whereas they were all on the previous master WAN IPv4 before). No P2P. Just regular home internet use with some client-side VPNs, some web browsing/email/etc, and intermittent speedtest.net tests.
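To see where those connections terminate on the server side, a quick sketch for the VPS shell (port 65101 taken from the comment above; adjust if your Shadowsocks port differs):

```sh
# Count established connections to the Shadowsocks port, grouped by client address.
ss -tn state established '( sport = :65101 )' | awk 'NR>1 {split($4, a, ":"); print a[1]}' | sort | uniq -c | sort -rn
```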
I'm now also encountering another disaggregation issue I experienced before on the 6.1-based beta, where the individual WAN interfaces constantly toggle on/off. The status page shows red X's on one or more interfaces, which are subsequently replaced by green checks, and then that cycles on the next status page refresh. This was a major reason I initially used the 5.4-based OMR/VPS (i.e. the 6.1-based OMR/VPS status page would not stay "green", whereas the 5.4-based one would).

The below is using the brand new OMR snapshot VM, but it is connected to the older 6.1-based VPS. Is there a way to upgrade the beta VPS to snapshot, or do I have to destroy the VPS and recreate it?

Right now, the eth0.300 and eth0.400 interfaces are both toggling, so there are hundreds of these messages in logread, approximately one set every 7 seconds. Like the beta OMR/VPS, it didn't always do this; sometimes it was stable. It was stable last night, but degenerated to this state hours later.

Dec 12 21:26:40 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.300
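To put a number on how often each interface flaps, the tracker messages can be followed live from the router shell (standard OpenWrt `logread`; the interface names are the ones from this setup):

```sh
# Follow the system log and keep only the tracker lines for the two flapping VLAN interfaces.
logread -f | grep -E 'post-tracking|eth0\.(300|400)'
```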
Can you, on the router, do a …?
Now, if I do something like …

This corresponds to hundreds of …
Dec 14 21:06:44 vps omr-admin.py[1299]: ERROR: Exception in ASGI application

(there is indeed no such file, though an /etc/openvpn/tun0.conf file does exist)

And also these curious errors:

Dec 14 21:06:44 vps omr-service[2206466]: Error: Nexthop has invalid gateway.
Dec 14 21:20:05 vps ss-server[777]: getpeername: Transport endpoint is not connected

As well as hundreds of these; as you can see, several per second:

Dec 14 21:06:44 vps ss-server[1389]: server recv: Connection reset by peer
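For completeness, the same errors can be followed live on the Debian VPS; this is just generic journald filtering using the process names visible in the lines above (assuming those services log to the system journal):

```sh
# Tail the journal and keep only the three processes that show up in the errors above.
journalctl -f | grep -E 'omr-admin|omr-service|ss-server'
```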
The omr-admin error was fixed a few weeks ago; launch the VPS snapshot install script again.
Hi, I updated the router and VPS to the latest snapshot releases. What am I missing? :/ Hope you had a wonderful Christmas!
Did you try to change the master interface?
I have tried both. With the 5.4 kernel on the router and VPS everything was fine.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days
Hello, after updating OMR and the VPS to 6.1rc2, omr-bypass finally works. But the main problem still persists, and I cannot find the cause. WAN1 -> Metric=3 = the WAN the traffic is going through. However, Network -> MPTCP -> MPTCP Fullmesh is showing: … where "eth3" is WAN2 and "eth4" is WAN1. Shouldn't "id" be equivalent to the metric of the WANs?
id is not related to metric.
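For reference, on the 6.1 kernel the ids shown in the fullmesh tab appear to correspond to the kernel's MPTCP endpoint indexes, which are simply assigned in the order the endpoints were added; they can be listed directly with plain iproute2 (a hedged illustration, not OMR documentation):

```sh
# List the MPTCP endpoints the kernel knows about; "id" here is just the endpoint
# index assigned when the address was added, unrelated to interface metrics.
ip mptcp endpoint show
```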
OK, thank you. Yes, I did. I already reinstalled OMR + the VPS without importing an old config.
What is the result of …?
Screenshots as wished. However, the behavior of omr-test-speed is absolutely strange... both tests, for eth3 and eth4, don't even start downloading. But if I download https://nbg1-speed.hetzner.com/1GB.bin via browser, traffic goes over WAN1, and at full speed. The status page is all green and the Internet in general is working, yet only over one WAN.
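As a cross-check that does not depend on omr-test-speed, each WAN can be exercised directly with curl bound to the interface (interface names and test URL taken from this thread; if a WAN has no usable route when bound directly, the transfer will stall much like the script does):

```sh
# Pull the same test file once per WAN, binding the socket to each interface in turn.
curl --interface eth3 -o /dev/null https://nbg1-speed.hetzner.com/1GB.bin
curl --interface eth4 -o /dev/null https://nbg1-speed.hetzner.com/1GB.bin
```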
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days |
Expected Behavior
Together with the VPS, both of my Internet lines should be used via MPTCP.
MPTCP Support-Check should show a working state for both links.
Current Behavior
Since the update to 0.60beta1, this is no longer the case. (Of course I updated the VPS to Debian 12 and then executed your script; the keys do match.)
MPTCP Support-Check states unsupported on both links. Therefore traffic is only going through the defined master interface.
I'm using Shadowsocks as proxy and Glorytun TCP as VPN.
What am I missing?
Thanks for your work!
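(Since the built-in Support Check is disabled on 6.1-based releases, a rough manual check that MPTCP is active, runnable on either end, could look like the following; the counter names are standard kernel MPTCP statistics, not OMR-specific output:)

```sh
# 1) Is MPTCP enabled in the kernel at all?
sysctl net.mptcp.enabled

# 2) After pushing some traffic, have any MPTCP connections or joins actually been seen?
nstat -az | grep -iE 'MPTcpExtMPCapable|MPTcpExtMPJoin'
```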
Specifications