Description
What is the issue?
I am experiencing a significant performance disparity between the native Windows Tailscale client and the Tailscale client running inside WSL (Windows Subsystem for Linux) on the same physical machine.
Setup:
- Topology: Self-hosted DERP server used for relaying traffic. ACLs configured to force connections through DERP (verified as sketched below).
- Destination: A Debian MiniPC running `iperf3 -s`.
- Source: A Windows PC (acting as the sender).
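As a sanity check that traffic is actually being relayed (rather than taking a direct path), something like the following can confirm it; `<Remote_Linux_IP>` is the Debian node's Tailscale address:

```
# Should report "via DERP(<region>)" rather than a direct endpoint
tailscale ping <Remote_Linux_IP>

# Peer entries should show a relay (e.g. relay "xxx") instead of "direct <ip:port>"
tailscale status
```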
The Problem:
When sending data from the Windows native client (via `iperf3 -c` or a `python -m http.server` download), throughput is capped at roughly 10 Mbps; the HTTP download runs at ~700 KB/s.
The Control Group (WSL):
When running Tailscale independently inside WSL (Debian) on the same Windows host (using the same physical network interface), the throughput stabilizes at 40 Mbps (~5.3 MB/s).
Since WSL shares the same physical link and DERP server, this indicates the issue lies specifically within the Windows client implementation or its interaction with the Windows network stack, rather than the network bandwidth or DERP node capacity.
Steps to reproduce
- Set up a self-hosted DERP server and ensure clients connect via this relay.
- Scenario A (Windows Native; see the command sketch after this list):
  - Run `iperf3 -c <Remote_Linux_IP>` from PowerShell.
  - Result: Throughput struggles to exceed 10 Mbps.
  - (Verification): Host a file via `python -m http.server` on Windows. Download it from the remote Linux node via `wget`. Speed is ~700 KB/s.
- Scenario B (WSL on same Host):
  - Install Tailscale inside WSL2 (Ubuntu-24.04).
  - Run `iperf3 -c <Remote_Linux_IP>` from the WSL terminal.
  - Result: Throughput reaches 40-60 Mbps.
  - (Verification): Host a file via `python -m http.server` inside WSL. Download it from the remote Linux node. Speed is ~5.3 MB/s.
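For clarity, the two scenarios reduce to the commands below. The `<...>` placeholders and port `8000` (the `http.server` default) are illustrative:

```
# Scenario A - native Windows client, from PowerShell
iperf3.exe -c <Remote_Linux_IP>
python -m http.server 8000
# ...then on the remote Linux node:
wget http://<Windows_Tailscale_IP>:8000/<file>      # ~700 KB/s

# Scenario B - inside WSL2 on the same host
sudo tailscale up
iperf3 -c <Remote_Linux_IP>
python3 -m http.server 8000
# ...then on the remote Linux node:
wget http://<WSL_Tailscale_IP>:8000/<file>          # ~5.3 MB/s
```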
Are there any recent changes that introduced the issue?
No specific changes. This is a new setup testing DERP performance.
OS
Windows
OS version
Windows 11 24H2
Tailscale version
1.90.8
Other software
Troubleshooting attempts taken (all failed to improve Windows performance; the corresponding commands are sketched after this list):
1. RSC: Disabled Receive Segment Coalescing via `netsh` and PowerShell.
2. MTU: Manually lowered the Tailscale interface MTU to 1200.
3. TCP Auto-Tuning: Set to `experimental` and `normal`.
4. Congestion Control: Tried switching between `ctcp` and `cubic`.
5. Offloading: Disabled UDP checksum offload via PowerShell.
6. Registry: Set `NetworkThrottlingIndex` to `ffffffff`.
7. Nagle's Algorithm: Added `TcpAckFrequency=1` and `TCPNoDelay=1` under the Tailscale interface GUID in the registry.
8. QoS: Disabled the QoS Packet Scheduler on the virtual adapter.

Also verified that no "game booster" or network-manager software (such as cFosSpeed) is installed.
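For reference, the attempts above correspond roughly to the following, run from an elevated PowerShell. The adapter name "Tailscale" and the `{GUID}` placeholder must match the actual Tailscale adapter on the machine:

```
# 1. Disable Receive Segment Coalescing
netsh int tcp set global rsc=disabled
Disable-NetAdapterRsc -Name "Tailscale"

# 2. Lower the Tailscale interface MTU
netsh interface ipv4 set subinterface "Tailscale" mtu=1200 store=persistent

# 3. TCP auto-tuning (tried both settings)
netsh int tcp set global autotuninglevel=experimental
netsh int tcp set global autotuninglevel=normal

# 4. Congestion control (tried both providers)
netsh int tcp set supplemental template=internet congestionprovider=ctcp
netsh int tcp set supplemental template=internet congestionprovider=cubic

# 5. Disable UDP checksum offload
Disable-NetAdapterChecksumOffload -Name "Tailscale" -UdpIPv4 -UdpIPv6

# 6. Network throttling index
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile" /v NetworkThrottlingIndex /t REG_DWORD /d 0xffffffff /f

# 7. Per-interface ACK frequency / Nagle (replace {GUID} with the Tailscale adapter GUID)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v TCPNoDelay /t REG_DWORD /d 1 /f

# 8. Disable the QoS Packet Scheduler binding on the adapter
Disable-NetAdapterBinding -Name "Tailscale" -ComponentID ms_pacer
```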
Bug report
BUG-d43a2be8563ea8abe0ca2df1da83d56e14d7d4ed25a7df72e40819d6861ede9b-20251121064759Z-3663a48d2e863eba