[Windows] Extremely low upload throughput via DERP compared to WSL on same host (10Mbps vs 40Mbps) #18017

@cesaryuan

Description

What is the issue?

I am experiencing a significant performance disparity between the native Windows Tailscale client and the Tailscale client running inside WSL (Windows Subsystem for Linux) on the same physical machine.

Setup:

  • Topology: Self-hosted DERP server used for relaying traffic. ACL configured to enforce DERP connections.
  • Destination: A Debian MiniPC running iperf3 -s.
  • Source: A Windows PC (acting as the sender).

The Problem:
When sending data from the Windows native client, iperf3 -c throughput is capped at approximately 10 Mbps, and an HTTP download served by python http.server runs at only ~700 KB/s.

The Control Group (WSL):
When running Tailscale independently inside WSL (Debian) on the same Windows host (using the same physical network interface), the throughput stabilizes at 40 Mbps (~5.3 MB/s).

Since the WSL client shares the same physical link and the same DERP server, this suggests the bottleneck lies in the Windows client implementation or its interaction with the Windows network stack, rather than in network bandwidth or DERP node capacity.
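As a sanity check on the reported figures, the application-level rates can be converted to line rates (a minimal sketch; decimal units, 1 KB = 1000 bytes assumed):

```shell
# Convert an application-level rate in KB/s to Mbps (decimal units assumed).
kb_per_s_to_mbps() { awk -v kb="$1" 'BEGIN { printf "%.1f\n", kb * 8 / 1000 }'; }

kb_per_s_to_mbps 700    # Windows HTTP download: 5.6 Mbps of goodput
kb_per_s_to_mbps 5300   # WSL HTTP download (5.3 MB/s): 42.4 Mbps, consistent with the ~40 Mbps iperf3 result
```

The WSL numbers (5.3 MB/s ≈ 42 Mbps) line up with the iperf3 range, so the HTTP test corroborates the iperf3 measurement on both sides.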

Steps to reproduce

  1. Set up a self-hosted DERP server and ensure clients connect via this relay.
  2. Scenario A (Windows Native):
    • Run iperf3 -c <Remote_Linux_IP> from PowerShell.
    • Result: Throughput struggles to exceed 10 Mbps.
    • (Verification): Host a file via python -m http.server on Windows. Download it from the remote Linux node via wget. Speed is ~700 KB/s.
  3. Scenario B (WSL on same Host):
    • Install Tailscale inside WSL2 (Ubuntu-24.04).
    • Run iperf3 -c <Remote_Linux_IP> from WSL terminal.
    • Result: Throughput reaches 40-60 Mbps.
    • (Verification): Host a file via python -m http.server inside WSL. Download it from the remote Linux node. Speed is ~5.3 MB/s.
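For completeness, the receiver-side setup implied by the steps above can be sketched as follows (the Tailscale IP 100.64.0.1 and port 8000 are placeholders; tailscale status is included only to confirm the path is actually relayed):

```shell
# On the Debian MiniPC (receiver), assumed setup for both scenarios:
iperf3 -s                                    # throughput test server

# Confirm traffic goes through the DERP relay rather than a direct path:
tailscale status                             # the peer should show a relay, not "direct"

# HTTP verification: fetch from the sender's Tailscale IP (placeholder address/port):
wget -O /dev/null http://100.64.0.1:8000/testfile
```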

Are there any recent changes that introduced the issue?

No specific changes. This is a new setup testing DERP performance.

OS

Windows

OS version

Windows 11 24H2

Tailscale version

1.90.8

Other software

Troubleshooting attempts taken (All failed to improve Windows performance):
1. RSC: Disabled Receive Segment Coalescing via netsh and PowerShell.
2. MTU: Manually adjusted Tailscale interface MTU to 1200.
3. TCP Auto-Tuning: Set to experimental and normal.
4. Congestion Control: Tried changing to ctcp and cubic.
5. Offloading: Disabled UDP Checksum Offload via PowerShell.
6. Registry: Adjusted NetworkThrottlingIndex to ffffffff.
7. Delayed ACK / Nagle's Algorithm: Added TcpAckFrequency=1 (disables delayed ACKs) and TCPNoDelay=1 (disables Nagle) under the Tailscale interface GUID in the registry.
8. QoS: Disabled QoS Packet Scheduler on the virtual adapter.
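For reproducibility, attempts 1-5 above roughly correspond to the following commands (a sketch from an elevated prompt; the adapter name "Tailscale" is an assumption for this machine, and exact flag spellings may vary by Windows build):

```shell
# Assumed command equivalents (elevated prompt; adapter name "Tailscale" is a guess):
netsh int tcp set global rsc=disabled                                        # 1. Receive Segment Coalescing
netsh interface ipv4 set subinterface "Tailscale" mtu=1200 store=persistent  # 2. MTU
netsh int tcp set global autotuninglevel=experimental                        # 3. TCP auto-tuning
netsh int tcp set supplemental template=internet congestionprovider=ctcp     # 4. congestion control
powershell -Command "Disable-NetAdapterChecksumOffload -Name 'Tailscale' -UdpIPv4"  # 5. UDP checksum offload
```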

Verified no "Game Booster" or "Network Manager" software (like cFosSpeed) is installed.

Bug report

BUG-d43a2be8563ea8abe0ca2df1da83d56e14d7d4ed25a7df72e40819d6861ede9b-20251121064759Z-3663a48d2e863eba


    Labels

    OS-windows (Issues involving Tailscale for Windows), bug
