Describe the bug
The timer-based loss detection seems to break down when the RTT is in the hundreds of microseconds. The RTT/8
interval for loss may be too short at these timescales, because the CPU scheduler and other host-level effects can delay traffic by more than that. Consider using a heuristic to lengthen the loss timer when the RTT is very small.
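
As a rough illustration of the proposed heuristic (this is a sketch, not MsQuic's actual loss-detection code; the formula, names, and the `MIN_LOSS_TIMEOUT_US` floor value are all assumptions that would need tuning):

```c
#include <stdint.h>

/* Assumed floor of 1 ms, roughly on the order of scheduler jitter. */
#define MIN_LOSS_TIMEOUT_US 1000u

uint64_t
ComputeLossTimeoutUs(
    uint64_t SmoothedRttUs,
    uint64_t RttVarianceUs
    )
{
    /*
     * Base timeout derived from the smoothed RTT, an RTT/8 margin (as
     * described in this report), and the RTT variance. Illustrative only.
     */
    uint64_t Timeout =
        SmoothedRttUs + (SmoothedRttUs >> 3) + 4 * RttVarianceUs;

    /*
     * Heuristic: clamp to a minimum so that sub-millisecond RTTs cannot
     * produce a loss timer shorter than typical host scheduling jitter,
     * which would otherwise fire spuriously.
     */
    if (Timeout < MIN_LOSS_TIMEOUT_US) {
        Timeout = MIN_LOSS_TIMEOUT_US;
    }
    return Timeout;
}
```

With such a floor, a 200 us RTT would still get a 1 ms loss timer instead of one in the ~25 us range, at the cost of slightly slower loss detection on ultra-low-latency paths.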
Affected OS
- Windows
- Linux
- macOS
- Other (specify below)
Additional OS information
No response
MsQuic version
main
Steps taken to reproduce bug
Run secnetperf on a high-bandwidth, ultra-low-latency network (e.g., a software loopback-style network such as duonic); a possible invocation is sketched below.
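
For example, something along these lines (the exact secnetperf arguments are an assumption and depend on the build in use; `<server-address>` is a placeholder for the loopback/duonic peer):

```
# Server side (assumed invocation):
secnetperf -exec:maxtput

# Client side (assumed flags):
secnetperf -target:<server-address> -exec:maxtput -down:10s
```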
Expected behavior
Connections complete with reasonable suspected/spurious loss counts in the connection statistics.
Actual outcome
The spurious loss counts in the connection statistics are high (tens of thousands), but bandwidth does not appear to be affected.
Additional details
No response