This repository was archived by the owner on Jan 8, 2026. It is now read-only.
design/history/exploration-reports/2018.11-compression.md (24 additions, 0 deletions)

Not necessarily.
WRT packet loss, a large issue there is that go-ipfs currently sends out *way* too many packets (we need to buffer better).

WRT compression, I'd be surprised if intermediate nodes were all that compressible. They tend to *mostly* be composed of hashes.

---
#### (2020-03-31T20:31:00Z) RubenKelevra:
> * Depending on the congestion control algorithm, early requests are much slower than subsequent requests. TCP (and TFRC when using UDP) use a loss based congestion control algorithm that ramps up, increasing the send rate until it sees loss. Because we connect to multiple peers we hit this in every connection and hit it more in the initial stages of a connection, unless we're using an alternative congestion control algorithm I don't know about.
> * Mobile packet loss plays havoc with these algorithms and mobile infrastructure tries to compensate by keeping a buffer in the network layer. In the long run this helps but it tends to make the initial connection speed fluctuate with spikes up and down and sending larger amounts of data before this normalizes tends to make it worse.

The default congestion control algorithm used by TCP on Linux is CUBIC, which is designed for extremely high throughput on wired, low-latency networks.

It's not made for lossy links like wireless networks, and it also tends to buffer extensively (leading to bufferbloat).

You might want to switch to Westwood+, which recovers much more quickly from sudden loss on wireless networks while maintaining acceptable fairness toward CUBIC streams.
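For reference, on Linux the congestion control algorithm can be inspected and switched system-wide through sysctl. This is a sketch, not a recommendation for every setup: it assumes a kernel that ships the Westwood+ module (`tcp_westwood`), and the `sysctl -w` change does not persist across reboots unless added to `/etc/sysctl.d/`:

```shell
# Show which algorithms the running kernel currently allows
sysctl net.ipv4.tcp_available_congestion_control

# Load the Westwood+ module if it is not already available
sudo modprobe tcp_westwood

# Switch the system-wide default congestion control algorithm
sudo sysctl -w net.ipv4.tcp_congestion_control=westwood
```

Individual applications can also override the default per socket via the `TCP_CONGESTION` socket option, so a daemon like go-ipfs could in principle opt in without changing the system-wide setting.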

> For example, TCP CUBIC [11] aggressively probes for the available bandwidth leading to a high average buffer utilization, whereas TCP Westwood+ [12] clears the buffers when congestion episodes occur leading to, on average, a reduced buffer occupancy.

And

> The more aggressive congestion control used by CUBIC roughly doubles the Web response time as compared to Westwood+.

And

> All congestion control algorithms achieve similar throughput, whereas CUBIC and BIC are observed to exhibit larger RTTs and a larger number of retransmissions.


Source: https://c3lab.poliba.it/images/3/32/BufbloatLTE2013.pdf