This repository has been archived by the owner on Apr 22, 2023. It is now read-only.

tls: dynamic record size to optimize latency & throughput #6889

Closed
igrigorik opened this issue Jan 17, 2014 · 2 comments

Comments

@igrigorik

Reducing the size of TLS records can yield significant latency wins on the client: faster time to first paint, time to first frame for video, etc. The issue is that by default TLS packs up to 16KB of data into each record, which effectively guarantees that when the CWND is low (e.g. on a new TCP connection) the record will span multiple RTTs. As a result, time to first (application-consumable) byte is pushed out by an extra RTT.
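To make the cost concrete, here is a back-of-the-envelope calculation (a sketch; the 1460-byte MSS and 10-segment initial congestion window are assumed typical values, not figures from this issue):

```javascript
// Rough illustration of why a full 16KB record costs an extra RTT on a
// fresh connection. Assumed values: 1460-byte TCP MSS and an initial
// congestion window of 10 segments (per RFC 6928).
const MSS = 1460;            // bytes of payload per TCP segment
const INIT_CWND = 10;        // segments the server may send in the first flight
const RECORD_SIZE = 16384;   // default maximum TLS record size

const segmentsPerRecord = Math.ceil(RECORD_SIZE / MSS); // 12 segments
const firstFlightBytes = MSS * INIT_CWND;               // 14600 bytes

// The record does not fit in the first flight, and the client cannot
// decrypt any of it until the final segment arrives, so time to first
// usable byte slips by a round trip.
console.log(segmentsPerRecord > INIT_CWND); // true
```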

Node uses the default 16KB limit, which forces an extra RTT on new connections (wpt):

[image: WebPageTest screenshot]

That said, exposing a config flag to set a smaller, static record size is also suboptimal, as it introduces an inherent tradeoff between latency and throughput – smaller records are good for latency, but hurt server throughput by adding byte and CPU overhead. It would be great if we could implement a smarter strategy in node... Some background on how Google servers handle this:

  • new connections default to a small record size
    • each record fits into a single TCP packet
    • packets are flushed at record boundaries
  • the server tracks the number of bytes written since the last reset and
    the timestamp of the last write
    • if bytes written > {configurable byte threshold}, boost the record
      size to 16KB
    • if (now - last write timestamp) > {configurable time threshold},
      reset the sent byte count

In other words, start with a small record size to optimize delivery of
small/interactive objects (the bulk of HTTP traffic). Then, if a large
file is being transferred, bump the record size to 16KB and continue
using it until the connection goes idle. When communication resumes,
start with a small record size again and repeat. Overall, this optimizes
delivery of small files, where incremental delivery is the priority, as
well as large downloads, where overall throughput is the priority.

Both the byte and time thresholds are exposed as configurable flags; the
current defaults in GFE are 1MB and 1s.
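The strategy above can be sketched as a small JavaScript helper (hypothetical code, not part of Node core; the class and constant names are illustrative, and the thresholds follow the GFE defaults quoted above):

```javascript
// Sketch of the dynamic record-size policy described above.
const SMALL_RECORD = 1400;          // fits one TCP packet (assumed MSS headroom)
const LARGE_RECORD = 16384;         // TLS maximum record size
const BYTE_THRESHOLD = 1024 * 1024; // 1MB, the GFE default
const TIME_THRESHOLD = 1000;        // 1s idle, the GFE default (ms)

class DynamicRecordSizer {
  constructor() {
    this.bytesSinceReset = 0;
    this.lastWrite = 0;
  }

  // Returns the record size to use for the next write of `bytes` bytes.
  nextRecordSize(bytes, now = Date.now()) {
    // Idle for longer than the time threshold: start small again.
    if (this.lastWrite && now - this.lastWrite > TIME_THRESHOLD) {
      this.bytesSinceReset = 0;
    }
    this.lastWrite = now;
    const size =
      this.bytesSinceReset > BYTE_THRESHOLD ? LARGE_RECORD : SMALL_RECORD;
    this.bytesSinceReset += bytes;
    return size;
  }
}
```

A bulk transfer crosses the byte threshold and gets boosted to 16KB records; after a pause longer than the time threshold, the sizer drops back to small records.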

A dynamic strategy would provide the best out-of-the-box experience and work well regardless of mix/type of traffic being served (interactive, bulk, etc).

/cc @indutny

@indutny
Member

indutny commented Jan 17, 2014

I don't think we are going to implement a dynamic strategy in core, but I just opened a PR that will allow users to control it: #6900

@indutny indutny closed this as completed Jan 17, 2014
@indutny
Member

indutny commented Jan 17, 2014

Thank you for reporting!
