core(time-to-first-byte): use receiveHeadersStart #15126

Merged · 5 commits merged into main on Jun 12, 2023
Conversation

@connorjclark (Collaborator) commented on Jun 1, 2023

I added receiveHeadersStart to network timings in https://chromium-review.googlesource.com/c/chromium/src/+/4556570. It will be available in M116.

This PR changes the TTFB calculation to use timing.receiveHeadersStart instead of timing.receiveHeadersEnd. Using the end time potentially misreports TTFB in uncommon cases where the server either intentionally delays some headers (say, it pushes out some Link headers early and then sends the rest after some server-side work), or sends so many headers that they arrive over multiple packets. In either case, we care about the network connection latency, so we want to exclude such delays.
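
A minimal sketch of the idea, assuming a CDP ResourceTiming-shaped `timing` object (offsets in milliseconds relative to `timing.requestTime`, which is in seconds); since receiveHeadersStart only ships in M116, older protocols would fall back to receiveHeadersEnd. The function name and fallback are illustrative, not the PR's actual diff:

```js
// Illustrative sketch only, not Lighthouse's implementation. Assumes a CDP
// ResourceTiming object: offsets are milliseconds relative to
// timing.requestTime, which is in seconds.
function getTimeToFirstByte(timing) {
  if (!timing) return undefined;

  // receiveHeadersStart ships in M116; fall back to receiveHeadersEnd
  // for older Chrome versions that don't report it.
  const headersOffsetMs = timing.receiveHeadersStart ?? timing.receiveHeadersEnd;

  // Absolute timestamp (in seconds) of the first response header byte.
  return timing.requestTime + headersOffsetMs / 1000;
}
```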

@connorjclark connorjclark requested a review from a team as a code owner June 1, 2023 21:28
@connorjclark connorjclark requested review from brendankenny and removed request for a team June 1, 2023 21:28
@adamraine (Member) left a comment:

This is good. Should we update the usages of receiveHeadersEnd in server-response-time.js and network-analyzer.js as well?

@connorjclark (Collaborator, Author):

> Should we update the usages of receiveHeadersEnd in server-response-time.js

Given how the server-response-time audit is documented, yeah, we should change it there too. It seems the intention was for it to be a synonym for TTFB.

Without considering the current state of the audit, I'd consider "server response" to be pretty ambiguous: you could argue it means either the first byte of the headers or the first byte of the response body. I don't have a good argument one way or the other, except that if it was meant to be the first byte of the headers, why wasn't the audit called TTFB?

> network-analyzer.js?

transferSize includes bytes from the headers, so I think changing that too is right. Good catch.
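
To make the transferSize point concrete, here is a hedged sketch (not the network-analyzer implementation; the function and parameter names are made up for illustration) of why a throughput estimate should open the download window at receiveHeadersStart: transferSize counts the header bytes, which arrive before receiveHeadersEnd.

```js
// Illustrative only, not network-analyzer.js. `timing` is a CDP
// ResourceTiming object (offsets in ms from timing.requestTime, which is
// in seconds); `endTimeSec` is the request's end time in seconds.
function estimateThroughput(transferSize, timing, endTimeSec) {
  // transferSize includes response header bytes, so start the download
  // window when headers begin arriving rather than when they finish.
  const headersStartMs = timing.receiveHeadersStart ?? timing.receiveHeadersEnd;
  const downloadStartSec = timing.requestTime + headersStartMs / 1000;
  const durationSec = endTimeSec - downloadStartSec;
  if (durationSec <= 0) return undefined;
  return transferSize / durationSec; // bytes per second
}
```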

@connorjclark (Collaborator, Author):

Let's defer the network analyzer changes until after we update the Lantern database. #15150

@adamraine (Member):

> if it was meant to be the first byte of the headers, why wasn't the audit called TTFB?

According to this comment, server response time is just one part of TTFB:

// When connection was fresh...
// TTFB = DNS + (SSL)? + TCP handshake + 1 RT for request + server response time

@brendankenny (Member):

Yeah, TTFB is startTime to responseStart, possibly including unload handling, redirects, etc.

Server response time tries to measure the tail end of that duration, starting from when the first request byte is sent (requestStart).

responseStart is when the browser "receives the first byte of the response (e.g., frame header bytes for HTTP/2 or response status line for HTTP/1.x)", so receiveHeadersStart is what we want.
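
A rough sketch of that distinction, under the assumption that CDP's sendStart stands in for requestStart and that the timing offsets are milliseconds from the start of the request (the names are illustrative, not Lighthouse's code):

```js
// Illustrative only. `timing` is a CDP ResourceTiming object; offsets are
// milliseconds relative to the start of the request.
function breakDownTimings(timing) {
  // First response byte, per the responseStart definition quoted above.
  const responseStartMs = timing.receiveHeadersStart ?? timing.receiveHeadersEnd;
  return {
    // TTFB: everything from the start of the request to the first
    // response byte (DNS, TCP, SSL, request send, server think time).
    timeToFirstByteMs: responseStartMs,
    // Server response time: only the portion after the first request
    // byte is sent (sendStart is assumed to correspond to requestStart).
    serverResponseTimeMs: responseStartMs - timing.sendStart,
  };
}
```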

@connorjclark (Collaborator, Author):

Thanks for the breakdown @brendankenny! I'll prepare another PR to update server-response-time.

@brendankenny (Member):

What other uses are there? Would it be helpful to make a tracking bug to make sure they all get moved over?

@connorjclark (Collaborator, Author):

The network analyzer is the only other one, and that is tracked in #15150.

@devtools-bot merged commit c741df0 into main on Jun 12, 2023
32 of 33 checks passed
@devtools-bot deleted the headers-start branch on June 12, 2023 19:56