Define buffering behaviors & hooks for queued (PerformanceObserver) entries #81

Open
igrigorik opened this Issue Jun 19, 2017 · 11 comments

igrigorik commented Jun 19, 2017

We recently (see #76) added an optional flag to "queue an entry" that lets the caller append the entry to the performance entry buffer. For example, Long Tasks or Server Timing can set this to true before onload, allowing developers to register an observer and retrieve any LT/ST records captured between the start of page load and max(time when they register the observer, onloadend).
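The queue step described above can be sketched in plain JavaScript. All names here (`queueEntry`, `registeredObservers`, and so on) are illustrative stand-ins for the spec's concepts, not spec text:

```javascript
// Hypothetical sketch of the "queue an entry" step from #76.
const performanceEntryBuffer = []; // entries kept for late registrants
const registeredObservers = [];    // simulated PerformanceObserver queues

function queueEntry(entry, addToBuffer) {
  // Deliver to every currently registered observer...
  for (const queue of registeredObservers) queue.push(entry);
  // ...and, if the spec opted in via the flag, also retain the entry so an
  // observer registered later (with buffered: true) can still retrieve it.
  if (addToBuffer) performanceEntryBuffer.push(entry);
}

// A longtask recorded before any observer exists is still retrievable:
queueEntry({ entryType: 'longtask', startTime: 120 }, true);
console.log(performanceEntryBuffer.length); // 1
```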

A few interactions that we did not address:

  1. How does the performance entry buffer interact with other buffers defined by individual specs? E.g. Resource Timing defines its own buffer, with methods to query and clear it.
     • When RT's buffer is cleared, does that affect the performance entry buffer?
  2. Do we want to set a global cap on the performance entry buffer?
     • What happens when the limit is reached? Do we start dropping items on the floor?

Sidenote: moving forward we're not planning on adding more type-specific buffers. Instead, we want to encourage use of PerfObserver. So, this is mostly an exercise in backwards compat, and in making sure that we can pave a clean path for existing users of RT/NT/UT.


For (1), my hunch is "no": calls to clear type-specific buffers should not clear the performance entry buffer. This does mean that we might end up with some double book-keeping, but one of the motivating reasons for PerfObserver was to resolve the race conditions created by consumers clearing buffers from under each other. As such, I propose we treat them as separate entities: it should be possible to clear the RT buffer and still use PerfObserver with buffered: true to get all the entries queued before onload.

For (2), my hunch is "yes", and we should probably recommend a minimum: the user agent should allow at least XXXX entries to be queued, and once the buffer is full, further items are silently discarded.
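A minimal sketch of that behavior, assuming a hypothetical cap of 150 entries (the actual recommended minimum is left open above as XXXX):

```javascript
// Placeholder cap for illustration only; the spec value was not decided here.
const MAX_BUFFERED_ENTRIES = 150;

function bufferEntry(buffer, entry) {
  if (buffer.length >= MAX_BUFFERED_ENTRIES) {
    return false; // buffer full: the entry is silently dropped
  }
  buffer.push(entry);
  return true;
}

const buf = [];
for (let i = 0; i < 200; i++) bufferEntry(buf, { startTime: i });
console.log(buf.length); // 150: first N kept, overflow discarded
```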

/cc @cvazac @nicjansma @toddreifsteck

igrigorik commented Jul 21, 2017

Based on the discussion at the WebPerf F2F, we converged on:

  • Each spec defines a buffer with a custom buffer size (depends on the type of data, etc.)
  • Performance Timeline will buffer events until the buffer is full. When the buffer is full...
    • Keep the first N entries, drop overflow
  • The buffer is not cleared at onload, but under memory pressure the UA may clear buffers
  • Some specs may provide additional methods to manipulate their buffer (e.g. Resource Timing), but this is not a requirement.

Does that sound correct? Assuming the answer is yes, the tactical work here is...

  • I think we can back out some of the changes we landed in #76. Specifically, the buffered flag for the queue step.
  • We need to agree on how we want to structure the hooks. One take could be:
    • Performance Timeline defines the performance entry buffer (as it does today), which is segmented by entry type.
    • Upstream specs define the max buffer size for their type, which is enforced by PerfTimeline.
      • We should define "add to performance entry buffer" as a concept in PerfTimeline.
      • Upstream specs call the add step and pass the entry to the perf timeline, which does the rest. This way we don't have to replicate the buffer and enforcement logic in every spec.
    • PerformanceTimeline's getEntries* queries the global performance entry buffer, as defined today.
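The proposed central hook might look roughly like this. The per-type limits and all names here are assumptions for illustration, not values or identifiers from any spec:

```javascript
// Sketch of a central "add to performance entry buffer" hook with per-type
// limits supplied by upstream specs. Limits below are made up for the demo.
const maxBufferSizeByType = new Map([
  ['resource', 250], // assumed value, stand-in for Resource Timing's limit
  ['longtask', 200], // assumed value, stand-in for Long Tasks' limit
]);
const entryBufferByType = new Map(); // entryType -> array of entries

function addToPerformanceEntryBuffer(entry) {
  const limit = maxBufferSizeByType.get(entry.entryType) ?? Infinity;
  let bucket = entryBufferByType.get(entry.entryType);
  if (!bucket) entryBufferByType.set(entry.entryType, (bucket = []));
  if (bucket.length < limit) bucket.push(entry); // keep first N, drop overflow
}

// Each upstream spec just calls the shared step:
for (let i = 0; i < 300; i++) {
  addToPerformanceEntryBuffer({ entryType: 'longtask', startTime: i });
}
console.log(entryBufferByType.get('longtask').length); // 200
```

The point of centralizing the step is that the cap enforcement lives in one place; a spec only contributes its entry type and a maximum.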

@toddreifsteck WDYT, does that seem reasonable?

igrigorik changed the title from "Define `performance entry buffer` interaction w/ existing buffers" to "Define buffering behaviors & hooks for queued (PerformanceObserver) entries" Jul 21, 2017

igrigorik added a commit that referenced this issue Oct 7, 2017

rniwa commented Feb 5, 2018

I'm not certain that we should allow the UA to clear the buffer under memory pressure. That would make the API a lot less reliable.

igrigorik commented Feb 6, 2018

@rniwa fair enough. Does the rest look reasonable to you?

rniwa commented Feb 8, 2018

What's the rationale for each spec to define its own limit? Isn't it easier to have a global limit?

igrigorik commented Feb 8, 2018

@rniwa we've all implemented separate buffers for each type, and I think there is a good argument that different types of events (i.e. some specs) may need different limits. As a concrete example, our current limit for ResourceTiming is, arguably, too low: w3c/resource-timing#89. At the same time, I don't think it makes sense to raise the limit to a higher number for all event types. WDYT?

toddreifsteck commented May 3, 2018

@igrigorik Here are my thoughts. Does this answer the questions in a way that helps close on the spec update you plan to make?

I assert that the memory pressure text needs to stay. UAs will clear the buffers regardless of what the specs say if doing so avoids crashes and performance problems under memory pressure, so we should keep that text. It should be a corner case and will not impact the real world for 99.9% of usage.

Upstream specs should define a min and max buffer size.

In general, each spec needs to consider carefully what will occur when buffers are cleared and buffer management is recommended but the Performance Timeline should defer those decisions to each spec/buffer in my opinion.

Should the global performance entry buffer be redefined as a merge of the various separate buffers?

igrigorik commented May 15, 2018

@toddreifsteck thanks, yes that sounds reasonable.

Should the global performance entry buffer be redefined as a merge of the various separate buffers?

Effectively, that is how it is defined today.
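A rough sketch of that merged view, assuming per-type buffers and that the global getEntries() result is the union of them ordered by startTime:

```javascript
// Hypothetical merge of per-type buffers into the global timeline view.
function getEntries(entryBufferByType) {
  return [...entryBufferByType.values()]
    .flat()
    .sort((a, b) => a.startTime - b.startTime); // chronological order
}

const buffers = new Map([
  ['mark',     [{ entryType: 'mark', startTime: 30 }]],
  ['resource', [{ entryType: 'resource', startTime: 10 }]],
]);
console.log(getEntries(buffers).map(e => e.startTime)); // [ 10, 30 ]
```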

yoavweiss commented Oct 9, 2018

Going back to this and given the work done on w3c/resource-timing#163, I think it makes sense to move all the buffer processing to a central location. I also think we need to go beyond that and define the "buffer full" behavior and events in a generic way (e.g., in Chromium's implementation, there are a lot of parallels between Resource Timing and Event Timing on that front).
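One possible shape for a generic "buffer full" hook, loosely modeled on Resource Timing's resourcetimingbufferfull event; the factory and callback names are hypothetical:

```javascript
// Hedged sketch: a generic bounded buffer that notifies once on overflow.
function createEntryBuffer(limit, onBufferFull) {
  const entries = [];
  let notified = false;
  return {
    add(entry) {
      if (entries.length >= limit) {
        if (!notified) { notified = true; onBufferFull(); } // fire once
        return false; // overflow entry is dropped
      }
      entries.push(entry);
      return true;
    },
    get size() { return entries.length; },
  };
}

let fullEvents = 0;
const buf = createEntryBuffer(2, () => fullEvents++);
buf.add({}); buf.add({}); buf.add({}); buf.add({});
console.log(buf.size, fullEvents); // 2 1
```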

I guess the main question is if we want this to be an L2 blocker, as it's a rather large refactoring.

rniwa commented Oct 10, 2018

I don't think it needs to block L2. I think it makes more sense to do it in L3 when we define things in terms of fetch.

yoavweiss self-assigned this Oct 16, 2018

igrigorik commented Oct 18, 2018

Ditto, I don't think this is an L2 blocker. Also, in a world where PerfObserver is the new default, we may be able to soft-deprecate the buffer bits. </wishful-thinking>

yoavweiss modified the milestones: Level 2, Level 3 Oct 20, 2018

toddreifsteck commented Oct 25, 2018

Discussed at TPAC 2018:

  • Agreement was reached that the concept of buffering for PerformanceObserver should be removed from L2 for .observe. Also ensure that keeping the buffering in the queue step isn't "broken". @cvazac will take this on.
  • Buffering is needed so that Navigation Timing, Paint Timing, User Timing, Resource Timing and Long Tasks entries recorded before a script is able to register a PerformanceObserver are not lost. Solving this will need discussion of how buffering should work for each type and how to avoid unneeded overhead. One option may be using headers. Leaving this issue assigned to @yoavweiss to coordinate with others at Google and folks at analytics vendors on how to move this forward.