Define buffering behaviors & hooks for queued (PerformanceObserver) entries #81
We recently (see #76) added an optional flag to "queue an entry" that lets the caller also append the entry to the performance entry buffer. For example, Long Tasks or Server Timing can set this to
A few interactions that we did not address:
Sidenote: moving forward we're not planning to add more type-specific buffers. Instead, we want to encourage use of PerfObserver. So this is mostly an exercise in backwards compatibility, to make sure we pave a clean path for existing users of RT/NT/UT.
For (1), my hunch is "no": calls to clear type-specific buffers should not clear the performance entry buffer. This does mean we might end up with some double book-keeping, but one of the motivating reasons for PerfObserver was to resolve the race conditions created by consumers clearing buffers out from under each other. As such, I propose we treat them as separate entities: it should be possible to clear the RT buffer and still use PerfObserver with
For (2), my hunch is "yes", and we should probably recommend a minimum: the user agent should allow at least XXXX entries to be queued, and once the queue is full, further items are silently discarded.
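To make the proposed behavior concrete, here is a minimal sketch (not spec text) of a bounded entry queue that silently discards entries once full. `MAX_QUEUED_ENTRIES` and the class name are illustrative placeholders; the actual minimum ("XXXX" above) is still undecided.

```javascript
// Illustrative only: a bounded queue that drops silently when full.
const MAX_QUEUED_ENTRIES = 150; // placeholder for the unspecified minimum

class EntryQueue {
  constructor(limit = MAX_QUEUED_ENTRIES) {
    this.limit = limit;
    this.entries = [];
    this.dropped = 0; // count of silently discarded entries
  }
  queue(entry) {
    if (this.entries.length >= this.limit) {
      this.dropped++; // silently discard: no error, no event
      return false;
    }
    this.entries.push(entry);
    return true;
  }
}
```

The key point of the "silent" behavior is that the page gets no signal when entries are dropped, which is why the minimum needs to be generous enough for real-world pages.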
Based on the discussion at the WebPerf F2F, we converged on:
Does that sound correct? Assuming the answer is yes, the tactical work here is...
@toddreifsteck WDYT, does that seem reasonable?
@rniwa we've all implemented separate buffers for each type, and I think there is a good argument that different entry types (i.e. different specs) may need different limits. As a concrete example, our current limit for Resource Timing is, arguably, too low: w3c/resource-timing#89. At the same time, I don't think it makes sense to raise the limit to a higher number for all entry types. WDYT?
@igrigorik Here are my thoughts. Does this answer the questions in a way that helps close on the spec update you plan to make?
The memory-pressure text needs to stay. UAs will clear the buffers regardless of what the specs say if doing so avoids crashes and performance issues under memory pressure, so we should keep that text. It should be a corner case and will not impact the real world for 99.9% of usage.
Upstream specs should define a min and max buffer size.
In general, each spec needs to consider carefully what will occur when buffers are cleared and buffer management is recommended but the Performance Timeline should defer those decisions to each spec/buffer in my opinion.
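To illustrate the per-spec min/max idea, here is a hypothetical sketch: each upstream spec registers its own limits rather than sharing one global number, and any page-requested buffer size is clamped into that spec's range. All names and numbers here are illustrative, not from any spec.

```javascript
// Hypothetical registry of per-entry-type buffer limits.
// The numbers are illustrative only, not taken from any spec.
const bufferLimits = new Map([
  ['resource', { min: 150, max: 1000 }],
  ['longtask', { min: 50, max: 200 }],
]);

// Clamp a page-requested buffer size into the owning spec's range.
function effectiveLimit(entryType, requested) {
  const range = bufferLimits.get(entryType) ?? { min: 0, max: Infinity };
  return Math.min(Math.max(requested, range.min), range.max);
}
```

This keeps the Performance Timeline generic: it only needs a way to look up the limits, while each spec decides what its own min and max should be.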
Should the global Performance Entry buffer be redefined as a merge of the various separate buffers?
Going back to this, and given the work done on w3c/resource-timing#163, I think it makes sense to move all the buffer processing to a central location. I also think we need to go beyond that and define the "buffer full" behavior and events in a generic way (e.g. in Chromium's implementation, there are a lot of parallels between Resource Timing and Event Timing on that front).
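A rough sketch of what a generic "buffer full" hook could look like, generalizing the pattern Resource Timing already has with its `resourcetimingbufferfull` event. The class and callback names here are made up for illustration; the spec text would express this in processing-model terms.

```javascript
// Illustrative sketch of a centralized buffer with a "full" hook.
// When the buffer first fills, onFull fires and gives the consumer
// a chance to drain or resize before the new entry is dropped.
class CentralBuffer {
  constructor(limit, onFull) {
    this.limit = limit;
    this.onFull = onFull;
    this.entries = [];
    this.fullFired = false; // fire the hook once per fill, like RT does
  }
  add(entry) {
    if (this.entries.length >= this.limit) {
      if (!this.fullFired) {
        this.fullFired = true;
        this.onFull(this); // consumer may drain this.entries here
      }
      if (this.entries.length >= this.limit) return false; // still full: drop
    }
    this.entries.push(entry);
    return true;
  }
}
```

The point of centralizing this is that Resource Timing, Event Timing, and any future spec would share one well-defined full/drain lifecycle instead of each reinventing it.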
I guess the main question is whether we want this to be an L2 blocker, as it's a rather large refactoring.
Discussed at TPAC 2018: