
Define buffering behaviors & hooks for queued (PerformanceObserver) entries #81

Closed
igrigorik opened this issue Jun 19, 2017 · 13 comments

@igrigorik
Member

We recently (see #76) added an optional flag to "queue an entry" that allows the caller to append the entry to the performance entry buffer. For example, Long Tasks or Server Timing can set this to true before onload, allowing developers to register an observer and retrieve any LT/ST records captured between the start of page load and max(time when they register the observer, onloadend).
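
For illustration, a minimal sketch of the developer flow this enables, assuming the type/buffered form of observe() (the exact shape was still being settled):

```js
// Register an observer late and still receive entries that the UA queued earlier.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.entryType, entry.name, entry.startTime, entry.duration);
  }
});
// buffered: true asks the UA to replay entries queued before this call,
// e.g. long tasks recorded since the start of the page load.
observer.observe({ type: 'longtask', buffered: true });
```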

A few interactions that we did not address:

  1. How does the performance entry buffer interact with other buffers defined by individual specs? E.g. Resource Timing defines its own buffer with methods to query and clear it.
    • When RT's buffer is cleared, does that affect the performance entry buffer?
  2. Do we want to set a global cap on the performance entry buffer?
    • What happens when the limit is reached? Do we start dropping items on the floor?

Sidenote: moving forward we're not planning on adding more type-specific buffers. Instead, we want to encourage use of PerfObserver. So, this is mostly an exercise in backwards compatibility, making sure we pave a clean path for existing users of RT/NT/UT.


For (1), my hunch is "no": calls to clear type-specific buffers should not clear the performance entry buffer. This does mean that we might end up with some double book-keeping, but one of the motivating reasons for PerfObserver was to resolve the race conditions created by consumers clearing buffers from under each other. As such, I propose we treat them as separate entities: it should be possible to clear the RT buffer and still use PerfObserver with buffered: true to get all the entries queued before onload.
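
A hypothetical sketch of the behavior being proposed here (not guaranteed by any spec text yet): clearing Resource Timing's own buffer would not affect what a buffered observer sees.

```js
// Clear only Resource Timing's type-specific buffer...
performance.clearResourceTimings();

// ...and, under this proposal, a later observer with buffered: true would
// still be handed the resource entries queued before onload.
const po = new PerformanceObserver((list) => {
  console.log('replayed resource entries:', list.getEntries().length);
});
po.observe({ type: 'resource', buffered: true });
```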

For (2), my hunch is "yes", and we should probably recommend a minimum: the user agent should allow at least XXXX entries to be queued, and once the buffer is full, further items are silently discarded.

/cc @cvazac @nicjansma @toddreifsteck

@igrigorik
Member Author

Based on the discussion at the WebPerf F2F, we converged on:

  • Each spec defines a buffer with a custom buffer size (depending on the type of data, etc.)
  • Performance Timeline will buffer events until the buffer is full. When the buffer is full...
    • Keep the first N entries, drop overflow
  • Buffer is not cleared at onload, but under memory pressure UA may clear buffers
  • Some specs may provide additional methods to manipulate their buffer (e.g. ResourceTiming), but this is not a requirement.

Does that sound correct? Assuming the answer is yes, the tactical work here is...

  • I think we can back out some of the changes we landed in #76 ("PerformanceObserverInit.buffered and add to performance entry buffer flags"). Specifically, the buffered flag for the queue step.
  • We need to agree on how we want to structure the hooks. One take could be:
    • Performance Timeline defines the performance entry buffer (as it does today), which is segmented by entry type
    • Upstream specs define max buffer size for their type, which is enforced by PerfTimeline.
      • We should define "add to performance entry buffer" as a concept in PerfTimeline.
      • Upstream specs call the add step and pass the entry to Perf Timeline, which does the rest. This way we don't have to replicate the buffer and enforcement logic in every spec (a rough sketch follows this list).
    • Performance Timeline's getEntries* queries the global performance entry buffer, as defined today.
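
For concreteness, a rough sketch (not spec text) of what the centralized add step could look like; the per-type caps and the names below are illustrative.

```js
// Illustrative per-type caps registered by upstream specs (values made up here).
const maxBufferSize = { resource: 250, longtask: 200, paint: 2 };
const entryBuffers = new Map(); // entryType -> PerformanceEntry[]

// "Add to performance entry buffer": keep the first N entries, drop overflow.
function addToPerformanceEntryBuffer(entry) {
  const limit = maxBufferSize[entry.entryType] ?? 150; // fallback cap, illustrative
  const buffer = entryBuffers.get(entry.entryType) || [];
  if (buffer.length < limit) {
    buffer.push(entry);
    entryBuffers.set(entry.entryType, buffer);
  }
  // Overflow entries are silently dropped; observers still receive them.
}
```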

@toddreifsteck WDYT, does that seem reasonable?

@igrigorik changed the title from "Define performance entry buffer interaction w/ existing buffers" to "Define buffering behaviors & hooks for queued (PerformanceObserver) entries" on Jul 21, 2017
igrigorik added a commit that referenced this issue Oct 7, 2017
@rniwa

rniwa commented Feb 5, 2018

I'm not certain we should allow the UA to clear the buffer under memory pressure. That would make the API a lot less reliable.

@igrigorik
Member Author

@rniwa fair enough. Does the rest look reasonable to you?

@rniwa

rniwa commented Feb 8, 2018

What's the rationale for each spec to define its own limit? Isn't it easier to have a global limit?

@igrigorik
Member Author

@rniwa we've all implemented separate buffers for each type, and I think there is a good argument that different types of events (i.e. some specs) may need different limits. As a concrete example, our current limit for ResourceTiming is, arguably, too low: w3c/resource-timing#89. At the same time, I don't think it makes sense to raise the limit to a higher number for all event types. WDYT?
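
For reference, Resource Timing is the one type that already lets pages adjust its type-specific limit, which is part of why per-spec limits seem more natural than a single global one:

```js
// Existing Resource Timing API: raise the buffer cap above the UA default.
// Other entry types have no equivalent control today.
performance.setResourceTimingBufferSize(300);
```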

@toddreifsteck
Member

@igrigorik Here are my thoughts. Does this answer the questions in a way that helps close on the spec update you plan to make?

I assert that the memory-pressure clause needs to stay. UAs will clear the buffers regardless of what the specs say if doing so avoids crashes and performance issues under memory pressure, so we should keep that text. It should be a corner case and will not impact the real world for 99.9% of usage.

Upstream specs should define a min and max buffer size.

In general, each spec needs to consider carefully what will occur when buffers are cleared; buffer management is recommended, but in my opinion the Performance Timeline should defer those decisions to each spec/buffer.

Should the global Performance Entry buffer be redefined as a merge of the various separate buffers?

@igrigorik
Member Author

@toddreifsteck thanks, yes that sounds reasonable.

> Should the global Performance Entry buffer be redefined as a merge of the various separate buffers?

Effectively, that is how it is defined today.

@yoavweiss
Contributor

Going back to this, and given the work done on w3c/resource-timing#163, I think it makes sense to move all the buffer processing to a central location. I also think we need to go beyond that and define the "buffer full" behavior and events in a generic way (e.g. in Chromium's implementation, there are a lot of parallels between Resource Timing and Event Timing on that front).
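
As an example of the kind of "buffer full" behavior that could be generalized, Resource Timing already fires an event when its buffer fills up; a sketch of handling it today:

```js
// Existing Resource Timing pattern: react when the buffer fills up.
performance.addEventListener('resourcetimingbufferfull', () => {
  // Make room (or ship the entries somewhere) before new resources are dropped.
  performance.setResourceTimingBufferSize(500);
  performance.clearResourceTimings();
});
```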

I guess the main question is if we want this to be an L2 blocker, as it's a rather large refactoring.

@rniwa

rniwa commented Oct 10, 2018

I don't think it needs to block L2. I think it makes more sense to do it in L3 when we define things in terms of fetch.

@yoavweiss self-assigned this Oct 16, 2018
@igrigorik
Member Author

Ditto, I don't think this is an L2 blocker. Also, in a world where PerfObserver is the new default, we may be able to soft-deprecate the buffer bits. </wishful-thinking>

@yoavweiss modified the milestones: Level 2, Level 3 Oct 20, 2018
@toddreifsteck
Member

Discussed at TPAC 2018:

  • Agreement was reached that the concept of buffering for PerformanceObserver should be removed from L2 for .observe(). We also need to ensure that keeping the buffering in the queue step isn't "broken". @cvazac will take this on.
  • Buffering is needed so that Navigation Timing, Paint Timing, User Timing, Resource Timing, and Long Tasks entries that occur before a script is able to register a PerformanceObserver are not lost. Solving this will need discussion of how buffering should work for each type and how to avoid unneeded overhead; one option may be using headers. Leaving this issue assigned to @yoavweiss to coordinate with others at Google and folks at analytics vendors on how to move this forward.

@npm1
Contributor

npm1 commented Feb 11, 2019

Discussed this today with @yoavweiss and @tdresser. I'd like to work on this by doing the following:

  • Replacing the global buffer with a map entryType->buffer, which probably aligns better with user agent implementations (see the sketch after this list).
  • Adding a flag so that each entryType can specify whether its buffer is available from the performance timeline (performance.getEntries()).
  • Adding a max size for each entryType. For UserTiming entries it might be infinite? For ResourceTiming, I guess we'd keep the method to change this size. But for other types, the max size cannot be changed.
  • When the buffer becomes full, the user agent will no longer buffer any further entries (except for the special processing in ResourceTiming). This is acceptable because these buffers (for future entryTypes) are mostly meant only to support the buffered flag; they are not meant to be a replacement for PerformanceObserver.
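
A hypothetical sketch of what such a per-type registry could look like (names and values are invented for illustration, not taken from the spec):

```js
// entryType -> buffering rules that Performance Timeline would enforce.
const entryTypeRegistry = new Map([
  // availableFromTimeline: whether performance.getEntries() exposes the buffer.
  // maxBufferSize: cap after which the UA stops buffering new entries.
  ['mark',     { availableFromTimeline: true,  maxBufferSize: Infinity }],
  ['measure',  { availableFromTimeline: true,  maxBufferSize: Infinity }],
  ['resource', { availableFromTimeline: true,  maxBufferSize: 250 }], // adjustable via setResourceTimingBufferSize()
  ['longtask', { availableFromTimeline: false, maxBufferSize: 200 }],
]);

// Once a type's buffer is full, further entries are not buffered; observers
// registered via PerformanceObserver still receive them.
function shouldBuffer(entryType, currentBufferLength) {
  const info = entryTypeRegistry.get(entryType);
  return !!info && currentBufferLength < info.maxBufferSize;
}
```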

Does this make sense to others? Ideally I'd work on this after we have (re)published Level 2, because all of these changes should be considered L3 as they are only relevant for the buffered flag. Is anyone working on the Level 2 work? I see that the buffered flag PR is quite old (b763e5c), so if we want to publish using earlier commits then we'd need to pin the commit and then manually revert anything related to the buffered flag.

ehanley324 added a commit to ehanley324/performance-timeline that referenced this issue Apr 2, 2019
@npm1
Contributor

npm1 commented Sep 15, 2020

The work here is done, yay. Closing

@npm1 closed this as completed Sep 15, 2020