Clarity on buffered-until-onload (and buffers, buffers, everywhere) #78
Comments
Actually, I think a lot of my questions hinge on 3: whether or not PO has a separate buffer from PT. If PO doesn't have a separate buffer -- it just relies on the underlying spec's behavior -- then: …

Also, if this is the case, LongTasks won't work with … I guess this is all just an elaborate message saying that I'm confused.

The problem with only buffering ST entries until `onload` …
@nicjansma great feedback and questions. @cvazac I think we missed a few cases in #76...

Not quite. We left this flexible, and upstream specs that call the "queue entry" step can define the right behavior. For example, Server Timing and Long Tasks should set the … As a general pattern: we cannot queue all entry types by default and indefinitely, as this requirement would be at odds with our goal of allowing the performance timeline to scale to hundreds of events per second.

No, there is no ability to clear the global buffer. There are many different consumers; if we allow clearing, we're back to square one with race conditions.

There are no updates needed to RT. RT has its own global buffer, and that's something we're not planning to change.

As specced, yes.

This is undefined. We do need to specify a cap on how many entries the global timeline is willing to buffer, and its behavior for when it's full...

Paint Timing can't get any new entries after onload: by definition, those events fire before onload. In the case of Server Timing, events that are queued before the end of onload will be stored in the perf timeline.

Stepping back, I think a lot of the above hinges on assumptions about "clearing buffers," which is not something we discussed or want... I think. Does the above help clarify the lay of the land?
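The capped global buffer mentioned above could be modeled roughly like this. Note this is a toy sketch, not spec text: the cap value and the drop-newest-when-full policy are assumptions for illustration, since the comment above says both are still undefined.

```javascript
// Hypothetical model of a capped global performance timeline buffer.
// The cap value (3 here) and the "stop buffering when full" policy are
// illustrative assumptions -- the spec had not yet defined either.
class GlobalTimeline {
  constructor(cap) {
    this.cap = cap;
    this.entries = [];
    this.dropped = 0; // entries that arrived after the buffer filled
  }
  queueEntry(entry) {
    if (this.entries.length >= this.cap) {
      this.dropped++; // when full, new entries are not buffered globally
      return;
    }
    this.entries.push(entry);
  }
}

const timeline = new GlobalTimeline(3);
['a', 'b', 'c', 'd'].forEach(name => timeline.queueEntry({ name }));
console.log(timeline.entries.length, timeline.dropped); // 3 1
```

Whatever policy the spec eventually picks, the key property igrigorik describes holds: the global buffer is bounded, so consumers cannot rely on it retaining everything indefinitely.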
@cvazac I think we have a bug in the processing model. We should move step 7 above step 4, as that short-circuits our add-to-global-buffer logic if the PO is queued to execute.
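The step-ordering bug described in that comment can be sketched as follows. The step numbers come from the comment above; the surrounding logic is a paraphrase for illustration, not the spec's actual "queue entry" text:

```javascript
// Sketch of the ordering bug: if the early-return for an already-queued
// observer task (step 7) runs before the add-to-global-buffer step
// (step 4), later entries are silently skipped from the global buffer.
function makeQueue({ fixed }) {
  const state = { taskQueued: false, globalBuffer: [], observerBuffer: [] };
  state.queueEntry = function (entry) {
    state.observerBuffer.push(entry);
    if (fixed) {
      // Fixed order: add to the global buffer first, then short-circuit.
      state.globalBuffer.push(entry);
      if (state.taskQueued) return;
      state.taskQueued = true;
    } else {
      // Buggy order: short-circuit before the global-buffer step runs.
      if (state.taskQueued) return;
      state.taskQueued = true;
      state.globalBuffer.push(entry);
    }
  };
  return state;
}

const buggy = makeQueue({ fixed: false });
const good = makeQueue({ fixed: true });
for (const q of [buggy, good]) ['e1', 'e2'].forEach(n => q.queueEntry(n));
console.log(buggy.globalBuffer.length, good.globalBuffer.length); // 1 2
```

With the buggy ordering, only the first entry after a task is queued reaches the global buffer; observers still see everything, which is why the bug is easy to miss.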
@nicjansma opened #81 -- did I miss anything from this thread? Should we close this and iterate there?
Let's merge threads in #81, closing. |
(Sorry for the delayed reply.) Thanks for the clarification! Yes, #81 looks good.
Hi everyone!

We've recently integrated the `buffered: true` flag for the PerformanceObserver, but I'm a bit confused about its behavior.

From our discussions in the past, I believe the intention was that `buffered: true` would give you all buffered PO entries up to the point that `PerformanceObserver.observe({ buffered: true })` is called. I think this is captured in the spec correctly, and it's an awesome addition for RUM.

I think we had also discussed that some of the specs might clear that PO buffer at `onload`. For example, with ResourceTiming we were discussing that at `onload`, its PO buffer would be cleared. So if you called `PerformanceObserver.observe({ buffered: true })` after `onload`, you would not get any ResourceTimings that happened before `onload`.

I know we just merged #76 and probably just haven't updated ResourceTiming to reflect this behavior, but I think that behavior should be captured either in PerformanceTimeline or ResourceTiming.
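To make the scenario above concrete, here is a toy model of the behavior being described -- not the real API, just a sketch of a per-spec PO buffer that is replayed when `observe({ buffered: true })` is called, and that (under the proposal being questioned) ResourceTiming would clear at `onload`:

```javascript
// Toy model (NOT the real PerformanceObserver API): a per-spec PO
// buffer, replayed on observe({buffered: true}) and cleared at onload
// per the proposal under discussion. All names here are illustrative.
class ToyEntrySource {
  constructor() { this.poBuffer = []; this.observers = []; }
  dispatch(entry) {
    this.poBuffer.push(entry);
    for (const cb of this.observers) cb([entry]);
  }
  observe(cb, { buffered = false } = {}) {
    if (buffered && this.poBuffer.length) cb(this.poBuffer.slice());
    this.observers.push(cb);
  }
  onload() { this.poBuffer = []; } // the proposed clearing at onload
}

const rt = new ToyEntrySource();
rt.dispatch({ name: '/app.js' }); // a resource fetched before onload
rt.onload();

const received = [];
rt.observe(entries => received.push(...entries), { buffered: true });
console.log(received.length); // 0: the PO buffer was cleared at onload

const rt2 = new ToyEntrySource();
rt2.dispatch({ name: '/app.js' });
const early = [];
rt2.observe(entries => early.push(...entries), { buffered: true });
console.log(early.length); // 1: observed before onload, buffer intact
```

The second case is the one RUM scripts care about: a late-loading analytics script that calls `observe` after `onload` would, under this clearing proposal, miss everything from before `onload`.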
Along those lines, I have questions on that behavior. I think I understand the answer to some of these, but I'm not sure these are all explained in the specs:
1. Does the PO buffer behavior apply to all specs, or is it up to each spec to define how long that buffer lasts? For example, do all specs clear their PO buffer at `onload` (like RT/ST, which have PT buffer limits), or can some fill the buffer indefinitely (like UT, which has no PT buffer limit)? If the former, we should describe this behavior in the PerformanceTimeline spec. If the latter, we should probably add the details to each spec, and possibly briefly describe the differences in the PerformanceTimeline spec as well. My preference is to allow each spec to specify its PO buffer-clearing behavior, since they each have different PT buffer behaviors.
2. What happens if you call `PerformanceObserver.observe({ buffered: true })` after `onload` for a spec that clears its buffer at `onload`? You don't get any buffered entries, just new ones going forward, correct? We should point this out in the spec.
3. The buffer we're talking about for PerformanceObserver is different from the PerformanceTimeline buffer, right? Does any of this behavior with the PerformanceObserver buffer affect the PerformanceTimeline buffer? I.e., I want to make sure that ResourceTiming entries in the PerformanceTimeline wouldn't also get cleared at `onload` just because the PerformanceObserver's buffer is cleared (which would be a regression from today's behavior).
4. What happens if someone calls `.clearResourceTimings()` before `onload`? This clears the PerformanceTimeline buffer. Does that also affect the PerformanceObserver buffer?
5. What happens if the PerformanceTimeline buffer for ResourceTiming/ServerTiming (e.g. 150 entries) reaches capacity? Does the PerformanceObserver buffer still fill up without bounds until `onload`? That would be my preference.

I've heard a bit of confusion on how long new specs like ServerTiming and PaintTiming will keep their entries -- notably that 1) after `onload`, if no PO is registered, the PT will not get any more entries, or 2) at `onload`, the PT buffer will get cleared. Both of those would be a challenge for RUM.

I also made this small chart for clarity on how each spec handles buffering and what I think its behavior should be:

[Chart comparing per-spec buffering behavior, including `onload` and `performance.timing` columns, not preserved in this extract.]
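The questions above about `.clearResourceTimings()` and the PT buffer cap could be sketched as follows, modeling the questioner's *preferred* semantics (this is not spec text): the PerformanceTimeline buffer and the PerformanceObserver buffer are fully independent, so clearing the timeline or hitting its cap leaves the PO buffer untouched. The 150-entry figure is RT's default timeline cap mentioned above; the `ToyResourceTiming` name is invented for the sketch.

```javascript
// Toy model of the preferred semantics in questions 4 and 5:
// timeline buffer and PO buffer are independent stores.
class ToyResourceTiming {
  constructor(timelineCap = 150) {
    this.timelineCap = timelineCap;
    this.timelineBuffer = []; // backs performance.getEntriesByType()
    this.poBuffer = [];       // replayed for observe({buffered: true})
  }
  addEntry(entry) {
    if (this.timelineBuffer.length < this.timelineCap) {
      this.timelineBuffer.push(entry);
    }
    this.poBuffer.push(entry); // keeps filling until onload regardless
  }
  clearResourceTimings() {
    this.timelineBuffer = []; // PO buffer deliberately untouched
  }
}

const res = new ToyResourceTiming(2); // tiny cap for the example
res.addEntry({ name: '/a' });
res.addEntry({ name: '/b' });
res.addEntry({ name: '/c' });         // over the timeline cap
res.clearResourceTimings();
console.log(res.timelineBuffer.length, res.poBuffer.length); // 0 3
```

Under these semantics, a RUM script using `observe({ buffered: true })` would still see all three entries even though the page cleared the timeline and the third entry never fit in it.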
(I'm willing to help out with a spec PR once there's clarity on the above).