
Add "performance warning" report type? #118

Closed
RByers opened this issue Aug 1, 2018 · 6 comments

@RByers (Collaborator) commented Aug 1, 2018

From this Twitter discussion, perhaps there should be a generic performance warning report type as well (e.g. like many of the things Chrome reports as "violations" in the devtools console)?

@tdresser suggests that PerformanceObserver is really only appropriate for things where you want to know WHEN they happened, and that ReportingObserver is probably a better fit for things like "long forced reflow taking 500ms". Exposing events like that somehow makes sense to me, though I'm not clear on exactly why long event handlers and generic long tasks should fit better in PerformanceObserver while long reflows fit better in ReportingObserver. Maybe PerformanceObserver should be for things whose semantics we can describe precisely, while ReportingObserver carries more implementation-defined generic warnings (like intervention reports)?

If there's at least one major analytics provider interested in partnering with us to ship a product that includes collecting and exposing such warnings to developers, then this seems worth exploring to me. If we decide ReportingObserver is indeed the right place, then I suspect we'd want to do some work to better deal with potentially large buffers - e.g. having a fixed-size buffer per report type (so performance warnings don't swamp out deprecation reports) and using a priority queue to expire the least severe warnings instead of just the oldest.
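To make the split concrete, here is a rough sketch of how it might look, assuming a hypothetical "performance" report type were added to ReportingObserver (only types like "deprecation" and "intervention" exist today); the PerformanceObserver half uses the real "longtask" entry type:

```ts
// Today: long tasks arrive via PerformanceObserver, with precise timing semantics.
const perfObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`long task: ${entry.duration}ms starting at ${entry.startTime}`);
  }
});
perfObserver.observe({ entryTypes: ["longtask"] });

// Hypothetical: implementation-defined warnings (e.g. "long forced reflow")
// delivered as reports. The "performance" type below does not exist anywhere;
// it is only here to illustrate the shape of the proposal.
const reportObserver = new ReportingObserver(
  (reports) => {
    for (const report of reports) {
      console.warn(report.type, report.body);
    }
  },
  { types: ["performance"], buffered: true }
);
reportObserver.observe();
```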

@igrigorik (Member) commented Aug 1, 2018

@tdresser suggests that PerformanceObserver is really only appropriate for things where you want to know WHEN they happened, and that ReportingObserver is probably a better fit for things like "long forced reflow taking 500ms".

I'm not sure I agree with that. It is the explicit goal and intent of Performance Timeline to surface performance-related events to the application with all the necessary information for them to be actionable: when they happened, why they happened, etc. There is no need to fragment this data across multiple observers, and I think doing so would be counterproductive.

I think it's also worth highlighting that the "big" use case for the Reporting API is to enable out-of-band delivery (and header registration) for certain types of reports that may not be accessible via JS at all. Whatever we consider exposing via JS, we should first ask whether it makes sense to expose it through the OOB reporting mechanism — my hunch is that in the majority of cases the answer there should be "no".
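For readers unfamiliar with that out-of-band path: registration happens through a response header rather than through JS. A minimal sketch, using the Report-To header from the Reporting draft of that era (newer drafts have since moved to Reporting-Endpoints) and a made-up collector URL, served here from a bare Node handler:

```ts
import { createServer } from "node:http";

createServer((req, res) => {
  // Registers an OOB endpoint group named "default"; once this header is set,
  // the UA queues matching reports and POSTs them to the collector with no
  // page JavaScript involved. "reports.example" is a placeholder.
  res.setHeader(
    "Report-To",
    JSON.stringify({
      group: "default",
      max_age: 10886400,
      endpoints: [{ url: "https://reports.example/upload" }],
    })
  );
  res.end("<!doctype html><title>demo</title>");
}).listen(8080);
```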

/cc @yoavweiss @toddreifsteck

@ebidel commented Aug 1, 2018

Things are a bit muddy because there's crossover among a lot of these browser-generated warnings. For example, feature policy violations and interventions can also be related to performance issues, but I don't think those should surface in a PO.

As mentioned in #119 (comment), there's a growing list of APIs that are required to capture useful information about a site. I'd love it if we got to a place where developers wouldn't wonder, "Which observer/API do I use to capture X?".

... we should first ask whether it makes sense to expose it through the OOB reporting mechanism — my hunch is that in the majority of cases the answer there should be "no".

I'd argue that having both in-band and OOB reporting mechanisms is useful (where possible). For example, libraries won't be able to set up headers, so they'd need a ReportingObserver to catch and send their own reports. But having the option to auto-send reports to a backend is very convenient. Set it and forget it.

I wonder how many sites that use something like PerformanceObserver actually send that information to a backend. If that number is high, it would be a good signal for deciding whether to integrate with the Reporting API.
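A minimal sketch of the "catch + send" pattern a library could use when it can't set headers itself (the collector URL is made up, and a real analytics library would sample and trim far more aggressively):

```ts
const COLLECTOR = "https://collector.example/reports";

const observer = new ReportingObserver(
  (reports) => {
    // Report bodies are platform objects whose fields live on the prototype,
    // so a real library would copy out the specific body fields it cares
    // about; type and url serialize fine as-is.
    const payload = JSON.stringify(
      reports.map((r) => ({ type: r.type, url: r.url }))
    );
    navigator.sendBeacon(COLLECTOR, payload);
  },
  { buffered: true }
);
observer.observe();
```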

@spanicker

Quick observation: Performance Timeline / Observer is historically un-opinionated: it exposes timing of interesting / relevant things (not "warnings"). The Long Tasks API (inadvertently) muddies the water a bit by surfacing "problem signals / warnings" that should be paid attention to, so now we have a precedent. I'll defer to Ilya, though I would have expected Performance Timeline / Observer to remain un-opinionated and for us to find a different mechanism for surfacing clear perf problems as warnings.

@igrigorik (Member)

Things are a bit muddy because there's crossover among a lot of these browser-generated warnings. For example, feature policy violations and interventions can also be related to performance issues, but I don't think those should surface in a PO.

Agreed, because those are developer-set policy violations, or UA-initiated interventions.

As mentioned in #119 (comment), there's a growing list of APIs that are required to capture useful information about a site. I'd love it if we got to a place where developers wouldn't wonder, "Which observer/API do I use to capture X?".

I understand your motivation, but I'm not convinced that an omnibus observer for all events is the right solution here either. This is a discussion we need to have at a much higher level, in the HTML spec: there's a proliferation of event handlers; is there a way to logically group them and make them easier to observe?

ReportingObserver could be a solution for some of them, but much of what's outlined here so far is well outside the initial scope and intent of this API. The Reporting API has OOB semantics that don't map cleanly onto, and aren't sufficient or necessary for, much of the above.

I'd argue that having both in-band and OOB reporting mechanisms is useful (where possible). For example, libraries won't be able to set up headers, so they'd need a ReportingObserver to catch and send their own reports. But having the option to auto-send reports to a backend is very convenient. Set it and forget it.

This space is much more complex than simply "get event from observer, send it to backend". Analytics providers aggregate and preprocess events, inject custom instrumentation to pull out custom events and metrics, and add many other layers of smarts before reporting data to their collectors.

Reporting API is specifically scoped to ignore all of the above, because the core use cases we target are UA-initiated events that analytics providers cannot / may not be able to collect themselves. Opening up the scope to what you're outlining above opens a large can of complexity that we should think about very carefully first — we don't need to reinvent fetch(); it works, and it works well.

Quick observation: Performance Timeline / Observer is historically un-opinionated: it exposes timing of interesting / relevant things (not "warnings"). The Long Tasks API (inadvertently) muddies the water a bit by surfacing "problem signals / warnings" that should be paid attention to, so now we have a precedent. I'll defer to Ilya, though I would have expected Performance Timeline / Observer to remain un-opinionated and for us to find a different mechanism for surfacing clear perf problems as warnings.

We never set any explicit expectations or requirements that PerfTimeline is scoped to "un-opinionated" events. The intent for PerfTimeline is to be used as a channel for emitting & receiving performance-related events and metrics. Warnings are within scope, and I would prefer that we not fragment this data across multiple observers.

@yoavweiss (Contributor)

This space is much more complex than simply "get event from observer, send it to backend". Analytics providers aggregate and preprocess events, inject custom instrumentation to pull out custom events and metrics, and add many other layers of smarts before reporting data to their collectors.

Reporting API is specifically scoped to ignore all of the above, because the core use cases we target are UA-initiated events that analytics providers cannot / may not be able to collect themselves.

I agree these are the main differences between Reporting API and PerfObserver. PerfObserver entries need to be processed client-side before beaconing them back up. One major purpose of that processing is lossy compression - analytics providers drop irrelevant information in order to make sure their beacons don't get overly large. (aside - a native Compression API can significantly help here)

With that said, there may be overlap between the two where some client-side functionality can be replaced by sending the relevant entries to the server, while applying a standard compression scheme to them.

This seems like a V2 feature to me, if we do it at all, as I'm not sure of its value. I doubt we could enable fully JS-free analytics (there will always be info that's only available client-side), so the benefits of enabling JS-free collection for some things may be marginal.
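As an illustration of the client-side processing described above (not anyone's actual pipeline): summarize longtask entries down to an aggregate, then compress the beacon with the now-available CompressionStream API before upload; the collector URL is made up, and the collector would need to expect gzipped payloads.

```ts
// Gzip a string into a Blob using the Compression Streams API.
async function gzip(text: string): Promise<Blob> {
  const stream = new Blob([text]).stream().pipeThrough(new CompressionStream("gzip"));
  return new Response(stream).blob();
}

new PerformanceObserver(async (list) => {
  const entries = list.getEntries();
  // Lossy step: ship an aggregate instead of every entry.
  const summary = {
    count: entries.length,
    totalMs: entries.reduce((sum, e) => sum + e.duration, 0),
    worstMs: Math.max(...entries.map((e) => e.duration)),
  };
  navigator.sendBeacon("https://collector.example/perf", await gzip(JSON.stringify(summary)));
}).observe({ entryTypes: ["longtask"] });
```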

/cc @nicjansma @cvazac

@clelland (Contributor)

This is a potentially useful space -- the core Reporting API no longer defines any report types, though. The best path forward for this idea would be to start an explainer in a new WICG repository to float the idea. Integration with reporting should be pretty straightforward; see examples such as https://wicg.github.io/crash-reporting/ and https://www.w3.org/TR/CSP3/#reporting.
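Purely as illustration of what such an explainer might pin down first (every field and name here is invented, loosely modeled on how crash-reporting defines its report body):

```ts
// Hypothetical shape only; nothing like this is specified anywhere yet.
interface PerformanceWarningReportBody {
  // A warning identifier, e.g. "forced-reflow" or "long-event-handler" (made up).
  reason: string;
  // How long the offending work took, in milliseconds.
  durationMs: number;
  // Script location the UA attributes the work to, if known.
  sourceFile?: string;
  lineNumber?: number;
  columnNumber?: number;
}
```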
