Resource processing time #133

Open
yoavweiss opened this Issue Dec 18, 2017 · 5 comments

yoavweiss (Contributor) commented Dec 18, 2017

While talking to people about visual metrics and the reasons it's currently hard to get them from RUM, a recurring theme is that knowing the processing time of resources would be useful for understanding their visual impact.

Use cases I encountered so far:

  • Image processing time would enable developers to estimate when these images were painted to screen. That can be used along with Element Timing, or as a polyfill for Element Timing. Because images are decoded in a progressive manner, we may need a list of events in that case.
  • Font processing time would enable developers to estimate when the fonts were painted to screen.
  • JS processing time would enable developers to take note that the performance impact of JS goes beyond its download. There may be a case in favor of splitting that between parsing (a one-time event) and execution (potentially ongoing). The "ongoing" case has some overlap with LongTasks, so that will require some thought.

Does it make sense to add that to Resource Timing? Or should we try to split it out into a separate spec that hooks into Resource Timing, in a similar way to Server Timing?
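
For illustration, a minimal sketch of how such data might be consumed if it were exposed on Resource Timing entries; the `processingStart`/`processingEnd` fields below are purely hypothetical and not part of any current spec:

```js
// Hypothetical sketch: assumes processing-time fields were added to PerformanceResourceTiming.
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.initiatorType === 'img' || entry.initiatorType === 'script') {
      const network = entry.responseEnd - entry.startTime;             // exists today
      const processing = entry.processingEnd - entry.processingStart;  // hypothetical
      console.log(entry.name, { network, processing });
    }
  }
});
po.observe({ entryTypes: ['resource'] });
```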

tdresser (Contributor) commented Dec 18, 2017

I'm hoping that the Element Timing API will, over time, be extended to handle the first two cases.

When should font processing time end? If this is hung off of Resource Timing, it seems a bit weird to me to measure until it's displayed, but it's quite natural as part of Element Timing.

In theory, long tasks V2 will handle the JS case, if we can sort out attribution.
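
For reference, a minimal sketch of observing long tasks as currently specified; only tasks over 50 ms are reported, and attribution is frame-level rather than per-script, which is the open question here:

```js
const po = new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    console.log('long task:', task.duration.toFixed(1), 'ms');
    for (const attr of task.attribution) {
      // TaskAttributionTiming identifies the frame the work is attributed to,
      // not the individual script that ran.
      console.log('  container:', attr.containerType, attr.containerSrc || '(top-level frame)');
    }
  }
});
po.observe({ entryTypes: ['longtask'] });
```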

nicjansma (Collaborator) commented Dec 19, 2017

For JS processing (parsing/executing) time, LongTasks might not capture the case unless it's over 50ms, correct?

It would be very useful to have stats on each JavaScript resource's parsing time and initial execution time via ResourceTiming. In fact, I was just digging into trying to do this yesterday with https://github.com/danielmendel/DeviceTiming
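
For context, a rough sketch of the DeviceTiming-style approach, where a build step rewrites each script so its first and last statements bracket evaluation of the original code; the names below are illustrative, not DeviceTiming's actual output:

```js
// --- app.js after rewriting (illustrative) ---
var appJsEvalStart = performance.now();   // runs once the whole file has been parsed
(function () {
  /* ...original contents of app.js... */
})();
var appJsEvalEnd = performance.now();

// Parse/compile time can only be approximated, e.g. by comparing the evaluation start
// against the script's Resource Timing entry; engines also parse lazily, so some parse
// work may be deferred into execution.
var rtEntry = performance.getEntriesByName(document.currentScript.src, 'resource')[0];
if (rtEntry) {
  console.log('approx parse:', appJsEvalStart - rtEntry.responseEnd, 'ms',
              'execute:', appJsEvalEnd - appJsEvalStart, 'ms');
}
```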

tdresser (Contributor) commented Dec 19, 2017

Sorry, yes, long tasks will only handle a subset of the JS case.

How high priority is the sub 50ms JS case?

toddreifsteck added this to the Level 3 milestone Jan 18, 2018

colinbendell commented Jan 27, 2018

I think Image/Video parsing time and JS parsing time have different use cases and should probably be split into two different specs.

  1. With media timings (image/video/font) you want to know how long the user was waiting for pixels on the screen. This is a slightly wider use case than Element Timing. That is: from the time the UA knew to draw something, how long did the network take to turn the content around (RT), and then how long did it take to decode and show the content in the UA device's active viewport? While decode time is important, it can be less relevant if the media content is off screen, below the fold, or otherwise covered. Hero Image/Element Timing assumes there is minimal distinction between decode and in-view, but I would argue that this is an important distinction in the real world as a validation metric for content creators. More generally, media content timing should focus on the timings of
    a) discovery (DOM parsing? RT initiator?),
    b) network transfer (RT),
    c) decode & paint, and
    d) viewport visibility: when the pixels first show on the UA's active viewport. (This is important for below-the-fold prioritization and discovery, or even for ascertaining when delays cause content to be painted too late, after the user has already scrolled it out of view - see Medium as a classic use case.)

The owner of the media timing is the visual & creative designer and operations.
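
As a point of reference, a rough sketch of approximating the a)-d) phases above with APIs available today (Resource Timing, `img.decode()`, IntersectionObserver); the decode and visibility steps are only proxies, which is the gap being discussed. The `img.hero` selector is an assumed hero image:

```js
const img = document.querySelector('img.hero');   // assumed hero image

// a) discovery + b) network transfer: Resource Timing
const rt = performance.getEntriesByName(img.currentSrc, 'resource')[0];
if (rt) console.log('fetchStart:', rt.fetchStart, 'responseEnd:', rt.responseEnd);

// c) decode: decode() resolves once the image data is ready to paint (this forces
// a decode rather than observing the one the renderer already performed)
const decodeStart = performance.now();
img.decode().then(() => {
  console.log('decode took ~', performance.now() - decodeStart, 'ms');
});

// d) viewport visibility: fires when the image first intersects the active viewport
new IntersectionObserver((entries, observer) => {
  for (const e of entries) {
    if (e.isIntersecting) {
      console.log('first visible at ~', e.time, 'ms');
      observer.disconnect();
    }
  }
}).observe(img);
```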

  2. With JS parsing & executing timing, the developer is focused on the variability of the decode as a critical-path item for overall page responsiveness and usability. The follow-on metrics of JS execution are generally handled by LongTasks and UserTiming. The owner of this timing is the front-end developer. Using these timing results, I would expect her to take action by optimizing JS code & bundlers to address device variances.

JS parsing might belong more naturally in RT, while the media timings might make sense to spin off separately.

tdresser (Contributor) commented Jan 29, 2018

> Hero Image/Element Timing assumes there is minimal distinction between decode and in-view, but I would argue that this is an important distinction in the real world as a validation metric for content creators.

My hope is that while this will be true for the initial version of element timing, we'll add more granular timing information to element timing in successive versions.
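
For reference, a minimal sketch of the shape Element Timing later took (an element annotated with an `elementtiming` attribute, observed via PerformanceObserver), where `renderTime` is the more granular signal being discussed:

```js
// Markup (illustrative): <img src="hero.jpg" elementtiming="hero-image">
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.identifier, 'loadTime:', entry.loadTime, 'renderTime:', entry.renderTime);
  }
});
po.observe({ type: 'element', buffered: true });
```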
