Utility for checking a metric passes the threshold? #2

Closed · addyosmani opened this issue Apr 10, 2020 · 6 comments

@addyosmani (Member)

When I first started using the library, I was a little surprised to see it didn't implement a canonical helper for determining if a metric passed the Core Web Vitals thresholds.

While our documentation on web.dev captures what the thresholds are, I can imagine there is value in developers not needing to hardcode threshold values themselves (e.g. imagine if they got them subtly wrong).

I currently implement custom scoring logic in the WIP web vitals extension, but thought I'd throw this idea over the fence to see what you thought.

I'll defer on the API shape if you think this would be helpful, but you could loosely imagine...

getCLS((result) => logCurrentValue('cls', result.value, result.passesThreshold, result.isFinal));
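
Loosely, the flag could be derived from the documented "good" thresholds, something like this (a sketch only; the names below are hypothetical, not an existing API):

// Hypothetical sketch: deriving passesThreshold from the documented
// Core Web Vitals "good" thresholds (CLS 0.1, FID 100ms, LCP 2500ms).
const GOOD_THRESHOLDS = {cls: 0.1, fid: 100, lcp: 2500};
const passesThreshold = (name, value) => value <= GOOD_THRESHOLDS[name];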

addyosmani added the enhancement label on Apr 10, 2020
@philipwalton (Member)

This is an interesting idea! If it's possible to add this without increasing the file size too much, I agree it'd be a nice addition. (It may take a slight refactor since so much of the reporting code is shared by all metrics).

@Zizzamia (Contributor)

What would happen when a result passes or misses the threshold? Is the idea more to flag good vs. bad results? Like an OKR where this quarter the share of good results went from 10% to 35%?

It is definitely interesting to help the developer get a quick sense of whether a result is good enough 😄

@philipwalton (Member)

Thinking about this a bit more, one downside of reporting a pass/fail value on the metric is that, if Google decides to change the metric thresholds, that may require a breaking change in this library (may, not 100% sure).

We also don't yet have hard, consistent thresholds for all of the metrics (e.g. the non-core Web Vitals like FCP and TTFB), and I'm not sure I'd want the API to differ between core and non-core vitals.

@Zizzamia (Contributor) commented May 4, 2020

How about a threshold set as an optional value for each metric? In the README, you could recommend the optimal value for the developer to benchmark against, while still giving them the opportunity to change it when they prefer.

We could see this as a budget for those metrics, where the developer can customize the value they are pushing for. A rough sketch of what that might look like is below.
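
Purely as a sketch of the shape (neither the threshold option nor the withinBudget field exist in the library; both are invented here for illustration):

// Hypothetical API sketch: a developer-supplied budget per metric.
// Invented names: the threshold option and result.withinBudget.
getCLS((result) => {
  console.log('cls', result.value, result.withinBudget);
}, {threshold: 0.1});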

@philipwalton (Member)

I've gone back and forth on whether we should support this, but ultimately I haven't been able to come up with an API that works well. Since the thresholds are well-documented publicly, I think it makes sense for developers who want to track threshold compliance to do so in their own code (a sketch follows the list below).

Some of the reasons for my thinking:

  • Thresholds might change over time, and a user might not necessarily want to update their version of this library just to get new thresholds
  • Not all of the metrics have published thresholds, nor do we plan on publishing thresholds for all metrics, so adding that would make the interface for the Metric object inconsistent (e.g. TTFB is unlikely to have a published threshold)
  • Including the thresholds adds extra weight to the library (albeit small) for many users who may not need it
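
For example, tracking compliance in your own code is only a few lines. A minimal sketch, assuming the "good" thresholds currently documented on web.dev:

import {getCLS, getFID, getLCP} from 'web-vitals';

// "Good" thresholds as documented on web.dev
// (CLS is unitless; FID and LCP are in milliseconds).
const GOOD = {CLS: 0.1, FID: 100, LCP: 2500};

function report(metric) {
  const passes = metric.value <= GOOD[metric.name];
  console.log(metric.name, metric.value, passes ? 'good' : 'needs work');
}

getCLS(report);
getFID(report);
getLCP(report);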

@philipwalton (Member)

Coming back to this issue after a few years, I think most of the reasons I decided not to implement this no longer apply, so now I think it's worth reconsidering:

  • Thresholds might change over time, and a user might not necessarily want to update their version of this library just to get new thresholds

In the past two years none of the published thresholds for any metrics have changed, and I don't anticipate them changing much (if at all) in the future. So this no longer seems like a reason not to have this feature.

  • Not all of the metrics have published thresholds, nor do we plan on publishing thresholds for all metrics, so adding that would make the interface for the Metric object inconsistent (e.g. TTFB is unlikely to have a published threshold)

Since I wrote this, we now do have published thresholds for all of the metrics in this library, so there would be no inconsistency between metric interfaces.

  • Including the thresholds adds extra weight to the library (albeit small) for many users who may not need it

After experimenting with a basic implementation, I found the size difference is very small (~100 bytes), so I think it's probably nothing to worry about.
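
A compact version is roughly this shape (a sketch only; the names here are illustrative, not a final API):

// Sketch: a [good, poor] threshold table (values as documented on
// web.dev) and a helper that buckets a value into a rating.
const thresholds = {CLS: [0.1, 0.25], FID: [100, 300], LCP: [2500, 4000]};

function getRating(name, value) {
  const [good, poor] = thresholds[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}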
