I implemented the DCLS code provided here and tried it on a few pages. I got values that ranged from 0.5 to 1.5. My initial reaction was to wonder what counted as a good score and what counted as a bad score. It sounds like DCLS could get very large, potentially in the thousands.
I think it would be better if the score were on a fixed scale, say 0 to 100. That would make it easier to judge whether layout instability is a priority issue. Since it's not a time-based metric, it might be better to invert it so a score of 100 is perfect (like Lighthouse).
(I read the previous discussion about measuring Layout INstability vs Layout STability. Sorry if this rehashes that debate.)
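(For context, here is a minimal sketch of how a DCLS-style score can be accumulated from Layout Instability API entries. It assumes the `layout-shift` entry type and a plain unbounded sum of per-entry values; the names are illustrative, not the exact DCLS code referenced above.)

```ts
// Minimal sketch, not the exact DCLS code referenced above.
// Assumes the Layout Instability API ("layout-shift" entries),
// where each entry carries an unbounded per-shift `value`.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let cumulativeScore = 0;

const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
    // Shifts that immediately follow user input are typically excluded.
    if (!entry.hadRecentInput) {
      cumulativeScore += entry.value;
    }
  }
});

// `buffered: true` also picks up shifts that occurred before the observer was created.
observer.observe({ type: "layout-shift", buffered: true });
```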
Having an upper bound is a good idea. We actually do limit the CLS score to 10.0 when we report it to UKM (data source for the Chrome User Experience Report). We could impose a similar limit for the Web Perf API. Differentiating beyond 10.0 is probably not useful.
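(Sketch only: one way such a cap could be applied, and how an upper bound would make the fixed/inverted scale suggested above a simple mapping. The constant and function names are hypothetical, and whether the Web Perf API itself should enforce the cap is the open question here.)

```ts
// Hypothetical cap mirroring the 10.0 limit described for UKM reporting.
const MAX_SCORE = 10.0;

function clampScore(rawScore: number): number {
  return Math.min(rawScore, MAX_SCORE);
}

// With an upper bound in place, the inverted 0-100 scale suggested above
// becomes a simple mapping: 100 = no shifts, 0 = at or beyond the cap.
function toInvertedScale(rawScore: number): number {
  return Math.round(100 * (1 - clampScore(rawScore) / MAX_SCORE));
}
```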