
Finalize assignments: Chapter 12. Mobile web #14

Closed
3 tasks done
rviscomi opened this issue May 21, 2019 · 29 comments

@rviscomi
Member

rviscomi commented May 21, 2019

Section: II. User Experience
Chapter: 12. Mobile web
Authors: @slightlyoff @OBTo
Reviewers: @HyperPress @AymenLoukil

Due date: To help us stay on schedule, please complete the action items in this issue by June 3.

To do:

  • Assign subject matter experts (coauthors)
  • Finalize peer reviewers
  • Finalize metrics

Current list of metrics:

  • Tap targets
    • Let's tackle this through [1], with the frequency being the number of offending elements on each page
  • Legible font size. Analyzing this against what Lighthouse deems an acceptable % of legible text is fine
  • Proper font contrast
    • Please tackle like "Tap Targets" above
  • Mobile configuration split - separate mobile and desktop sites, responsive site, dynamically served content.
  • % of sites preventing users from scaling the viewport
  • % of sites with a meta viewport at all
  • % of sites containing any CSS breakpoints <= 600px
  • % of sites locking display orientation
  • % of sites preventing pasting into password fields
  • % of sites making NO permission requests while loading. Sites should only make these upon a user interaction like a click.
  • For each of the following, what % of sites make this permission request while loading: Notifications, Geolocation, Camera, Microphone
  • # of links or buttons (ideally any element with a click listener attached) only containing an icon [1]
    • This can be tested by checking whether the button contains only an SVG, or only a single character (font icons)
  • How well are sites using native features on the web to simplify a user's job:
    • What is the penetration for each of the following input types [1]
      • color, date, datetime-local, email, month, number, range, reset, search, tel, time, url, week, datalist
      • % of sites using ANY of the above input types
    • Penetration for each of the following attributes [1]
      • autocomplete, min or max, pattern, placeholder, required, step
      • % of sites using ANY of the above attributes (besides placeholder and required)
  • For sites which have a document event listener triggering on a scroll (touchstart, wheel, etc), how many are using passive event listeners
  • % of sites that send more JS than the size of the viewport ("Web Bloat Score") per pageload
  • number/fraction of sites specifying a webapp manifest
  • number of sites registering a Service Worker
  • cumulative layout shift

[1] The best way to both analyze and display these pieces of data is through a frequency distribution graph. With this we can find out both how big an issue this tends to be for the average site and what the global trends are.
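A couple of the viewport metrics above (meta viewport presence, scaling prevention) could be approximated with a static check of each page's HTML. Here is a rough Python sketch under that assumption — the function and regexes are illustrative, not HTTP Archive tooling, and pages that configure the viewport dynamically via JavaScript would be missed:

```python
import re

# Hypothetical helper (not HTTP Archive code): classify a page's viewport
# configuration from its raw HTML.
VIEWPORT_RE = re.compile(r'<meta[^>]+name=["\']viewport["\'][^>]*>', re.IGNORECASE)
CONTENT_RE = re.compile(r'content=["\']([^"\']*)["\']', re.IGNORECASE)

def viewport_flags(html):
    """Return (has_meta_viewport, prevents_scaling) for a page's HTML."""
    match = VIEWPORT_RE.search(html)
    if not match:
        return (False, False)
    content = CONTENT_RE.search(match.group(0))
    props = {}
    if content:
        for part in content.group(1).split(','):
            if '=' in part:
                key, value = part.split('=', 1)
                props[key.strip().lower()] = value.strip().lower()
    prevents = props.get('user-scalable') in ('no', '0')
    try:
        # maximum-scale values of 1.0 or below also block pinch-zoom.
        prevents = prevents or float(props.get('maximum-scale', 'inf')) <= 1.0
    except ValueError:
        pass
    return (True, prevents)
```

Aggregating these flags over a crawl would give the two "% of sites" numbers directly.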

👉 AI (@slightlyoff @OBTo): Peer reviewers are trusted experts who can support you when brainstorming metrics, interpreting results, and writing the report. Ideally this chapter will have multiple reviewers who can promote a diversity of perspectives. You currently have 1 peer reviewer.

👉 AI (@slightlyoff @OBTo): Finalize which metrics you might like to include in an annual "state of mobile web" report powered by HTTP Archive. Community contributors have initially sketched out a few ideas to get the ball rolling, but it's up to you, the subject matter experts, to know exactly which metrics we should be looking at. You can use the brainstorming doc to explore ideas.

The metrics should paint a holistic, data-driven picture of the mobile web landscape. The HTTP Archive does have its limitations and blind spots, so if there are metrics out of scope it's still good to identify them now during the brainstorming phase. We can make a note of them in the final report so readers understand why they're not discussed and the HTTP Archive team can make an effort to improve our telemetry for next year's Almanac.

Next steps: Over the next couple of months analysts will write the queries and generate the results, then hand everything off to you to write up your interpretation of the data.

Additional resources:

@rviscomi rviscomi transferred this issue from HTTPArchive/httparchive.org May 21, 2019
@rviscomi rviscomi added this to the Chapter planning complete milestone May 21, 2019
@rviscomi rviscomi added this to TODO in Web Almanac 2019 via automation May 21, 2019
@rviscomi rviscomi changed the title [Web Almanac] Finalize assignments: Chapter 12. Mobile web Finalize assignments: Chapter 12. Mobile web May 21, 2019
@rviscomi rviscomi moved this from TODO to In Progress in Web Almanac 2019 May 21, 2019
@rviscomi
Member Author

Added @OBTo as a coauthor

@foxdavidj
Contributor

Here are some metrics I've been thinking about:

  1. This is a subset of viewport scaling, but how many sites prevent users from zooming in?
  2. Pasting into password fields
  3. Sites not making permission requests for anything upon a page load. I feel this metric could be tweaked a bit still to be more impactful.
  4. How well are sites using new(ish) native features on the web to simplify a user's job:
    • How many sites use new input fields (number, email, etc.)? How many specify the autocomplete attribute? Use placeholders?
    • Penetration of new usability elements like <datalist>. Need help forming a list of these.
  5. For sites which have document event listeners triggering on a scroll (touchstart, wheel, etc), how many are using passive event listeners
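The input-type penetration idea in item 4 could be tallied across a corpus of page HTML roughly as follows. Everything here (function name, the type list drawn from the thread) is a hypothetical sketch, not actual analysis code:

```python
import re
from collections import Counter

# Hypothetical sketch: count how many pages in a corpus use each of the
# mobile-friendly <input> types discussed in this thread.
MOBILE_FRIENDLY_TYPES = {
    'color', 'date', 'datetime-local', 'email', 'month', 'number',
    'range', 'search', 'tel', 'time', 'url', 'week',
}

def input_type_counts(pages):
    """Count pages using each mobile-friendly input type at least once."""
    counts = Counter()
    for html in pages:
        found = {t.lower()
                 for t in re.findall(r'<input[^>]+type=["\']?([\w-]+)', html, re.I)}
        for input_type in found & MOBILE_FRIENDLY_TYPES:
            counts[input_type] += 1
    return counts
```

The same pattern would extend to attribute penetration (autocomplete, placeholder, required, and so on) by swapping the regex.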

I'd still like to add a few more metrics... but giving proper context to all of them is what matters most. One of my ideas for doing so is to show breakdowns of these metrics per industry. Looking at stats across the board is nice... but it's more impactful to see how others in my industry (my competitors) are doing.

Thoughts?

I'll also start reaching out to potential reviewers a bit later this week.

@foxdavidj
Contributor

I proposed this for the #9 (perf chapter) as well, but I think it'd be really valuable to have a quick Google Meet sometime in the next few weeks to bounce ideas off each other or just get on the same page. We'd accomplish a lot in just 20 minutes. Let me know.

@rviscomi
Member Author

rviscomi commented Jun 4, 2019

@slightlyoff @OBTo @HyperPress we're hoping to finalize the list of metrics for each chapter today. Could you edit #14 (comment) and add any outstanding metrics to the list? It would also be good to have at least one more reviewer. When these are completed, please tick the last two checkboxes and close this issue. Thanks!

@foxdavidj
Contributor

@slightlyoff @HyperPress @rviscomi I have updated the list of metrics. Any thoughts?

Notes:

  • I removed "RWD" as I feel I've covered that through all the other individual metrics
  • The best way I could think of to test whether a site has had effort put into being made responsive is to check for any CSS breakpoints <= 600px. Does this sound reasonable?
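That breakpoint check could be sketched as a scan of stylesheet text for max-/min-width media query values. This is an assumption-laden simplification (px values only, regex instead of a real CSS parser), and the names are illustrative:

```python
import re

# Rough heuristic for "any CSS breakpoint <= 600px"; a production analysis
# would use a CSS parser and handle em/rem units as well.
BREAKPOINT_RE = re.compile(
    r'@media[^{]*\((?:max|min)-width:\s*(\d+(?:\.\d+)?)px', re.I)

def has_small_breakpoint(css, threshold=600):
    """True if any media query in the stylesheet targets a width <= threshold px."""
    return any(float(v) <= threshold for v in BREAKPOINT_RE.findall(css))
```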

@logicalphase
Contributor

logicalphase commented Jun 4, 2019

@OBTo I agree with your last assessment in #14. I've reviewed the current list and it looks good to move ahead if in agreement. Barring any last minute suggestions I'll close this.

@rviscomi
Member Author

rviscomi commented Jun 4, 2019

Not sure if @slightlyoff has had a chance to look over the metrics yet.

@logicalphase
Contributor

Sorry I jumped the gun :-P

@rviscomi rviscomi added the ASAP This issue is blocking progress label Jun 6, 2019
@rviscomi
Member Author

rviscomi commented Jun 7, 2019

Reminder for @slightlyoff to close this out today ❤️

@logicalphase
Contributor

The last update of metrics looks good. I agree with adding a metric to account for breakpoints. Viewport looks good. I can't think of anything to add. This one has broad coverage. Any other comments before we close this out?

@foxdavidj
Contributor

@rviscomi, @HyperPress and I are giving the metrics here a green light. I'll leave it up to either you or @slightlyoff to make the final call and close it out however.

I'm still looking for one other reviewer. The couple I've reached out to thus far weren't able to help due to their schedules.

@rviscomi
Member Author

The list of metrics LGTM. Thanks @HyperPress and @OBTo.

@slightlyoff I'm closing this issue to keep things moving and on schedule but if you have anything to add we'd still love your input.

Web Almanac 2019 automation moved this from In Progress to Done Jun 10, 2019
@rviscomi rviscomi removed the ASAP This issue is blocking progress label Jun 10, 2019
@slightlyoff

Sorry for the slow reply, @rviscomi. Reopening for another day; hopefully that's alright.

@slightlyoff slightlyoff reopened this Jun 11, 2019
Web Almanac 2019 automation moved this from Done to In Progress Jun 11, 2019
@slightlyoff

A few extra things I'd like to track:

  • % of sites that send more JS than the size of the viewport ("Web Bloat Score") per pageload
  • number/fraction of sites specifying a webapp manifest
  • number of sites registering a Service Worker

Thoughts?

@logicalphase
Contributor

logicalphase commented Jun 11, 2019

@slightlyoff - The additional metrics look fine to me, but are these a better fit for PWA? There are benefits to both PWA and Mobile Web, but thought I'd ask. If so, we can add them here and close it out.

@foxdavidj
Contributor

All of them look good to me. I especially like the first one.

Also, this chapter's metrics have a lot of overlap with other chapters (e.g., A11Y), but I think they have value being talked about here. It all depends on what angle you're looking at the metrics from, so I don't think there's any problem.

@rviscomi
Member Author

rviscomi commented Jun 11, 2019

Thanks @slightlyoff!

% of sites that send more JS than the size of the viewport ("Web Bloat Score") per pageload

I don't understand what this one is measuring. More JS (bytes) than the size of the viewport (pixels)?

Edit: OK I found https://www.webbloatscore.com/ and that helps explain it. The denominator there is the size of the full page screenshot of the page. That could be tricky to get in HA/WPT.
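For reference, the arithmetic behind that score as described on webbloatscore.com is just a ratio; a minimal sketch, with hypothetical parameter names:

```python
# Sketch of the Web Bloat Score idea: total transferred bytes divided by the
# byte size of a full-page screenshot of the same page. Values above 1.0 mean
# the page "weighs" more than a picture of itself.
def web_bloat_score(page_bytes, screenshot_bytes):
    if screenshot_bytes <= 0:
        raise ValueError('screenshot size must be positive')
    return page_bytes / screenshot_bytes
```

As noted above, the hard part in HA/WPT would be obtaining the full-page screenshot bytes for the denominator, not the division itself.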

@foxdavidj
Contributor

foxdavidj commented Jun 12, 2019

Just learned that CLS (cumulative layout shift) is in CrUX? If so, we should definitely add this metric. Thoughts @slightlyoff @HyperPress?

https://web.dev/layout-instability-api/

@rviscomi
Member Author

Yes just landed yesterday. We're still figuring out how to analyze the data. It's a histogram of cumulative scores in bins of width 5. So for example, there's a bin for cumulative scores between 0 and 5, and in theory we want as much of the distribution to be within this bin. So we could set some target like 90% of CLS is under 5 and measure the % of CrUX origins that meet this target. WDYT?
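That target check could look something like the following, assuming each origin's CrUX histogram arrives as a mapping of bin start to fraction of page views — the data shape here is an assumption for illustration, not the actual CrUX schema:

```python
# Assumed data shape (not the real CrUX schema): each origin's histogram is a
# dict of CLS bin start -> fraction of page views in that bin.
def origin_passes(histogram, target_bin=0, threshold=0.90):
    """An origin passes if >= 90% of its page views land in the 0-5 CLS bin."""
    return histogram.get(target_bin, 0.0) >= threshold

def pass_rate(origins):
    """Fraction of origins meeting the CLS target."""
    if not origins:
        return 0.0
    return sum(origin_passes(h) for h in origins) / len(origins)
```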

@foxdavidj
Contributor

@rviscomi you say bins of width 5; what unit is that width in? Time? CLS?

@rviscomi
Member Author

5 units. CLS is kind of a weird metric in that it's calculated as a sum of percents. So it's not a percent and it's not a time value.

@foxdavidj
Contributor

Right, but what would the axes be on the histogram (I'm a visual guy)? Y: # of sites; X: sum of percents (CLS)?

@rviscomi
Member Author

Yes, X axis would be CLS.

@foxdavidj
Contributor

Hmm, I think a frequency distribution graph (# of sites / CLS) would be best because it'd answer a lot of different questions... but I'm not sure how wide each bin would have to be. I think figuring out the width is something we'd have to do with the analyst team. Thoughts?

Actually... do we track the time at which each of these shifts occurs? It could be really interesting to see how these shifts are distributed over time.

@rviscomi
Member Author

For each origin CrUX provides the % of page views that fall within the given CLS. So to aggregate a histogram of many origins, we'd need to take the average % for each CLS bin. Suboptimal but I think it's the best we can do.

We don't have any other info about the shifts (time, individual shifts), but that's exactly the kind of feedback we (on the CrUX team) are looking for so we can improve the metric.
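The per-bin averaging described above can be sketched as follows; again, the histogram representation is an assumed shape, not the real CrUX schema:

```python
from collections import defaultdict

# Assumed data shape: each origin's histogram is a dict of CLS bin start ->
# fraction of page views in that bin.
def average_histogram(origins):
    """Average each bin's density across origins to form a corpus-level histogram."""
    totals = defaultdict(float)
    for hist in origins:
        for bin_start, density in hist.items():
            totals[bin_start] += density
    n = len(origins)
    return {b: t / n for b, t in sorted(totals.items())}
```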

@foxdavidj
Contributor

Thanks for the clarification; now I think I understand. So just to reiterate: we'll take the average per-bin CLS density across origins (like you described) and use that data to build a frequency distribution graph for all sites.

I think the toughest challenge will be showing readers what these numbers really mean. I'll start thinking of ideas for this.

@rviscomi
Member Author

Marking this as closed. I've updated the top comment with the metrics from the discussion.

Web Almanac 2019 automation moved this from In Progress to Done Jun 13, 2019
@rviscomi
Member Author

Adding @AymenLoukil as a reviewer!

@logicalphase
Contributor

Welcome @AymenLoukil
