
Does Dragging blur platform and author considerations? #6

Closed
mbgower opened this issue Oct 27, 2020 · 17 comments

mbgower commented Oct 27, 2020

In w3c/wcag#1307 the user asks whether iOS accommodation features (specifically assistive touch) are sufficient to support users who lack the ability to easily invoke a drag action.

Part of the AGWG's official response is that such accommodations:

are not universally available and would not qualify as Sufficient Technique for a wider accessibility baseline covering different platforms and user agents.

I think this response poses an interesting scenario we need to consider, and also points to a potential logic puzzle in the development of this SC (and arguably in Pointer Gestures as well).

First, we say that Assistive Touch is not an acceptable support because it is not universal. What if it were universal, or already is?

I can see the guidance ending up in an odd situation where every touch OS provides affordances for simple-touch drag and drop, and yet we continue to insist authors solve this. The analogy would be making authors responsible for key repetition and other keyboard operation simplifiers even though they are offered at the OS level (and piggy-backed on by any web author).

This new SC grew from a desire to further the guidance introduced in 2.1 for Pointer Gestures, which itself was the result of the Mobile Accessibility Task Force trying to ensure touch was made more accessible.

I understand the value of encouraging authors to offer simple, single pointer actions. But does this SC become redundant when hardware providers are required to provide single-touch mechanisms (through other standards)? Is that already the case with Android, iOS and Windows 10 systems? When are we placing undue burden on an author for a common interaction already solved by an elegant accessibility setting at the OS level? And when will we know this is reality?

Second, I think there is a related logic puzzle to solve. In this hypothetical, we have a user who cannot do complex gestures but wants to use a touch device. How does the user ever get to the web page where we are insisting simple touch interactions are required? They obviously need to first open and operate their device to even reach a web page. It seems to me there are a few possible answers:

  1. The device interaction is designed so that there is no reliance on gestures and dragging for all its functions. (There is a method of achieving all functions with simple operations, similar to the WCAG SCs).
  2. The device has an accommodation that allows a user to invoke all required complex functions with simple operations. (The device offers something like Assistive Touch).
  3. The user figures out workarounds for interactions that lack accommodations, which allows them to operate the device to a degree.
  4. The user does not use the device.

In scenario 1, if the system itself supports simple pointer interactions for all functions, then the need to have SCs that require the same on author-controlled web pages seems critical.
However, it is easy to demonstrate that mobile systems do not meet scenario 1. For instance, the only touch way to scroll inside a mobile touch browser is by flicking or dragging up and down. There is no simple touch equivalent that I am aware of. Therefore, the only way someone can even view a web page on a mobile touch screen (whose content exceeds the viewport) is to do actions the user in our hypothetical cannot achieve -- or to invoke a different modality, like keyboard operation.

For scenario 2, if the system provides accommodations (accessibility features) that allow someone to complete all functions with simple pointer interactions, then as long as authors invoke those same functions in a way that is supported, there is no need to have the SCs for Pointer Gestures and Dragging in their current state. The user is already able to operate the OS with the accommodations. The SC can be restricted to ensuring authors create content that supports the platform affordances.

For scenario 3, if workarounds exist for some interactions that lack accommodations, the user may or may not be able to reach the web pages, and those workarounds may or may not also cover the interactions addressed by the authoring guidance. The SCs would have some value. However, without requirements for the platform and user agent, the SC requirements are unlikely to solve many of the user issues.

For scenario 4, where devices fail to support touch users with reduced ability, creating requirements for web authors to support touch is not going to matter (for those devices).

I'm not sure we have sufficient research to know what blending of the above scenarios most closely reflects reality, nor what the cost/benefit is of trying to have web authors individually solve interaction challenges that are considerations across a touch device's entire capabilities.

alastc commented Oct 29, 2020

For instance, the only touch way to scroll inside a mobile touch browser is by flicking or dragging up and down.

Yes, but that action can be performed almost anywhere on the screen; it does not require the same level of dexterity as a gesture or a dragging operation on one element.

@patrickhlauke

on the scrolling issue/scenario 1, i'd also say that if a user uses AT or an external bluetooth keyboard or similar, scrolling is possible ... but dragging still isn't.

for scenario 3 i'm reminded of the classic "mouse keys" on desktop. technically, even pure keyboard users can operate things that are purely mouse-driven. nonetheless, we have 2.1.1 Keyboard.

scenario 4 seems a bit "big brush" - if some mobile/tablet devices don't fall into scenarios 1-3, it doesn't mean that none of them do? of course, if a device simply can't be operated by a user with a particular situation, then authors can't be expected to work around the failings of the device/OS itself. but that's probably not reason enough to ignore the issues altogether for all other devices/OSs?

@patrickhlauke

also, not forgetting that dragging etc isn't just about touch. it's pointer, so applies to mouse use as well.

alastc commented Dec 30, 2020

I wonder if it would be best to discuss (with @mbgower) in a MATF meeting? It revolves around the scenarios of use, and I'm not as familiar with the use-cases.

scha14 commented Jan 24, 2021

Hi @mbgower, thank you for raising this. Below is the draft response from the MATF. Although this is a mobile/touch device focused response, it applies more broadly. To summarize, based on the above scenarios, factors such as universality, discoverability, complexity of use, and pre-conditions for getting to the application where the gesture might be required were considered. Given the current state of things, keeping an SC that requires a single-pointer alternative to dragging seems reasonable. Thank you @detlevhfischer for your research on this!

Discoverability and ease of use: On platforms that offer assistive touch, we cannot simply assume that it provides affordances for simple-touch drag and drop. Configuring it for the purpose, understanding and training yourself in how it works, and then carrying out an action supported by it seems like an undue burden on the user, when all they want to do is occasionally manage a drag-and-drop when such a control happens to be offered without a single-activation equivalent. This is outlined below in a trial of assistive touch as a new user, which is likely similar to the experience of a non-expert user with a motor impairment.

Getting to the web page or app: On the latest OS versions, apart from external keyboards, a user can navigate through web pages and apps on both Android and iOS through voice access. You can say “scroll down”, “swipe left” and other phrases to navigate. Users can even define custom actions in AssistiveTouch for those generic actions they need all the time (rather than use it to tackle the occasional drag-n-drop interface).

Universal availability and adoption: Accommodations for gestures such as dragging are still not universally available. Currently, if we just look at Android and iOS, which cover 99% of the mobile operating system market, assistive touch and voice access options are built in and provide workarounds.

Firstly, we cannot consider these solutions universal yet.

a) These features are part of relatively newer OS versions. Mobile is notorious for long-tail adoption: users who either cannot upgrade (because of older models that will not support the latest version) or simply have not upgraded to the latest version of the OS will continue to struggle with the lack of these features for a long time. Once adoption of the latest OS versions that offer these options passes a certain threshold (not sure what that threshold should be), it will reduce the burden on individual app developers or mobile web developers to build their own solutions.
b) Not all assistive functionality is available on both platforms in the same modes. For example, on Android, dragging is not on the list of supported voice gestures.

Secondly, as discussed in the point about ease and discoverability, configuring and learning workarounds for gestures that are not as frequently used, when there are established patterns for alternatives, seems to pose an undue burden on the user. Examples of these patterns include:
- For image sliders as well as control sliders, providing arrows for alternative single-tap navigation (or, for control sliders, allowing activation of the groove) is an established and widespread practice and not difficult to implement (see the sketch below).
- For things like Kanban boards, we have seen mainstream implementations where users can single-tap items to select them and then move them via a dropdown menu.
- Single-touch implementations also exist for list sorting that supports dragging.
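
As a rough illustration of the first pattern (arrow controls offered alongside a draggable/swipeable slider), here is a minimal TypeScript sketch; the markup, element IDs and class names are assumptions made for this example rather than taken from any particular implementation:

```typescript
// Minimal sketch: arrow buttons as a single-pointer alternative to a
// drag/swipe slider. All selectors below (#carousel, .slide, .prev, .next)
// are hypothetical and only serve to illustrate the pattern.

const slides = Array.from(
  document.querySelectorAll<HTMLElement>('#carousel .slide')
);
const prevButton = document.querySelector<HTMLButtonElement>('#carousel .prev');
const nextButton = document.querySelector<HTMLButtonElement>('#carousel .next');

let current = 0;

function show(index: number): void {
  // Clamp to the valid range and toggle slide visibility.
  current = Math.max(0, Math.min(index, slides.length - 1));
  slides.forEach((slide, i) => {
    slide.hidden = i !== current;
  });
}

// A single tap/click achieves the same navigation as a horizontal drag/swipe,
// exposed through ordinary activation targets.
prevButton?.addEventListener('click', () => show(current - 1));
nextButton?.addEventListener('click', () => show(current + 1));

// Any drag/swipe handling (not shown) can stay as an enhancement; the point
// is simply that dragging is not the only way to move between slides.
show(0);
```

The same idea carries over to the Kanban example: a single tap selects the card and an ordinary menu of target columns performs the move, so the dragging interaction remains an optional enhancement.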

Outline of a trial of assistive touch as a new user by @detlevhfischer: I called up, as an example, a page which has sliders that you can drag and swipe. I used the first, SINGLE ELEMENT DOT NAVIGATION, ignoring the static arrows that are provided here. Then I went to Accessibility > AssistiveTouch and assigned the custom action for single tap to "Hold and Drag", assuming this would allow me to tap once (to set, as it were, the anchor of the movement) and then tap again (to indicate the location where the anchor should go). I am not entirely sure this is the way to do it, not even after studying the support page (and I would imagine evaluating the options provided in AssistiveTouch would not be easy for anyone needing single-tap activation). I could, somewhat erratically, use single tap to move the slider, but had a hard time reversing direction. Once it is on, the behaviour is pretty difficult to predict (you can also see this when selecting text).

alastc commented Feb 1, 2021

@mbgower is the explanation above sufficient, or would you like to discuss with the group?

mbgower commented Feb 9, 2021

Thank you for your response. I understand the group's perspective, and support the objective to make devices more usable by more people. A concern continues to be web authors being asked to exceed the pointer accessibility of the underlying operating system. Your response suggests this same problem: on the one hand you point to the ability of "the latest OS versions" to overcome OS-level dragging and path-based gestures through voice commands as proof that mobile systems aren't reliant on dragging, while on the other hand insisting that similar OS supports for press-and-hold cannot be considered sufficient for users of author-created content who find dragging difficult.
That problem may be addressed when this SC is examined in WCAG2ICT.

The second concern continues to be lack of cited research. You mention research on the current state of things in your response, but have never cited that research. Where is it?

In terms of the reply's statements on discoverability,

when all they want to do is occasionally manage to drag-and-drop when such a control happens to be offered without single-activation equivalent

Has there been any study done on what interactions in the OS itself rely on some form of dragging? I've already cited the simple act of scrolling a page. A response said this is not as onerous, since it does not rely on navigating to a specific point to drag but can be done from anywhere; however, alternatives to drag are generally focused on needing to activate a specific target. So does the group feel a single-pointer alternative is actually more difficult to carry out than a scroll gesture or a swipe? Again, some studies on the actual burden of different user actions would help confirm we are properly identifying the pain points and improvements.

As well, one of your workaround examples ("move them via a dropdown menu") invariably involves dragging on mobile. So the solution to dragging itself involves dragging.

@alastc alastc added the wcag2ict label Feb 9, 2021
@maryjom maryjom transferred this issue from w3c/wcag Jun 7, 2022

maryjom commented Jun 7, 2022

Moved to WCAG2ICT repository as the new TF will work to address issues tagged as WCAG2ICT from the WCAG repository.

@detlevhfischer

@mbgower

As well, one of your workaround examples ("move them via a dropdown menu") invariably involves dragging on mobile. So the solution to dragging itself involves dragging.

How so? You single-tap/click the menu button; the menu opens and displays a list of options; you move the pointer and single-tap/click an option. (But maybe I misunderstand...)

mbgower commented Jun 10, 2022

How so?

@detlevhfischer Unless you have a fairly short list, you're going to have to drag (or do a path-based gesture) to reposition the viewport so that you can review all the items in the dropdown list.

@maryjom maryjom added WCAG 2.2 and removed wcag2ict labels Oct 11, 2022
@maryjom maryjom added this to the Add WCAG 2.2 SC milestone Oct 26, 2022

maryjom commented Oct 12, 2023

@GreggVan This issue was an existing one related to dragging. Can you take a look at this to ensure the draft you made addresses this issue as well? Thanks!

maryjom commented Nov 27, 2023

@mbgower Wondering how you think WCAG2ICT could or should clear this up? We don't provide techniques or requirements. Please check the current WCAG2ICT guidance on Dragging Movements. Does this section provide sufficient interpretation and information, or do you feel something else is needed?

mbgower commented Nov 27, 2023

Notes 1 and 3 have been retained:

This requirement applies to content that interprets pointer actions (i.e. this does not apply to actions that are required to operate the user agent or assistive technology).
This requirement applies to [software] that interprets pointer actions (i.e. this does not apply to actions that are required to operate the [underlying platform software] or assistive technology).

What if the software is a user agent, AT or platform software? Shouldn't these notes include some qualifiers for this kind of software?

@maryjom maryjom assigned maryjom and unassigned GreggVan Nov 27, 2023

maryjom commented Nov 28, 2023

@mbgower Note 1 is scoped specifically to non-web documents. User agents are non-web software.

Note 3 is scoped to non-web software. For that, the note is specifically about software applications that have some user agent or platform software underneath - to indicate the software applications aren't responsible for fixing issues with any dragging required by their underlying OS or User Agent for standard UI elements supplied/supported by the underlying software.

Perhaps another note is needed to talk about user agents and the OS, or we need to figure out how to show that a particular software layer (application, user agent, OS) is only responsible for dragging movements that are implemented/interpreted by that particular layer of software. What do you think?

maryjom commented Dec 15, 2023

@mbgower The adjustments to the WCAG2ICT guidance are being developed using Issue #284. Once that is settled, I think this issue can be closed. Agree?

maryjom commented Jan 12, 2024

WCAG2ICT TF answer:

Mike,
You raised a good point regarding the need for a separation of responsibilities between the author, user agent, and platform software. Glad you re-raised and clarified the issue in your review of 2.5.7 Dragging Movements. The WCAG2ICT Task Force has developed guidance for non-web contexts in the notes to address the concern. This same guidance is provided for other success criteria that also interpret pointer actions, namely 2.5.1 Pointer Gestures and 2.5.2 Pointer Cancellation. The notes read:

NOTE: This requirement applies to [user agents and other software applications that interpret] pointer actions (i.e. this does not apply to actions that are required to operate the [underlying platform software] or assistive technology).

NOTE: This requirement also applies to [platform software] that interprets pointer actions. This does not apply to actions that are required to operate the assistive technology.

See PR #296 for the changes. The WCAG2ICT Task Force approved the above notes in our 11 January meeting. The PR incorporates those updates into the document.

Related issue: #284 (opened due to the AG WG review of 2.5.7) where the Task Force developed these notes.

@mbgower Please close this issue if your concerns have been satisfied. Thanks!

maryjom commented Jan 16, 2024

@mbgower I am closing this issue, as you reviewed the changes already based on the issue and PR. If there's anything further, please open a new issue.

@maryjom maryjom closed this as completed Jan 16, 2024