
Second thoughts on pointer gestures and drag-and-drop #403

Closed
michael-n-cooper opened this issue Jun 29, 2018 · 25 comments

Comments

@michael-n-cooper
Member

From @detlevhfischer on May 23, 2018 10:13

I have some (late) second thoughts on the new SC Pointer Gestures.
Take a look at the Salesforce example of drag-and-drop that has purpose of laying out elements on a two-dimensional plane (as in composing a diagram) referenced in a recent accessible drag-and-drop article on Medium by Jesse Hausler.
This drag-and-drop example has been made keyboard-accessible (with added instructions for desktop screen reader users via aria-live) and also allows dragging with a single pointer (including touch). Now, do we really mean there should be an extra mechanism to move the objects with single clicks / taps? This seems to necessitate either

  • four discrete buttons at each side of a moveable object, or
  • a mechanism (e.g. single tap) whereby the object can be selected / "picked up" and on-screen controls (arrows) are then available to move the selected element in discrete steps
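
The second option can be sketched as a small pure helper that nudges the selected object in discrete steps when an on-screen arrow control is pressed. This is only an illustration; the function names and the 10px step size are assumptions, not taken from the Salesforce example:

```javascript
// Hypothetical sketch: move a selected object in discrete steps via
// on-screen arrow buttons, as a single-tap alternative to dragging.
const STEP = 10; // pixels per button press (assumed step size)

function moveSelected(position, direction, step = STEP) {
  // position: { x, y }; direction: 'up' | 'down' | 'left' | 'right'
  const deltas = {
    up:    { x: 0,     y: -step },
    down:  { x: 0,     y: step  },
    left:  { x: -step, y: 0     },
    right: { x: step,  y: 0     },
  };
  const d = deltas[direction];
  if (!d) throw new Error(`Unknown direction: ${direction}`);
  return { x: position.x + d.x, y: position.y + d.y };
}
```

In a real widget, each of the four arrow buttons would call this with its direction and write the result back to the object's CSS position.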

I fear we might have gone too far; I fear that adding extra functionality for draggable objects to afford single click/tap operation might decrease the usability / affordance overall, and may not be worth it (quite apart from adding a lot of complexity for developers). Even so, the current SC text seems to mandate exactly that, and would fail the example above. Since accommodations are possible, this does not seem to be a case for the 'essential' exception - but others may disagree.

Thoughts?

Copied from original issue: w3c/wcag21#937


From @patrickhlauke on May 23, 2018 10:20

I'd note that the intention for the SC is to prohibit path-based gestures. i.e. where a user has to follow a specific path / make a specific gesture (e.g. a precisely timed and executed swipe or similar). Dragging/dropping is more of a freeform interaction...while the user is dragging, it doesn't matter if they stray off a specific path or not, what counts is only the start and end points. Maybe this should be clarified (assuming my take on this is correct).


From @patrickhlauke on May 23, 2018 10:24

to further clarify: for a path-based gesture, a user has to have enough fine motor control to "down" the pointer (e.g. put the finger on the touchscreen, click and hold the mouse button, etc), follow a specific path within a specific time (as otherwise it doesn't count as, say, a swipe), and then release the pointer (e.g. lift finger off the touchscreen, release the mouse button, etc)
for a drag'n'drop operation, the user has to only "down" the pointer (to select the item), and is then free to move the pointer as slow/fast and to whatever position they want (though the system may constrain the area where the element can be dragged on). they don't need to be particularly fast nor precise. what counts then is only where they stop - the end point - where they then release the pointer.


From @patrickhlauke on May 23, 2018 10:27

having said all that, an alternative method for allowing a drag'n'drop operation that doesn't rely at all on movement would be: allow the user to tap/click an item to move; the item is then "selected" - user doesn't need to keep the pointer down (i.e. they can release the mouse button, lift their finger off the touchscreen); the user then taps/clicks on the destination they want to move the item to. so a "pick this thing, move it there" kind of paradigm, which does not require the user to keep the pointer down at all.
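
This "pick this thing, move it there" paradigm reduces to a tiny two-state machine: nothing selected, or one item held awaiting a destination. A minimal sketch (all names hypothetical; a real implementation would wire these handlers to pointer events on the items and the canvas):

```javascript
// Sketch of a tap-to-select, tap-to-place interaction that never requires
// keeping the pointer down. Illustrative only.
function createPickAndPlace() {
  let selected = null;
  return {
    // First tap/click on an item selects it; tapping the same item again
    // deselects it without moving anything.
    tapItem(item) {
      selected = (selected === item) ? null : item;
      return selected;
    },
    // A later tap/click on the destination moves the selected item there.
    tapDestination(point) {
      if (!selected) return null;
      const moved = { item: selected, x: point.x, y: point.y };
      selected = null; // operation complete; nothing was held down in between
      return moved;
    },
    // Explicit escape hatch, e.g. a "Cancel" button or the Esc key.
    cancel() { selected = null; },
  };
}
```

Because the selection persists between the two taps, the user can release the mouse button or lift their finger at any time mid-operation.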


From @detlevhfischer on May 23, 2018 12:0

Yeah - "Select and then click/tap in the direction where the user wants to move the element" crossed my mind as well, but I think in many contexts (especially on more crowded canvases), interpreting such a tap might lead to unintended results. Someone implementing that would have to work out whether the horizontal or vertical offset (between element centroid and tap/click location) is larger; would need to keep the element selected, and thereby negate the option to deselect by tapping/clicking outside of it, which would be bad if it moves under another element higher up on the x-axis; work out what to do when a click/tap happens to land inside another object (should this element now be immediately selected?); etc. Maybe all this is solvable, but gut feeling tells me it is probably not the right approach.

As to differentiating path based gestures like swipes from 'free-form' gestures in drag-n-drop: I would dearly like to buy into that distinction, but my hunch is that flicks and swipes, if anything, might be easier to perform than the sequence of tap on a particular target (to pick up), hold and drag (to move) and release over a defined area (to drop).


From @patrickhlauke on May 23, 2018 12:46

interpreting such a tap might lead to unintended results [...]

it would be a case of having this as a toggle-able mode. i could envisage once in the mode, after first tap to select, you could have a new bar at the top of the screen to exit mode without moving/dropping. or tapping on the same/selected element again to deselect. there are options in any case. but also, i still believe there's a distinction here between a "path-based gesture" and a drag'n'drop interaction.

as for gestures being easier than drag'n'drop interactions: note that gestures usually have a time element to them (needing to execute a specific path within a certain threshold of time, otherwise it's not registered as a gesture), and straying from the path invalidates the gesture. not all gestures are just simple horizontal/vertical swipes either. think for instance the various Android/TalkBack gestures (L shape ones like down-right etc). (and yes I know TB gestures are exempt, but just giving you an example of a gesture that, if implemented for web content, would be difficult for users to accurately perform within a set amount of time).

and i would still say, in general: a drag'n'drop interaction is, to my mind, NOT a "gesture" (gestures being essentially commands that require the user to draw a particular path within a certain time threshold). [edit: and to really boil the point down: drag'n'drop is NOT path-based, as the user does not have to follow a specific path ... so even IF we want to argue that drag'n'drop is a "gesture", it remains that it's not path-based so the SC does not apply as the normative language clearly says "path-based gestures"]


From @mraccess77 on May 23, 2018 12:56

I believe the intention was that path-based gestures include drag/drop and swipes as well. It's interesting that we decided that drag and drop under Pointer Cancellation had to have an undo/abort, as that could help people recover from a mistake with a drag-and-drop gesture. I agree that dragging and dropping will be problematic for some users. We should examine how each type of user might use this and what the solution is. For example, on mobile like iOS with switch control -- how would a user perform the drag and drop without using point mode? But at the same time, how would they use a mode with an infinite number of targets without having buttons to move left, right, up, down? I agree a swipe to move would help -- but as intended, swipes would be path-based gestures.

Ultimately what is needed is the ability to bring up a list of actions and to be able to choose an action from that list. Then the actions could be buttons that are context sensitive and would work with switch control or other technology. The challenges are very different here for users who are blind versus users who rely on switch or other input modalities. While an author could bring up an actions list, and that may be needed -- ultimately having an API to communicate what the actions are and to perform those actions would be best, and would move this from an author requirement to the user agent.


From @patrickhlauke on May 23, 2018 13:0

we need to also be careful here not to try and solve everything in this SC. note that the SC for keyboard is still present, and that will cover things like switch access too.

As reminder, the normative wording for this SC: "(Level A)
All functionality that uses multipoint or path-based gestures for operation can be operated with a single pointer without a path-based gesture, unless a multipoint or path-based gesture is essential."

So, this isn't about "must be able to perform this without any pointer interface" (e.g. keyboard or switch access), but rather that where a user can use a pointer, they're not required to perform path-based gestures (complex due to path-following/precision and timing) or forced to use multitouch gestures.


From @mraccess77 on May 23, 2018 13:46

@patrickhlauke The point of this SC was to not require users to carry around a physical keyboard to use a mobile site, and to address situations where something may technically be keyboard accessible but is otherwise very difficult or impossible to access without difficult gestures that either couldn't be performed with their AT or are difficult for them to perform without AT. As you know, it can be difficult if not impossible to get a keyboard to appear in a mobile browser. If the browser supports arrow key navigation -- how would a switch user be able to send those left/right/down/up keys through switch control? As I said before, ultimately these issues should be solved through an API at the user agent level, allowing authors to specify actions and users to query a list of actions.

Yes, we still have the keyboard requirement, but the intent is to minimize when users have to fall back to that requirement. The associated Understanding document talks about not relying on gestures, and I think the Understanding document describes the intention of the group when this was written. It sounds like we are now in a situation where we need to scale back the original intention by re-writing what we "really meant to say". If that's what we have to do, then we should acknowledge the change. This seems to be a common theme across many of the SCs.


From @mraccess77 on May 23, 2018 13:49

I tried the example from Detlev with VoiceOver on iOS, and without a keyboard I was not able to tell where I was moving the object. With a double tap, hold, and drag I did not get feedback, and VoiceOver started selecting text and such. This is the type of situation that we were trying to solve with this SC.


From @detlevhfischer on May 23, 2018 13:55

Maybe we should cover this point in the understanding document. Authors will want to know if an implementation like the Jesse Hausler one I pointed to would pass or fail SC Pointer Gestures. If others, like Patrick, think that a drag-and-drop action does not fall under "path-based gesture", that problem is solved, (and I would be relieved).
BTW my impression is that the timing aspect of gestures (will it be interpreted as flicking or tapping? will it work?) is most pronounced in AT gestures (especially on Android, where it can drive users to distraction), much less so in drag gestures (AT turned off) implemented on web sites (typical is the swipe-to-reveal-next-slide type carousel on news sites). Compare the weather forecast slider two-thirds down on tagesschau.de on a mobile phone - here, neither speed nor exact direction matter much; the slider responds.


From @patrickhlauke on May 23, 2018 14:30

The point of this SC was to not require users to carry around a physical keyboard to use a mobile site and to address these situations where something may technically be keyboard accessible but otherwise very difficult or impossible to access without a difficult gestures that either couldn't be performed with their AT or difficult for them to perform without AT.

maybe that was the original intent, but I'm certainly not getting that impression from reading the normative SC text...


From @detlevhfischer on May 23, 2018 14:31

@mraccess77

With a double tap, hold, and drag I did not get feedback and VocieOver started selecting text and such. This is the type of situation that we were trying to solve with this SC.

Apart from adding more arrow controls (appended to the individual object, or general ones referencing the selected object), is there anything the author could have done to ensure VoiceOver output of position in that pass-through mode of repositioning? Beyond this example, there are ARIA widgets that do not work on iOS with VO on. Would that in your view always constitute a failure, or is it a UA/AT issue the author has no control over?


From @patrickhlauke on May 23, 2018 14:43

interesting point about swipes that are not timed. yes, i was perhaps thinking too much about gestural commands rather than...direct manipulation type gestures. interesting conundrum then in how to word this (as i believe we introduced "path-based gesture" to differentiate it from simple taps, which also fall under a generic "gesture").

on the "don't just force users to directly manipulate/swipe", i'm thinking now of the iOS mail app, where you can slowly swipe an email left/right to reveal options (archive, delete, flag, etc), but then the same actions can also be performed by simple taps (when tapping through to the email first, and then using the regular delete, flag, etc buttons at the bottom of the screen).

going back to drag'n'drop as in your example - which is essentially visually arranging elements (and focusing purely on the "can tap, just not accurately move/swipe" scenario, and leaving aside switch/keyboard/touch+AT scenarios) - i believe offering a two-step "select, then choose the target" mode would work here (with an additional "Cancel" option that shows up for users to bail out of the operation altogether, or allowing them to tap on the element again without moving it)


From @mraccess77 on May 23, 2018 18:52

on the "don't just force users to directly manipulate/swipe", i'm thinking now of the iOS mail app, where you can slowly swipe an email left/right to reveal options (archive, delete, flag, etc), but then the same actions can also be performed by simple taps (when tapping through to the email first, and then using the regular delete, flag, etc buttons at the bottom of the screen).

For what it's worth -- with VoiceOver running in the Mail app the VO user can swipe up or down on the message and go through rotor settings to quickly delete, mark as unread, etc. So VO users will swipe up which is delete by default and then double tap.


From @patrickhlauke on May 23, 2018 19:26

@mraccess77 yes but that's not a type of functionality that's available to web content (populating the rotor with custom commands)...only native apps can do that.


From @mraccess77 on May 23, 2018 19:30

@patrickhlauke I am well aware of that -- that's why I said "for what it's worth". I do envision a world though with something like IndieUI which would allow for the communication of an actions list and the ability to perform each of the actions programmatically to the platform/AT. This would solve a number of issues.


From @mraccess77 on May 23, 2018 19:37

ARIA controls that work with the keyboard but can't be used by VoiceOver on iOS without a keyboard seem problematic to me. It was my understanding that this SC was aimed at addressing that situation, among other things. If that is no longer the case, then we really do need to be clear about the purpose of this SC and who it helps, and note the challenges that were not solved. Perhaps we could add an advisory technique.


From @mraccess77 on June 1, 2018 15:9

Since this issue is still open I assume that gestures and drag and drop still must have single pointer equivalents. Or is there some update that I missed?


From @alastc on June 4, 2018 23:58

I think it's still open because... we haven't closed it (i.e. agreed something).

Reading the SC text and looking at the example, I would interpret that example as something that "can be operated with a single pointer without a path-based gesture", in that it is not requiring a particular gesture.

I thought the aim was to prevent people relying on gestures (e.g. next/previous), and unless there is a known method of making that type of (valid) use-case work with VoiceOver/Talkback, I'm very wary about bringing it in scope when I don't think it was considered so before.

It is on the agenda for tomorrow (today now, my time), I think the possible outcomes are to update the understanding document, to make it clear that it:

  • Means pre-defined gestures such as swipe left/right, rather than freeform gestures.
  • Does include freeform gestures, therefore such an example would need to implement something like Patrick outlined above.

It would be very helpful if someone from the mobile TF were on the call tomorrow...


From @detlevhfischer on June 22, 2018 13:49

Proposed Working Group answer:
Thank you for your comment. We plan to add this paragraph to the Understanding text for 2.5.1 Pointer Gestures in order to clarify that drag-and-drop interactions are not covered:

"The path-based operation used in drag-and-drop interfaces is not covered by this success criterion. While drag and drop interaction involves a free-form path to move elements to (or within) some drop target, it does not use pre-defined gestures in the same way as the swiping or dragging of content or controls, or the drawing of specific shape patterns. While we encourage authors to create alternatives for drag-and-drop interfaces that can be operated with a single pointer without path-based gestures, the complexity of providing such alternatives is likely to significantly increase interface complexity, and is therefore not required by this success criterion."
Compare Pull request #379


alastc commented Jul 17, 2018

"The path-based operation used in drag-and-drop interfaces is not covered by this success criterion. While drag and drop interaction involves a free-form path to move elements to (or within) some drop target, it does not depend on pre-defined gestures in the same way as the swiping or dragging of content or controls, or the drawing of specific shape patterns, which depend on the path of the user's movement and not just the endpoints.

While we encourage authors to create alternatives for drag-and-drop interfaces that can be operated with a single pointer without path-based gestures, the complexity of providing such alternatives is likely to significantly increase interface complexity, and is therefore not required by this success criterion."


mbgower commented Jul 25, 2018

Resulting Actions

I would suggest there are several actions that need to come out of this, including....

Understanding document

Reviewing the Understanding document and ensuring its language is consistent with this decision -- and that it contains explicit guidance on drag and drop. At a quick glance, I see the following that need to be addressed (I've flagged issue language in bold):

Examples of path-based gestures include swiping, dragging, or the drawing of a complex path.

While we could just remove "dragging," from this sentence to resolve, this may also be an opportune time to provide a better context for path-based gestures. Perhaps consider replacing the above sentence with the following:

Path-based gestures refers to user interaction where a gesture's success is dependent on the path of the user's movement and not just the endpoints. Examples include swiping (which relies on the direction of movement) and gestures which trace a prescribed path, such as to draw a specific shape. Such paths... mouse emulation.
Note that drag and drop actions are not considered path-based gestures for the purposes of this Success Criterion. etc

I also think the following example needs to be changed:

A floor planning web app lets the user place shapes representing pieces of furniture on a map representing a room. The user can drag a shape to reposition it, but they can also accomplish this by clicking on a drag handle in the center of the shape and then clicking arrow buttons to move the selected handle. Similarly, they can resize the shape by pinch-and-zoom, but they can also do this by clicking a drag handle on its boundary and then clicking the arrow buttons.

BTW, I note that Alastair still has "dragging of content" in his language. I'd assumed that a drag that does not rely on precise direction (i.e., a slider that can be selected and then pulled in any direction within a 180 degree arc to increase, and within the opposite 180 degrees to decrease) would not be a violation either, but a number of examples in the Understanding document seem to suggest otherwise.


detlevhfischer commented Sep 24, 2018

I have implemented mbgower's suggestions and created a pull request:
#483
I think that the dragging of thumbs of sliders etc., even if no precise path is required, is well within the scope of the SC. There is also usually no precise path requirement for other drags/swipes (e.g., to bring in a menu). What distinguishes these gestures from drag-n-drop is that there is no requirement for a defined start and end position.
We have excluded drag-n-drop not because there is no user need, but because having simple gestures for it is very tricky and likely to meet strong resistance from developers. Adding single tap buttons (or clickability on the slider track) is quite easy for things that slide on a defined track, or slide into view. There is clearly a need to make these things operable via simple gestures and it is easy to do -- that is why I would argue that this type of dragging should be considered in scope.
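
The "clickability on the slider track" alternative mentioned here is indeed cheap to provide: the new value is just the click's offset along the track, mapped to the slider's range. A minimal sketch under the assumption of a horizontal track (names illustrative, not from any particular implementation):

```javascript
// Map a click on a horizontal slider track to a slider value, as a
// single-tap alternative to dragging the thumb. Illustrative sketch:
// trackWidth is the rendered track width in px, clickX the click's
// offset from the track's left edge.
function valueFromTrackClick(clickX, trackWidth, min, max) {
  // Clamp the ratio so clicks slightly outside the track still resolve
  // to the nearest end of the range.
  const ratio = Math.min(1, Math.max(0, clickX / trackWidth));
  return min + ratio * (max - min);
}
```

A click handler on the track element would call this with the pointer's offset and then set the slider's value (and `aria-valuenow`) accordingly.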


alastc commented Nov 9, 2018

Given that the changes proposed in this issue have been merged and are live, and @mbgower has a new issue in this area, I'll close this issue.


alastc commented May 7, 2019

This has been overtaken by newer issues/updates.

@alastc alastc closed this as completed May 7, 2019