
SPIKE: Improve experience for creating annotations/highlights with NVDA (and JAWS) #1148

Open
klemay opened this issue Oct 9, 2020 · 7 comments


klemay commented Oct 9, 2020

Overview

When using VoiceOver for Mac to annotate, VoiceOver will read out the text that is being selected, whether that's character-by-character, or word-by-word. You can see this in action here:

https://www.youtube.com/watch?v=AOyVt1w_MUU

I have worked extensively with two users who are blind and experienced with NVDA and JAWS, and they have worked with each other to try to replicate this workflow in NVDA and JAWS, with no success. The short version: text selection with NVDA and JAWS happens in an invisible text layer that the Hypothesis client doesn't see. Because no selection is made in the DOM, the annotation adder never appears. When NVDA or JAWS users interact directly with the page, they can create text selections, but there is no audio feedback.

Research and troubleshooting

Here is a summary of our findings thus far:

  • Reading a web page with NVDA typically happens in what is called Browse Mode. This is an invisible text layer, and interaction with the text (e.g., making a selection to copy and paste) does not happen within the DOM, and is invisible to the Hypothesis client.
  • When reading a web page with JAWS, navigation through the page happens in an invisible text layer using the Virtual Cursor rather than on the document itself. If a user wants to copy and paste a passage of text, they make the text selection in this invisible text layer. Nothing is happening within the DOM here, and this is invisible to the Hypothesis client.
  • NVDA and JAWS do allow for direct interaction with the contents of web pages; this is necessary for things like filling out forms, pressing buttons, etc. In NVDA this is called Focus Mode; in JAWS, it is called Forms Mode.
  • In both NVDA and JAWS, I was able to help users activate caret browsing (as seen in the VoiceOver video) to navigate through the text on the screen and create text selections. However, neither screenreader provides audio feedback while text is being selected, so a blind user has no idea what they are highlighting. Making this even more confusing is the fact that switching between Browse and Focus mode doesn't necessarily mean the cursors will be in the same place. So I might be in the middle of a document in Browse mode, switch to Focus mode to annotate, and the cursor takes me back up to the beginning of the text and I have to find the passage I want to annotate again.
  • The users I have been working with reached out to the NVDA support forum and to the support team for Vispero (the vendor that makes JAWS), and neither resource was able to help.
  • I looked into how Google Docs handles text selection for their commenting function. Their documentation indicates that a comment can only be added to one word. This would be a better experience for our NVDA and JAWS users than the current one, but still not ideal (as sighted users and blind users on VoiceOver are able to expand text selections to entire passages, or narrow them to one character).
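The common thread in these findings is that the client only ever sees selections made in the real DOM. As a rough illustration (not the actual Hypothesis client code; all names here are assumptions), an annotation client typically watches the DOM Selection API like this, which is exactly why selections made in NVDA's Browse Mode or with the JAWS Virtual Cursor never trigger the adder:

```javascript
// Sketch: how an annotation client typically learns about text selections.
// It listens only to the DOM Selection API, so a selection made in a screen
// reader's virtual buffer (which never touches the DOM) is invisible to it.
function watchSelection(doc, onSelect) {
  doc.addEventListener('selectionchange', () => {
    const sel = doc.getSelection();
    // Only non-empty selections that exist in the real DOM reach this point.
    if (sel && !sel.isCollapsed) {
      onSelect(sel.toString());
    }
  });
}
```

The `selectionchange` event fires for caret-browsing and mouse selections, but nothing a screen reader does inside its own virtual buffer ever dispatches it.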

Helpful documentation

Questions for developers

  • How feasible is it to replicate the Google Docs commenting experience?
  • Is there a better way to address this problem, which would allow character-by-character and sentence-level annotations?

Additional information

The two users I have been working with have said they'd be willing to meet with a developer for a screenshare of the current experience, and/or to test out solutions we may come up with. I can put developers in touch with these two (very generous!) individuals.

@klemay klemay changed the title Improve experience for creating annotations/highlights with NVDA (and JAWS) SPIKE: Improve experience for creating annotations/highlights with NVDA (and JAWS) Nov 11, 2020

klemay commented Nov 25, 2020

From one of our partners:

I was digging around and found the potential to create a user script in a browser (via Tampermonkey or Greasemonkey), and found I can add an ARIA application region around items so that all keys are automatically passed through to the application. I wonder if putting an ARIA application region around your Hypothesis iframe might give us screen reader users more control over selecting the text we want to annotate.

...might be worth a try!
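For what it's worth, the user script idea above could be sketched roughly like this. The label and tabindex choices are assumptions, and `role="application"` should be applied narrowly, since it disables normal virtual-buffer navigation for everything inside the region:

```javascript
// Sketch of the suggested user script: mark a region with role="application"
// so NVDA/JAWS pass keystrokes through to the page instead of handling them
// in the virtual buffer. The element to wrap is whatever the user script
// selects; the label text here is an illustrative assumption.
function makeApplicationRegion(el) {
  el.setAttribute('role', 'application');
  // An accessible name tells screen reader users what region they entered.
  el.setAttribute('aria-label', 'Annotatable document');
  // tabindex="-1" lets the script move focus into the region programmatically.
  el.setAttribute('tabindex', '-1');
  return el;
}
```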


klemay commented Feb 19, 2021

Notes from a call with our friends at Benetech and an accessibility developer they introduced us to:

  • The virtual buffer is an established practice/workflow that has been a part of screen readers for 18+ years. It is unlikely that we would convince the makers of screen readers to change the way this works.
  • An approach that could work for HTML and EPUB: allow the user to press a keystroke that makes the text on the page contenteditable. From there, the user can create text selections that we would have access to and that would be read aloud to the user. (Note that we'd need to add a modifier to the h, a, and s keystrokes, so they'd look something like Ctrl-Shift-h, because simply pressing the h key would start typing.)
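A very rough sketch of that second bullet, with illustrative names and shortcut mappings (not a worked-out implementation; a real one would also have to suppress actual edits, e.g. by intercepting `beforeinput`):

```javascript
// Sketch: a keystroke makes the content root contenteditable, so screen
// readers announce the caret and selection as the user moves them.
function enableCaretSelectionMode(root) {
  root.setAttribute('contenteditable', 'true');
  // Preventing real edits from being committed is left out of this sketch.
}

// Because bare letters would now insert text, the client's single-letter
// shortcuts gain a Ctrl+Shift modifier. Action names are assumptions.
function shortcutFor(event) {
  if (!event.ctrlKey || !event.shiftKey) return null;
  const actions = { h: 'highlight', a: 'annotate', s: 'show-annotations' };
  return actions[event.key.toLowerCase()] ?? null;
}
```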

See Slack for Rob's notes from this call.


klemay commented Feb 24, 2021

Dan and Katelyn met with the founder and general manager of NV Access. This meeting suggests the accessibility developer we spoke to was right to be pessimistic about NV Access's willingness to implement changes on their end. Notes from the call are in Slack.

@klemay klemay added this to To Do in [OLD] Accessibility Roadmap via automation Apr 9, 2021
@klemay klemay added this to To do in Accessibility Roadmap Archive via automation Apr 9, 2021
@klemay klemay removed this from Info in [OLD] Accessibility Roadmap Apr 9, 2021
@klemay klemay moved this from To do to In progress in Accessibility Roadmap Archive Apr 9, 2021
robertknight (Member) commented

A couple of Slack threads with some recent updates on this:

  1. Work has been done on NVDA towards making it possible to set the DOM selection to match what is currently selected in the virtual buffer. Once the DOM selection has been updated, a user can then use the various shortcuts ("a", "h", "s") to create or view annotations for that selection. See https://hypothes-is.slack.com/archives/C8TPC8XMK/p1663155931072379 for status update.
  2. We discussed some ideas for a workaround until (1) is complete: https://hypothes-is.slack.com/archives/C8TPC8XMK/p1663334444636889
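Once (1) lands and the virtual-buffer selection is mirrored into the DOM, the shortcut path described above is straightforward. A hedged sketch (function and action names are illustrative, not the client's actual API):

```javascript
// Sketch of the flow in item (1): NVDA mirrors the virtual-buffer selection
// into the DOM, and a keypress handler then reads that selection directly.
// Returns true if the key triggered an action on a non-empty selection.
function handleAnnotationShortcut(doc, key, actions) {
  const sel = doc.getSelection();
  if (!sel || sel.isCollapsed) return false;
  const handler = {
    a: actions.annotate, // create an annotation on the selection
    h: actions.highlight, // create a highlight
    s: actions.show, // show annotations for the selection
  }[key];
  if (!handler) return false;
  handler(sel.toString());
  return true;
}
```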

dananjohnson commented

This thread has been really valuable to my team as we work on making annotating more accessible in Manifold Scholarship. Thanks to everyone involved for your efforts!

@robertknight do you have any more details you can share re: item 1 in your most recent comment, for those of us who aren't in the Hypothes.is Slack? It's exciting to hear that NVDA is working on matching DOM and virtual buffer selection!

robertknight (Member) commented

I'm not sure of the exact status of work in NVDA, but here are some relevant issues:

The last update on the NVDA PR, from Feb 13th 2023, says:

Blocked by further work on the implementation by Chrome/Firefox

I don't know exactly what that work is.

dananjohnson commented

Thanks, @robertknight! Really appreciate the update and these links.
