Accessibility: Have TTS speak GUI elements on focus #9661

Open · devinprater opened this issue Oct 29, 2019 · 9 comments
@devinprater commented Oct 29, 2019

Description

Now that RetroArch can speak in-game text using TTS, this feature should be extended to the GUI. Since the text of the UI (the main menu, the in-game menu, and so on) is already known, it can simply be spoken using TTS; no screenshot is needed. I am willing to pay for the implementation of this feature as a bounty.

Of course, this should be a toggle, and it could have a key command on PC, Mac, and Linux, or a gamepad gesture (or an exact set of gestures) to enable it, so that blind users can play games using RetroArch.
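
To make the request concrete, here is a rough sketch of the behaviour being asked for. Every name in it is hypothetical and only stands in for RetroArch internals; it is not actual RetroArch code.

```cpp
// Sketch only: hypothetical stand-ins for RetroArch's menu driver
// and for the TTS backend the AI Service already uses.
#include <iostream>
#include <string>

// Stand-in: label of the menu entry that currently has focus.
static std::string menu_get_focused_label()
{
    return "Load Core";
}

// Stand-in: hand a string to the existing TTS backend.
static void tts_speak(const std::string &text)
{
    std::cout << "[TTS] " << text << '\n';
}

// Called whenever menu focus changes: announce the newly focused
// entry, but don't repeat the same label twice in a row.
static void on_menu_focus_changed()
{
    static std::string last_spoken;
    const std::string label = menu_get_focused_label();
    if (label != last_spoken)
    {
        tts_speak(label);
        last_spoken = label;
    }
}

int main()
{
    on_menu_focus_changed(); // announces "Load Core"
    on_menu_focus_changed(); // focus unchanged, stays silent
}
```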

Expected behavior

RetroArch should speak the item currently in focus in its UI, making the program accessible to blind users.

Actual behavior

When RetroArch runs, a blind user cannot navigate the main menu to get cores, load cores, or otherwise control the program, except through the menu bar, which does not expose all functions. Furthermore, the in-game menu isn't accessible either. This is why RetroArch should speak its menus itself.

Steps to reproduce the bug

  1. Run a screen reader, such as Narrator on Windows, VoiceOver on macOS, or Orca on Linux.
  2. Open RetroArch.
  3. Navigate RetroArch's screens with the arrow keys or the Tab key.

Version/Commit


RetroArch 1.8.0

Environment information

  • OS: Windows 10, macOS 10.15, iOS 13.2
@BarryJRowe (Contributor) commented Nov 4, 2019

I've looked into this. The best approach seems to be to use the hooks the screen reader uses to get what to read out, instead of hooking into the TTS that the AI Service uses. In that case there would be no need for a toggle button to turn it on or off. I should be able to work on this some time soon.

@devinprater (Author) commented Nov 4, 2019 (comment minimized)

@BarryJRowe (Contributor) commented Nov 4, 2019

@devinprater Accessibility APIs are what I mean. They're implemented in the OS, so that screen readers can read what's in the current application. Program authors write to these APIs so the screen reader doesn't have to figure out what's on the screen.

I haven't used these before, so I'll have to figure out what the best options are in terms of cross-platform compatibility.

Edit: The Qt library enables this automatically (usually) if you're using it to make a GUI. This page gives a good description of how it works: https://doc.qt.io/qt-5/accessible.html
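
For reference, a minimal (non-RetroArch) illustration of what that page describes: standard Qt widgets expose their labels through the platform accessibility bridge, so a screen reader announces the focused item without any TTS code in the application itself.

```cpp
// Minimal Qt 5 example of the accessibility bridge described at
// https://doc.qt.io/qt-5/accessible.html -- not RetroArch code.
// Narrator / VoiceOver / Orca pick the item names up through
// UI Automation, the macOS accessibility API, or AT-SPI.
#include <QApplication>
#include <QListWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QListWidget menu;
    menu.setAccessibleName("Main Menu");
    menu.addItems({"Load Core", "Load Content", "Settings", "Quit RetroArch"});
    menu.setCurrentRow(0); // the focused row is what the screen reader reads out

    menu.show();
    return app.exec();
}
```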

@devinprater (Author) commented Nov 4, 2019 (comment minimized)

@BarryJRowe (Contributor) commented Nov 4, 2019

Every OS has a different API, but I would like to find a solution that can use a single cross-platform library that abstracts away the native APIs. Qt, for instance, will use UI Automation on Windows, the macOS Accessibility API on macOS, and AT-SPI on Linux. The accessibility support there is designed for applications built with Qt, which is not what we would do here; we would bypass that and specify exactly what to read out instead.
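
For the "bypass and speak directly" route, one possible shape (purely a sketch, not what RetroArch actually does) is a single speak function that dispatches to whatever speech facility each platform ships with:

```cpp
// Hedged sketch of "one interface, native backend per OS": the
// application decides exactly what to read out and hands the string
// to the platform's stock TTS. Shelling out is only for illustration;
// a real implementation would call the native speech APIs directly.
#include <cstdlib>
#include <string>

static void speak_text(const std::string &text)
{
#if defined(_WIN32)
    std::string cmd = "powershell -c \"Add-Type -AssemblyName System.Speech; "
                      "(New-Object System.Speech.Synthesis.SpeechSynthesizer)"
                      ".Speak('" + text + "')\"";
#elif defined(__APPLE__)
    std::string cmd = "say \"" + text + "\"";
#else
    std::string cmd = "espeak \"" + text + "\"";  // assumes espeak is installed
#endif
    std::system(cmd.c_str());
}

int main()
{
    speak_text("Main Menu. Load Core.");
    return 0;
}
```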

@devinprater (Author) commented Nov 4, 2019 (comment minimized)

@BarryJRowe (Contributor) commented Nov 21, 2019

#9768
Had to revert to self-voicing, but it should be customizable enough to hook into a screen reader with a little outside work. Waiting on some changes for language support on Windows, and on refactoring suggestions, before merging.
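
For anyone wanting to try a build with this in it, the self-voicing presumably ends up behind a couple of entries in retroarch.cfg, along the lines of the snippet below. The key names here are assumptions; check #9768 for the exact ones.

```
# Assumed settings -- check #9768 for the actual key names.
accessibility_enable = "true"
accessibility_narrator_speech_speed = "5"
```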

@devinprater (Author) commented Nov 22, 2019 (comment minimized)

@BarryJRowe (Contributor) commented Nov 22, 2019

OK, I'll message you on Twitter.
