
[NEW Use Case] Pinning audio output #22

Open
RealJoshue108 opened this issue Dec 2, 2020 · 7 comments
@RealJoshue108 (Contributor) commented Dec 2, 2020

(Filed on behalf of @JaninaSajka.) We need to discuss whether it's time to call out the possibility of pinning audio output from multiple portions of a screen, perhaps routing those portions to multiple audio devices, to different TTS engines, and/or with different voice settings.

We have multiple use cases for pinning content, which makes the ability to identify something atomic, and to invoke an architectural mechanism to communicate that specific item, a kind of generic requirement.

Similarly, we may have several items destined for different TTS engines and different audio output devices.
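As a rough illustration of what per-region voice settings could look like with today's Web Speech API (the region selectors and voice names below are purely hypothetical), the sketch speaks two pinned portions of a page with different voices and rates. Routing each utterance to a separate audio output device is not something the Web Speech API supports at present, which is part of the gap this use case highlights.

```ts
// Minimal sketch: per-region voice settings via the Web Speech API.
// The selectors and voice names are illustrative assumptions, not part of any spec.

interface PinnedRegion {
  selector: string;    // CSS selector for the pinned portion of the screen
  voiceName?: string;  // preferred TTS voice, if the platform exposes it
  rate: number;        // speaking rate for this region
}

const pinnedRegions: PinnedRegion[] = [
  { selector: "#speaker-status", voiceName: "Google UK English Female", rate: 1.0 },
  { selector: "#chat-log", voiceName: "Google US English", rate: 1.3 },
];

function speakPinnedRegions(): void {
  const voices = window.speechSynthesis.getVoices();

  for (const region of pinnedRegions) {
    const element = document.querySelector(region.selector);
    if (!element || !element.textContent) continue;

    const utterance = new SpeechSynthesisUtterance(element.textContent);
    utterance.rate = region.rate;

    // Use the requested voice when available; otherwise fall back to the default.
    const voice = voices.find(v => v.name === region.voiceName);
    if (voice) utterance.voice = voice;

    // Note: there is no way here to pick a different audio output device per region.
    window.speechSynthesis.speak(utterance);
  }
}

// Voices are loaded asynchronously in most browsers.
window.speechSynthesis.addEventListener("voiceschanged", speakPinnedRegions);
```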

@RealJoshue108 (Contributor Author)
Relates to w3c/apa#97

@RealJoshue108 (Contributor Author) commented Dec 2, 2020

NOTE: This may also relate to pinning output to a braille display, or a mix of pinned braille and audio outputs. It may likewise involve routing to different TTS engines and/or applying different voice settings.

@RealJoshue108 (Contributor Author)
Discussed by RQTF on 9th Dec: https://www.w3.org/2020/12/09-rqtf-minutes.html. This may require support from an RTC API. To be left open and discussed in the new year.

@RealJoshue108 (Contributor Author)
@JaninaSajka: we may need a footnote to reflect usage of the term "pinning".

@jasonjgw: so what are the use cases? Audio, braille, routing (being able to dynamically route audio), pinning of status messages, who is the last speaker, what is the last message.

What are the atomic pieces of data - who is speaking, what just popped up, etc.

@joshueoconnor: we will need to define what "atomic" is and how the term relates to the pieces of content that the user may wish to be able to pin:

NEW REQ (draft) 1d: Atomic pieces of data such as person currently speaking, people entering or leaving a meeting, or the last message posted in the chat channel can be pinned to a user interface.

There could be a new requirement for multiple sign language translations in an application.
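As a hedged sketch of how draft requirement 1d above could be approximated in a web application today (the element IDs and update functions are hypothetical, not from any spec), each atomic item could be pinned to its own ARIA live region so that assistive technology can announce it, and a future mechanism could route each one independently:

```ts
// Minimal sketch: pinning atomic pieces of data (current speaker, last chat
// message) to dedicated live regions. IDs and update hooks are hypothetical.

function createPinnedRegion(id: string, label: string): HTMLElement {
  const region = document.createElement("div");
  region.id = id;
  // role="status" carries implicit aria-live="polite" and aria-atomic="true".
  region.setAttribute("role", "status");
  region.setAttribute("aria-label", label);
  document.body.appendChild(region);
  return region;
}

const currentSpeaker = createPinnedRegion("pin-current-speaker", "Current speaker");
const lastChatMessage = createPinnedRegion("pin-last-chat-message", "Last chat message");

// Called by the (hypothetical) conferencing app whenever these atomic items change.
function updateCurrentSpeaker(name: string): void {
  currentSpeaker.textContent = `${name} is speaking`;
}

function updateLastChatMessage(author: string, text: string): void {
  lastChatMessage.textContent = `${author}: ${text}`;
}
```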

@RealJoshue108 (Contributor Author)
Discussed by RQTF on 6th Jan 2021: https://www.w3.org/2021/01/06-rqtf-minutes.html

@RealJoshue108 (Contributor Author)
@joshueoconnor to ping @nitedog and Wilco to see if EM/ACT has a definition of "atomic" we can use.

@RealJoshue108 (Contributor Author)
Discussed in RQTF: https://www.w3.org/2021/01/13-rqtf-minutes.html
