
Persisted AudioContext for priming Web Audio engine #36

Merged
merged 5 commits into from
Jul 31, 2019
Conversation

compulim
Owner

@compulim compulim commented Jul 31, 2019

Fix #34.

Description

On Safari, before audio clips can be played, the play() function on that specific AudioContext instance must be explicitly triggered by a user gesture.

This work enables a persisted AudioContext object, which can also be passed in through options. Developers can prime the AudioContext object either by pronouncing an empty string or by passing in a pre-primed AudioContext object.
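The priming step can be sketched as a small helper that resumes a persisted AudioContext inside a user-gesture handler. This is a minimal sketch, not the ponyfill's actual code; the `createPrimer` name is an assumption, and only the resume-on-gesture pattern comes from the description above.

```javascript
// Hypothetical sketch of priming a persisted AudioContext.
// Safari keeps a freshly created AudioContext in the "suspended" state until
// it is resumed from a user gesture on that same instance.
function createPrimer(audioContext) {
  // Return a handler intended to be wired to a click/tap event.
  return async function primeOnGesture() {
    if (audioContext.state === 'suspended') {
      // resume() must run synchronously within the gesture's call stack.
      await audioContext.resume();
    }

    return audioContext.state;
  };
}
```

In a browser this would be used as `button.addEventListener('click', createPrimer(audioContext))`, after which the same instance can play synthesized clips without further gestures.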

Changelog

Breaking changes

  • Instead of stopping the AudioContext after all pending utterances have finished, the AudioContext is now persisted. If this is not desirable in your application and you would like to control the lifetime of the AudioContext object, create your own instance and pass it as an option named audioContext when creating the ponyfill
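The option shape described above can be illustrated with a stub factory. The `createPonyfill` name and its internals are assumptions made for illustration; only the `audioContext` option name comes from the changelog.

```javascript
// Hypothetical factory stub showing the audioContext option contract.
// In the real ponyfill, omitting the option would allocate a persistent
// AudioContext internally; here we just record what was passed.
function createPonyfill(options = {}) {
  // Reuse the caller-supplied instance so the caller controls its lifetime;
  // in a browser, the fallback would be `new AudioContext()`.
  const audioContext = options.audioContext || null;

  return { audioContext };
}

// Stand-in for a real AudioContext created under a user gesture.
const myContext = { state: 'running' };
const ponyfill = createPonyfill({ audioContext: myContext });

// The ponyfill reuses the caller's instance instead of creating its own.
console.log(ponyfill.audioContext === myContext); // true
```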

Added

  • Speech synthesis: Fix #34, in PR #36
    • Support a user-controlled AudioContext object passed as an option named audioContext
    • If no audioContext option is passed, a new AudioContext object will be created and permanently allocated
  • Speech synthesis: If an empty utterance is synthesized, an empty local audio clip will be played instead, in PR #36
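The empty-utterance behavior in the last bullet can be sketched as a branch that plays a short silent buffer on the persisted AudioContext instead of calling the synthesis service. The `isEmptyUtterance` and `speak` names, and the one-sample silent buffer, are assumptions for illustration.

```javascript
// Hypothetical sketch: treat whitespace-only text as an empty utterance.
function isEmptyUtterance(text) {
  return !text || !text.trim();
}

// Hypothetical sketch: for an empty utterance, play a silent local clip on the
// persisted AudioContext (which also primes it when run from a user gesture);
// otherwise, hand the text off to the real synthesis path.
async function speak(audioContext, text, synthesize) {
  if (isEmptyUtterance(text)) {
    // A one-sample silent buffer is enough to exercise the audio path.
    const buffer = audioContext.createBuffer(1, 1, audioContext.sampleRate);
    const source = audioContext.createBufferSource();

    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start();

    return;
  }

  return synthesize(text);
}
```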
