AX: Consider VTT-based audio descriptions with text-to-speech.
https://bugs.webkit.org/show_bug.cgi?id=243600
rdar://98206665

Reviewed by Jer Noble.

* LayoutTests/media/track/captions-webvtt/captions-descriptions.vtt: Added.
* LayoutTests/media/track/track-description-cue-expected.txt: Added.
* LayoutTests/media/track/track-description-cue.html: Added.
* LayoutTests/TestExpectations: The feature is Cocoa-specific so far, so skip the test globally.
* LayoutTests/platform/ios/TestExpectations: Enable test on iOS.
* LayoutTests/platform/mac/TestExpectations: Enable test on macOS.

* Source/WTF/Scripts/Preferences/WebPreferencesExperimental.yaml: Add 'AudioDescriptionsEnabled'
setting.

* Source/WebCore/Modules/speech/SpeechSynthesis.cpp:
(WebCore::SpeechSynthesis::handleSpeakingCompleted): Have the utterance fire the event.
(WebCore::SpeechSynthesis::boundaryEventOccurred): Ditto.
(WebCore::SpeechSynthesis::didStartSpeaking): Ditto.
(WebCore::SpeechSynthesis::didPauseSpeaking): Ditto.
(WebCore::SpeechSynthesis::didResumeSpeaking): Ditto.
(WebCore::SpeechSynthesis::fireEvent const): Deleted.
(WebCore::SpeechSynthesis::fireErrorEvent const): Deleted.
* Source/WebCore/Modules/speech/SpeechSynthesis.h:
(WebCore::SpeechSynthesis::userGestureRequiredForSpeechStart const):
(WebCore::SpeechSynthesis::removeBehaviorRestriction): Make public.

* Source/WebCore/Modules/speech/SpeechSynthesisUtterance.cpp:
(WebCore::SpeechSynthesisUtterance::create): Add version that takes a completion handler.
(WebCore::SpeechSynthesisUtterance::SpeechSynthesisUtterance): Add completion handler
parameter.
(WebCore::SpeechSynthesisUtterance::eventOccurred): New. Call completion handler or
dispatch event.
(WebCore::SpeechSynthesisUtterance::errorEventOccurred): Ditto.
* Source/WebCore/Modules/speech/SpeechSynthesisUtterance.h:
* Source/WebCore/Modules/speech/SpeechSynthesisUtterance.idl: JSGenerateToJSObject.
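
A note for readers: the new create() overload in the diff below takes a completion
handler; when one is installed, eventOccurred() invokes it on the 'end' event (and
errorEventOccurred() invokes it on any error) instead of dispatching DOM events. A
minimal usage sketch follows; the create()/speak() calls are the APIs from this
patch, but the wrapper function and its use are illustrative assumptions only:

    // Sketch: speak text without dispatching SpeechSynthesisEvents to the page.
    static void speakInternally(ScriptExecutionContext& context, SpeechSynthesis& synthesis, const String& text)
    {
        auto utterance = SpeechSynthesisUtterance::create(context, text, [](const SpeechSynthesisUtterance&) {
            // Runs once when speaking ends (or fails); no DOM event is fired
            // because a completion handler is present.
        });
        synthesis.speak(utterance.get());
    }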

* Source/WebCore/html/HTMLMediaElement.cpp:
(WebCore::HTMLMediaElement::updateActiveTextTrackCues): Return early if a seek
is pending. Call the new executeCueEnterOrLeaveAction method instead of dispatching
events directly.
(WebCore::HTMLMediaElement::setSpeechSynthesisState): Maintain synthesis state.
(WebCore::HTMLMediaElement::speakCueText): Speak a cue.
(WebCore::HTMLMediaElement::pauseSpeakingCueText):
(WebCore::HTMLMediaElement::resumeSpeakingCueText):
(WebCore::HTMLMediaElement::cancelSpeakingCueText):
(WebCore::HTMLMediaElement::shouldSpeakCueTextForTime):
(WebCore::HTMLMediaElement::executeCueEnterOrLeaveAction): Trigger cue speech if
the track contains descriptions and we are entering a cue range; schedule an enter
or exit event (see the sketch after these HTMLMediaElement entries).
(WebCore::HTMLMediaElement::seekWithTolerance): INFO_LOG -> ALWAYS_LOG
(WebCore::HTMLMediaElement::seekTask): Cancel speaking if necessary.
(WebCore::HTMLMediaElement::finishSeek): Update logging. If there isn't a pending
seek, queue a task to update text track cues.
(WebCore::HTMLMediaElement::configureTextTrackGroup): When processing descriptions
and the user wants text descriptions, set `fallbackTrack` to the first track seen
in case none of the tracks matches the audio track language.
(WebCore::HTMLMediaElement::playPlayer): Call resumeSpeakingCueText.
(WebCore::HTMLMediaElement::pausePlayer): Call pauseSpeakingCueText.
(WebCore::HTMLMediaElement::effectiveVolume const): Use the speech volume multiplier
when calculating the effective volume.
(WebCore::m_categoryAtMostRecentPlayback): Deleted.
* Source/WebCore/html/HTMLMediaElement.h:
(WebCore::HTMLMediaElement::cueBeingSpoken const):
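
The HTMLMediaElement diff is not reproduced on this page, so here is a rough
sketch of how the pieces above fit together. The method names come from the
entries above; the bodies, the `entering` parameter, the scheduleEvent helper,
and the m_volumeMultiplierForSpeechSynthesis member are assumptions, not the
shipped code:

    // Hypothetical sketch of the cue enter/leave handling described above.
    void HTMLMediaElement::executeCueEnterOrLeaveAction(TextTrackCue& cue, bool entering)
    {
        // Speak the cue when entering a cue range on a spoken (descriptions) track.
        if (entering && cue.track() && cue.track()->isSpoken())
            speakCueText(cue);

        // Queue the 'enter' or 'exit' event rather than dispatching it inline.
        cue.scheduleEvent(entering ? eventNames().enterEvent : eventNames().exitEvent);
    }

    // Hypothetical sketch of the volume ducking described above.
    double HTMLMediaElement::effectiveVolume() const
    {
        return m_volume * m_volumeMultiplierForSpeechSynthesis;
    }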

* Source/WebCore/html/shadow/MediaControlTextTrackContainerElement.cpp:
(WebCore::MediaControlTextTrackContainerElement::updateDisplay): Skip spoken tracks.

* Source/WebCore/html/track/InbandGenericTextTrack.cpp: Remove unneeded include.

* Source/WebCore/html/track/TextTrack.cpp:
(WebCore::TextTrack::trackIndex): Use textTrackList() instead of m_textTrackList.
(WebCore::TextTrack::isRendered): Consider descriptions.
(WebCore::TextTrack::isSpoken):
(WebCore::TextTrack::trackIndexRelativeToRenderedTracks): Use textTrackList()
instead of m_textTrackList.
(WebCore::TextTrack::speechSynthesis):
* Source/WebCore/html/track/TextTrack.h:
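
The TextTrack bodies are also not shown on this page; a plausible sketch of the
two predicates, assuming WebCore's existing Kind enum:

    // Sketch: a descriptions track is spoken rather than drawn.
    bool TextTrack::isSpoken() const
    {
        return kind() == Kind::Descriptions;
    }

    bool TextTrack::isRendered() const
    {
        // Descriptions now count as "rendered" so their cues become active,
        // even though they are routed to speech instead of the display.
        return kind() == Kind::Captions || kind() == Kind::Subtitles || kind() == Kind::Forced || isSpoken();
    }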

* Source/WebCore/html/track/TextTrackCue.cpp:
(WebCore::operator<<): All cues have a `text()` method, so just use it.
* Source/WebCore/html/track/TextTrackCue.h:
(WebCore::TextTrackCue::text const):
(WebCore::TextTrackCue::speak):

* Source/WebCore/html/track/VTTCue.cpp:
(WebCore::VTTCue::updateDisplayTree): `track()` can return null, so check it.
(WebCore::VTTCue::getDisplayTree): Ditto.
(WebCore::VTTCue::toJSON const): Drive-by fix: address a FIXME from Darin.
(WebCore::VTTCue::speak):
* Source/WebCore/html/track/VTTCue.h:
(WebCore::VTTCue::speechUtterance const):
(WebCore::VTTCue::text const): Deleted.
* Source/WebCore/html/track/VTTCue.idl:
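
VTTCue::speak is likewise not shown; a sketch of its likely shape, assuming
speechUtterance() is backed by an m_speechUtterance member and that the utterance
uses the completion-handler create() overload from the diff below:

    void VTTCue::speak()
    {
        // Build the utterance lazily from the cue text. The completion handler
        // means no start/end DOM events are dispatched for description speech.
        if (!m_speechUtterance) {
            m_speechUtterance = SpeechSynthesisUtterance::create(*scriptExecutionContext(), text(), [](const SpeechSynthesisUtterance&) {
                // Hypothetical: tell the media element speech finished so it
                // can restore the ducked volume.
            });
        }
        track()->speechSynthesis().speak(*m_speechUtterance);
    }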

* Source/WebCore/page/CaptionUserPreferences.cpp:
(WebCore::CaptionUserPreferences::userPrefersTextDescriptions const): Check the
audioDescriptionsEnabled setting.
(WebCore::CaptionUserPreferences::textTrackSelectionScore const): Consider description
tracks if the user preference is set. Clean up the logic (sketched below).
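
A sketch of the selection-score logic described above; the zero scores and the
language-matching helper call are illustrative only:

    int CaptionUserPreferences::textTrackSelectionScore(TextTrack* track, HTMLMediaElement*) const
    {
        bool isDescriptions = track->kind() == TextTrack::Kind::Descriptions;

        // Description tracks only compete when the user has asked for them;
        // other tracks only compete when the user prefers captions or subtitles.
        if (isDescriptions && !userPrefersTextDescriptions())
            return 0;
        if (!isDescriptions && !userPrefersCaptions() && !userPrefersSubtitles())
            return 0;

        return textTrackLanguageSelectionScore(track); // hypothetical helper call
    }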

* Source/WebCore/page/CaptionUserPreferencesMediaAF.cpp:
(WebCore::CaptionUserPreferencesMediaAF::userPrefersCaptions const):
(WebCore::CaptionUserPreferencesMediaAF::userPrefersTextDescriptions const): Check
the MediaAccessibility framework preference.
* Source/WebCore/page/CaptionUserPreferencesMediaAF.h:
* Source/WebCore/platform/cf/MediaAccessibilitySoftLink.cpp:
* Source/WebCore/platform/cf/MediaAccessibilitySoftLink.h:

* Source/WebCore/platform/cocoa/PlatformSpeechSynthesizerCocoa.mm:
(-[WebSpeechSynthesisWrapper speakUtterance:]): If `utteranceVoice` is non-NULL
but its URI is empty, it is invalid, so fall back to checking the language.

* Source/WebCore/platform/graphics/InbandGenericCue.cpp:
(WebCore::InbandGenericCue::toJSONString const): Log cue text like VTTCue now does.

* Source/WebCore/platform/graphics/iso/ISOVTTCue.cpp:
(WebCore::ISOWebVTTCue::toJSONString const): Ditto.

* Source/WebCore/testing/Internals.cpp:
(WebCore::Internals::speechSynthesisUtteranceForCue):
(WebCore::Internals::mediaElementCurrentlySpokenCue):
* Source/WebCore/testing/Internals.h:
* Source/WebCore/testing/Internals.idl:
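
The new test below drives these two testing hooks; a sketch of what they plausibly
do (cueBeingSpoken is listed under HTMLMediaElement above; the exact signatures and
the VTTCue downcast are assumptions):

    RefPtr<TextTrackCue> Internals::mediaElementCurrentlySpokenCue(HTMLMediaElement& element)
    {
        return element.cueBeingSpoken();
    }

    RefPtr<SpeechSynthesisUtterance> Internals::speechSynthesisUtteranceForCue(TextTrackCue& cue)
    {
        // Assumes the speechUtterance() accessor added to VTTCue above.
        return downcast<VTTCue>(cue).speechUtterance();
    }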

* Source/WebKit/UIProcess/WebPageProxy.cpp:
(WebKit::WebPageProxy::speechSynthesisSpeak): Remove the `startTime` parameter name;
it isn't used.

Canonical link: https://commits.webkit.org/253931@main
eric-carlson committed Aug 30, 2022
1 parent 7a01e74 commit 969ac99ba7af596fe35f7adc7e5e60161ad137a3
Showing 35 changed files with 533 additions and 116 deletions.
@@ -4794,6 +4794,7 @@ webkit.org/b/218325 imported/w3c/web-platform-tests/css/css-scroll-snap/scroll-t
http/tests/media/hls/hls-hdr-switch.html [ Skip ]
http/tests/media/video-canplaythrough-webm.html [ Skip ]
media/media-session/mock-coordinator.html [ Skip ]
+media/track/track-description-cue.html [ Skip ]

# These tests rely on webkit-test-runner flags that aren't implemented for DumpRenderTree, so they will fail under legacy WebKit.
editing/selection/expando.html [ Failure ]
@@ -0,0 +1,14 @@
WEBVTT

1
00:00:01.000 --> 00:00:02.000
1 - The first cue

2
00:00:03.000 --> 00:00:15.000
2 - The second cue, from time 3 to 15

3
00:00:30.000 --> 00:00:40.000
3 - The third cue, from time 30 to 40

@@ -0,0 +1,4 @@


PASS WebVTT audio descriptions

@@ -0,0 +1,65 @@
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <script src="../../resources/testharness.js"></script>
    <script src="../../resources/testharnessreport.js"></script>
    <script src=../media-file.js></script>
</head>
<body>
    <video controls muted id=video>
        <track id='testTrack' src='captions-webvtt/captions-descriptions.vtt' kind='descriptions' >
    </video>

    <script>

    promise_test(async (t) => {

        let descriptionsTrack = document.querySelector("track");

        if (window.internals)
            internals.settings.setShouldDisplayTrackKind('TextDescriptions', true);

        video.src = findMediaFile('video', '../content/test');
        await new Promise(resolve => video.oncanplaythrough = resolve);

        let cues = descriptionsTrack.track.cues;
        assert_equals(cues.length, 3);

        let checkCue = (cue, expectedText) => {
            assert_equals(cue.text, expectedText);
            if (!window.internals)
                return;

            let spokenCue = window.internals.mediaElementCurrentlySpokenCue(video);
            assert_not_equals(spokenCue, null, 'descriptive cue is being spoken');

            let props = ['vertical', 'snapToLines', 'line', 'lineAlign', 'position', 'positionAlign', 'size', 'align', 'text', 'region', 'id', 'startTime', 'endTime', 'pauseOnExit'];
            props.forEach(prop => {
                assert_equals(cue[prop], spokenCue[prop], `spoken cue has correct "${prop}" value`);
            });

            let utterance = window.internals.speechSynthesisUtteranceForCue(spokenCue);
            assert_not_equals(utterance, null, 'cue utterance is not null');
            assert_equals(utterance.text, expectedText, 'correct text is being spoken');
        }

        // Seek into the range for the first cue.
        video.currentTime = 1.1;
        await new Promise(resolve => cues[0].onenter = resolve);
        checkCue(cues[0], '1 - The first cue');

        video.currentTime = 2.9;
        await new Promise(resolve => video.onseeked = resolve);

        // Play into the range of the second cue.
        video.play();
        await new Promise(resolve => cues[1].onenter = (e) => { video.pause(); resolve() });
        checkCue(cues[1], '2 - The second cue, from time 3 to 15');

    }, "WebVTT audio descriptions");

    </script>

</body>
</html>
@@ -58,6 +58,7 @@ http/tests/gzip-content-encoding [ Pass ]
imported/w3c/web-platform-tests/speech-api/ [ Pass ]

http/tests/media/fairplay [ Pass ]
+media/track/track-description-cue.html [ Pass ]

#//////////////////////////////////////////////////////////////////////////////////////////
# End platform-specific directories.
@@ -86,6 +86,7 @@ imported/w3c/web-platform-tests/pointerevents [ Pass ]
imported/w3c/web-platform-tests/speech-api [ Pass ]

http/tests/media/fairplay [ Pass ]
+media/track/track-description-cue.html [ Pass ]

#//////////////////////////////////////////////////////////////////////////////////////////
# End platform-specific directories.
@@ -1,4 +1,4 @@
-# Copyright (c) 2020-2021 Apple Inc. All rights reserved.
+# Copyright (c) 2020-2022 Apple Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
@@ -116,6 +116,19 @@ AsyncClipboardAPIEnabled:
    WebCore:
      default: false

+AudioDescriptionsEnabled:
+  type: bool
+  condition: ENABLE(VIDEO)
+  humanReadableName: "Audio descriptions for video"
+  humanReadableDescription: "Enable audio descriptions for video"
+  defaultValue:
+    WebKitLegacy:
+      default: false
+    WebKit:
+      default: false
+    WebCore:
+      default: false
+
BlankAnchorTargetImpliesNoOpenerEnabled:
  type: bool
  humanReadableName: "Blank anchor target implies rel=noopener"
@@ -197,16 +197,6 @@ void SpeechSynthesis::resume()
}
}

-void SpeechSynthesis::fireEvent(const AtomString& type, SpeechSynthesisUtterance& utterance, unsigned long charIndex, unsigned long charLength, const String& name) const
-{
-utterance.dispatchEvent(SpeechSynthesisEvent::create(type, { &utterance, charIndex, charLength, static_cast<float>((MonotonicTime::now() - utterance.startTime()).seconds()), name }));
-}
-
-void SpeechSynthesis::fireErrorEvent(const AtomString& type, SpeechSynthesisUtterance& utterance, SpeechSynthesisErrorCode errorCode) const
-{
-utterance.dispatchEvent(SpeechSynthesisErrorEvent::create(type, { { &utterance, 0, 0, static_cast<float>((MonotonicTime::now() - utterance.startTime()).seconds()), { } }, errorCode }));
-}
-
void SpeechSynthesis::handleSpeakingCompleted(SpeechSynthesisUtterance& utterance, bool errorOccurred)
{
ASSERT(m_currentSpeechUtterance);
@@ -215,9 +205,9 @@ void SpeechSynthesis::handleSpeakingCompleted(SpeechSynthesisUtterance& utteranc
m_currentSpeechUtterance = nullptr;

if (errorOccurred)
-fireErrorEvent(eventNames().errorEvent, utterance, SpeechSynthesisErrorCode::Canceled);
+utterance.errorEventOccurred(eventNames().errorEvent, SpeechSynthesisErrorCode::Canceled);
else
-fireEvent(eventNames().endEvent, utterance, 0, 0, String());
+utterance.eventOccurred(eventNames().endEvent, 0, 0, String());

if (m_utteranceQueue.size()) {
Ref<SpeechSynthesisUtterance> firstUtterance = m_utteranceQueue.takeFirst();
@@ -229,19 +219,20 @@ void SpeechSynthesis::handleSpeakingCompleted(SpeechSynthesisUtterance& utteranc
}
}

-void SpeechSynthesis::boundaryEventOccurred(PlatformSpeechSynthesisUtterance& utterance, SpeechBoundary boundary, unsigned charIndex, unsigned charLength)
+void SpeechSynthesis::boundaryEventOccurred(PlatformSpeechSynthesisUtterance& platformUtterance, SpeechBoundary boundary, unsigned charIndex, unsigned charLength)
{
static NeverDestroyed<const String> wordBoundaryString(MAKE_STATIC_STRING_IMPL("word"));
static NeverDestroyed<const String> sentenceBoundaryString(MAKE_STATIC_STRING_IMPL("sentence"));

-ASSERT(utterance.client());
+ASSERT(platformUtterance.client());

+auto utterance = static_cast<SpeechSynthesisUtterance*>(platformUtterance.client());
switch (boundary) {
case SpeechBoundary::SpeechWordBoundary:
-fireEvent(eventNames().boundaryEvent, static_cast<SpeechSynthesisUtterance&>(*utterance.client()), charIndex, charLength, wordBoundaryString);
+utterance->eventOccurred(eventNames().boundaryEvent, charIndex, charLength, wordBoundaryString);
break;
case SpeechBoundary::SpeechSentenceBoundary:
-fireEvent(eventNames().boundaryEvent, static_cast<SpeechSynthesisUtterance&>(*utterance.client()), charIndex, charLength, sentenceBoundaryString);
+utterance->eventOccurred(eventNames().boundaryEvent, charIndex, charLength, sentenceBoundaryString);
break;
default:
ASSERT_NOT_REACHED();
@@ -298,21 +289,21 @@ void SpeechSynthesis::voicesChanged()
void SpeechSynthesis::didStartSpeaking(PlatformSpeechSynthesisUtterance& utterance)
{
if (utterance.client())
-fireEvent(eventNames().startEvent, static_cast<SpeechSynthesisUtterance&>(*utterance.client()), 0, 0, String());
+static_cast<SpeechSynthesisUtterance&>(*utterance.client()).eventOccurred(eventNames().startEvent, 0, 0, String());
}

void SpeechSynthesis::didPauseSpeaking(PlatformSpeechSynthesisUtterance& utterance)
{
m_isPaused = true;
if (utterance.client())
-fireEvent(eventNames().pauseEvent, static_cast<SpeechSynthesisUtterance&>(*utterance.client()), 0, 0, String());
+static_cast<SpeechSynthesisUtterance&>(*utterance.client()).eventOccurred(eventNames().pauseEvent, 0, 0, String());
}

void SpeechSynthesis::didResumeSpeaking(PlatformSpeechSynthesisUtterance& utterance)
{
m_isPaused = false;
if (utterance.client())
-fireEvent(eventNames().resumeEvent, static_cast<SpeechSynthesisUtterance&>(*utterance.client()), 0, 0, String());
+static_cast<SpeechSynthesisUtterance&>(*utterance.client()).eventOccurred(eventNames().resumeEvent, 0, 0, String());
}

void SpeechSynthesis::didFinishSpeaking(PlatformSpeechSynthesisUtterance& utterance)
@@ -65,6 +65,16 @@ class SpeechSynthesis : public PlatformSpeechSynthesizerClient, public SpeechSyn
// Used in testing to use a mock platform synthesizer
WEBCORE_EXPORT void setPlatformSynthesizer(std::unique_ptr<PlatformSpeechSynthesizer>);

+// Restrictions to change default behaviors.
+enum BehaviorRestrictionFlags {
+NoRestrictions = 0,
+RequireUserGestureForSpeechStartRestriction = 1 << 0,
+};
+typedef unsigned BehaviorRestrictions;
+
+bool userGestureRequiredForSpeechStart() const { return m_restrictions & RequireUserGestureForSpeechStartRestriction; }
+void removeBehaviorRestriction(BehaviorRestrictions restriction) { m_restrictions &= ~restriction; }
+
private:
SpeechSynthesis(ScriptExecutionContext&);

@@ -88,19 +98,7 @@ class SpeechSynthesis : public PlatformSpeechSynthesizerClient, public SpeechSyn

void startSpeakingImmediately(SpeechSynthesisUtterance&);
void handleSpeakingCompleted(SpeechSynthesisUtterance&, bool errorOccurred);
-void fireEvent(const AtomString& type, SpeechSynthesisUtterance&, unsigned long charIndex, unsigned long charLength, const String& name) const;
-void fireErrorEvent(const AtomString& type, SpeechSynthesisUtterance&, SpeechSynthesisErrorCode) const;

-// Restrictions to change default behaviors.
-enum BehaviorRestrictionFlags {
-NoRestrictions = 0,
-RequireUserGestureForSpeechStartRestriction = 1 << 0,
-};
-typedef unsigned BehaviorRestrictions;
-
-bool userGestureRequiredForSpeechStart() const { return m_restrictions & RequireUserGestureForSpeechStartRestriction; }
-void removeBehaviorRestriction(BehaviorRestrictions restriction) { m_restrictions &= ~restriction; }
-
ScriptExecutionContext* scriptExecutionContext() const final { return ContextDestructionObserver::scriptExecutionContext(); }
EventTargetInterface eventTargetInterface() const final { return SpeechSynthesisEventTargetInterfaceType; }
void refEventTarget() final { ref(); }
@@ -26,6 +26,10 @@
#include "config.h"
#include "SpeechSynthesisUtterance.h"

#include "EventNames.h"
#include "SpeechSynthesisErrorEvent.h"
#include "SpeechSynthesisEvent.h"

#if ENABLE(SPEECH_SYNTHESIS)

#include <wtf/IsoMallocInlines.h>
@@ -36,12 +40,18 @@ WTF_MAKE_ISO_ALLOCATED_IMPL(SpeechSynthesisUtterance);

Ref<SpeechSynthesisUtterance> SpeechSynthesisUtterance::create(ScriptExecutionContext& context, const String& text)
{
-return adoptRef(*new SpeechSynthesisUtterance(context, text));
+return adoptRef(*new SpeechSynthesisUtterance(context, text, { }));
}

+Ref<SpeechSynthesisUtterance> SpeechSynthesisUtterance::create(ScriptExecutionContext& context, const String& text, SpeechSynthesisUtterance::UtteranceCompletionHandler&& completion)
+{
+return adoptRef(*new SpeechSynthesisUtterance(context, text, WTFMove(completion)));
+}
+
-SpeechSynthesisUtterance::SpeechSynthesisUtterance(ScriptExecutionContext& context, const String& text)
+SpeechSynthesisUtterance::SpeechSynthesisUtterance(ScriptExecutionContext& context, const String& text, UtteranceCompletionHandler&& completion)
: m_platformUtterance(PlatformSpeechSynthesisUtterance::create(*this))
, m_scriptExecutionContext(context)
+, m_completionHandler(WTFMove(completion))
{
m_platformUtterance->setText(text);
}
@@ -69,6 +79,29 @@ void SpeechSynthesisUtterance::setVoice(SpeechSynthesisVoice* voice)
m_platformUtterance->setVoice(voice->platformVoice());
}

+void SpeechSynthesisUtterance::eventOccurred(const AtomString& type, unsigned long charIndex, unsigned long charLength, const String& name)
+{
+if (m_completionHandler) {
+if (type == eventNames().endEvent)
+m_completionHandler(*this);
+
+return;
+}
+
+dispatchEvent(SpeechSynthesisEvent::create(type, { this, charIndex, charLength, static_cast<float>((MonotonicTime::now() - startTime()).seconds()), name }));
+}
+
+void SpeechSynthesisUtterance::errorEventOccurred(const AtomString& type, SpeechSynthesisErrorCode errorCode)
+{
+if (m_completionHandler) {
+m_completionHandler(*this);
+return;
+}
+
+dispatchEvent(SpeechSynthesisErrorEvent::create(type, { { this, 0, 0, static_cast<float>((MonotonicTime::now() - startTime()).seconds()), { } }, errorCode }));
+}
+
+
} // namespace WebCore

#endif // ENABLE(SPEECH_SYNTHESIS)
@@ -30,16 +30,19 @@
#include "ContextDestructionObserver.h"
#include "EventTarget.h"
#include "PlatformSpeechSynthesisUtterance.h"
#include "SpeechSynthesisErrorCode.h"
#include "SpeechSynthesisVoice.h"
#include <wtf/RefCounted.h>

namespace WebCore {

-class SpeechSynthesisUtterance final : public PlatformSpeechSynthesisUtteranceClient, public RefCounted<SpeechSynthesisUtterance>, public EventTargetWithInlineData {
+class WEBCORE_EXPORT SpeechSynthesisUtterance final : public PlatformSpeechSynthesisUtteranceClient, public RefCounted<SpeechSynthesisUtterance>, public EventTargetWithInlineData {
WTF_MAKE_ISO_ALLOCATED(SpeechSynthesisUtterance);
public:
+using UtteranceCompletionHandler = Function<void(const SpeechSynthesisUtterance&)>;
+static Ref<SpeechSynthesisUtterance> create(ScriptExecutionContext&, const String&, UtteranceCompletionHandler&&);
static Ref<SpeechSynthesisUtterance> create(ScriptExecutionContext&, const String&);

// Create an empty default constructor so SpeechSynthesisEventInit compiles.
SpeechSynthesisUtterance();

@@ -71,10 +74,11 @@ class SpeechSynthesisUtterance final : public PlatformSpeechSynthesisUtteranceCl

PlatformSpeechSynthesisUtterance* platformUtterance() const { return m_platformUtterance.get(); }

SpeechSynthesisUtterance(const SpeechSynthesisUtterance&);

+void eventOccurred(const AtomString& type, unsigned long charIndex, unsigned long charLength, const String& name);
+void errorEventOccurred(const AtomString& type, SpeechSynthesisErrorCode);
+
private:
-SpeechSynthesisUtterance(ScriptExecutionContext&, const String&);
+SpeechSynthesisUtterance(ScriptExecutionContext&, const String&, UtteranceCompletionHandler&&);

ScriptExecutionContext* scriptExecutionContext() const final { return &m_scriptExecutionContext; }
EventTargetInterface eventTargetInterface() const final { return SpeechSynthesisUtteranceEventTargetInterfaceType; }
@@ -84,6 +88,7 @@ class SpeechSynthesisUtterance final : public PlatformSpeechSynthesisUtteranceCl
RefPtr<PlatformSpeechSynthesisUtterance> m_platformUtterance;
RefPtr<SpeechSynthesisVoice> m_voice;
ScriptExecutionContext& m_scriptExecutionContext;
+UtteranceCompletionHandler m_completionHandler;
};

} // namespace WebCore
@@ -26,6 +26,8 @@
// https://wicg.github.io/speech-api/#speechsynthesisutterance
[
Conditional=SPEECH_SYNTHESIS,
+ExportMacro=WEBCORE_EXPORT,
+JSGenerateToJSObject,
Exposed=Window
] interface SpeechSynthesisUtterance : EventTarget {
[CallWith=CurrentScriptExecutionContext] constructor(optional DOMString text);
