<pre class='metadata'>
Title: CSS Speech Module Level 1
Shortname: css-speech
Level: 1
Group: csswg
Status: ED
Work Status: Testing
ED: https://drafts.csswg.org/css-speech-1/
TR: https://www.w3.org/TR/css-speech-1/
Editor: Léonie Watson, Tetralogical, lwatson@tetralogical.com, w3cid 44692
Editor: Elika J. Etemad / fantasai, Apple, http://fantasai.inkedblade.net/contact, w3cid 35400
Former Editor: Daniel Weck, DAISY Consortium, dweck@daisy.org
Former Editor: Claudio Santambrogio, Opera Software
Former Editor: Dave Raggett, W3C / Canon, dsr@w3.org
Previous Version: https://www.w3.org/TR/2020/CR-css-speech-1-20200310/
Abstract: The Speech module defines aural CSS properties that enable authors to declaratively control the rendering of documents via speech synthesis, and using optional audio cues. Note that this standard was developed in cooperation with the <a href="http://www.w3.org/Voice/">Voice Browser Activity</a>.
At Risk: 'voice-balance', 'voice-duration', 'voice-pitch', 'voice-range', and 'voice-stress'
</pre>
<pre class=link-defaults>
spec:html; type:element; text:link
spec:css2; type:value;
text:screen
text:speech
text:all
</pre>
<h2 id="intro">
Introduction, design goals</h2>
<em>This section is non-normative.</em>
The aural presentation of information
is commonly used by people who are
blind, visually-impaired, or otherwise print-disabled.
For instance,
“screen readers” allow users to interact with visual interfaces
that would otherwise be inaccessible to them.
There are also circumstances in which <em>listening</em> to content
(as opposed to <em>reading</em>)
is preferred, or sometimes even required,
irrespective of a person's physical ability to access information.
For instance: playing an e-book whilst driving a vehicle,
learning how to manipulate industrial and medical devices,
interacting with home entertainment systems,
teaching young children how to read.
The CSS properties defined in this Speech module
enable authors to declaratively control the presentation of a document
in the aural dimension.
The aural rendering of a document combines speech synthesis
(also known as “TTS”, the acronym for “Text to Speech”)
and auditory icons
(which are referred to as “audio cues” in this specification).
The CSS Speech properties provide the ability
to control speech pitch and rate, sound levels, TTS voices, etc.
These stylesheet properties can be used together
with visual properties (mixed media),
or as a complete aural alternative to a visual presentation.
<h2 id="background">
Background information, CSS 2.1</h2>
<em>This section is non-normative.</em>
The CSS Speech module is a re-work of the informative
<a href="https://www.w3.org/TR/CSS2/aural.html">CSS2.1 Aural appendix</a>,
within which the ''aural'' media type was described,
but also deprecated (in favor of the ''speech'' media type, which has now
also been deprecated).
Although the [[!CSS2]] specification reserved the ''speech'' media type,
it didn't actually define the corresponding properties.
The Speech module describes the CSS properties that apply to speech output,
and defines a new “box” model specifically for the aural dimension.
Content creators can include CSS properties for user agents with
text-to-speech synthesis capabilities for any media type, though
generally they will only make sense for ''all'' and ''screen''.
These styles are simply ignored by user agents that do not support
the Speech module.
<h2 id="ssml-rel">
Relationship with SSML</h2>
<em>This section is non-normative.</em>
Some of the features in this specification are conceptually similar to
functionality described in the Speech Synthesis Markup Language (SSML) Version 1.1 [[!SSML]].
However, the specificities of the CSS model mean
that compatibility with SSML in terms of syntax and/or semantics
is only partially achievable.
The definition of each property in the Speech module
includes informative statements, wherever necessary,
to clarify their relationship with similar functionality from SSML.
<h3 id="values">
Value Definitions</h3>
This specification follows the <a href="https://www.w3.org/TR/CSS2/about.html#property-defs">CSS property definition conventions</a> from [[!CSS2]]
using the <a href="https://www.w3.org/TR/css-values-3/#value-defs">value definition syntax</a> from [[!CSS-VALUES-3]].
Value types not defined in this specification are defined in CSS Values & Units [[!CSS-VALUES-3]].
Combination with other CSS modules may expand the definitions of these value types.
In addition to the property-specific values listed in their definitions,
all properties defined in this specification
also accept the <a>CSS-wide keywords</a> as their property value.
For readability they have not been repeated explicitly.
<h2 id="example">
Example</h2>
<div class="example">
This example shows how authors can tell the speech synthesizer to speak HTML headings
with a voice called "paul",
using "moderate" emphasis (which is more than normal)
and how to insert an audio cue (pre-recorded audio clip located at the given URL)
before the start of TTS rendering for each heading.
In a stereo-capable sound system,
paragraphs marked with the CSS class <code>heidi</code>
are rendered on the left audio channel (and with a female voice, etc.),
whilst the class <code>peter</code>
corresponds to the right channel (and to a male voice, etc.).
The volume level of text spans marked with the class <code>special</code>
is lower than normal,
and a prosodic boundary is created
by introducing a strong pause after it is spoken
(note how the <{span}> inherits the voice-family from its parent paragraph).
<pre>
h1, h2, h3, h4, h5, h6 {
voice-family: paul;
voice-stress: moderate;
cue-before: url(../audio/ping.wav);
voice-volume: medium 6dB;
}
p.heidi {
voice-family: female;
voice-balance: left;
voice-pitch: high;
voice-volume: -6dB;
}
p.peter {
voice-family: male;
voice-balance: right;
voice-rate: fast;
}
span.special {
voice-volume: soft;
pause-after: strong;
}
...
<h1>I am Paul, and I speak headings.</h1>
<p class="heidi">Hello, I am Heidi.</p>
<p class="peter">
<span class="special">Can you hear me ?</span>
I am Peter.
</p></pre>
</div>
<h2 id="aural-model">
The aural formatting model</h2>
The CSS formatting model for aural media is based on
a sequence of sounds and silences that occur within a nested context
similar to the <a href="https://www.w3.org/TR/css-box-3/#box-model">visual box model</a>,
which we name the <dfn export lt="aural box model">aural “box” model</dfn>.
The aural “canvas” consists of a two-channel (stereo) space
and of a temporal dimension,
within which synthetic speech and audio cues coexist.
The selected element is surrounded by 'rest', 'cue' and 'pause' properties
(from the innermost to the outermost position).
These can be seen as aural equivalents to
'padding', 'border' and 'margin', respectively.
When used, the ''::before'' and ''::after'' pseudo-elements [[!CSS2]]
get inserted between the element's contents and the 'rest'.
The following diagram illustrates the equivalence between
properties of the visual and aural box models,
applied to the selected element:
<img
title="The aural 'box' model, illustrated by a diagram: the selected element is positioned in the center, on its left side are (from innermost to outermost) rest-before, cue-before, pause-before, on its right side are (from innermost to outermost) rest-after, cue-after, pause-after, where rest is conceptually similar to padding, cue is similar to border, pause is similar to margin."
alt="The aural 'box' model, illustrated by a diagram: the selected element is positioned in the center, on its left side are (from innermost to outermost) rest-before, cue-before, pause-before, on its right side are (from innermost to outermost) rest-after, cue-after, pause-after, where rest is conceptually similar to padding, cue is similar to border, pause is similar to margin."
id="aural-box" src="images/aural-box.png">
<h2 id="mixing-props">
Mixing properties</h2>
<h3 id="mixing-props-voice-volume">
The 'voice-volume' property</h3>
<pre class=propdef>
Name: voice-volume
Value: silent | [[x-soft | soft | medium | loud | x-loud] || <<decibel>>]
Initial: medium
Applies to: all elements
Inherited: yes
Percentages: N/A
Computed value: ''silent'', or a keyword value and optionally also a decibel offset (if not zero)
</pre>
The 'voice-volume' property allows authors to control
the amplitude of the audio waveform generated by the speech synthesizer,
and is also used to adjust the relative volume level of <a href="#cue-props">audio cues</a>
within the [=aural box model=] of the selected element.
Note: Although the functionality provided by this property is similar to
the <a href="https://www.w3.org/TR/speech-synthesis11/#edef_prosody"><code>volume</code>
attribute of the <code>prosody</code> element</a> from the SSML markup language [[!SSML]],
there are notable discrepancies.
For example, CSS Speech volume keywords and decibel units are not mutually exclusive,
due to how values are inherited and combined for selected elements.
<dl dfn-type=value dfn-for=voice-volume>
<!--<dt><dfn>normal</dfn>
<dd>
Corresponds to +0.0dB,
which means that there is no modification of volume level.
This value overrides the inherited value. -->
<dt><dfn>silent</dfn>
<dd>
Specifies that no sound is generated (the text is read "silently").
Note: This has the same effect as using negative infinity decibels.
Also note that there is a difference between
an element whose 'voice-volume' property has a value of ''silent'',
and an element whose 'speak' property has the value ''speak/none''.
With the former,
the selected element takes up the same time as if it was spoken,
including any pause before and after the element,
but no sound is generated
(and descendants within the [=aural box model=] of the selected element
can override the 'voice-volume' value, and may therefore generate audio output).
With the latter,
the selected element is not rendered in the aural dimension
and no time is allocated for playback
(descendants within the [=aural box model=] of the selected element
can override the 'speak' value,
and may therefore generate audio output).
<dt><dfn>x-soft</dfn>, <dfn>soft</dfn>, <dfn>medium</dfn>, <dfn>loud</dfn>, <dfn>x-loud</dfn>
<dd>
This sequence of keywords corresponds to
monotonically non-decreasing volume levels,
mapped to implementation-dependent values
that meet the listener's requirements with regards to perceived loudness.
These audio levels are typically provided via a preference mechanism
that allows users to calibrate sound options
according to their auditory environment.
The keyword ''x-soft'' maps to the user's <em>minimum audible</em> volume level,
''x-loud'' maps to the user's <em>maximum tolerable</em> volume level,
''voice-volume/medium'' maps to the user's <em>preferred</em> volume level,
''soft'' and ''loud'' map to intermediary values.
<dt><dfn><<decibel>></dfn>
<dd>
This represents a change (positive or negative)
relative to the given keyword value (see enumeration above),
or to the default value for the root element,
or otherwise to the inherited volume level
(which may itself be a combination of a keyword value and decibel offset,
in which case the decibel values are combined additively).
When the inherited volume level is ''silent'',
the 'voice-volume' of the element also resolves to ''silent'',
regardless of the specified <<decibel>> value.
The <dfn type><<decibel>></dfn> type denotes
a [=dimension=] with a "dB" (decibel unit) unit identifier.
Decibels represent
the ratio of the squares of the new signal amplitude <var>a1</var>
and the current amplitude <var>a0</var>,
as per the following logarithmic equation:
volume(dB) = 20 × log10(<var>a1</var> / <var>a0</var>).
Note: -6.0dB is approximately half the amplitude of the audio signal,
and +6.0dB is approximately twice the amplitude.
</dl>
Note: Perceived loudness depends on various factors,
such as the listening environment, user preferences or physical abilities.
The effective volume variation between ''x-soft'' and ''x-loud'' represents
the dynamic range (in terms of loudness) of the audio output.
Typically, this range would be compressed in a noisy context,
i.e. the perceived loudness corresponding to ''x-soft''
would effectively be closer to ''x-loud''
than it would be in a quiet environment.
There may also be situations where both ''x-soft'' and ''x-loud''
would map to low volume levels,
such as in listening environments requiring discretion
(e.g. library, night-reading).
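<div class="example">
This non-normative example sketches how keyword values and decibel offsets combine through inheritance
(the selectors and class names are illustrative, not defined by this specification):

<pre>
article         { voice-volume: soft; }
article em      { voice-volume: +2dB; }   /* computes to ''soft'' plus a +2dB offset */
article .aside  { voice-volume: silent; } /* descendants inherit ''silent'' unless they override it */
</pre>

Because the <code>em</code> element inherits ''soft'', its bare <<decibel>> value is applied
as an offset relative to that keyword; if a further nested element specified another offset,
the decibel values would be combined additively.
</div>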
<h3 id="mixing-props-voice-balance">
The 'voice-balance' property</h3>
<pre class=propdef>
Name: voice-balance
Value: <<number>> | left | center | right | leftwards | rightwards
Initial: center
Applies to: all elements
Inherited: yes
Percentages: N/A
Computed value: the specified value resolved to a <<number>> between ''-100'' and ''100'' (inclusive)
</pre>
The 'voice-balance' property controls the spatial distribution
of audio output across a lateral sound stage:
one extremity is on the left, the other extremity is on the right hand side,
relative to the listener's position.
Authors can specify intermediary steps between the left and right extremities,
to represent the audio separation along the resulting left-right axis.
Note: The functionality provided by this property has no match in the SSML markup language [[!SSML]].
<dl dfn-type=value dfn-for=voice-balance>
<dt><dfn><<number>></dfn>
<dd>
A [=number=] between ''-100'' and ''100'' (inclusive).
Values smaller than ''-100'' are clamped to ''-100''.
Values greater than ''100'' are clamped to ''100''.
The value ''-100'' represents the left side,
and the value ''100'' represents the right side.
The value ''0'' represents the center point
whereby there is no discernible audio separation
between left and right sides.
(In a stereo sound system,
this corresponds to equal distribution of audio signals
between left and right speakers).
<dt><dfn>left</dfn>
<dd>
Same as ''-100''.
<dt><dfn>center</dfn>
<dd>
Same as ''0''.
<dt><dfn>right</dfn>
<dd>
Same as ''100''.
<dt><dfn>leftwards</dfn>
<dd>
Moves the sound to the left
by subtracting 20 from the inherited 'voice-balance' value
(and by clamping the resulting number to ''-100'').
<dt><dfn>rightwards</dfn>
<dd>
Moves the sound to the right,
by adding 20 to the inherited 'voice-balance' value
(and by clamping the resulting number to ''100'').
</dl>
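<div class="example">
A non-normative illustration of how the keywords map onto the computed <<number>> scale,
and how ''leftwards'' adjusts the inherited value (the class names are illustrative):

<pre>
.narrator        { voice-balance: left; }       /* computes to -100 */
.narrator .aside { voice-balance: rightwards; } /* inherited -100, plus 20, computes to -80 */
.chorus          { voice-balance: 75; }         /* between center and the right extremity */
</pre>
</div>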
User agents can be connected to different kinds of sound systems,
featuring varying audio mixing capabilities.
The expected behavior for mono, stereo, and surround sound systems
is defined as follows:
* When user agents produce audio via a mono-aural sound system
(i.e. single-speaker setup),
the 'voice-balance' property has no effect.
* When user agents produce audio through a stereo sound system
(e.g. two speakers, or a pair of headphones),
the left-right distribution of audio signals
can precisely match the authored values for the 'voice-balance' property.
* When user agents are capable of mixing audio signals through more than 2 channels
(e.g. 5-speakers surround sound system, including a dedicated center channel),
the physical distribution of audio signals
resulting from the application of the 'voice-balance' property
should be performed so that the listener perceives sound
as if it was coming from a basic stereo layout.
For example, the center channel as well as the left/right speakers
may be used all together
in order to emulate the behavior of the ''voice-balance/center'' value.
Future revisions of the CSS Speech module may include support for three-dimensional audio,
which would effectively enable authors to specify “azimuth” and “elevation” values.
In the future, content authored using the current specification
may therefore be consumed by user agents which are compliant
with the version of CSS Speech that supports three-dimensional audio.
In order to prepare for this possibility,
the values enabled by the current 'voice-balance' property
are designed to remain compatible with “azimuth” angles.
More precisely, the mapping between the current left-right audio axis (lateral sound stage)
and the envisioned 360 degrees plane around the listener's position
is defined as follows:
* The value ''0'' maps to zero degrees (''voice-balance/center'').
This position is in "front" of the listener, not "behind".
* The value ''-100'' maps to -40 degrees (''voice-balance/left'').
Negative angles are in the counter-clockwise direction
(assuming the audio stage is seen from the top).
* The value ''100'' maps to 40 degrees (''voice-balance/right'').
Positive angles are in the clockwise direction
(assuming the audio stage is seen from the top).
* Intermediary values on the scale from ''-100'' to ''100''
map to the angles between -40 and 40 degrees
in a numerically linearly-proportional manner.
For example, ''-50'' maps to -20 degrees.
Note: Sound systems can be configured by users
in such a way that it would interfere with the left-right audio distribution
specified by document authors.
Typically, the various “surround” modes available in modern sound systems
(including systems based on basic stereo speakers)
tend to greatly alter the perceived spatial arrangement of audio signals.
The illusion of a three-dimensional sound stage
is often achieved using a combination of
phase shifting, digital delay, volume control (channel mixing), and other techniques.
Some users may even configure their system to “downgrade” any rendered sound
to a single mono channel,
in which case the effect of the 'voice-balance' property
would obviously not be perceivable at all.
The rendering fidelity of authored content
is therefore dependent on such user customizations,
and the 'voice-balance' property merely specifies the desired end-result.
Note: Many speech synthesizers only generate mono sound,
and therefore do not intrinsically support the 'voice-balance' property.
The sound distribution along the left-right axis
consequently occurs at post-synthesis stage
(when the speech-enabled user agent mixes
the various audio sources authored within the document).
<h2 id="speaking-props">
Speaking properties</h2>
<h3 id="speaking-props-speak">
The 'speak' property</h3>
<pre class=propdef>
Name: speak
Value: auto | never | always
Initial: auto
Applies to: all elements
Inherited: yes
Percentages: N/A
Computed value: specified value
</pre>
The 'speak' property determines whether or not to render text aurally.
Note: The functionality provided by this property has no match in the SSML markup language [[!SSML]].
<dl dfn-type=value dfn-for=speak>
<dt><dfn>auto</dfn>
<dd>
Resolves to a computed value of ''speak/never''
when 'display' is ''display/none'',
otherwise resolves to a computed value of ''speak/auto''.
The used value of a computed ''speak/auto'' is equivalent
to ''speak/always'' if 'visibility' is ''visibility/visible''
and to ''speak/never'' otherwise.
Note: The ''display/none'' value of the 'display' property
cannot be overridden by descendants of the selected element,
but the ''speak/auto'' value of 'speak' can, however,
be overridden using either of ''speak/never'' or ''speak/always''.
<dt><dfn>never</dfn>
<dd>
This value causes an element (including pauses, cues, rests and actual content)
to not be rendered (i.e., the element has no effect in the aural dimension).
Note: Any of the descendants of the affected element are allowed to override this value,
so descendants can actually take part in the aural rendering
despite ''speak: never'' being used at this level.
However, the pauses, cues, and rests of the ancestor element
remain “deactivated” in the aural dimension,
and therefore do not contribute to the <a href="#collapsed-pauses">collapsing of pauses</a>
or additive behavior of adjoining rests.
<dt><dfn>always</dfn>
<dd>
The element is rendered aurally
(regardless of its 'display' value,
or the 'display' or 'speak' values of its ancestors).
Note: Using this value can result in the element being rendered in the aural dimension
even though it would not be rendered on the visual canvas.
</dl>
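<div class="example">
This non-normative example mutes a subtree aurally while letting one descendant opt back in
(the class names are illustrative):

<pre>
.decorative          { speak: never; }  /* the subtree is skipped in the aural dimension... */
.decorative .summary { speak: always; } /* ...except for this descendant, which is spoken */
</pre>

Note that the pauses, cues, and rests of the <code>.decorative</code> element
remain deactivated even though the <code>.summary</code> descendant is rendered aurally.
</div>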
<h3 id="speaking-props-speak-as">
The 'speak-as' property</h3>
<pre class=propdef>
Name: speak-as
Value: normal | spell-out || digits || [ literal-punctuation | no-punctuation ]
Initial: normal
Applies to: all elements
Inherited: yes
Percentages: N/A
Computed value: specified value
</pre>
The 'speak-as' property determines in what manner text gets rendered aurally,
based upon a predefined list of possibilities.
Note: The functionality provided by this property is conceptually similar to
the <a href="https://www.w3.org/TR/speech-synthesis11/#edef_say-as"><code>say-as</code> element</a>
from the SSML markup language [[!SSML]]
(whose possible values are described in the [[SSML-SAYAS]] W3C Note).
Although the design goals are similar,
the CSS model is limited to a basic set of pronunciation rules.
<dl dfn-type=value dfn-for=speak-as>
<dt><dfn>normal</dfn>
<dd>
Uses language-dependent pronunciation rules for rendering the element's content.
For example, punctuation is not spoken as-is,
but instead rendered naturally as appropriate pauses.
<dt><dfn>spell-out</dfn>
<dd>
Spells the text one letter at a time (useful for acronyms and abbreviations).
In languages where accented characters are rare,
it is permitted to drop accents in favor of alternative unaccented spellings.
As an example, in English, the word “rôle” can also be written as “role”.
A conforming implementation would thus be able to spell-out “rôle” as “R O L E”.
<dt><dfn>digits</dfn>
<dd>
Speaks numbers one digit at a time,
for instance, “twelve” would be spoken as “one two”,
and “31” as “three one”.
Note: Speech synthesizers are knowledgeable about what a <em>number</em> is.
The 'speak-as' property enables some level of control on how user agents render numbers,
and may be implemented as a preprocessing step
before passing the text to the actual speech synthesizer.
<dt><dfn>literal-punctuation</dfn>
<dd>
Punctuation such as semicolons, braces, and so on
is named aloud (i.e. spoken literally)
rather than rendered naturally as appropriate pauses.
<dt><dfn>no-punctuation</dfn>
<dd>
Punctuation is not rendered: neither spoken nor rendered as pauses.
</dl>
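<div class="example">
A non-normative sketch of typical uses of 'speak-as' (the selectors are illustrative):

<pre>
abbr          { speak-as: spell-out; }           /* "CSS" is spelled out one letter at a time */
.phone-number { speak-as: digits; }              /* "31" is spoken as "three one" */
code          { speak-as: literal-punctuation; } /* semicolons, braces, etc. are named aloud */
</pre>
</div>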
<h2 id="pause-props">
Pause properties </h2>
<h3 id="pause-props-pause-before-after">
The 'pause-before' and 'pause-after' properties</h3>
<pre class=propdef>
Name: pause-before, pause-after
Value: <<time [0s,∞]>> | none | x-weak | weak | medium | strong | x-strong
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Computed value: specified value
</pre>
The 'pause-before' and 'pause-after' properties specify a prosodic boundary
(silence with a specific duration)
that occurs before (or after) the speech synthesis rendition of the element,
or if any 'cue-before' (or 'cue-after') is specified,
before (or after) the cue within the [=aural box model=].
Note: Although the functionality provided by this property is similar to
the <a href="https://www.w3.org/TR/speech-synthesis11/#edef_break"><code>break</code> element</a>
from the SSML markup language [[!SSML]],
the application of 'pause' prosodic boundaries within the [=aural box model=] of CSS Speech
requires special considerations (e.g. <a href="#collapsed-pauses">"collapsed" pauses</a>).
<dl dfn-type=value dfn-for="pause-before,pause-after">
<dt><dfn><<time [0s,∞]>></dfn>
<dd>
Expresses the pause in absolute time units
(seconds and milliseconds, e.g. "+3s", "250ms").
Only non-negative values are allowed.
<dt><dfn>none</dfn>
<dd>
Equivalent to 0ms (no prosodic break is produced by the speech processor).
<dt><dfn>x-weak</dfn>, <dfn>weak</dfn>, <dfn>medium</dfn>, <dfn>strong</dfn>, and <dfn>x-strong</dfn>
<dd>
Expresses the pause by the strength of the prosodic break in speech output.
The exact time is implementation-dependent.
The values indicate monotonically non-decreasing (conceptually increasing)
break strength between elements.
</dl>
Note: Stronger content boundaries are typically accompanied by pauses.
For example, the breaks between paragraphs are typically much more substantial
than the breaks between words within a sentence.
<div class="example">
This example illustrates how the default strengths of prosodic breaks
for specific elements (which are defined by the user agent stylesheet)
can be overridden by authored styles.
<pre>
p { pause: none } /* pause-before: none; pause-after: none */
</pre>
</div>
<h3 id="pause-props-pause">
The 'pause' shorthand property</h3>
<pre class=propdef>
Name: pause
Value: <<'pause-before'>> <<'pause-after'>>?
Initial: N/A (see individual properties)
Applies to: all elements
Inherited: no
Percentages: N/A
Computed value: N/A (see individual properties)
</pre>
The 'pause' property is a shorthand property for 'pause-before' and 'pause-after'.
If two values are given, the first value is 'pause-before' and the second is 'pause-after'.
If only one value is given, it applies to both properties.
<div class="example">
<p> Examples of property values:
<pre>
h1 { pause: 20ms; } /* pause-before: 20ms; pause-after: 20ms */
h2 { pause: 30ms 40ms; } /* pause-before: 30ms; pause-after: 40ms */
h3 { pause-after: 10ms; } /* pause-before: <i>unspecified</i>; pause-after: 10ms */
</pre>
</div>
<h3 id="collapsed-pauses">
Collapsing pauses</h3>
The pause defines the minimum distance of the aural "box"
to the aural "boxes" before and after it.
Adjoining pauses are merged by selecting the strongest named break
and the longest absolute time interval.
For example, "strong" is selected when merging "strong" and "weak",
"1s" is selected when merging "1s" and "250ms",
and "strong" and "250ms" take effect additively when merging "strong" and "250ms".
The following pauses are adjoining:
<ul>
<li>The 'pause-after' of an aural "box" and the 'pause-after' of its last child,
provided the former has no 'rest-after' and no 'cue-after'.
<li>The 'pause-before' of an aural "box" and the 'pause-before' of its first child,
provided the former has no 'rest-before' and no 'cue-before'.
<li>The 'pause-after' of an aural "box" and the 'pause-before' of its next sibling.
<li>The 'pause-before' and 'pause-after' of an aural "box",
if the "box" has a 'voice-duration' of "0ms"
and no 'rest-before' or 'rest-after' and no 'cue-before' or 'cue-after',
or if the "box" has no rendered content at all (see 'speak').
</ul>
A collapsed pause is considered adjoining to another pause
if any of its component pauses is adjoining to that pause.
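<div class="example">
A non-normative illustration of pause collapsing between siblings:

<pre>
p     { pause-after: strong; }
p + p { pause-before: 500ms; }
</pre>

The 'pause-after' of a paragraph and the 'pause-before' of its next sibling are adjoining,
so consecutive paragraphs are separated by a single merged pause
rather than by two back-to-back silences.
</div>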
Note: 'pause' has been moved from between the element's contents and any 'cue'
to outside the 'cue'.
This is not backwards compatible with the informative CSS2.1 Aural appendix [[!CSS2]].
<h2 id="rest-props">
Rest properties</h2>
<h3 id="rest-props-rest-before-after">
The 'rest-before' and 'rest-after' properties</h3>
<pre class=propdef>
Name: rest-before, rest-after
Value: <<time [0s,∞]>> | none | x-weak | weak | medium | strong | x-strong
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Computed value: specified value
</pre>
The 'rest-before' and 'rest-after' properties specify a prosodic boundary
(silence with a specific duration)
that occurs before (or after) the speech synthesis rendition of an element within the [=aural box model=].
Note: Although the functionality provided by this property is similar to
the <a href="https://www.w3.org/TR/speech-synthesis11/#edef_break"><code>break</code> element</a>
from the SSML markup language [[!SSML]],
the application of 'rest' prosodic boundaries within the [=aural box model=] of CSS Speech
requires special considerations (e.g. interspersed audio cues, additive adjacent rests).
<dl dfn-type=value dfn-for="rest-before,rest-after">
<dt><dfn><<time [0s,∞]>></dfn>
<dd>
Expresses the rest in absolute time units (seconds and milliseconds, e.g. "+3s", "250ms").
Only non-negative values are allowed.
<dt><dfn>none</dfn>
<dd>
Equivalent to 0ms.
(No prosodic break is produced by the speech processor.)
<dt><dfn>x-weak</dfn>, <dfn>weak</dfn>, <dfn>medium</dfn>, <dfn>strong</dfn>, and <dfn>x-strong</dfn>
<dd>
Expresses the rest by the strength of the prosodic break in speech output.
The exact time is implementation-dependent.
The values indicate monotonically non-decreasing (conceptually increasing)
break strength between elements.
</dl>
As opposed to <a href="#pause-props">pause properties</a>,
the rest is inserted between the element's content
and any 'cue-before' or 'cue-after' content.
Adjoining rests are treated additively, and do not collapse.
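<div class="example">
<p> Illustrative sketch of additive rests
(hypothetical selectors, and assuming no intervening cues or pauses):
<pre>
h2 { rest-after: 500ms; }
h2 + p { rest-before: 250ms; }
/* The silence between the heading's content and the paragraph's
   content is 500ms + 250ms = 750ms: unlike pauses,
   adjoining rests add up instead of collapsing. */
</pre>
</div>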
<h3 id="rest-props-rest">
The 'rest' shorthand property</h3>
<pre class=propdef>
Name: rest
Value: <<'rest-before'>> <<'rest-after'>>?
Initial: N/A (see individual properties)
Applies to: all elements
Inherited: no
Percentages: N/A
Computed value: N/A (see individual properties)
</pre>
The 'rest' property is a shorthand for 'rest-before' and 'rest-after'.
If two values are given, the first value is 'rest-before' and the second is 'rest-after'.
If only one value is given, it applies to both properties.
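<div class="example">
<p> Example of shorthand notation (illustrative selector):
<pre>
blockquote
{
  rest-before: strong;
  rest-after: weak;
}
/* ...is equivalent to: */
blockquote
{
  rest: strong weak;
}
</pre>
</div>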
<h2 id="cue-props">
Cue properties</h2>
<h3 id="cue-props-cue-before-after">
The 'cue-before' and 'cue-after' properties</h3>
<pre class=propdef>
Name: cue-before, cue-after
Value: <<uri>> <<decibel>>? | none
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Computed value: specified value
</pre>
The 'cue-before' and 'cue-after' properties specify auditory icons
(i.e. pre-recorded / pre-generated sound clips)
to be played before (or after) the element within the <a href="#aural-model">aural box model</a>.
Note: Although the functionality provided by this property may appear related to
the <a href="https://www.w3.org/TR/speech-synthesis11/#edef_audio"><code>audio</code> element</a>
from the SSML markup language [[!SSML]],
there are in fact major discrepancies.
For example, the [=aural box model=] means that
audio cues are associated to the element's volume level;
and CSS Speech's auditory icons provide limited functionality
compared to SSML's <code>audio</code> element.
<dl dfn-type=value dfn-for="cue-before,cue-after">
<dt><dfn><<uri>></dfn>
<dd>
The URI designates an auditory icon resource.
When a user agent is not able to render the specified auditory icon
(e.g. missing file resource, or unsupported audio codec),
it is recommended to produce an alternative cue, such as a bell sound.
<dt><dfn>none</dfn>
<dd>
Specifies that no auditory icon is used.
<dt><<decibel>>
<dd>
Represents a change (positive or negative) relative to
the computed value of the 'voice-volume' property
within the [=aural box model=] of the selected element.
(As a result, the volume level of an audio cue changes
when the 'voice-volume' property changes).
When omitted, the implied value computes to 0dB.
When the computed value of the 'voice-volume' property is ''silent'',
the audio cue is also set to ''silent'' (regardless of this specified <<decibel>> value).
Otherwise (when not ''silent''),
'voice-volume' values are always specified relative
to the volume level keywords (see the definition of 'voice-volume'),
which map to a user-calibrated scale of "preferred" loudness settings.
If the inherited 'voice-volume' value already contains a decibel offset,
the dB offset specific to the audio cue is combined additively.
Note: There is a difference between an audio cue
whose volume is set to ''silent'' and one whose value is ''cue-before/none''.
In the former case, the audio cue takes up the same time as if it had been played,
but no sound is generated.
In the latter case, there is no manifestation of the audio cue at all
(i.e. no time is allocated for the cue in the aural dimension).
</dl>
<div class="example">
<p> Examples of property values:
<pre>
a
{
cue-before: url(/audio/bell.aiff) -3dB;
cue-after: url(dong.wav);
}
h1
{
cue-before: url(../clips-1/pop.au) +6dB;
cue-after: url(../clips-2/pop.au) 6dB;
}
div.caution { cue-before: url(./audio/caution.wav) +8dB; }
</pre>
</div>
<h3 id="cue-props-volume">
Relation between audio cues and speech synthesis volume levels</h3>
<em>This section is non-normative.</em>
The volume levels of audio cues and of speech synthesis
within the [=aural box model=] of a selected element are related.
For example, the desired effect of an audio cue
whose volume level is set at +0dB (as specified by the <<decibel>> value)
is that its perceived loudness during playback
is close to that of the speech synthesis rendition of the selected element,
as dictated by the computed value of the 'voice-volume' property.
Note that a ''silent'' computed value for the 'voice-volume' property
results in audio cues being "forcefully" silenced as well
(i.e. regardless of the specified audio cue <<decibel>> value).
The volume keywords of the 'voice-volume' property
are user-calibrated to match requirements not known at authoring time
(e.g. auditory environment, personal preferences).
Therefore, in order to achieve this approximate loudness alignment of audio cues and speech synthesis,
authors should ensure that the volume level of audio cues
(on average, as there may be discrete variations of perceived loudness
due to changes in the audio stream, such as intonation, stress, etc.)
matches the output of a speech synthesis rendition based on the 'voice-family' intended for use,
given "typical" listening conditions
(i.e. default system volume levels, centered equalization across the frequency spectrum).
As speech processors are capable of directly controlling
the waveform amplitude of generated text-to-speech audio,
and because user agents are able to adjust the volume output of audio cues
(i.e. amplify or attenuate audio signals based on the intrinsic waveform amplitude of digitized sound clips),
this sets a baseline that enables implementations to manage the loudness
of both TTS and cue audio streams within the aural box model,
relative to user-calibrated volume levels
(see the keywords defined in the 'voice-volume' property).
Due to the complex relationship between perceived audio characteristics (e.g. loudness)
and the processing applied to the digitized audio signal (e.g. signal compression),
we refer to a simple scenario whereby the attenuation is indicated in decibels,
typically ranging from 0dB (i.e. maximum audio input, near clipping threshold)
to -60dB (i.e. total silence).
Given this context, a "standard" audio clip would oscillate between these values,
the loudest peak levels would be close to -3dB (to avoid distortion),
and the relevant audible passages would have average (RMS) volume levels
as high as possible (i.e. not too quiet, to avoid background noise during amplification).
This would roughly provide an audio experience that could be
seamlessly combined with text-to-speech output
(i.e. there would be no discernible difference in volume levels
when switching from pre-recorded audio to speech synthesis).
Although there exists no industry-wide standard to support such convention,
different TTS engines tend to generate comparably-loud audio signals
when no gain or attenuation is specified.
For voice and soft music, an average level of around -15dB RMS is common practice.
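<div class="example">
<p> This non-normative sketch (hypothetical selectors and file name)
illustrates how an audio cue's <<decibel>> offset
combines with the computed 'voice-volume':
<pre>
body { voice-volume: medium; }
div.note
{
  voice-volume: -6dB; /* computes to "medium -6dB" */
  cue-before: url(chime.wav) +2dB;
  /* The cue plays at the user-calibrated "medium" loudness
     offset by -6dB + 2dB = -4dB (offsets combine additively).
     If 'voice-volume' instead computed to "silent",
     the cue would be silenced as well. */
}
</pre>
</div>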
<h3 id="cue-props-cue">
The 'cue' shorthand property</h3>
<pre class=propdef>
Name: cue
Value: <<'cue-before'>> <<'cue-after'>>?
Initial: N/A (see individual properties)
Applies to: all elements
Inherited: no
Percentages: N/A
Computed value: N/A (see individual properties)
</pre>
The 'cue' property is a shorthand for 'cue-before' and 'cue-after'.
If two values are given, the first value is 'cue-before' and the second is 'cue-after'.
If only one value is given, it applies to both properties.
<div class="example">
<p> Example of shorthand notation:
<pre>
h1
{
cue-before: url(pop.au);
cue-after: url(pop.au);
}
/* ...is equivalent to: */
h1
{
cue: url(pop.au);
}
</pre>
</div>
<h2 id="voice-char-props">
Voice characteristic properties</h2>
<h3 id="voice-props-voice-family">
The 'voice-family' property</h3>
<pre class=propdef>
Name: voice-family
Value: [[<<family-name>> | <<generic-voice>>],]* [<<family-name>> | <<generic-voice>>] | preserve
Initial: implementation-dependent
Applies to: all elements
Inherited: yes
Percentages: N/A
Computed value: specified value
</pre>
The 'voice-family' property specifies a prioritized list of component values
that are separated by commas to indicate that they are alternatives.
(This is analogous to 'font-family' in visual style sheets.)
Each component value potentially designates a speech synthesis voice instance,
by specifying match criteria.
See the <a href="#voice-selection">voice selection</a> section on this topic.
<dfn><<generic-voice>></dfn> = [<<age>>? <<gender>> <<integer>>?]
Note: Although the functionality provided by this property is similar to
the <a href="https://www.w3.org/TR/speech-synthesis11/#edef_voice"><code>voice</code> element</a>
from the SSML markup language [[!SSML]],
CSS Speech does not provide an equivalent to SSML's sophisticated voice language selection.
This technical limitation may be alleviated in a future revision of the Speech module.
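<div class="example">
<p> Illustrative sketch of prioritized voice selection
(the voice names are hypothetical):
<pre>
h1 { voice-family: announcer, old male; }
p.part.romeo { voice-family: romeo, young male; }
p.part.juliet { voice-family: juliet, young female 2; }
/* If no voice instance named "juliet" is available, the processor
   falls back to the second matching young female voice. */
</pre>
</div>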
<dl dfn-type=value dfn-for=voice-family>
<dt><dfn><<family-name>></dfn>
<dd>
Values are specific voice instances (e.g., Mike, comedian, mary, carlos2, "valley girl").
Like 'font-family' names, voice names must either be given quoted as [=strings=],
or unquoted as a sequence of one or more [=CSS identifiers=].
Note: As a result, most punctuation characters, or digits at the start of each token,
must be escaped in unquoted voice names.
If a sequence of identifiers is given as a voice name,
the computed value is the name converted to a string
by joining all the identifiers in the sequence by single spaces.
Voice names that happen to be the same as the gender keywords
(''male'', ''female'' and ''neutral'')
or that happen to match the [=CSS-wide keywords=] or ''preserve''
must be quoted to disambiguate with these keywords.
The keyword <css>default</css> is reserved for future use and must also be quoted when used as a voice name.
Note: In [[!SSML]], voice names are space-separated and cannot contain whitespace characters.
It is recommended to quote voice names that contain
white space, digits, or punctuation characters other than hyphens
(even if these voice names are valid in unquoted form),
in order to improve code clarity.
For example: <code>voice-family: "john doe", "Henry the-8th";</code>
<dt><dfn type><<age>></dfn>
<dd>
Possible values are <dfn>child</dfn>, <dfn>young</dfn> and <dfn>old</dfn>,
indicating the preferred age category to match during voice selection.
Note: A recommended mapping with [[!SSML]] ages is:
''child'' = 6 y/o, ''young'' = 24 y/o, ''old'' = 75 y/o.
More flexible age ranges may be used by the processor-dependent voice-matching algorithm.
<dt><dfn type><<gender>></dfn>
<dd>
One of the keywords <dfn>male</dfn>, <dfn>female</dfn>, or <dfn>neutral</dfn>,
specifying a male, female, or neutral voice, respectively.
Note: The interpretation of the relationship between a person's age or gender,