async function speechSynthesisUtteranceTests() {
  // 12-24-2017
  console.group("SpeechSynthesisUtterance pitch, rate, volume attribute tests");
  const text = `hello universe`;
  // https://w3c.github.io/speech-api/speechapi.html#dfn-utterancepitch
  // `pitch` 0-2
  for (let i = 0; i < 2; i += 0.3333333333333333) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.pitch = i;
    console.log(`SpeechSynthesisUtterance pitch: ${utterance.pitch}`);
    await new Promise(resolve => {
      utterance.onend = e => {
        utterance.onend = null;
        resolve();
      };
      window.speechSynthesis.speak(utterance);
    });
  }
  // https://w3c.github.io/speech-api/speechapi.html#dfn-utterancerate
  // `rate` 0.1-10
  for (let i = 0.1; i < 10; i += 0.1) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = i;
    console.log(`SpeechSynthesisUtterance rate: ${utterance.rate}`);
    await new Promise(resolve => {
      utterance.onend = e => {
        utterance.onend = null;
        resolve();
      };
      window.speechSynthesis.speak(utterance);
    });
  }
  // https://w3c.github.io/speech-api/speechapi.html#dfn-utterancevolume
  // `volume` 0-1
  for (let i = 0; i < 1; i += 0.3333333333333333 / 2) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.volume = i;
    console.log(`SpeechSynthesisUtterance volume: ${utterance.volume}`);
    await new Promise(resolve => {
      utterance.onend = e => {
        utterance.onend = null;
        resolve();
      };
      window.speechSynthesis.speak(utterance);
    });
  }
  console.groupEnd();
}
speechSynthesisUtteranceTests();
We could use a uniform procedure to verify the results within the required test format.
One approach would be to record the output of window.speechSynthesis.speak() using MediaRecorder, then perform the tests on the recording using an AnalyserNode (via AudioContext.createAnalyser()), though it is not certain whether the recorded audio will faithfully reflect the original audio output.
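A minimal sketch of that idea follows. It assumes the platform exposes a loopback/"monitor" capture device through getUserMedia() (as on *nix with PulseAudio), since there is no standard way to route speechSynthesis.speak() output into Web Audio directly; recordUtterance and its option handling are illustrative names, not an existing API.

// Rough sketch only: this assumes getUserMedia() can capture a loopback/"monitor"
// device rather than a microphone; device selection is platform-specific.
// recordUtterance and its options are illustrative, not an existing API.
async function recordUtterance(text, { pitch = 1, rate = 1, volume = 1 } = {}) {
  // Capture whatever the system is currently playing back.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);

  // Tap the same stream with an AnalyserNode so levels can be sampled live.
  const ac = new AudioContext();
  const analyser = ac.createAnalyser();
  ac.createMediaStreamSource(stream).connect(analyser);
  const samples = new Float32Array(analyser.fftSize);

  const utterance = new SpeechSynthesisUtterance(text);
  Object.assign(utterance, { pitch, rate, volume });

  recorder.start();
  await new Promise(resolve => {
    utterance.onend = resolve;
    speechSynthesis.speak(utterance);
  });
  // Snapshot of the waveform near the end of the utterance.
  analyser.getFloatTimeDomainData(samples);

  const blob = await new Promise(resolve => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));
    recorder.stop();
  });
  stream.getTracks().forEach(track => track.stop());
  await ac.close();
  return { blob, samples };
}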
I'm tentatively labelling this as untestable, because I'm not sure we can test it yet.
We can't use MediaRecorder because we can't rely on having a microphone (given most vendors' CI systems run on VMs that won't have any, and we certainly couldn't rely on them being otherwise quiet).
We can't use MediaRecorder because we can't rely on having a microphone
A microphone is not necessary to record the output of speechSynthesis.speak(); see guest271314/SpeechSynthesisRecorder@87be7b9. The issue with using MediaRecorder is that the resulting audio/webm would differ from the original audio output.
From perspective here "untestable" means that the implementation is not capable of being tested, which is not the case and needs to be verified as well.
We simply need to consider and try the available approaches and agree on the results.
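One such approach, for the volume attribute in particular, would be to decode each recording and compare RMS levels across runs. The helper below is illustrative only (rmsOfRecording is not an existing API); it assumes the browser can decode its own MediaRecorder output, and because the encoding is lossy, only relative levels between recordings would be meaningful, not absolute ones.

// Illustrative helper: decode a recorded blob (e.g. from the sketch above)
// and compute its RMS level so recordings made with different
// utterance.volume values can be compared relative to one another.
async function rmsOfRecording(blob) {
  const ac = new AudioContext();
  const audioBuffer = await ac.decodeAudioData(await blob.arrayBuffer());
  const data = audioBuffer.getChannelData(0);
  let sumOfSquares = 0;
  for (let i = 0; i < data.length; i++) {
    sumOfSquares += data[i] * data[i];
  }
  await ac.close();
  return Math.sqrt(sumOfSquares / data.length);
}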
From perspective here "untestable" means that the implementation is not capable of being tested, which is not the case and needs to be verified as well.
FWIW, "untestable" means we can't do it with an automated test with no human in the loop.
But yeah, if MediaRecorder can get the audio output (which I didn't realise it could), then we can almost certainly do this well enough.
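If recording does prove workable, the check could presumably be wrapped in the usual testharness.js form, something like the following sketch. recordUtterance and rmsOfRecording are the hypothetical helpers outlined above, and the 0.5 ratio is an arbitrary placeholder threshold, not anything the spec requires.

// Hypothetical shape of a wpt-style check, reusing the sketched helpers above.
promise_test(async () => {
  const loud = await rmsOfRecording((await recordUtterance("hello universe", { volume: 1 })).blob);
  const quiet = await rmsOfRecording((await recordUtterance("hello universe", { volume: 0.1 })).blob);
  assert_less_than(quiet, loud * 0.5, "volume = 0.1 should be measurably quieter than volume = 1");
}, "SpeechSynthesisUtterance.volume affects the rendered audio level");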
(Original issue title: Setting SpeechSynthesisUtterance.volume does not appear to change the volume of the audio output of window.speechSynthesis.speak() at Chromium or Firefox on *nix.)