
Examples

Brecht De Man edited this page Nov 17, 2020 · 22 revisions

Built-in demo projects

These are the examples shipped with the Web Audio Evaluation Tool code, located in the /tests/examples/ folder. They run on a server at Queen Mary University of London with a recent WAET version. They all use the test stimuli (a woman saying the numbers zero to nine) in media/example, also included with the distribution, so they can be run locally on your own machine immediately upon downloading.

Spotted in the wild

These are examples of real-life listening tests by various researchers.

If you are using the Web Audio Evaluation Tool for your work, we would love to hear about it. Where possible, we are also keen to include it in the list below, whether you are still seeking participants or the experiment is already over.

Note: these tests use a past version of the Web Audio Evaluation Tool and may therefore exhibit old bugs or lack newer functionality.

  • Perception of Punch - A 15-minute test with 100 short samples, each given an absolute score for punch, one by one. Participants have the option to enter a prize draw for a £50 (or equivalent) Amazon gift voucher!
  • Perceptual Test on Synthesizer Sounds - The perceptual test consists of listening to synthesizer sounds of about 2 seconds each and describing them verbally (in written form). 20 audio samples to annotate are presented successively, by Fanny Roche of GIPSA lab, Grenoble INP.
  • Natural sound generation - Examining techniques for computationally generating everyday, non-music, non-speech sounds such as the wind howling or floorboards creaking, by William Wilkinson at Queen Mary University of London.
  • Synth timbre - Timbral perception of musical synthetic sounds, by Eddie Wade at Queen Mary University of London.
  • Bass playing difficulty - Perceived difficulty of bass guitar parts, by Callum Goddard at Queen Mary University of London.
  • Musical balance - Preference of relative levels of musical instruments, by Nick Jillings at Birmingham City University (15-20 minutes).
  • Perception of vocal processing - Evaluation of sonic properties of vocal processing in the context of a mix, by Austin Moore at the University of Huddersfield.
  • "Sonic Signatures" - Online evaluation of music mixes - Investigation into the nature of quality perception in mixes of contemporary music, by Alex Wilson from the University of Salford and Brecht De Man from Queen Mary University of London. Win £30 in Amazon vouchers.
  • Commentary balance in noisy environments - Investigating preference of background-foreground audio object balance in the presence of environmental noise. PhD research by Tim Walton from Newcastle University and BBC Research and Development. Win two £50 Amazon vouchers.
  • Evaluation of mixes - Subjective rating and description of different mixes of the same song for PhD research undertaken by Brecht De Man at Queen Mary University of London [6]. Every test is a random selection of four songs without copyright restrictions from the dataset. Raw tracks, mixes, and Digital Audio Workstation files can be found on the Open Multitrack Testbed.
  • Automatic reverb - Evaluation of an automated reverberation effect by Adán L. Benito Temprano at Queen Mary University of London.
  • Realism of sound synthesis - Evaluating the realism of synthesised sound effects, an experiment by David Moffat from Queen Mary University of London.
  • Aeolian harps - Evaluating the plausibility of synthesised aeolian harp sounds, part of research by Rod Selfridge from Queen Mary University of London.

References

[1] Nicholas Jillings, Brecht De Man, David Moffat and Joshua D. Reiss, “Web Audio Evaluation Tool: A browser-based listening test environment,” 12th Sound and Music Computing Conference, July 2015. [pdf]

[2] Lucas Mengual, David Moffat and Joshua D. Reiss, “Modal Synthesis of Weapon Sounds,” Audio Engineering Society Conference: 61st International Conference: Audio for Games, February 2016. [pdf]

[3] Nicholas Jillings, Brecht De Man, David Moffat, Joshua D. Reiss and Ryan Stables, “Web Audio Evaluation Tool: A framework for subjective assessment of audio,” 2nd Web Audio Conference, April 2016. [pdf]

[4] Nicholas Jillings, Brecht De Man, David Moffat, Joshua D. Reiss and Ryan Stables, “Subjective comparison of music production practices using the Web Audio Evaluation Tool”, 2nd AES Workshop on Intelligent Music Production, September 2016. [pdf]

[5] Emmanouil Theofanis Chourdakis and Joshua D. Reiss, “A machine learning approach to application of intelligent artificial reverberation,” Journal of the Audio Engineering Society, vol. 65, January/February 2017.

[6] Brecht De Man, Kirk McNally and Joshua D. Reiss, “Perceptual Evaluation and Analysis of Reverberation in Multitrack Music Production,” Journal of the Audio Engineering Society, vol. 65, January/February 2017. [pdf]

[7] Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville and Yoshua Bengio, “SampleRNN: An Unconditional End-to-End Neural Audio Generation Model,” 5th International Conference on Learning Representations (ICLR 2017), April 2017. [pdf]