Voogle is an audio search engine that uses vocal imitations of the desired sound as the search query.
Voogle is built in Python 3.6 and JavaScript (Node.js). Voogle runs best in Google Chrome.
Voogle backend dependencies are installed with `pip install -r requirements.txt`.
Note: Windows and Linux users must have FFmpeg installed.
Voogle frontend dependencies are installed with `npm install`.
Note: You must have Node.js installed before you can run `npm install`.
Any collection of audio files can be used as the sounds Voogle returns in response to a vocal query. The Interactive Audio Lab has released two datasets specifically for training query-by-vocal-imitation models: Vocal Imitation Set [1] and VocalSketch [2]. A small test dataset for demos can be downloaded here.
Audio files should be placed in `data/audio/<dataset_name>`. The dataset used during execution is specified in `config.yaml`.
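The exact keys are defined by the `config.yaml` shipped with the repository; a minimal sketch, assuming a hypothetical `dataset` key, might look like:

```yaml
# Hypothetical config.yaml excerpt -- the key name is an assumption;
# match the keys in the repository's own config.yaml.
dataset: my_dataset   # expects audio files in data/audio/my_dataset
```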
Interactive Audio Lab has released the following models for query-by-vocal-imitation:
- `siamese-style`: a siamese-style neural network [3]
- `VGGish-embedding`: cosine similarity of VGGish embeddings [4]
- `mcft`: multi-resolution common-fate transform [5]
Weight files should be placed in `model/weights`. The model used during execution is specified in `config.yaml`.
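As with the dataset, the model is selected in `config.yaml`; a minimal sketch with hypothetical keys and a hypothetical weight filename:

```yaml
# Hypothetical config.yaml excerpt -- key names and the weight
# filename are assumptions; match the repository's own config.yaml.
model: siamese-style                      # one of the released models
model_filepath: model/weights/siamese.h5  # weight file placed above
```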
After installing the dependencies, a dataset, and a model, the Voogle app can be deployed.
1. Start the server by running `npm run production`.
2. Navigate to `localhost:5000` in your browser.
From there, please follow the directions found under "Show Instructions". Enjoy!
Note: There are currently two frontend interfaces available for Voogle. If you would like to use the alternate interface, use the command `npm run old-interface` instead during step 1.
Unit tests can be run with `npm run test`.
Voogle can be extended to incorporate additional models and datasets. If you would like to make your model or dataset available to all users of Voogle, contact interactiveaudiolab@gmail.com.
1. Define your model as a subclass of `QueryByVoiceModel` with all abstract methods implemented as described (a skeleton is sketched below).
2. Add the model constructor to `factory.py`.
3. Place your model's weights in `model/weights`.
4. Update the model name and filepath in `config.yaml`.
An example model can be found here.
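The abstract interface of `QueryByVoiceModel` is defined in the repository, so the import path and method names below are illustrative assumptions rather than the actual API; a minimal skeleton might look like:

```python
# Hypothetical model skeleton. QueryByVoiceModel is the real base
# class, but the import path and method names are assumptions --
# consult the class definition for the actual abstract interface.
from model.query_by_voice_model import QueryByVoiceModel  # assumed path


class MyModel(QueryByVoiceModel):
    def load_model(self, weight_filepath):
        # Load the trained weights placed in model/weights.
        raise NotImplementedError

    def measure_similarity(self, query_audio, item_audio):
        # Return a similarity score between a vocal query and one
        # dataset item; higher means a better match.
        raise NotImplementedError

# In factory.py, register the constructor under the name used in
# config.yaml, mirroring the existing entries, e.g.:
#   models['my-model'] = MyModel
```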
1. Define your dataset as a subclass of `QueryByVoiceDataset` with all abstract methods implemented as described (a skeleton is sketched below).
2. Add the dataset constructor to `factory.py`.
3. Place the audio files in `data/audio/<dataset_name>`.
4. Update the dataset name in `config.yaml`.
An example dataset can be found here.
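As with models, the abstract interface of `QueryByVoiceDataset` lives in the repository, so the import path and method names below are assumptions; a minimal skeleton might look like:

```python
# Hypothetical dataset skeleton. QueryByVoiceDataset is the real base
# class, but the import path and method names are assumptions --
# consult the class definition for the actual abstract interface.
from data.query_by_voice_dataset import QueryByVoiceDataset  # assumed path


class MyDataset(QueryByVoiceDataset):
    def get_audio_filenames(self):
        # List the audio files under data/audio/<dataset_name>.
        raise NotImplementedError

    def load_audio(self, filename):
        # Read one audio file and return it in the representation the
        # configured model expects.
        raise NotImplementedError

# Register the constructor in factory.py under the dataset name used
# in config.yaml, mirroring the existing entries.
```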
- [1] Bongjun Kim, Madhav Ghei, Bryan Pardo, and Zhiyao Duan, "Vocal Imitation Set: a dataset of vocally imitated sound events using the AudioSet ontology," Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE2018), Surrey, UK, Nov. 2018. [paper link]
- [2] Mark Cartwright and Bryan Pardo, "Vocalsketch: Vocally imitating audio concepts," Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (ACM), 2015. [paper link]
- [3] Yichi Zhang, Bryan Pardo, and Zhiyao Duan, "Siamese Style Convolutional Neural Networks for Sound Search by Vocal Imitation," IEEE/ACM Transactions on Audio, Speech, and Language Processing. [paper link]
- [4] Bongjun Kim and Bryan Pardo, "Improving Content-based Audio Retrieval by Vocal Imitation Feedback," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019.
- [5] Fatemeh Pishdadian and Bryan Pardo, "Multi-resolution Common Fate Transform," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018. [paper link]