OtoWorld is an interactive environment in which agents must learn to listen in order to solve navigational tasks. The purpose of OtoWorld is to facilitate reinforcement learning research in computer audition, where agents must learn to listen to the world around them to navigate.

Note: Currently the focus is on audio source separation.

OtoWorld is built on three open source libraries: OpenAI gym for environment and agent interaction, pyroomacoustics for ray-tracing and acoustics simulation, and nussl for training deep computer audition models. OtoWorld is the audio analogue of GridWorld, a simple navigation game. OtoWorld can be easily extended to more complex environments and games.

To solve one episode of OtoWorld, an agent must move towards each sounding source in the auditory scene and "turn it off". The agent receives no other input than the current sound of the room. The sources are placed randomly within the room and can vary in number. The agent receives a reward for turning off a source.
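The episode loop described above can be sketched with a toy, gym-style environment. This is a hypothetical simplification, not the real OtoWorld API: the class and method names are illustrative, the room is one-dimensional, and the observation is just the distance to the nearest active source, whereas the real environment gives the agent only the audio of the room.

```python
import random

class ToyOtoWorld:
    """Hypothetical, audio-free stand-in for the OtoWorld episode loop."""
    LEFT, RIGHT, TURN_OFF = 0, 1, 2

    def __init__(self, room_size=10, n_sources=2, seed=0):
        self.room_size = room_size
        self.n_sources = n_sources
        self.rng = random.Random(seed)

    def reset(self):
        self.agent = self.room_size // 2
        # Sources are placed randomly within the room and can vary in number.
        self.sources = set()
        while len(self.sources) < self.n_sources:
            self.sources.add(self.rng.randrange(self.room_size))
        return self._observe()

    def _observe(self):
        # Stand-in for the room's audio: distance to the nearest active source.
        return min(abs(self.agent - s) for s in self.sources) if self.sources else 0

    def step(self, action):
        reward = 0.0
        if action == self.LEFT:
            self.agent = max(0, self.agent - 1)
        elif action == self.RIGHT:
            self.agent = min(self.room_size - 1, self.agent + 1)
        elif action == self.TURN_OFF and self.agent in self.sources:
            self.sources.remove(self.agent)
            reward = 1.0  # the agent receives a reward for turning off a source
        done = not self.sources  # the episode ends when every source is off
        return self._observe(), reward, done

def solve(env):
    """Trivial policy: walk toward the nearest source, then turn it off."""
    env.reset()
    total, done = 0.0, False
    while not done:
        if env.agent in env.sources:
            action = env.TURN_OFF
        else:
            nearest = min(env.sources, key=lambda s: abs(env.agent - s))
            action = env.LEFT if nearest < env.agent else env.RIGHT
        _, r, done = env.step(action)
        total += r
    return total
```

The cheating policy above reads source positions directly; in the real game the agent must infer them from sound alone, which is where the learned source separation comes in.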

Read the OtoWorld Paper here

OtoWorld Environment


Clone the repository:

git clone

Create a conda environment:

conda create -n otoworld python==3.7

Activate the environment:

conda activate otoworld

Install requirements:

pip install -r requirements.txt

Install ffmpeg from the conda distribution (note: the PyPI distribution of ffmpeg is outdated):

conda install ffmpeg

If using a CUDA-enabled GPU (highly recommended), install PyTorch 1.4 from the official source:

pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f

Otherwise, install the CPU-only version:

pip install torch==1.4.0 torchvision==0.5.0

You may ignore warnings about certain dependencies for now.

Additional Installation Notes - Linux

  • Linux users may need to install the sound file library (libsndfile) if it is not already present on the system. It can be done with the following command:
sudo apt-get install libsndfile1

This should take care of a common musdb error.

Demo and Tutorial

You can get familiar with OtoWorld using our tutorial notebook: Tutorial Notebook.


Launch Jupyter:

jupyter notebook

and navigate to notebooks/tutorial.ipynb.


You can view (and run) example experiments:

cd experiments/


Please create your own experiments and see if you can win OtoWorld! You will need a GPU running CUDA to be able to perform any meaningful experiments.

Is It Running Properly?

You should see a message indicating the experiment is running, such as:

- Starting to Fit Agent

You may get a warning about SoX. Ignore this for now. You're good to go!


If you use OtoWorld in your research, please cite the paper:

@inproceedings{otoworld,
    author = {Omkar Ranadive and Grant Gasser and David Terpay and Prem Seetharaman},
    title = {OtoWorld: Towards Learning to Separate by Learning to Move},
    booktitle = {Self Supervision in Audio and Speech Workshop, 37th International Conference on Machine Learning ({ICML} 2020), Vienna, Austria},
    year = {2020}
}
