sertansenturk/makam_recognition_experiments

[ConferenceXX] Ottoman-Turkish Makam Recognition Experiments

This repository hosts the experiments conducted in the paper:

TBD

The dataset used in the experiments was curated as part of the paper:

Karakurt, A., Şentürk, S., & Serra, X. (2016). MORTY: A Toolbox for Mode Recognition and Tonic Identification. In Proceedings of the 3rd International Digital Libraries for Musicology Workshop, pages 9-16, New York, USA.

Please cite the papers above if you use the contents of this repository in your work.

Repository Structure

Rewrite XX

  • The scripts are located in the base folder, along with several miscellaneous files (the license, readme, setup, and requirements files).
  • The folder ./data links to the relevant commit of our makam recognition dataset, and holds the folds and the summary of the evaluation obtained from all experiments.
  • By running the Jupyter notebooks in this repository, you can reproduce the extensive experiments reported in the paper. The outputs will also be saved to the folder ./data. However, the experiments might run for days on a local machine unless you use a cluster. For this reason, the computed features, training models, results, and evaluation files are also downloadable from Zenodo (link).
  • The folder ./dlfm_code has the relevant Python and MATLAB modules for training, testing, and evaluation.

Setup

Data

First, you should initialize and update the dataset, which is linked as a submodule:

```bash
cd path/to/makam_recognition_experiments
git submodule init
git submodule update
```
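The two submodule commands above can also be combined into a single call:

```bash
cd path/to/makam_recognition_experiments
# Initialize and fetch the dataset submodule in one step
git submodule update --init
```
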

Zenodo data XX

Docker

For the sake of reproducibility, you can run the experiments within Docker. To do so, you need to set up the Docker engine. Please refer to the documentation on how to install the free community version.
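Once the engine is installed, you can check that the commands used below are available from your terminal (a quick sanity check, not part of the experiments):

```bash
docker --version           # prints the installed engine version
docker-compose --version   # the compose CLI used in the next section
```
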

Running Experiments

Note: We suggest using a cluster to run the training and testing steps; otherwise, it might take days to reproduce the experiments.

First build the service:

```bash
docker-compose build
```

Then, run on the terminal:

```bash
docker-compose up
```

Follow the link in the terminal to open Jupyter.

Further instructions XX.

Development

For development purposes, you can build the docker image in development mode:

```bash
docker-compose -f docker-compose.dev.yml build
```

Then, run on the terminal:

```bash
docker-compose -f docker-compose.dev.yml up
```

Here, 1) the experimentation code is installed in editable mode, 2) test and development tools (such as pytest and pylint) are included, and 3) the repository is mounted to the work folder when running Docker.

Testing

We use tox in a virtualenv to automate unit tests, linting, etc. You can use the development docker-compose setup (explained above). Simply open a terminal in the running container (e.g., from the Jupyter interface) and run:

```bash
cd work/
tox
```

Alternatively, you can install tox locally in a virtual environment. This method requires Python 3.7 to be available on your local machine.
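If you prefer to set the environment up by hand instead of using the Makefile targets below, the steps roughly correspond to (assuming `python3.7` is on your PATH):

```bash
python3.7 -m venv venv     # create the virtual environment
source venv/bin/activate   # activate it in the current shell
pip install tox            # install the test automation tool
tox                        # run the unit tests and linting
```
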

The Makefile automates the rest of the steps. To create the virtualenv, simply run:

```bash
make dev
```

which creates a virtualenv called venv and installs tox. Then, to test your work, run:

```bash
make tox
```

License

The source code hosted in this repository is licensed under the Affero GPL version 3. The data (the features, models, figures, results, etc.) are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.