
Knowledge Distillation for Singing Voice Detection

Soumava Paul, Gurunath Reddy M, K. Sreenivasa Rao and Partha Pratim Das
Indian Institute of Technology Kharagpur


INTERSPEECH 2021 | arXiv | proceedings

Setup

For dataset download, environment setup and data preparation, please refer to this repo.

Training and Testing

Refer to the following folders for reproducing results in the paper:

[1] Tables 2-4: schluter-cnn

[2] Tables 5,6: leglaive_lstm

[3] Table 7: lstm_scnn_feat

[4] Table 8: enkd_scnn_feat_student-cnn and enkd_scnn_feat_student-lstm

Inside each folder, run main.py for the baselines and main_kd.py for the knowledge distillation experiments.

See expts.sh for sample runs.
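
As a minimal sketch, a baseline and a distillation run inside one of the folders would look roughly like the commands below (flags and hyperparameter values are omitted here, since they vary per experiment; expts.sh has the exact invocations):

```sh
# Sketch only -- no flags shown; see expts.sh in each folder for the
# exact hyperparameter settings used in the paper.
cd schluter-cnn
python main.py      # baseline
python main_kd.py   # knowledge distillation experiment
```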

Check the results folder for the hyperparameter configurations that achieve the highest validation accuracy; their corresponding test metrics are the ones reported in our paper.

🎓 Cite

If this code was helpful for your research, consider citing:

@inproceedings{paul21b_interspeech,
  author={Soumava Paul and Gurunath Reddy M and K. Sreenivasa Rao and Partha Pratim Das},
  title={{Knowledge Distillation for Singing Voice Detection}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4159--4163},
  doi={10.21437/Interspeech.2021-636}
}

🙏 Acknowledgements

We thank Kyungyun Lee for her revisiting-svd repo, which served as the starting point of our work.
