Neural Turing machine for source separation in Tensorflow
This is the code and dataset for our paper "Modeling Attention and Memory for Auditory Selection in a Cocktail Party Environment" (AAAI 2018)
Speech separation with utterance-level PIT experiments
Target speaker separation using a short adaptation utterance
A PyTorch implementation of Time-domain Audio Separation Network (TasNet) with Permutation Invariant Training (PIT) for speech separation.
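Permutation invariant training, which several of the repositories above use, resolves the labeling ambiguity between separated speakers by scoring the estimates against the references under every speaker permutation and keeping the best one. A minimal sketch with NumPy (function name and the MSE criterion are illustrative, not taken from any specific repository):

```python
import itertools

import numpy as np

def upit_mse_loss(est, ref):
    """Utterance-level PIT loss for one mixture.

    est, ref: arrays of shape (n_src, time) holding estimated and
    reference source signals. Returns the MSE under the speaker
    permutation that matches estimates to references best.
    """
    n_src = est.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n_src)):
        # Reorder the estimated sources and score against the references.
        mse = np.mean((est[list(perm)] - ref) ** 2)
        best = min(best, mse)
    return best
```

Because the minimum runs over all `n_src!` permutations, this brute-force form is only practical for a small number of speakers; the loss is zero whenever the estimates equal the references in any speaker order.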
Speech Enhancement based on DNN (Spectral-Mapping, TF-Masking), DNN-NMF, NMF
Real-time GCC-NMF Blind Speech Separation and Enhancement
A PyTorch implementation of DANet for speech separation
Multi-GPU training code based on funcwj's uPIT, with a rewritten DataLoader
A PyTorch implementation of "An Empirical Study of Conv-TasNet"
Script to calculate SNR and SDR using Python
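SNR and its scale-invariant variant (SI-SDR) are the standard metrics for the separation quality these repositories report. A sketch of both with NumPy (function names are illustrative; the SI-SDR form projects the estimate onto the reference before measuring distortion):

```python
import numpy as np

def snr_db(reference, estimate):
    """Signal-to-noise ratio in dB between a reference and its estimate."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def si_sdr_db(reference, estimate):
    """Scale-invariant SDR: rescale via projection, then measure distortion."""
    # Optimal scaling of the reference toward the estimate.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10(np.sum(target ** 2) / np.sum(noise ** 2))
```

Because of the projection step, `si_sdr_db` returns the same value if the estimate is multiplied by any nonzero constant, which makes it robust to the arbitrary output gain of separation networks.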
A PyTorch implementation of "Deep Clustering: Discriminative Embeddings for Segmentation and Separation"
Implementation of "SpEx: Multi-Scale Time Domain Speaker Extraction Network".
Constrained Permutation Invariant Training, Speech Separation
A tiny mandarin dataset for speech separation
Python implementation of Directional Sparse Filtering with TensorFlow/Keras
Speech front-end repository || a modified version of the Asteroid toolkit for speech front-end processing