MaryamHoss/BESD

Code for the BESD model to denoise speech using EEG signals.

Official repository for Speaker-independent Brain Enhanced Speech denoising by Maryam Hosseini, Luca Celotti and Éric Plourde.

The auditory system is extremely efficient at extracting attended auditory information in the presence of competing speakers. Single-channel speech enhancement algorithms, however, largely lack this ability. In this work, we propose a novel deep learning method, the Brain Enhanced Speech Denoiser (BESD), that takes advantage of the attended auditory information present in the brain activity of the listener to denoise multi-talker speech. We use this information to modulate the features learned from the sound and the brain activity in order to perform speech enhancement. We show that our method successfully enhances a speech mixture, without prior information about the attended speaker, using electroencephalography (EEG) signals recorded from the listener. This makes it a strong candidate for realistic applications where no prior information about the attended speaker is available, such as hearing aids or cell phones.
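To make the idea of brain-driven feature modulation concrete, here is a minimal, illustrative sketch of an EEG-conditioned denoiser in TensorFlow/Keras. It is not the repository's actual architecture: the layer sizes, input lengths, and names (N_FILTERS, build_toy_besd, the sound/EEG branch shapes) are all assumptions chosen only to show EEG-derived features producing a per-channel scale and shift that modulates the sound features.

```python
# Illustrative sketch only (not the repository's BESD architecture):
# an EEG branch predicts feature-wise scale/shift parameters that modulate
# the sound branch, in the spirit of the modulation described above.
import tensorflow as tf
from tensorflow.keras import layers

N_FILTERS = 64  # assumed channel width for both branches

def build_toy_besd(sound_len=16000, eeg_channels=128, eeg_len=256):
    sound_in = layers.Input((sound_len, 1), name="noisy_sound")
    eeg_in = layers.Input((eeg_len, eeg_channels), name="eeg")

    # Sound branch: learn a downsampled feature representation with 1D convs.
    s = layers.Conv1D(N_FILTERS, 32, strides=16, padding="same",
                      activation="relu")(sound_in)

    # EEG branch: summarize brain activity into one vector per example.
    e = layers.Conv1D(N_FILTERS, 8, padding="same", activation="relu")(eeg_in)
    e = layers.GlobalAveragePooling1D()(e)

    # Feature-wise modulation: EEG predicts a per-channel scale (gamma)
    # and shift (beta) applied to the sound features.
    gamma = layers.Reshape((1, N_FILTERS))(layers.Dense(N_FILTERS)(e))
    beta = layers.Reshape((1, N_FILTERS))(layers.Dense(N_FILTERS)(e))
    s = layers.Add()([layers.Multiply()([s, gamma]), beta])

    # Decode back to a waveform estimate of the attended speaker.
    out = layers.Conv1DTranspose(1, 32, strides=16, padding="same",
                                 name="denoised")(s)
    return tf.keras.Model([sound_in, eeg_in], out)

model = build_toy_besd()
model.summary()
```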

The dataset used is presented in Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech by M.P. Broderick, A.J. Anderson, G.M. Di Liberto, M.J. Crosse, and E.C. Lalor, and is available for download here. We only used the Cocktail Party dataset. For a toy dataset, see data/Cocktail_party/Normalized/2s/eeg/readme.txt.
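As a quick starting point, the snippet below simply lists the contents of the toy dataset directory and prints its readme; it assumes you have cloned the repository and that the data/Cocktail_party/Normalized/2s/eeg folder is present locally. The actual file format is documented in that readme.txt.

```python
# Inspect the toy dataset shipped with the repository (path taken from the
# README above; the file format itself is described in readme.txt).
from pathlib import Path

toy_dir = Path("data/Cocktail_party/Normalized/2s/eeg")
print((toy_dir / "readme.txt").read_text())
for f in sorted(toy_dir.iterdir()):
    print(f.name)
```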
