Multi-channel separation of dynamic speech and sound events

This repo provides the supplementary materials for the following paper.

Takuya Fujimura and Robin Scheibler, "Multi-channel separation of dynamic speech and sound events," Interspeech 2023.

There are two experiments: one for speech signals, and one for sound events.

Speech separation

In the paper, we evaluated the separation performance for two speakers, including the cases of one or two moving speakers. As supplementary materials, we provide some samples of the separated signals and visualizations of the attention weights. For details, please refer to speech/README.txt.

Sound event detection (SED) with separation

In the paper, we evaluated SED performance when combined with separation. As supplementary materials, we provide some SED results and separated signals. For details, please refer to sed/README.txt.
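If you prefer to inspect the provided audio programmatically rather than listening directly, a minimal sketch like the one below may help. The file name used here is hypothetical; the actual file names and directory layout are described in speech/README.txt and sed/README.txt.

```python
# Minimal sketch for inspecting one of the provided sample files.
# NOTE: the path below is hypothetical; see speech/README.txt and
# sed/README.txt for the actual file names.
import soundfile as sf


def inspect_sample(path):
    """Load a (possibly multi-channel) wav file and print basic info."""
    audio, fs = sf.read(path)  # audio: (n_samples,) or (n_samples, n_channels)
    duration = audio.shape[0] / fs
    print(f"{path}: shape={audio.shape}, {fs} Hz, {duration:.2f} s")
    return audio, fs


if __name__ == "__main__":
    inspect_sample("speech/example_separated.wav")  # hypothetical file name
```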

Authors

Takuya Fujimura @ Nagoya University, Japan

Robin Scheibler @ LINE Corporation, Japan

