Different rooms and source positions can increase the diversity of the data, which is reasonable for task 2. We would like to know what correlation you are referring to.
For each clip, the sound source positions of the signals received by the linear array and the circular array are different, so how can we combine the signals from the different arrays for distributed speech enhancement?
What I mean is that if the sound source positions are different, then speech enhancement can only be performed on a single array, rather than by combining the signals received by all of the arrays. Distributed speech enhancement should require the different arrays to share the same acoustic environment: the same sound source location, room size, etc.
According to the RIR files, it seems that there is no correlation between the different microphone arrays.
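As a minimal sketch of how one might check this claim (the file names here are hypothetical, not part of the released dataset), the RT60 estimated from each array's RIR and the peak normalized cross-correlation between one channel of each array can hint at whether the two RIRs were simulated in the same room:

```python
import numpy as np
import soundfile as sf

def rt60_schroeder(rir, sr):
    """Rough RT60 estimate via Schroeder backward integration (T20 extrapolated)."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(energy / energy[0] + 1e-12)
    # Fit the decay between -5 dB and -25 dB, then extrapolate to -60 dB.
    i5 = np.argmax(edc_db <= -5.0)
    i25 = np.argmax(edc_db <= -25.0)
    slope = (edc_db[i25] - edc_db[i5]) / ((i25 - i5) / sr)  # dB per second
    return -60.0 / slope

def peak_xcorr(a, b):
    """Peak of the normalized cross-correlation between two RIR channels."""
    a = (a - a.mean()) / (np.linalg.norm(a) + 1e-12)
    b = (b - b.mean()) / (np.linalg.norm(b) + 1e-12)
    return np.max(np.abs(np.correlate(a, b, mode="full")))

# Hypothetical file names; substitute the actual RIR paths from the dataset.
lin, sr = sf.read("rir_linear_array.wav")
cir, _ = sf.read("rir_circular_array.wav")
lin_ch, cir_ch = lin[:, 0], cir[:, 0]  # first channel of each array

print("RT60 linear:   %.2f s" % rt60_schroeder(lin_ch, sr))
print("RT60 circular: %.2f s" % rt60_schroeder(cir_ch, sr))
print("peak xcorr:    %.3f" % peak_xcorr(lin_ch, cir_ch))
# Clearly different RT60s and a low cross-correlation peak would support the
# observation that the two arrays were simulated in different rooms.
```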
The rooms are different, and the source positions are different.
Is that useful for task 2?