Hello,
I haven't found a speaker-change mechanism in the code, and I couldn't find a training procedure for audio containing multiple speakers. During pre-processing of the training data, audio segments from the same individual are collected together, but how does training work then? Could you explain the generative process of uis-rnn?
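To make the question concrete, here is my rough sketch of the generative story as I understand it from the paper. Everything here is a placeholder of my own (the `rnn_step` callable, parameter names, the toy GRU), not the repo's API:

```python
import numpy as np

def sample_uisrnn_sequence(rnn_step, num_steps, alpha=1.0, p_change=0.2,
                           sigma=0.1, dim=8):
    """Sketch of the UIS-RNN generative story as I understand the paper.

    `rnn_step` stands in for the shared-parameter GRU: it maps a
    speaker's hidden state to (mean_embedding, new_hidden_state).
    """
    hidden_states = []   # one GRU hidden state per instantiated speaker
    block_counts = []    # continuous-block counts per speaker (CRP counts)
    labels, embeddings = [], []
    current = None
    for _ in range(num_steps):
        if current is None or np.random.rand() < p_change:
            # Speaker change: CRP over speakers -- an existing speaker is
            # chosen proportionally to its block count, a new speaker
            # proportionally to alpha; a change excludes the current speaker.
            weights = np.array(block_counts + [alpha], dtype=float)
            if current is not None:
                weights[current] = 0.0
            choice = np.random.choice(len(weights), p=weights / weights.sum())
            if choice == len(hidden_states):          # brand-new speaker
                hidden_states.append(np.zeros(dim))   # fresh hidden state
                block_counts.append(0)
            current = choice
            block_counts[current] += 1
        # The shared RNN emits the mean of the next embedding for whoever
        # is talking; all speakers share the same RNN parameters, but each
        # keeps its own hidden state across (possibly interleaved) turns.
        mean, hidden_states[current] = rnn_step(hidden_states[current])
        embeddings.append(np.random.normal(mean, sigma))
        labels.append(current)
    return np.array(embeddings), labels

# Toy stand-in for the GRU, just to make the sketch runnable.
def toy_rnn_step(h):
    new_h = 0.9 * h + np.random.normal(0.0, 0.01, size=h.shape)
    return new_h, new_h

X, y = sample_uisrnn_sequence(toy_rnn_step, num_steps=50)
```

Is this roughly the intended model, and how does the training objective relate to it?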
Thank you in advance for your help.
1: The function sample_permuted_segments in utils.py does not seem to work; it always generates the same permutation.
2: Also, why does resize_sequence need to collect segments of the same cluster together? Computing speaker embeddings with overlapping windows would not make sense this way.
Do you mean the outputs within a single call are all identical sequences, or that calling the function multiple times on the same input always gives the same output? If the latter, check whether it is because you fixed the random seed of numpy.
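For example, here is a minimal repro in plain numpy (nothing repo-specific) of what happens when the seed is re-fixed before every call:

```python
import numpy as np

# Re-fixing the seed before each call makes every "random"
# permutation identical -- this reproduces the reported symptom.
for _ in range(3):
    np.random.seed(0)
    print(np.random.permutation(5))   # same order every time

# Seed once (for reproducibility) and then keep sampling:
np.random.seed(0)
for _ in range(3):
    print(np.random.permutation(5))   # different each call
```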
There is additional segment-level aggregation logic after the sliding windows, and the segments are not overlapping. See Section 2 of this paper.
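Roughly like this. This is an illustrative sketch of the aggregation described there, with my own variable names rather than the repo's code, and it assumes every segment covers at least one window:

```python
import numpy as np

def segment_embeddings(window_dvectors, window_starts, segments):
    """Average the L2-normalized window-level d-vectors whose start time
    falls inside each non-overlapping segment, giving one embedding per
    segment. Assumes every segment covers at least one window.
    """
    out = []
    for seg_start, seg_end in segments:
        in_seg = np.stack([d for d, s in zip(window_dvectors, window_starts)
                           if seg_start <= s < seg_end])
        normalized = in_seg / np.linalg.norm(in_seg, axis=1, keepdims=True)
        out.append(normalized.mean(axis=0))   # one d-vector per segment
    return np.stack(out)
```

So even though the sliding windows overlap, the embeddings that uis-rnn consumes are one per non-overlapping segment.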