Rationale for dividing speech into chunks of 200ms with 10ms overlap #24

Hi, I am trying to check the performance of SincNet on the VoxCeleb dataset. I am wondering about the rationale for extracting 200 ms chunks of the signal during training, and also the 10 ms overlap you use at test time. Does the model depend on this?
Can I use longer chunks, like the 3 s of audio the VoxCeleb paper seems to use, given that VoxCeleb is a much larger dataset?

Comments
Hi,
the size and the shift of the context window are just hyperparameters of the system, and their best values can be very task-dependent. In the paper we tuned these two hyperparameters on the addressed tasks and observed the best performance with the reported values. This doesn't necessarily mean they are good for all speech tasks...
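To make the training-time chunking concrete, here is a minimal sketch, assuming a 16 kHz mono waveform stored as a 1-D NumPy array; the function name `random_training_chunk` is illustrative, not taken from the SincNet repository:

```python
import numpy as np

def random_training_chunk(signal, fs=16000, chunk_ms=200):
    """Draw one random fixed-length chunk from a waveform.

    `signal` is a 1-D float array at sampling rate `fs` (Hz), assumed
    to be at least one chunk long. At 16 kHz, 200 ms is 3200 samples.
    """
    chunk_len = int(fs * chunk_ms / 1000)
    start = np.random.randint(0, len(signal) - chunk_len + 1)
    return signal[start:start + chunk_len]
```

Changing `chunk_ms` (e.g. to 3000 for 3 s chunks) is just a matter of retuning this hyperparameter for the task at hand.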
Thanks for the comment. So if I understand correctly, you train the model to identify the speaker on a random 200 ms chunk of the audio, and at test time you take multiple such chunks, classify all of them, and vote on the best ones? So, in conclusion, your model cannot handle variable-length signals for now?
Also, when generating training batches, you seem to multiply the signal by an amplitude factor randomly chosen from (0.8, 1.2). Is there any rationale behind this?
More precisely, we average all the probabilities and finally vote for the speaker with the highest average probability. Thanks to the averaging operation (which is differentiable), we can manage sequences of arbitrary length. The reason we perform the amplitude multiplication is to add a bit of data augmentation while training the system (the system sees a slightly different amplitude each time it revisits a signal). This actually has a minor effect...
Mirco
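A minimal sketch of both points, assuming `posterior_fn` is a hypothetical callable mapping one chunk to a vector of per-speaker softmax probabilities (e.g. a trained SincNet wrapped in a function); the helper names are illustrative, not from the repository:

```python
import numpy as np

def sliding_chunks(signal, fs=16000, chunk_ms=200, shift_ms=10):
    """Yield overlapping test-time chunks: 200 ms windows every 10 ms.

    Assumes the signal is at least one chunk long.
    """
    chunk_len = int(fs * chunk_ms / 1000)
    shift = int(fs * shift_ms / 1000)
    for start in range(0, len(signal) - chunk_len + 1, shift):
        yield signal[start:start + chunk_len]

def predict_speaker(signal, posterior_fn, fs=16000):
    """Average per-chunk posteriors over the whole utterance and
    pick the speaker with the highest mean probability."""
    probs = [posterior_fn(c) for c in sliding_chunks(signal, fs)]
    return int(np.argmax(np.mean(probs, axis=0)))

def augment_amplitude(chunk, low=0.8, high=1.2):
    """Training-time augmentation: scale the chunk by a random
    amplitude factor drawn from (0.8, 1.2)."""
    return chunk * np.random.uniform(low, high)
```

Because only the per-chunk probabilities are averaged, the utterance length only changes the number of chunks, which is how arbitrary-length signals are handled at test time.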