Rationale for dividing speech into chunks of 200ms with 10ms overlap #24

Closed
vikigenius opened this issue Mar 29, 2019 · 3 comments

@vikigenius

Hi, I am trying to check the performance of SincNet on the VoxCeleb dataset. I am wondering about the rationale for extracting 200 ms chunks of the signal during training, and for the 10 ms overlap you use at test time. Does the model depend on these values?

Can I use longer chunks, such as 3 s of audio, as the VoxCeleb paper seems to do, given that VoxCeleb is a much larger dataset?
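
For reference, my reading of the chunking is roughly the sketch below (not the repository's code; the 16 kHz sampling rate and the function names are my assumptions):

```python
import numpy as np

fs = 16000                    # assumed sampling rate (Hz)
cw_len = int(0.200 * fs)      # 200 ms chunk length, in samples
cw_shift = int(0.010 * fs)    # 10 ms shift between consecutive test chunks

def random_train_chunk(signal):
    """Pick one random 200 ms chunk from an utterance (training)."""
    start = np.random.randint(0, len(signal) - cw_len)
    return signal[start:start + cw_len]

def test_chunks(signal):
    """Slide a 200 ms window over the utterance with a 10 ms shift (test)."""
    return [signal[s:s + cw_len]
            for s in range(0, len(signal) - cw_len + 1, cw_shift)]
```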

@mravanelli
Owner

mravanelli commented Mar 29, 2019 via email

@vikigenius
Author

Thanks for the comment. So if I understand correctly, you train the model to identify the speaker from a random 200 ms chunk of the audio, and at test time you classify multiple such chunks and combine (vote over) their predictions? So, in conclusion, the model cannot handle variable-length signals for now?
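
If that reading is right, test-time scoring would amount to something like this sketch (the `model` and the averaging of per-chunk scores are my assumptions about what the sentence-level voting does):

```python
import torch

def classify_utterance(model, signal, fs=16000):
    """Score every 200 ms chunk (10 ms shift) and average the per-class
    scores over the whole utterance to get a sentence-level decision."""
    cw_len, cw_shift = int(0.200 * fs), int(0.010 * fs)
    chunks = torch.stack([torch.as_tensor(signal[s:s + cw_len], dtype=torch.float32)
                          for s in range(0, len(signal) - cw_len + 1, cw_shift)])
    with torch.no_grad():
        scores = model(chunks)   # assumed shape: [n_chunks, n_speakers]
    return scores.mean(dim=0).argmax().item()
```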

Also, when generating training batches, you seem to multiply the signal by an amplitude factor randomly chosen from (0.8, 1.2). What is the rationale behind this?
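
For context, the amplitude scaling in question would look something like this (a minimal sketch; the (0.8, 1.2) range is the one mentioned above):

```python
import numpy as np

def random_amplitude(chunk, low=0.8, high=1.2):
    """Multiply a training chunk by a random gain drawn from [0.8, 1.2),
    a simple volume-perturbation style augmentation."""
    return chunk * np.random.uniform(low, high)
```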

@mravanelli
Owner

mravanelli commented Mar 29, 2019 via email
