
The generalization ability #15

Closed
ABexit opened this issue Dec 21, 2020 · 1 comment
Comments

@ABexit

ABexit commented Dec 21, 2020

Is the generalization of RawNet2 poor?
I trained RawNet2 on the AISHELL dataset with 340 speakers and tested it on a trail.txt of 80k pairs built from another 40 AISHELL speakers; the final EER was 3.46%. But when tested on 40 speakers of the VCTK dataset with 80k pairs, the EER was 32.71%. Do you know why? Thanks.
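For reference, a minimal sketch of how EER is typically computed from trial scores, assuming each trial pair is scored (e.g. by cosine similarity between embeddings) and labeled 1 for same-speaker and 0 for different-speaker; the loader and file name below are placeholders, not part of this repository:

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(scores, labels):
    # ROC over all decision thresholds; labels: 1 = same speaker, 0 = different speaker
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    # EER is the operating point where false acceptance and false rejection rates meet
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0

# Example (hypothetical loader for an 80k-pair trial list):
# labels, scores = load_trials("trail.txt")
# print(f"EER: {compute_eer(scores, labels) * 100:.2f}%")
```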

@Jungjee
Owner

Jungjee commented Jan 7, 2021

Hi, it's not easy for me to judge the extent of the generalization gap.
I don't know how different the AISHELL and VCTK datasets are.
However, I would normally not expect the EER to rise above 30%.
One example would be a cross-lingual experiment (train: English, test: Korean), which is not published.
In that case, the EER was somewhere between 5% and 7%.

Hope this helps :)

@Jungjee closed this as completed on Jan 23, 2021