A question #12

Closed
francqz31 opened this issue Jan 15, 2024 · 3 comments

Comments

@francqz31

Hey Author, thanks for open-sourcing this!

I wanted to ask: is emotion2vec better than https://github.com/audeering/w2v2-how-to?

Thanks in advance.

@ddlBoJack
Owner

Hi, it is worth noting that emotion2vec is a universal emotional representation model; it needs to be trained on a specific downstream task before its performance can be compared. For performance on specific datasets, please refer to our paper: https://arxiv.org/abs/2312.15185
We will soon release a model finetuned on massive labeled data; you are welcome to follow up and compare performance :)
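
As a minimal illustration of what "trained on a downstream task" means here, a linear probe on frozen upstream features could look like the sketch below. The `feats` and `labels` tensors and the four-class setup are illustrative placeholders, not part of the emotion2vec release; only the 768-dim feature width matches the model size mentioned later in this thread.

```python
import torch
import torch.nn as nn

# Placeholder data: in practice, `feats` would hold utterance-level
# emotion2vec embeddings (768-dim) and `labels` the emotion classes.
feats = torch.randn(256, 768)          # e.g. 256 utterances x 768-dim features
labels = torch.randint(0, 4, (256,))   # e.g. 4 hypothetical emotion classes

# Linear probe: a single trainable layer on top of frozen features,
# a common way to compare representation models on a downstream task.
probe = nn.Linear(768, 4)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = criterion(probe(feats), labels)
    loss.backward()
    optimizer.step()
```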

@francqz31
Author

francqz31 commented Jan 16, 2024

@ddlBoJack Oh, thanks, noted 👍. So for this model finetuned on massive labeled data, are you planning to train it to beat "w2v2-how-to"?
In "w2v2-how-to" they finetuned the pre-trained wav2vec2-large-robust model on MSP-Podcast (v1.7), and the pre-trained model was pruned from 24 to 12 transformer layers before fine-tuning!

@ddlBoJack
Owner

Our emotion2vec has only 12 transformer layers with 768-dim hidden states. As for performance, researchers can explore further once we release the finetuned checkpoint.
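
Until that checkpoint is out, a head-to-head comparison reduces to scoring both models on the same labeled test set. Below is a sketch of such a comparison using unweighted average recall (macro-averaged recall, a standard metric for class-imbalanced speech emotion recognition); the label and prediction arrays are hypothetical placeholders, not real results from either model.

```python
# Sketch: compare two emotion models on one labeled test set using
# unweighted average recall (UAR = macro-averaged recall). In practice,
# the predictions would come from each model's downstream classifier.
from sklearn.metrics import recall_score

y_true      = [0, 1, 2, 2, 3, 1, 0, 2]   # gold emotion labels (placeholder)
y_pred_e2v  = [0, 1, 2, 1, 3, 1, 0, 2]   # hypothetical emotion2vec predictions
y_pred_w2v2 = [0, 1, 1, 2, 3, 1, 0, 0]   # hypothetical w2v2-how-to predictions

uar_e2v  = recall_score(y_true, y_pred_e2v,  average="macro")
uar_w2v2 = recall_score(y_true, y_pred_w2v2, average="macro")
print(f"emotion2vec UAR: {uar_e2v:.3f}, w2v2-how-to UAR: {uar_w2v2:.3f}")
```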
