
Surface-level objective metrics of emotion-conditioned generation and training reimplementation problems. #8

Closed
yen52205 opened this issue Nov 26, 2021 · 2 comments

Comments

yen52205 commented Nov 26, 2021

Hi,
I tried to train the "CP Transformer w/ pre-training" model from scratch, using the processed data you provide in this repo.
I used a learning rate of 1e-4 for pre-training, selected loss_30.ckpt, and then fine-tuned it on the EMOPIA dataset with a learning rate of 1e-5.
However, I couldn't find further details about the surface-level objective metrics, such as how many clips of each clip type were used for evaluation, which checkpoint you evaluated, etc.
To reproduce the surface-level objective metrics, I took loss_25.ckpt (w/ pre-training), generated 100 clips for each of the 4Q conditions, and used MusPy to compute the PR/NPC/POLY results (46.585 / 8.51 / 4.041, respectively).
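
For reference, this is roughly how I computed the metrics with MusPy (the `generated/Q1`..`Q4` directories are just placeholders for wherever the 100 clips per quadrant are stored):

```python
from pathlib import Path

import muspy

def surface_metrics(midi_dir):
    """Average pitch range (PR), number of pitch classes (NPC), and
    polyphony (POLY) over all MIDI clips in a directory."""
    pr, npc, poly = [], [], []
    for path in sorted(Path(midi_dir).glob("*.mid")):
        music = muspy.read_midi(path)
        pr.append(muspy.pitch_range(music))            # PR: highest - lowest pitch
        npc.append(muspy.n_pitch_classes_used(music))  # NPC: distinct pitch classes used
        poly.append(muspy.polyphony(music))            # POLY: avg. concurrent pitches
    n = len(pr)
    return sum(pr) / n, sum(npc) / n, sum(poly) / n

for q in ("Q1", "Q2", "Q3", "Q4"):
    print(q, surface_metrics(f"generated/{q}"))
```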
In the paper, the results are reported as follows:

[image: table of surface-level objective metrics (PR/NPC/POLY) from the paper]
Was there anything I missed in training or evaluation?
Could you please provide the details of the surface-level objective metrics evaluation?

joann8512 (Collaborator) commented

Hi. The general idea behind choosing the number of clips for evaluation is just that each class should have a decent amount of data, and 100 is about right. As for the choice of checkpoint: since we used loss_30 for the further training on EMOPIA, you can expect the loss of the evaluated checkpoint to be lower than 30.

yen52205 (Author) commented

Thanks for your answer.
