
Lower performance with trained model #7

Closed
hahally opened this issue Sep 14, 2022 · 4 comments
Comments

@hahally

hahally commented Sep 14, 2022

Hi, my problem is also described in L-Zhe/BTmPG#2.
On MSCOCO data, the model's performance is not good: BLEU 8.79, self-BLEU 18.56.
Could you please tell me some tricks for training the model on MSCOCO data?
Thanks!

@tomhosking
Owner

Can I check that you're trying to train HRQ-VAE on MSCOCO, not BTmPG? What steps are you taking to train the model, and how are you evaluating the results?

The results I report in the HRQ-VAE paper are based on selecting one of the five captions as input and comparing the generated paraphrase to the other four captions - if you only compare to one caption, the scores will be lower. But I didn't need any additional tricks for MSCOCO, and the hyperparameters were the same as for the other two datasets reported in the paper.
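For clarity, the evaluation protocol described above can be sketched as multi-reference BLEU, where each n-gram in the hypothesis may be matched against any of the four held-out captions. This is a minimal stdlib-only sketch of that idea (not the actual HRQ-VAE evaluation script, and the toy captions below are made up):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def multi_ref_bleu(hypothesis, references, max_n=4):
    """BLEU of one hypothesis against several references.

    Each n-gram count in the hypothesis is clipped by the maximum count
    seen in any single reference, so matching ANY of the four captions
    counts - which is why single-reference scores come out lower.
    """
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hypothesis, n)
        # Best allowance for each n-gram across all references.
        max_ref = Counter()
        for ref in references:
            for gram, count in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        total = sum(hyp_counts.values())
        if total == 0 or clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty uses the reference length closest to the hypothesis.
    hyp_len = len(hypothesis)
    ref_len = min((len(r) for r in references),
                  key=lambda l: (abs(l - hyp_len), l))
    bp = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(sum(log_precisions) / max_n)

# Hypothetical MSCOCO-style example: four held-out captions as references.
refs = [
    "a man rides a bike down the street".split(),
    "a person is riding a bicycle on the road".split(),
    "someone is cycling along a city street".split(),
    "a cyclist travels down the busy road".split(),
]
hyp = "a man rides a bike down the street".split()
print(round(multi_ref_bleu(hyp, refs), 4))  # exact match to one reference -> 1.0
```

In practice you would use a standard scorer (e.g. NLTK's `corpus_bleu` or sacreBLEU, both of which accept multiple references per hypothesis), but the clipping step above is the reason the four-reference setup yields higher numbers than a single-reference comparison.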

@hahally
Author

hahally commented Sep 14, 2022

I am training BTmPG on MSCOCO now.

@tomhosking
Owner

I am not the author of BTmPG and cannot help with any problems you have with that model, sorry.

@hahally
Author

hahally commented Sep 14, 2022

Sorry to bother you.
I have read your paper, and I saw that you compared against BTmPG.
Thanks again.
