Lower performance with trained model #7
Can I check that you're trying to train HRQ-VAE on MSCOCO, not BTmPG? What steps are you taking to train the model, and how are you evaluating the results? The results I report in the HRQ-VAE paper are based on selecting one of the five captions as input and comparing the generated paraphrase to the other four captions - if you only compare to one caption, the scores will be lower. But I didn't need any additional tricks for MSCOCO, and the hyperparameters were the same as for the other two datasets reported in the paper.
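The multi-reference protocol described above can be sketched as follows. The caption data, the model output, and the toy unigram-precision score are all made-up placeholders for illustration; a real evaluation would use a proper BLEU implementation (e.g. sacrebleu) against the four held-out references.

```python
from collections import Counter

def unigram_precision(hypothesis: str, references: list[str]) -> float:
    """Toy stand-in for BLEU: clipped unigram precision, where each
    hypothesis token's count is clipped by its maximum count in any
    single reference."""
    hyp = hypothesis.lower().split()
    if not hyp:
        return 0.0
    hyp_counts = Counter(hyp)
    max_ref_counts: Counter = Counter()
    for ref in references:
        for tok, n in Counter(ref.lower().split()).items():
            max_ref_counts[tok] = max(max_ref_counts[tok], n)
    clipped = sum(min(n, max_ref_counts[tok]) for tok, n in hyp_counts.items())
    return clipped / len(hyp)

# Five captions for one image (hypothetical example data).
captions = [
    "a man rides a horse on the beach",
    "a person riding a horse along the shore",
    "someone on horseback near the ocean",
    "a rider and horse walking on sand",
    "a man on a horse at the seaside",
]

# Protocol: one caption is the model input, the other four are references.
source, references = captions[0], captions[1:]
hypothesis = "a person rides a horse near the shore"  # pretend model output
score = unigram_precision(hypothesis, references)
print(f"score against 4 references: {score:.2f}")
```

Scoring against four references rather than one gives each generated n-gram more chances to match, which is why single-reference scores come out lower under the same model.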
I am training BTmPG on MSCOCO now.
I am not the author of BTmPG and cannot help with any problems you have with that model, sorry.
Sorry to bother you.
Hi, my problem is also described here: L-Zhe/BTmPG#2.
On the MSCOCO data, the model's performance is not good: BLEU 8.79, self-BLEU 18.56.
Could you please tell me some tricks for training the model on MSCOCO data?
Thanks!