
Why the end-2-end performance mismatch with component level evaluation? #246

Closed
JamesCao2048 opened this issue Jan 12, 2024 · 2 comments


@JamesCao2048

JamesCao2048 commented Jan 12, 2024

Hi, thanks for your wonderful work on the ConvLab series!
I found that the end-to-end performance you reported on MultiWOZ is inconsistent with the component-level evaluation.
For example, BERTNLU+RuleDST+RulePolicy+TemplateNLG has a lower Complete rate and Success rate than MILU+RuleDST+RulePolicy+TemplateNLG, even though BERTNLU performs better than MILU in module-level evaluation.
How does this happen?
Besides, given a new, unseen evaluation dataset, how can I decide which pipeline configuration performs best? There are too many combinations of different modules, and module-level evaluation does not match end-to-end evaluation well.

I know these are very open questions, and I am now doing some research on them. I would be very grateful if you have any insights or literature to share.
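The combinatorial concern above can be made concrete: even a couple of options per pipeline slot multiplies quickly. A minimal sketch (BERTNLU, MILU, RuleDST, RulePolicy, and TemplateNLG appear in this thread; the other module names are hypothetical stand-ins):

```python
from itertools import product

# Candidate modules per pipeline slot. A real ConvLab setup
# typically has more choices per slot than shown here.
options = {
    "nlu": ["BERTNLU", "MILU"],
    "dst": ["RuleDST", "TRADE"],
    "policy": ["RulePolicy", "MLEPolicy"],
    "nlg": ["TemplateNLG", "SCLSTM"],
}

# Every pipeline is one choice per slot, so the search space is the
# Cartesian product of the per-slot option lists.
pipelines = list(product(*options.values()))
print(len(pipelines))  # 2 * 2 * 2 * 2 = 16 pipelines to evaluate end to end
```

With realistic numbers of modules per slot, exhaustively running a user simulator on every combination becomes expensive fast, which is why the mismatch between module-level and end-to-end scores matters.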

@zqwerty
Member

zqwerty commented Jan 17, 2024

Yes, we also observe that better module performance does not guarantee better end-to-end performance; see our paper: https://aclanthology.org/2020.sigdial-1.37/.

To decide which configuration to use, I would try several of the models that perform best in module-wise evaluation. Also, try using the pre-trained models in an end-to-end setting.
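That advice (shortlist by module-wise score, then compare shortlisted pipelines end to end) can be sketched as follows. Everything here is illustrative: the module-wise scores are made up, and `end_to_end_success` is a dummy stand-in for running a user simulator; only the shortlist-then-evaluate structure is the point.

```python
from itertools import product

# Hypothetical module-wise scores (e.g., NLU F1, DST accuracy) -- in practice
# these come from component-level evaluation on the new dataset.
module_scores = {
    "nlu": {"BERTNLU": 0.81, "MILU": 0.79, "SVMNLU": 0.70},
    "dst": {"RuleDST": 1.00, "TRADE": 0.48},
    "policy": {"RulePolicy": 0.95, "MLEPolicy": 0.60},
    "nlg": {"TemplateNLG": 0.85, "SCLSTM": 0.80},
}

TOP_K = 2  # shortlist size per component

def shortlist(scores: dict, k: int = TOP_K) -> list:
    """Keep the k best modules of one component by module-wise score."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Only shortlisted modules enter the (expensive) end-to-end evaluation.
slots = ("nlu", "dst", "policy", "nlg")
candidates = list(product(*(shortlist(module_scores[s]) for s in slots)))

def end_to_end_success(pipeline: tuple) -> float:
    """Dummy stand-in for simulated end-to-end evaluation (e.g., success rate
    with a user simulator). The interaction penalty is invented to illustrate
    that the best module-wise combination need not win end to end."""
    score = 1.0
    for slot, module in zip(slots, pipeline):
        score *= module_scores[slot][module]
    nlu, _, _, nlg = pipeline
    if nlu == "BERTNLU" and nlg == "TemplateNLG":
        score *= 0.9  # hypothetical error-propagation penalty
    return score

best = max(candidates, key=end_to_end_success)
print(best)  # here the MILU pipeline wins despite BERTNLU's higher NLU score
```

Under these invented numbers, the selected pipeline is the MILU one, mirroring the observation in this thread: error propagation between modules can flip the ranking relative to component-level scores.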

@JamesCao2048
Author

Thanks for your reply; it perfectly answers my question.
