
Add InfoLM #849

Closed
stancld opened this issue Feb 21, 2022 · 2 comments · Fixed by #915
Labels: enhancement (New feature or request) · good first issue (Good for newcomers) · New metric · topic: Text

Comments

@stancld
Contributor

stancld commented Feb 21, 2022

🚀 Feature

Add InfoLM

Sources:

Motivation

Recent NLG metrics are increasingly based on BERT (or related) embeddings. As such, I believe we should start adding such metrics to TorchMetrics, with an extra dependency on transformers for users who want to use any of them. The InfoLM metric belongs to a family of untrained metrics (i.e. the model is not fine-tuned on any specific task), so it should be an easy one for us to begin with. (Any opinion on this? @Borda :] )

Abstract:

Assessing the quality of natural language generation systems through human annotation is very expensive. Additionally, human annotation campaigns are time-consuming and include non-reusable human labour. In practice, researchers rely on automatic metrics as a proxy of quality. In the last decade, many string-based metrics (e.g., BLEU) have been introduced. However, such metrics usually rely on exact matches and thus, do not robustly handle synonyms. In this paper, we introduce InfoLM, a family of untrained metrics that can be viewed as a string-based metric that addresses the aforementioned flaws thanks to a pre-trained masked language model. This family of metrics also makes use of information measures allowing the adaptation of InfoLM to various evaluation criteria. Using direct assessment, we demonstrate that InfoLM achieves statistically significant improvement and over 10 points of correlation gains in many configurations on both summarization and data2text generation.
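For context, the core of InfoLM is comparing the token-level probability distributions a pre-trained masked language model assigns at each masked position of the candidate and the reference, then aggregating them with an information measure such as KL divergence. A minimal toy sketch of that final comparison step, assuming the per-position distributions have already been extracted from the model (all names here are illustrative, not the actual TorchMetrics API):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over the same vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def infolm_style_score(cand_dists, ref_dists):
    """Average a token-level information measure over aligned masked positions.

    cand_dists / ref_dists: lists of probability vectors that a masked LM
    assigns to the vocabulary at each position of the candidate / reference.
    Lower is better for divergence-based measures.
    """
    scores = [kl_divergence(p, q) for p, q in zip(cand_dists, ref_dists)]
    return sum(scores) / len(scores)

# Identical distributions yield ~0 divergence; mismatched ones score higher.
same = infolm_style_score([[0.7, 0.2, 0.1]], [[0.7, 0.2, 0.1]])
diff = infolm_style_score([[0.7, 0.2, 0.1]], [[0.1, 0.2, 0.7]])
```

The paper's "family of metrics" comes from swapping the information measure (e.g. Fisher–Rao distance or alpha/beta divergences); in this sketch only `kl_divergence` would need to change.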

@stancld stancld added the enhancement, good first issue, and New metric labels Feb 21, 2022
@stancld stancld added this to To do in Text via automation Feb 21, 2022
@stancld stancld self-assigned this Feb 21, 2022
@stancld stancld added this to the v0.8 milestone Feb 22, 2022
@Borda
Member

Borda commented Mar 9, 2022

@stancld how is it going? It seems this is the key for two additional metrics where we have volunteers to help... 🐶

@stancld
Contributor Author

stancld commented Mar 11, 2022

@Borda Sorry, I was a bit busy this week, but it's already WIP. I'm gonna try to finish it this weekend so that the other two PRs can be opened.

@Borda Borda modified the milestones: v0.8, v0.9 Mar 22, 2022
@stancld stancld mentioned this issue Mar 25, 2022
4 tasks
@SkafteNicki SkafteNicki removed this from the v0.9 milestone May 12, 2022
Text automation moved this from To do to Done Jul 12, 2022