Fix seqeval metric #810

Merged — 1 commit merged into master from fix_seqeval_metric on Nov 9, 2020
Conversation

@sgugger (Contributor) commented Nov 6, 2020

The current seqeval metric raises the following error when computed:

~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
    102         scores = {}
    103         for type_name, score in report.items():
--> 104             scores[type_name]["precision"] = score["precision"]
    105             scores[type_name]["recall"] = score["recall"]
    106             scores[type_name]["f1"] = score["f1-score"]

KeyError: 'LOC'

This is because the current code basically tries to do:

scores = {}
scores["LOC"]["precision"] = some_value

which does not work in Python: assigning into scores["LOC"]["precision"] raises a KeyError because the inner dict scores["LOC"] was never created. This PR fixes that while keeping the previous nested structure of results, with the same keys.
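A minimal sketch of the kind of fix described, assuming the per-type dicts returned by seqeval's classification report (the sample values below are made up): the inner dict is created in a single assignment instead of being written into before it exists.

```python
# Hypothetical report shape, as produced per entity type by seqeval's
# classification report (values are illustrative, not real scores).
report = {
    "LOC": {"precision": 0.9, "recall": 0.8, "f1-score": 0.85, "support": 10},
}

scores = {}
for type_name, score in report.items():
    # Build the nested dict in one step; the buggy version assigned into
    # scores[type_name]["precision"] while scores[type_name] did not exist,
    # which raised KeyError: 'LOC'.
    scores[type_name] = {
        "precision": score["precision"],
        "recall": score["recall"],
        "f1": score["f1-score"],
    }

print(scores["LOC"]["f1"])  # 0.85
```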

@lhoestq (Member) left a comment:


Good catch, thanks!

@sgugger sgugger merged commit 92acf1e into master Nov 9, 2020
@sgugger sgugger deleted the fix_seqeval_metric branch November 9, 2020 14:04