
F1Metric: infer labels also from predicted annotations #425

Merged
merged 1 commit into main from fix_f1_infer_labels on Jun 25, 2024

Conversation

ArneBinder (Owner) commented on Jun 25, 2024

Until now, only the gold data was used to infer the label set. This affects the MACRO value: if entries with a certain label occur only in the predictions, that label now receives a zero score, which lowers the macro average, whereas previously it was simply left out. In practice this should only matter for very small datasets where the predictions actually contain labels that are absent from the gold data.
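The following is a minimal, self-contained sketch (not the actual F1Metric implementation) that illustrates the effect: the label set used for macro averaging is inferred either from the gold data alone or from gold plus predictions. The labels, spans, and the `infer_labels_from_predictions` flag are hypothetical and only serve the illustration.

```python
from collections import defaultdict

def macro_f1(gold, predicted, infer_labels_from_predictions=True):
    """Compute per-label F1 and the macro average.

    gold / predicted: lists of (span, label) tuples.
    With infer_labels_from_predictions=False, only labels occurring in the
    gold data define the label set (the old behavior).
    """
    labels = {label for _, label in gold}
    if infer_labels_from_predictions:
        labels |= {label for _, label in predicted}

    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    gold_set, pred_set = set(gold), set(predicted)
    for ann in pred_set:
        counts[ann[1]]["tp" if ann in gold_set else "fp"] += 1
    for ann in gold_set - pred_set:
        counts[ann[1]]["fn"] += 1

    f1_per_label = {}
    for label in labels:
        c = counts[label]
        p = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
        r = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
        f1_per_label[label] = 2 * p * r / (p + r) if p + r else 0.0

    return sum(f1_per_label.values()) / len(labels), f1_per_label

# "ORG" occurs only in the predictions: with the fix it contributes a zero
# score to the macro average, whereas before it was silently ignored.
gold = [((0, 3), "PER"), ((10, 15), "LOC")]
predicted = [((0, 3), "PER"), ((10, 15), "LOC"), ((20, 25), "ORG")]
print(macro_f1(gold, predicted, infer_labels_from_predictions=False))  # macro = 1.0
print(macro_f1(gold, predicted, infer_labels_from_predictions=True))   # macro ≈ 0.667
```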

ArneBinder added the bug label on Jun 25, 2024
ArneBinder merged commit a81b182 into main on Jun 25, 2024
6 checks passed
ArneBinder deleted the fix_f1_infer_labels branch on Jun 25, 2024 at 11:13
Labels
bug Something isn't working

Successfully merging this pull request may close these issues.

F1Metric with labels="INFERRED" should consider prediction-only labels