♻️ Enhance evaluation flow #36

Merged
GabrielePicco merged 5 commits into main from enhance/evaluation on Nov 18, 2022
Conversation

@GabrielePicco (Contributor) commented on Nov 17, 2022

Enhance and simplify the evaluation flow.

| Status | Type             | ⚠️ Core Change | Issue |
|--------|------------------|----------------|-------|
| Ready  | Feature/Refactor | No             |       |

Problem

The evaluate function contained logic specific to the metric being used.

Solution

Abstracted the metric-specific logic so that the evaluation flow is metric-agnostic and more general.
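
As a rough sketch of what this kind of abstraction can look like (illustrative names only, not the actual zshot API), the metric becomes a parameter of the evaluation flow instead of being hard-coded inside it:

```python
# Hypothetical sketch of the refactor described above: metric logic is
# pulled out of evaluate() and passed in as callables, so the same flow
# works for any metric. All names here are assumptions, not the zshot API.
from typing import Callable, Dict, List, Sequence

Metric = Callable[[Sequence, Sequence], Dict[str, float]]


def evaluate(predictions: Sequence, references: Sequence,
             metrics: List[Metric]) -> Dict[str, float]:
    """Run each metric over the same predictions/references pair."""
    results: Dict[str, float] = {}
    for metric in metrics:
        results.update(metric(predictions, references))
    return results


# Example usage with a toy accuracy metric:
def accuracy(preds: Sequence, refs: Sequence) -> Dict[str, float]:
    correct = sum(p == r for p, r in zip(preds, refs))
    return {"accuracy": correct / len(refs)}


print(evaluate(["A", "B"], ["A", "C"], metrics=[accuracy]))  # {'accuracy': 0.5}
```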

Other changes (e.g. bug fixes, small refactors)

  • Fix displaCy "rel" (relation) rendering in notebooks
  • Add a split parameter to download only part of a dataset
  • Add time/performance figures to the evaluation results (see the sketch after this list)
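
The sketch below illustrates the last two items under stated assumptions: it uses the Hugging Face `datasets` API that zshot's evaluation datasets build on, with an example dataset id, since the exact zshot loader names are not shown in this PR:

```python
# Hedged sketch: `split` limits the download to one portion of a dataset,
# and a simple wall-clock timer produces the kind of time/performance
# figure this PR attaches to evaluation results. Dataset id is an example.
import time

from datasets import load_dataset

# Download and load only the validation split rather than the full dataset.
dataset = load_dataset("conll2003", split="validation")

start = time.perf_counter()
_ = [example["tokens"] for example in dataset]  # stand-in for an evaluation pass
elapsed = time.perf_counter() - start
print(f"Processed {len(dataset)} examples in {elapsed:.2f}s")
```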

Issues

Closes #35
Closes #15

@GabrielePicco GabrielePicco self-assigned this Nov 17, 2022
@codecov

codecov bot commented Nov 17, 2022

Codecov Report

Base: 92.14% // Head: 92.18% // This PR increases project coverage by +0.03% 🎉

Coverage data is based on head (b9bc226) compared to base (eebbbb1).
Patch coverage: 97.29% of modified lines in pull request are covered.

Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main      #36      +/-   ##
==========================================
+ Coverage   92.14%   92.18%   +0.03%
==========================================
  Files          67       67
  Lines        2726     2763      +37
==========================================
+ Hits         2512     2547      +35
- Misses        214      216       +2
```
| Impacted Files | Coverage Δ |
|---|---|
| zshot/utils/displacy/displacy.py | 73.07% <60.00%> (-2.44%) ⬇️ |
| zshot/__init__.py | 100.00% <100.00%> (ø) |
| zshot/evaluation/__init__.py | 100.00% <100.00%> (ø) |
| zshot/evaluation/dataset/__init__.py | 100.00% <100.00%> (ø) |
| zshot/evaluation/dataset/dataset.py | 93.75% <100.00%> (+1.44%) ⬆️ |
| ...ot/evaluation/dataset/med_mentions/med_mentions.py | 100.00% <100.00%> (ø) |
| zshot/evaluation/dataset/ontonotes/onto_notes.py | 70.17% <100.00%> (+5.59%) ⬆️ |
| zshot/tests/evaluation/test_datasets.py | 100.00% <100.00%> (ø) |
| zshot/tests/utils/test_displacy.py | 93.93% <100.00%> (+0.83%) ⬆️ |


☔ View full report at Codecov.

@GabrielePicco GabrielePicco merged commit 2a84de0 into main Nov 18, 2022
@GabrielePicco GabrielePicco deleted the enhance/evaluation branch November 18, 2022 12:17
Successfully merging this pull request may close these issues:

  • [Bug] Correctly render relations visualisation in notebooks
  • Add time analysis to evaluation pipeline