
[Enhance] Add eval_metric script #214

Merged — 5 commits merged into open-mmlab:master on Oct 21, 2020

Conversation

dreamerlin (Collaborator) commented on Sep 26, 2020

No description provided.
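The PR adds a standalone eval_metric script but carries no description. The exact interface is not shown here, so the following is only a minimal sketch of the kind of computation such an offline evaluation script performs for action recognition — top-k accuracy over cached prediction scores. The function name and data layout are assumptions for illustration, not the actual mmaction2 API:

```python
def top_k_accuracy(scores, labels, topk=(1, 5)):
    """Fraction of samples whose true label appears among the
    k highest-scoring classes, for each k in `topk`.

    scores: list of per-class score lists, one row per sample
    labels: list of ground-truth class indices
    """
    results = []
    for k in topk:
        hits = 0
        for row, label in zip(scores, labels):
            # class indices ranked by descending score
            ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)
            if label in ranked[:k]:
                hits += 1
        results.append(hits / len(labels))
    return results

if __name__ == "__main__":
    # Three samples, three classes; labels are ground truth.
    scores = [[0.1, 0.7, 0.2],
              [0.6, 0.3, 0.1],
              [0.2, 0.3, 0.5]]
    labels = [1, 1, 2]
    top1, top2 = top_k_accuracy(scores, labels, topk=(1, 2))
    print(f"top1: {top1:.4f}, top2: {top2:.4f}")
```

In practice such a script would load dumped prediction results (e.g. a pickle of score arrays) and the annotation file instead of inline literals.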

codecov bot commented on Sep 26, 2020

Codecov Report

Merging #214 into master will increase coverage by 0.60%.
The diff coverage is 89.55%.


@@            Coverage Diff             @@
##           master     #214      +/-   ##
==========================================
+ Coverage   84.39%   84.99%   +0.60%     
==========================================
  Files          78       85       +7     
  Lines        4947     5658     +711     
  Branches      781      912     +131     
==========================================
+ Hits         4175     4809     +634     
- Misses        643      688      +45     
- Partials      129      161      +32     
Flag Coverage Δ
#unittests 84.99% <89.55%> (+0.60%) ⬆️
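The headline numbers in the coverage diff above can be cross-checked directly, since Codecov reports coverage as hits divided by total lines:

```python
# Sanity-check the Codecov figures: coverage = hits / lines.
master_cov = 100 * 4175 / 4947   # master: 4175 hits of 4947 lines
pr_cov = 100 * 4809 / 5658       # this PR: 4809 hits of 5658 lines
print(f"{master_cov:.2f}% -> {pr_cov:.2f}% ({pr_cov - master_cov:+.2f}%)")
# -> 84.39% -> 84.99% (+0.60%)
```

This matches the reported +0.60% coverage increase.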

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmaction/apis/train.py 19.04% <ø> (ø)
mmaction/datasets/activitynet_dataset.py 95.04% <0.00%> (-1.93%) ⬇️
mmaction/datasets/base.py 68.33% <ø> (ø)
mmaction/models/backbones/resnet3d_slowonly.py 100.00% <ø> (ø)
...maction/models/localizers/utils/post_processing.py 100.00% <ø> (ø)
mmaction/core/evaluation/eval_hooks.py 17.58% <33.33%> (+0.34%) ⬆️
mmaction/version.py 58.33% <54.54%> (-41.67%) ⬇️
mmaction/models/recognizers/audio_recognizer.py 63.15% <63.15%> (ø)
mmaction/models/heads/audio_tsn_head.py 81.48% <81.48%> (ø)
mmaction/datasets/audio_dataset.py 83.78% <83.78%> (ø)
... and 34 more


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1e77b0b...eb9b752.

@dreamerlin dreamerlin marked this pull request as ready for review October 11, 2020 12:52
dreamerlin (Collaborator, Author) commented:

Ping @innerlee

@innerlee innerlee merged commit 794b2d0 into open-mmlab:master Oct 21, 2020
@dreamerlin dreamerlin deleted the eval_metric branch November 3, 2020 04:59
2 participants