
Add init_eval to evaluation hook #3550

Merged: 12 commits into open-mmlab:master on Aug 23, 2020
Conversation

Johnson-Wang (Collaborator)

No description provided.

codecov bot commented Aug 13, 2020

Codecov Report

Merging #3550 into master will increase coverage by 0.13%.
The diff coverage is 90.32%.


@@            Coverage Diff             @@
##           master    #3550      +/-   ##
==========================================
+ Coverage   60.77%   60.91%   +0.13%     
==========================================
  Files         205      205              
  Lines       13781    13804      +23     
  Branches     2332     2340       +8     
==========================================
+ Hits         8376     8409      +33     
+ Misses       4981     4968      -13     
- Partials      424      427       +3     
Flag Coverage Δ
#unittests 60.91% <90.32%> (+0.13%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmdet/core/evaluation/eval_hooks.py 86.66% <90.32%> (+59.63%) ⬆️
mmdet/models/detectors/cornernet.py 94.87% <0.00%> (-5.13%) ⬇️
mmdet/models/dense_heads/corner_head.py 74.09% <0.00%> (-1.95%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 31fb4fb...7bd100b.

    """

    def __init__(self,
                 dataloader,
                 interval=1,
                 tmpdir=None,
                 start=None,

Member:

interval and start form a group of related arguments. It would be better to put them together, with tmpdir afterwards.
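
The suggested ordering could look as follows. This is a hedged sketch, not the real mmdet EvalHook: only the parameters visible in the diff above are reordered, and the class body is a hypothetical minimal stand-in.

```python
# Sketch of the reviewer's suggestion: group the related `start` and
# `interval` arguments, with `tmpdir` afterwards.
# (Hypothetical minimal class; the real hook has more parameters.)
class EvalHook:
    def __init__(self,
                 dataloader,
                 start=None,
                 interval=1,
                 tmpdir=None):
        self.dataloader = dataloader
        self.start = start
        self.interval = interval
        self.tmpdir = tmpdir
```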

        return
    if self.start is not None and runner.epoch >= self.start:
        self.after_train_epoch(runner)
    self.initial_epoch_flag = False

Member:

There seem to be some issues with the logic. I suppose the expected behaviors are as follows.

  1. start=None, interval=1: perform evaluation after each epoch.
  2. start=1, interval=1: perform evaluation after each epoch.
  3. start=None, interval=2: perform evaluation after epochs 2, 4, 6, etc.
  4. start=1, interval=2: perform evaluation after epochs 1, 3, 5, etc.
  5. start=0/-1, interval=1: perform evaluation after each epoch and before epoch 1.
  6. resuming from epoch i, start=x (x <= i), interval=1: perform evaluation after each epoch and before the first epoch.
  7. resuming from epoch i, start=i+1/None, interval=1: perform evaluation after each epoch.
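
The schedule in cases 1-4 can be sketched as a small helper. This is a hypothetical function (the name `should_evaluate` is not from the PR); `epoch` is taken as 1-based, the range()-like reading of case 4 is assumed, and the resume/initial-evaluation cases 5-7 are handled by a separate flag and not modeled here.

```python
def should_evaluate(epoch, start=None, interval=1):
    """Decide whether to evaluate after the given (1-based) epoch.

    Evaluation runs at epochs start, start + interval,
    start + 2 * interval, ...  When `start` is None it defaults to
    `interval`, which reproduces cases 1 and 3 above.
    """
    if start is None:
        start = interval
    if epoch < start:
        return False
    return (epoch - start) % interval == 0
```

Under this reading, start=1 with interval=2 evaluates after epochs 1, 3, 5, matching case 4 rather than the author's initial implementation.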


Collaborator (Author):

All are satisfied except 4. If start=1 and interval=2, the evaluation is still conducted after epochs 2, 4, 6. Is that okay?


Member:

  • For case 2, the current implementation will perform evaluation after each epoch and before epoch 2.
  • Since we specify both start and interval, it may be more natural to adopt range()-like behavior.


Collaborator (Author):

As for case 2, the initial_epoch_flag will be False after the first epoch, and thus there would be no evaluation at the start of the second epoch.
I agree with the range-like behavior and will modify the code accordingly.

hellock merged commit 4b6ff75 into open-mmlab:master on Aug 23, 2020.
Johnson-Wang deleted the init_eval branch on Aug 29, 2020 at 01:21.