
every_n_val_epochs -> every_n_epochs #8383

Merged
merged 3 commits into master from feat/deprecate-every-n-val-epochs on Jul 12, 2021

Conversation

@carmocca carmocca (Member) commented Jul 12, 2021

What does this PR do?

Deprecate every_n_val_epochs in favor of every_n_epochs

The flag is used in the on_validation_end hook, but we will also want to use it in the on_train_epoch_end hook.

Part of #7724
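
For context, the rename looks roughly like this in user code (a minimal before/after sketch assuming the v1.4 ModelCheckpoint signature; not code from this PR):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Before: save a checkpoint every 2 validation epochs.
# As of this PR, this flag emits a deprecation warning.
ckpt_old = ModelCheckpoint(every_n_val_epochs=2)

# After: the same frequency under the renamed flag.
ckpt_new = ModelCheckpoint(every_n_epochs=2)

trainer = Trainer(callbacks=[ckpt_new])
```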

Does your PR introduce any breaking changes? If yes, please list them.

None

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)
  • Did you list all the breaking changes introduced by this pull request?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

@carmocca carmocca added the bug (Something isn't working) and refactor labels Jul 12, 2021
@carmocca carmocca added this to the v1.4 milestone Jul 12, 2021
@carmocca carmocca self-assigned this Jul 12, 2021
@pep8speaks commented Jul 12, 2021

Hello @carmocca! Thanks for updating this PR.

Line 294:13: W503 line break before binary operator

Comment last updated at 2021-07-12 16:04:10 UTC

@codecov codecov bot commented Jul 12, 2021

Codecov Report

Merging #8383 (8cbb71a) into master (4f1e7be) will decrease coverage by 0%.
The diff coverage is 100%.

@@          Coverage Diff           @@
##           master   #8383   +/-   ##
======================================
- Coverage      92%     92%   -0%     
======================================
  Files         216     216           
  Lines       14097   14099    +2     
======================================
- Hits        13012   12998   -14     
- Misses       1085    1101   +16     

@carmocca carmocca added the ready (PRs ready to be merged) label Jul 12, 2021
@carmocca carmocca merged commit 733cdbb into master Jul 12, 2021
@carmocca carmocca deleted the feat/deprecate-every-n-val-epochs branch July 12, 2021 23:20
@ananthsub ananthsub (Contributor) commented

@carmocca why is every_n_epochs along with a boolean flag to save on train epoch end preferable to supporting two flags like every_n_val_epochs and every_n_train_epochs?

The latter allows us to adjust the checkpoint frequency separately for training and validation. The mutual exclusion check is something we could address in the callback implementation.

@carmocca carmocca (Member, Author) commented Jul 13, 2021

@ananthsub Is your question in the context that the user can set both save_on_train_epoch_end and save_on_validation_end? The current design only has the former, which already acts as an exclusive boolean, so it is natural that the every_n_epochs flag follows the same pattern.

If there is a need to specify both, I think two ModelCheckpoint instances should be created, one with each boolean value, rather than different flags.

This also seems better for avoiding user confusion about the flag difference.
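
As an illustration of the two-instance suggestion (a sketch assuming the v1.4 signature, where save_on_train_epoch_end is an optional boolean; not code from this PR):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# One instance per save point, both using the renamed frequency flag:
train_ckpt = ModelCheckpoint(
    every_n_epochs=3,
    save_on_train_epoch_end=True,   # save in on_train_epoch_end
)
val_ckpt = ModelCheckpoint(
    every_n_epochs=1,
    save_on_train_epoch_end=False,  # save in on_validation_end
)

trainer = Trainer(callbacks=[train_ckpt, val_ckpt])
```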

@ananthsub ananthsub (Contributor) commented

@carmocca It's more that users now need to know about the default behavior of save_on_train_epoch_end, which is less explicit:

  • When reading the code, there's ambiguity around ModelCheckpoint(every_n_epochs=...): users need to check the default value of save_on_train_epoch_end to know exactly when end-of-epoch checkpoints are being saved (see the sketch after this list).
  • The same goes for writing the code: users need to set either ModelCheckpoint(every_n_epochs=...) or ModelCheckpoint(every_n_epochs=..., save_on_train_epoch_end=...), which is less consistent than ModelCheckpoint(every_n_train_epochs=...) or ModelCheckpoint(every_n_val_epochs=...).

@carmocca carmocca (Member, Author) commented

> there's ambiguity around ModelCheckpoint(every_n_epochs=...). Users need to check the default value of save_on_train_epoch_end to know when exactly end-of-epoch checkpoints are being saved

You could also make a similar argument that the user would need to know whether save_on_train_epoch_end=True is set with every_n_train_epochs=... or save_on_train_epoch_end=False is set with every_n_val_epochs=...

> which is less consistent than ModelCheckpoint(every_n_train_epochs=...) or ModelCheckpoint(every_n_val_epochs=...)

Are you saying every_n_train_epochs would implicitly set save_on_train_epoch_end=True and the opposite for every_n_val_epochs?
