
Remove Old Files in ModelCheckpoint #5404

Closed

tpinetz opened this issue Feb 15, 2017 · 6 comments

Comments


tpinetz commented Feb 15, 2017

Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue.

Thank you!

  • Check that you are up-to-date with the master branch of Keras. You can update with:
    pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps

  • If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.

  • If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
    pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps

My problem with the current ModelCheckpoint callback is that when I pass the validation accuracy as a parameter in the filename, I get flooded with model checkpoints. To combat this, I propose a flag in ModelCheckpoint called keep_only_last_file that controls whether the previous save should be deleted. In combination with the save_best_only flag, only the best model would be kept.

I would propose the following API: keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, keep_only_last_file=False, mode='auto', period=1). I am targeting everyone who wants to use the formatting options in ModelCheckpoint without saving multiple models.

I can implement this myself.
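
For concreteness, here is a rough sketch of what such a keep-only-the-last-file behaviour could look like as a custom callback today. This is only an illustration, not part of Keras: CheckpointKeepLast, its bookkeeping, and the example filenames are made up, and it assumes overriding on_epoch_end of the stock ModelCheckpoint is acceptable.

import os
from keras.callbacks import ModelCheckpoint

class CheckpointKeepLast(ModelCheckpoint):
    # Hypothetical sketch: save checkpoints with formatted names as usual,
    # but delete the previously written file so at most one remains on disk.

    def __init__(self, filepath, **kwargs):
        super(CheckpointKeepLast, self).__init__(filepath, **kwargs)
        self._last_saved = None  # path of the checkpoint written last time
        self._dirname = os.path.dirname(filepath) or '.'

    def on_epoch_end(self, epoch, logs=None):
        before = set(os.listdir(self._dirname))
        # Let the stock ModelCheckpoint decide whether and where to save.
        super(CheckpointKeepLast, self).on_epoch_end(epoch, logs)
        new_files = set(os.listdir(self._dirname)) - before
        if new_files:
            # A new checkpoint was written; drop the previous one.
            if self._last_saved and os.path.exists(self._last_saved):
                os.remove(self._last_saved)
            self._last_saved = os.path.join(self._dirname, new_files.pop())
        # If nothing new appeared (e.g. save_best_only with no improvement),
        # the existing checkpoint is left untouched.

# Usage (hypothetical): combined with save_best_only, only the best model stays on disk.
checkpoint = CheckpointKeepLast('weights.{epoch:02d}-{val_acc:.2f}.hdf5',
                                monitor='val_acc', save_best_only=True)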


Zlash92 commented Apr 28, 2017

Agreed. It would be very convenient not to store multiple checkpoints.

stale bot added the stale label Jul 27, 2017

stale bot commented Jul 27, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

stale bot closed this as completed Aug 26, 2017

felix-hilden commented Aug 5, 2018

This can be achieved by using a file name with no formatting for epoch or loss. It's a suboptimal solution for the time being, but it does work.
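
For illustration, the workaround could look roughly like this (best_model.hdf5 and the model/data variables are placeholders, not from this thread):

from keras.callbacks import ModelCheckpoint

# A filepath with no {epoch} or metric placeholders is overwritten on every save,
# so only a single checkpoint file ever exists on disk.
checkpoint = ModelCheckpoint('best_model.hdf5', monitor='val_loss',
                             save_best_only=True, verbose=1)
model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[checkpoint])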


Zlash92 commented Aug 6, 2018

True, but the formatting options are pretty nice to have.


liyxi commented Dec 19, 2019

And people may want to save more than just one checkpoint. I suppose we could have a feature just like keep_checkpoint_max in the tf.estimator RunConfig:

tf.estimator.RunConfig(model_dir=model_dir,
                       keep_checkpoint_max=3)
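
For reference, something close to keep_checkpoint_max can already be emulated with a small custom callback. This is only a sketch; PruneCheckpoints and the file pattern are hypothetical names, not Keras API:

import glob
import os
from keras.callbacks import Callback, ModelCheckpoint

class PruneCheckpoints(Callback):
    # Hypothetical sketch: after each epoch, keep only the newest
    # keep_checkpoint_max files matching the pattern and delete the rest.
    def __init__(self, pattern, keep_checkpoint_max=3):
        super(PruneCheckpoints, self).__init__()
        self.pattern = pattern
        self.keep_checkpoint_max = keep_checkpoint_max

    def on_epoch_end(self, epoch, logs=None):
        files = sorted(glob.glob(self.pattern), key=os.path.getmtime)
        # Delete everything except the keep_checkpoint_max most recent files.
        for path in files[:-self.keep_checkpoint_max]:
            os.remove(path)

# Usage: list it after the ModelCheckpoint so pruning runs once the new file exists.
callbacks = [ModelCheckpoint('ckpt.{epoch:02d}-{val_acc:.2f}.hdf5', monitor='val_acc'),
             PruneCheckpoints('ckpt.*.hdf5', keep_checkpoint_max=3)]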

@stefan-falk

This really seems like an issue for a lot of people e.g.

and of course this one.

I wonder: why is there no simple argument such as keep_checkpoint_max for ModelCheckpoint?
