
resume training from previous epoch #1872

Closed
Bhee opened this issue Mar 2, 2016 · 38 comments

Comments

@Bhee

Bhee commented Mar 2, 2016

I saved the model and weights after each epoch using callbacks.ModelCheckpoint, and I want to train it again from the last epoch.
How do I set up the model.fit() call so that training starts from the previous epoch?

@tboquet
Contributor

tboquet commented Mar 2, 2016

Do you want to do something special with the history? If not, you can just call .fit one or several times and you will be able to continue to train the model. If you want to continue the training in another process, you just have to load the weights and call model.fit().
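The suggestion above can be sketched as follows (a minimal toy example; the data shapes and the checkpoint file name are made up for illustration):

```python
import numpy as np
from tensorflow import keras

# Toy data; shapes are arbitrary, for illustration only.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

def build_model():
    model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
    return model

# First run: train, then save the weights.
model = build_model()
model.fit(x, y, epochs=5, verbose=0)
model.save_weights("checkpoint.weights.h5")

# Later, possibly in another process: rebuild, load, keep training.
model2 = build_model()
model2.load_weights("checkpoint.weights.h5")
h2 = model2.fit(x, y, epochs=5, verbose=0)  # continues from the loaded weights
```

Calling .fit again on the same model object works the same way; loading weights is only needed when resuming in a new process.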

@Bhee
Author

Bhee commented Mar 3, 2016

When I call model.fit() after loading the model and weights, it shows epoch = 1. If I stop training at epoch 100, I want to resume training with epoch = 101.

@ymcui

ymcui commented Mar 3, 2016

I think it doesn't matter whether it SHOWs the training at epoch = 1 or epoch = 101.
As far as I know, the model itself doesn't save the epoch information into the model file.
If you have loaded the correct previous model (the model should have been saved with the epoch number), there should be no problem continuing your training.

@Bhee
Author

Bhee commented Mar 3, 2016

Thank you!

@tboquet
Contributor

tboquet commented Mar 3, 2016

@ymcui is right; the epoch label is only a name for the iterations in the current fit call. Sorry, when I said history I meant the history dictionary the fit method returns. I think #1868 is basically the same question. If you think it resolves your problem, please close the issue!

@Bhee Bhee closed this as completed Mar 4, 2016
@nithishdivakar
Contributor

But there is a problem with this approach: what about hyperparameters that change with the epoch, say a learning rate with decay? Just restarting with the fit method doesn't take that into account.

@lolongcovas

Yeah, this happens to me when I resume the training process by loading weights.

I was training ResNet-18 on the ImageNet dataset; the model saved its weights after the 1st epoch, having started with lr = 0.1. I stopped it, then tried the resume functionality, and it turned out the model restarted with the same lr = 0.1, and the loss increased at each iteration. To set the lr to its state at the 1st epoch, I changed it according to the SGD lr update rule, lr = lr * (1. / (1 + decay * iterations)); however, it didn't work: the loss still increased, though more slowly than with lr = 0.1. I should probably lower the lr further, but I don't understand why the loss still increases even when the lr is set accordingly.
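For reference, the legacy SGD decay rule quoted above can be checked numerically (a plain-Python sketch; the decay value and iteration count are assumed, not taken from the original setup):

```python
# Keras' legacy SGD decay rule, as quoted above:
#   lr = lr0 * (1. / (1. + decay * iterations))
def decayed_lr(lr0, decay, iterations):
    return lr0 * (1.0 / (1.0 + decay * iterations))

lr0, decay = 0.1, 1e-4
# With an assumed 10000 iterations per epoch, the effective learning
# rate after one epoch is already well below the starting value:
print(decayed_lr(lr0, decay, 0))      # -> 0.1
print(decayed_lr(lr0, decay, 10000))  # -> 0.05
```

This is why restarting with the initial lr = 0.1 can destabilize a run that had already decayed its learning rate.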

@ywenlu

ywenlu commented Jan 24, 2017

Try the initial_epoch argument of the .fit method.
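For example (a minimal sketch with toy data; the epoch numbers are arbitrary):

```python
import numpy as np
from tensorflow import keras

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")

model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# Suppose a previous run stopped after epoch 100. With initial_epoch=100
# and epochs=105, fit() runs 5 more epochs, labelled 101/105 .. 105/105.
h = model.fit(x, y, epochs=105, initial_epoch=100, verbose=0)
```

Note that this only fixes the epoch labels (and epoch-indexed schedules); it does not restore any optimizer state by itself.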

@smhoang

smhoang commented Feb 16, 2017

using initial_epoch didn't work in this case

@lewfish

lewfish commented Mar 9, 2017

But there is a problem with this approach: what about hyperparameters that change with the epoch, say a learning rate with decay? Just restarting with the fit method doesn't take that into account.

Setting initial_epoch in fit_generator is not enough to solve this problem when using the ReduceLROnPlateau callback, because the callback has no way to know what the learning rate should be without the history of the epochs before training was resumed. Perhaps the callback constructor should take an optional history parameter that could be used to correctly initialize the learning rate and the wait variable (see https://github.com/fchollet/keras/blob/ab3b93e8dd103f1d9729305825791a084c7c8493/keras/callbacks.py#L744).

@MartinThoma
Contributor

Besides using the initial_epoch argument of fit, I re-wrote the history callback:

class History(Callback):
    """
    Callback that records events into a `History` object.

    This callback is automatically applied to
    every Keras model. The `History` object
    gets returned by the `fit` method of models.
    """

    def on_train_begin(self, logs=None):
        if not hasattr(self, 'epoch'):
            self.epoch = []
            self.history = {}

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.epoch.append(epoch)
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)

This allows using the same callback across fit calls; it just appends to the end. @fchollet, should I post a pull request for this? It seems to me that this is more useful than the current behaviour of overwriting the logs in on_train_begin.
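Under that proposal, one callback instance could be reused across fit calls and would keep appending. A minimal sketch using a stand-alone subclass (named CumulativeHistory here, rather than a patched keras.callbacks.History):

```python
import numpy as np
from tensorflow import keras

class CumulativeHistory(keras.callbacks.Callback):
    """Like keras.callbacks.History, but appends across fit() calls."""

    def on_train_begin(self, logs=None):
        # Only initialize on the very first training run.
        if not hasattr(self, "epoch"):
            self.epoch = []
            self.history = {}

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.epoch.append(epoch)
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

hist = CumulativeHistory()
model.fit(x, y, epochs=3, callbacks=[hist], verbose=0)
model.fit(x, y, epochs=2, callbacks=[hist], verbose=0)  # appends, not overwrites
# hist.history["loss"] now holds all 5 loss values
```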

@i3v

i3v commented May 20, 2017

@MartinThoma,
One would probably need to replace this line with

    if initial_epoch == 0:
        self.history = cbks.History()

to make your suggestion work, right? I've tried to make this work, and I eventually got the feeling that too many different things would have to change; see #6697. What do you think?

@syedfaizalex

If you want to resume from epoch 101, simply use initial_epoch=101 in model.fit().

initial_epoch: Epoch at which to start training (useful for resuming a previous training run).


@bupedroni

Related question: what happens to the gradient computations that rely on a history of the gradients (when momentum is present, as in Adam and most gradient-descent algorithms)? Does the checkpoint store these as well? Thanks!

@valekar

valekar commented Jun 25, 2018

@bupedroni: As far as I know, every time I loaded the existing model, all the hyperparameters were reset to their default values.

The best way to resume is to write a custom callback that stores all the hyperparameters, and then start training as mentioned by @MartinThoma.

@morenoh149
Contributor

morenoh149 commented Jul 27, 2018

@MartinThoma I'd like a pull request implementing that. Basically, I'm training a model, and if I notice that the metrics haven't converged, I'd like to train for another x epochs, and also be able to plot the overall history additively.

For now I'm just accumulating histories like this: https://www.kaggle.com/morenoh149/keras-continue-training

@imranparuk

Still have this issue... any update on it?

@thebeancounter

anything new here?

@imranparuk

just port your code to pytorch 😆

@nithishdivakar
Contributor

Ya. That actually worked for me. 2 years and counting.

@0xrushi

0xrushi commented Apr 7, 2019

I think it doesn't matter whether it SHOWs the training at epoch = 1 or epoch = 101.
As far as I know, the model itself doesn't save the epoch information into the model file.
If you have loaded the correct previous model (the model should have been saved with the epoch number), there should be no problem continuing your training.

So does that mean that calling

model.fit(epochs=20)

and

model.fit(epochs=5)
model.fit(epochs=5)
model.fit(epochs=5)
model.fit(epochs=5)

are the same?

@srcolinas

I think it doesn't matter whether it SHOWs the training at epoch = 1 or epoch = 101.
As far as I know, the model itself doesn't save the epoch information into the model file.
If you have loaded the correct previous model (the model should have been saved with the epoch number), there should be no problem continuing your training.

So does that mean that calling

model.fit(epochs=20)

and

model.fit(epochs=5)
model.fit(epochs=5)
model.fit(epochs=5)
model.fit(epochs=5)

are the same?

Yes, they are equivalent. At least, that is what I found using the TensorFlow Keras API in TensorFlow 2.0.

@MunishaTripping

How can I get the epoch at which the model was saved by ModelCheckpoint?

@hollowgalaxy

Save the epoch number in the model's file name, then fetch that number with a regex when resuming training.
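A minimal sketch of that idea (the file-name pattern weights.{epoch}.hdf5 is an assumption; adjust the regex to whatever pattern you pass to ModelCheckpoint):

```python
import re

# Assumed checkpoint naming scheme, e.g. from
#   ModelCheckpoint(filepath="weights.{epoch:03d}.hdf5")
def epoch_from_filename(path):
    """Recover the epoch number embedded in a checkpoint file name."""
    m = re.search(r"weights\.(\d+)\.hdf5$", path)
    if m is None:
        raise ValueError(f"no epoch number found in {path!r}")
    return int(m.group(1))

print(epoch_from_filename("weights.042.hdf5"))  # -> 42
# When resuming, pass the recovered number to fit:
#   model.fit(..., initial_epoch=epoch_from_filename(latest_checkpoint))
```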

@MichelHalmes

MichelHalmes commented Jan 9, 2020

I managed to do this with an optimizer whose learning rate depends on the number of iterations, e.g. Adam.

Here is the pseudo-code:

...
if os.path.isfile(checkpoint_path + ".index"):
    # This loads `(root).optimizer.iter` from the checkpoint
    model.load_weights(checkpoint_path)

# Recover the iterations from the optimizer and convert them to epochs
initial_epoch = model.optimizer.iterations.numpy() // STEPS_PER_EPOCH
callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                              save_weights_only=True)
model.fit(train_data, epochs=NUM_EPOCHS, initial_epoch=initial_epoch,
          callbacks=[callback])

Hope this helps :-)

@dorukkarinca

dorukkarinca commented Apr 12, 2020

I got tired of this so I ended up writing a Keras wrapper that autosaves and restores the epoch number, training history, and model weights:

pip install keras-buoy
Link to Github project

Let me know what you think. PRs more than welcome.

@morenoh149
Contributor

@dorukkarinca is this handled in TensorFlow 2? That's supposed to supersede standalone Keras.

@dorukkarinca

@morenoh149 not to the best of my knowledge. This wrapper wraps tensorflow.keras anyway.

@KazegamiKuon

@dorukkarinca UwU and Orz. Your wrapper helped me so much. I don't know why I wasted a week retraining from the start.

@RoyiAvital

So does that mean that calling

model.fit(epochs=20)

and

model.fit(epochs=5)
model.fit(epochs=5)
model.fit(epochs=5)
model.fit(epochs=5)

are the same?

While they are the same, is there a simple way to append the history of each call, so that both cases end up with the same overall history as well?
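One way to get a single combined history is to merge the history dicts returned by successive fit calls by hand (a plain-Python sketch; the dicts below stand in for hist.history from real runs):

```python
def merge_histories(*histories):
    """Concatenate the per-metric lists from several history dicts."""
    merged = {}
    for h in histories:
        for k, v in h.items():
            merged.setdefault(k, []).extend(v)
    return merged

# Stand-ins for hist1.history and hist2.history from two fit() calls:
h1 = {"loss": [0.9, 0.7, 0.5]}
h2 = {"loss": [0.4, 0.3]}
print(merge_histories(h1, h2))  # -> {'loss': [0.9, 0.7, 0.5, 0.4, 0.3]}
```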

@MoshiurRahmanFaisal

MoshiurRahmanFaisal commented Aug 11, 2022

Just use the callback below to resume training from the epoch where you stopped:

callback = tf.keras.callbacks.experimental.BackupAndRestore(backup_dir="temp")
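A minimal sketch of how that callback is wired into fit (toy data; note that in recent TF releases the callback has moved out of the experimental namespace):

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# In newer TF this is tf.keras.callbacks.BackupAndRestore; older releases
# expose it under tf.keras.callbacks.experimental.BackupAndRestore.
backup = tf.keras.callbacks.BackupAndRestore(backup_dir="temp_backup")

# If this script is interrupted and rerun, fit() resumes from the last
# completed epoch recorded under backup_dir.
h = model.fit(x, y, epochs=3, callbacks=[backup], verbose=0)
```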

@SExpert12

Hello,
I am loading the data with the function below. Now, how do I access it in fit to resume model training?

import pandas as pd
from sklearn.model_selection import train_test_split

def load_data(labels_file, test_size):
    """
    Load the labels CSV and split it into training and validation sets.
    Parameters:
        labels_file: The labels CSV file.
        test_size: The size of the testing set.
    """
    labels = pd.read_csv(labels_file)
    X = labels[['center', 'left', 'right']].values
    y = labels['steering'].values
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=test_size, random_state=0)
    return X_train, X_valid, y_train, y_valid

[screenshot: resumemodel_error]

Please help me out.

@MoshiurRahmanFaisal

MoshiurRahmanFaisal commented Dec 17, 2022 via email

@SExpert12

Thanks for the reply. Yes, it is declared here:
[screenshot]
Now, how do I access the local variable's value?

@MoshiurRahmanFaisal

MoshiurRahmanFaisal commented Dec 17, 2022 via email

@SExpert12

Okay, let me try this out. Thanks for the quick reply.
