Calculating a loss function over the whole training set and updating the model #10642
Sadegh-Saberian started this conversation in General
Hello,

I have a loss function that should be calculated over the whole training set rather than over individual batches. I was trying to implement it using `training_epoch_end`, but I realized that this hook seems to be mostly used for logging. My question is: how can I implement this loss so that it is involved in optimization?
Reply:

Not necessarily; you can do anything in that hook. If you want to accumulate the loss for each batch over the complete epoch and log it, you can simply use:

```python
def training_step(self, batch, batch_idx):
    loss = ...
    self.log('train_loss', loss, on_step=False, on_epoch=True)
    return loss
```

Or, if you want to do it manually yourself, you can return the loss from `training_step` and aggregate the returned outputs in `training_epoch_end`.
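A minimal sketch of that manual variant, assuming PyTorch Lightning 1.x (where the `training_epoch_end` hook still exists; it was removed in 2.0) and a toy linear model standing in for the real one:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)  # placeholder model

    def training_step(self, batch, batch_idx):
        x, y = batch  # assumes x: (B, 32), y: (B, 1)
        loss = F.mse_loss(self.layer(x), y)
        # Return a dict so the epoch-end aggregation below is explicit.
        return {'loss': loss}

    def training_epoch_end(self, outputs):
        # `outputs` collects one entry per batch over the whole epoch.
        epoch_loss = torch.stack([o['loss'] for o in outputs]).mean()
        self.log('train_loss_epoch', epoch_loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```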
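Note that the logging above does not make the epoch-level loss part of optimization: by the time `training_epoch_end` runs, the per-batch backward passes are already done. As a hypothetical sketch of one way to actually optimize a whole-dataset loss (not from this thread), you can disable automatic optimization, keep the forward graphs attached across the epoch, and step once at epoch end; this assumes the training set is small enough for all activations to fit in memory:

```python
import torch
import pytorch_lightning as pl

class FullDatasetLossModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)  # placeholder model
        self.automatic_optimization = False  # we step once per epoch
        self._preds, self._targets = [], []

    def training_step(self, batch, batch_idx):
        x, y = batch
        # Keep the graph attached so one backward pass at epoch end
        # reaches every batch's forward computation.
        self._preds.append(self.layer(x))
        self._targets.append(y)

    def training_epoch_end(self, outputs):
        preds = torch.cat(self._preds)
        targets = torch.cat(self._targets)
        # Toy stand-in for a loss that only makes sense over the
        # whole training set (replace with the real one).
        loss = (preds.mean() - targets.mean()).pow(2)
        opt = self.optimizers()
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()
        self._preds.clear()
        self._targets.clear()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```

If the loss does decompose into a mean over samples, a cheaper alternative is gradient accumulation over all batches of the epoch (Lightning exposes this via `Trainer(accumulate_grad_batches=...)`), which avoids holding every forward graph in memory.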