I ran into the same bug as this issue when using `pl.metrics.Accuracy` directly to evaluate my models. The bug has been fixed in the new version, 1.0.2.
However, the new version changes how the loss/accuracy logs are generated. This is the warning message:

> The `{log: dict}` keyword was deprecated in 0.9.1 and will be removed in 1.0.0.
> Please use `self.log(...)` inside the LightningModule instead.
> Log on a step or aggregate epoch metric to the logger and/or progress bar (inside LightningModule): `self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)`

If we decide to update, we need to rewrite every `training_step`, `validation_step`, and `test_step` with the new logging method.
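For illustration, here is a minimal sketch of what the migration looks like. The class and the loss computation are hypothetical stand-ins (no actual `pl.LightningModule` is imported), but the shape of the change is the one the warning describes: stop returning a `{'log': ...}` dict from the step methods and call `self.log(...)` instead.

```python
# Sketch of the logging migration, assuming PyTorch Lightning >= 1.0 semantics.
# LitSketch is a hypothetical stand-in for pl.LightningModule; its log()
# stub only records metrics, whereas the real self.log aggregates them
# and routes them to the logger and/or progress bar.

class LitSketch:
    def __init__(self):
        self.logged = {}

    def log(self, name, value, on_step=False, on_epoch=False, prog_bar=False):
        # Stand-in for LightningModule.log: just record the metric.
        self.logged[name] = value

    def training_step(self, batch, batch_idx):
        loss = sum(batch) / len(batch)  # placeholder "loss" computation
        # Old (deprecated) style was:
        #   return {'loss': loss, 'log': {'train_loss': loss}}
        # New style: log explicitly, then return the loss itself.
        self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
        return loss

model = LitSketch()
loss = model.training_step([1.0, 3.0], batch_idx=0)
```

The same pattern applies to `validation_step` and `test_step`; only the metric names and the `on_step`/`on_epoch` flags typically differ.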
Yeah,
PyTorch Lightning changed the way you log metrics after a certain update to make things less weird. We don't need to use the same PyTorch Lightning version for every single example, though. Each example folder can use whichever version it chooses. So, for the older examples, we can just specify that they use the older PyTorch Lightning version, and for the newer examples we can specify that they use the newer version. That solves the problem, right?
Yeah, it makes sense. However, we would need separate virtual environments to run different examples. We can update all the code at once later, when PyTorch Lightning reaches a very stable version.
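Per-folder pinning could look like the following (a sketch, assuming each example folder ships its own `requirements.txt`; the exact pins are illustrative, taken from the versions mentioned above):

```
# examples/old-example/requirements.txt
pytorch-lightning==0.9.0

# examples/new-example/requirements.txt
pytorch-lightning==1.0.2
```

Each example would then be run in its own virtual environment with its own pin installed, which is the per-example isolation described above.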