I get an error when I fine-tune a model. Here is the minimal code to reproduce:
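Something along these lines (a minimal sketch with made-up toy data; the column names, model and hyperparameters here are illustrative stand-ins):

```python
import numpy as np
import pandas as pd

from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabMlp, WideDeep
from pytorch_widedeep.preprocessing import TabPreprocessor

# Hypothetical toy data standing in for the original dataframe
df = pd.DataFrame(
    {
        "a": np.random.rand(16),
        "b": np.random.rand(16),
        "target": np.random.randint(0, 2, 16),
    }
)

tab_preprocessor = TabPreprocessor(continuous_cols=["a", "b"])
X_tab = tab_preprocessor.fit_transform(df)

model = WideDeep(
    deeptabular=TabMlp(
        column_idx=tab_preprocessor.column_idx,
        continuous_cols=["a", "b"],
    )
)

trainer = Trainer(model, objective="binary")
trainer.fit(
    X_tab=X_tab,
    target=df["target"].values,
    n_epochs=1,
    batch_size=len(df),  # a single batch
    finetune=True,       # runs the finetune routine that raises the error
)
```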
You can see that when fine-tuning the model, I am using a single batch (`batch_size == len(df)`). This code throws an error, which is caused by these lines:
- `pytorch_widedeep/training/_finetune.py`, lines 112 to 120 (commit `9acfcec`)
- `pytorch_widedeep/training/_finetune.py`, lines 320 to 321 (commit `9acfcec`)
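Paraphrasing the computation at those lines (see the permalinks for the exact source; this is not a verbatim copy), the warm-up length `up` is derived from the number of batches and epochs:

```python
steps = 1     # len(loader): one batch, because batch_size == len(df)
n_epochs = 5  # default finetune_epochs

up = round((steps * n_epochs) * 0.1)  # round(0.5) == 0 in Python
print(up)  # 0 -- later passed to CyclicLR as step_size_up
```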
Here, `len(loader) == 1` and `n_epochs == 5` (by default), so `step_size_up == 0`, which is an illegal input for `CyclicLR`. The same error can be revealed by the snippet below.
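For instance (a self-contained sketch, independent of pytorch-widedeep; the model and optimizer are placeholders):

```python
import torch
from torch.optim.lr_scheduler import CyclicLR

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# step_size_up=0 fails: the cycle length becomes zero
# (a ZeroDivisionError in recent PyTorch versions)
CyclicLR(optimizer, base_lr=1e-3, max_lr=1e-2, step_size_up=0)
```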
I think changing the calculation of `up` to `up = max([round((steps * n_epochs) * 0.1), 1])` would help, since it guarantees at least one warm-up step (it yields `1` in the single-batch case above). Currently, I can only add `finetune_epochs=10` to `fit` to work around this issue.