In your demo and tutorials, you always set epoch=100 and use a constant learning rate, and you don't show a comparison between the training and validation losses. I saw early stopping somewhere in your code, but I don't know how to configure it. Do you have a learning-rate scheduling function? Thank you!
Hi, yes, the demo and tutorials follow the settings suggested by the DeepDTA paper. Early stopping is applied automatically to avoid overfitting. Currently, no scheduler is defined, but it should be super straightforward to add one. For example, add after
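A minimal sketch of what adding a scheduler could look like with PyTorch's built-in `StepLR`, attached right after the optimizer is created. The model, optimizer choice, and hyperparameters below are placeholders for illustration, not the library's actual training configuration:

```python
import torch

model = torch.nn.Linear(4, 1)  # stand-in for the real model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate every 20 epochs.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.5)

lrs = []
for epoch in range(100):
    # ... forward pass, loss.backward(), opt.step() go here ...
    opt.zero_grad()
    sched.step()  # advance the schedule once per epoch
    lrs.append(sched.get_last_lr()[0])

print(lrs[0], lrs[-1])  # starts at 1e-3, decayed after 100 epochs
```

Other schedulers (`ReduceLROnPlateau`, `CosineAnnealingLR`, etc.) plug in the same way; only `ReduceLROnPlateau` additionally needs the validation loss passed to `sched.step(val_loss)`.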