I wrote a model for a sequence labeling problem, using only a three-layer CNN.
During training, the loss decreases and the F1 score increases.
But on the test set, after about epoch 10, the loss and F1 stop changing.
Is this overfitting?
How can I solve it?
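For reference, here is a minimal sketch of the kind of model described above (the issue does not name a framework or any dimensions, so PyTorch and all sizes here are assumptions):

```python
import torch
import torch.nn as nn

class ThreeLayerCNNTagger(nn.Module):
    """Minimal three-layer CNN for per-token sequence labeling.

    All dimensions are hypothetical; the original issue does not specify them.
    """
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_tags=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Three 1-D conv layers; kernel_size=3 with padding=1 preserves sequence length.
        self.convs = nn.Sequential(
            nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.out = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        x = self.convs(x.transpose(1, 2))   # Conv1d expects (batch, channels, seq_len)
        return self.out(x.transpose(1, 2))  # (batch, seq_len, num_tags)

model = ThreeLayerCNNTagger()
logits = model(torch.randint(0, 1000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 10])
```

A per-token cross-entropy loss over the `num_tags` dimension would then give the loss curve being logged below.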
Code example
train:
train---epoch : 51 , global step : 24356
loss : 0.016644377261400223
accuracy : 0.999849
precision : 0.998770
recall : 0.998679
f1 : 0.998725
------------------------------------
train---epoch : 51 , global step : 24357
loss : 0.043941885232925415
accuracy : 0.999844
precision : 0.998727
recall : 0.998636
f1 : 0.998682
------------------------------------
train---epoch : 51 , global step : 24358
loss : 0.0024001500569283962
accuracy : 0.999844
precision : 0.998729
recall : 0.998638
f1 : 0.998684
You can see that the model runs well on the training data at around epoch 50.
However, at around epoch 50, the model's performance on the test data is not desirable.
Eval information (a txt file saves all run information on the test data, sorted by F1 value; these are the top twenty):
You can see that the F1 barely changes from epoch 4 to epoch 50.
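The pattern above (training metrics still improving while test F1 has been flat since epoch 4) is the classic overfitting signature. One common mitigation is to stop training once the validation F1 stops improving; a framework-agnostic sketch (the `patience` value is an assumption):

```python
class EarlyStopping:
    """Stop training when validation F1 has not improved for `patience` epochs.

    Generic sketch; the issue does not name a framework, so this tracks
    only the metric value itself.
    """
    def __init__(self, patience=3):
        self.patience = patience
        self.best_f1 = float("-inf")
        self.epochs_without_improvement = 0

    def step(self, val_f1):
        """Record one epoch's validation F1; return True when training should stop."""
        if val_f1 > self.best_f1:
            self.best_f1 = val_f1
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience

stopper = EarlyStopping(patience=3)
# Hypothetical validation F1 history that plateaus, like the issue describes.
history = [0.60, 0.72, 0.75, 0.75, 0.74, 0.75, 0.75]
for epoch, f1 in enumerate(history):
    if stopper.step(f1):
        print(f"stopping at epoch {epoch}")  # stops at epoch 5
        break
```

Dropout between the conv layers and weight decay on the optimizer are other standard remedies when the train/test gap is this large.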