When testing, the loss does not change!! #7675

Closed
qlwang25 opened this issue May 18, 2018 · 1 comment
Issue description

I wrote a model for a sequence labeling problem, using only three CNN layers.
During training, the loss decreases and the F1 increases.
But on the test data, after about 10 epochs, the loss and F1 stop changing.
Is it overfitting?
How can I solve it?
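For reference, this is roughly the kind of architecture I mean (a minimal sketch with made-up sizes and names, not my actual code):

import torch
import torch.nn as nn

class CNNTagger(nn.Module):
    # toy 3-layer CNN sequence tagger; vocab/embedding/tag sizes are placeholders
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=200, num_tags=10):
        super(CNNTagger, self).__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.out = nn.Linear(hidden, num_tags)

    def forward(self, tokens):
        # tokens: (batch, seq_len) -> per-position tag logits: (batch, seq_len, num_tags)
        x = self.embed(tokens).transpose(1, 2)   # (batch, emb_dim, seq_len)
        x = self.convs(x).transpose(1, 2)        # (batch, seq_len, hidden)
        return self.out(x)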

Code example

train:

train---epoch : 51 ,  global step : 24356
loss :  0.016644377261400223
accuracy : 0.999849
precision : 0.998770
recall : 0.998679
f1 : 0.998725
------------------------------------
train---epoch : 51 ,  global step : 24357
loss :  0.043941885232925415
accuracy : 0.999844
precision : 0.998727
recall : 0.998636
f1 : 0.998682
------------------------------------
train---epoch : 51 ,  global step : 24358
loss :  0.0024001500569283962
accuracy : 0.999844
precision : 0.998729
recall : 0.998638
f1 : 0.998684

You can see that the model runs well on the training data at around epoch 50.
However, at around epoch 50 the model does not perform well on the test data.
Eval information (a txt file stores all runs on the test data, sorted by F1 value; these are the top twenty rows):
You can see that the F1 barely changes from epoch 4 to epoch 50.

epoch   loss    precision       recall  f1
4       11.766307       0.198263        0.254603        0.222928
2       10.437247       0.241509        0.203966        0.221156
3       10.858424       0.199627        0.246282        0.220514
1       9.906065        0.195629        0.225035        0.209304
6       16.741554       0.167704        0.205205        0.184569
12      18.872616       0.189472        0.166962        0.177506
16      19.437512       0.179588        0.169795        0.174554
5       14.804908       0.148019        0.211048        0.174002
13      20.942182       0.203333        0.151204        0.173436
7       16.623977       0.152858        0.195999        0.171761
9       17.268815       0.159771        0.177762        0.168287
10      19.142138       0.172041        0.157755        0.164589
8       17.819491       0.155651        0.166785        0.161026
15      21.475587       0.184768        0.142174        0.160696
11      19.324749       0.170243        0.150319        0.159661
19      22.313276       0.197936        0.132436        0.158693
28      25.159835       0.209786        0.126771        0.158040
20      23.833431       0.215899        0.123584        0.157190
38      27.536659       0.200055        0.128187        0.156253
0       10.829172       0.170155        0.143945        0.155956
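If this really is overfitting, the things I am thinking of trying are dropout between the conv layers, L2 weight decay, and early stopping on a dev-set F1. A rough sketch of the training-loop change (train_epoch and eval_dev are placeholders for my real functions, and the hyper-parameter values are guesses):

import torch
import torch.optim as optim

model = CNNTagger()  # the sketch above; dropout could be added inside it with nn.Dropout
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # L2 penalty

best_f1, bad_epochs, patience = 0.0, 0, 5
for epoch in range(100):
    train_epoch(model, optimizer)   # placeholder: one pass over the training data
    f1 = eval_dev(model)            # placeholder: F1 on a held-out dev set
    if f1 > best_f1:
        best_f1, bad_epochs = f1, 0
        torch.save(model.state_dict(), 'best_model.pt')  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop once dev F1 stops improving
            break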

Please try to provide a minimal example to repro the bug.
Error messages and stack traces are also helpful.

  • PyTorch or Caffe2: PyTorch 0.4
  • OS: Ubuntu 16