I think there are two approaches to improve the accuracy. Are these methods feasible?
The first method:
Split a validation set from the training dataset, then adjust the learning rate adaptively based on the validation accuracy.
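A minimal sketch of what I mean by the first method (the class name, factor, and patience values here are illustrative, not from the repo — the idea is similar to a "reduce on plateau" schedule):

```python
class ReduceOnPlateau:
    """Cut the learning rate when validation accuracy stops improving.
    All names and parameter values here are illustrative."""

    def __init__(self, lr=0.1, factor=0.1, patience=2):
        self.lr = lr
        self.factor = factor      # multiply lr by this on a plateau
        self.patience = patience  # epochs to wait without improvement
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_acc):
        # Call once per epoch with the validation accuracy.
        if val_acc > self.best:
            self.best = val_acc
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

sched = ReduceOnPlateau(lr=0.1, factor=0.1, patience=2)
for acc in [60.0, 61.0, 61.0, 61.0, 62.0]:
    lr = sched.step(acc)
print(f"{lr:.3f}")  # 0.010 -- dropped once after 2 epochs with no improvement
```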
The second method:
During training, once the test top-1 error drops below a fixed threshold, set the learning rate for all subsequent epochs to zero.
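In code, the second method amounts to something like this (the function name and the 3.7 threshold are my own illustrative choices, using the numbers from the log below):

```python
def next_lr(base_lr, best_test_top1, threshold=3.7):
    """Second method: once the best test top-1 error so far has dropped
    below `threshold`, freeze training by returning a zero learning rate.
    Names and the threshold value are illustrative."""
    return 0.0 if best_test_top1 < threshold else base_lr

print(next_lr(0.02, best_test_top1=3.82))  # 0.02 -- still above threshold
print(next_lr(0.02, best_test_top1=3.62))  # 0.0  -- threshold reached, freeze
```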
For example, running the code with the parameter nEpochs = 400, the log file looks like this:
epoch    test top-1    learning rate
370      3.62          0.02
...      ...           ...
400      3.82          0.00
In the log file, the best test top-1 error was 3.62, but it was obtained at epoch 370. If the learning rate between epochs 371 and 400 were set to zero, shouldn't the test top-1 stay at 3.62 for all of those epochs? I experimented with this method and found that the test top-1 after epoch 371 still changed slightly.
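One possible explanation for the drift (my own assumption, not confirmed by the maintainers): even with a zero learning rate, batch-normalization *running statistics* are still updated in training mode, so the evaluation-time output can change even though no optimizer step moves the weights. A self-contained NumPy sketch of that effect:

```python
import numpy as np

class BatchNorm1d:
    """Minimal batch-norm layer. gamma/beta are the trainable weights;
    running_mean/running_var are statistics, updated in train mode
    regardless of the learning rate."""

    def __init__(self, dim, momentum=0.1):
        self.gamma = np.ones(dim)        # frozen when lr == 0
        self.beta = np.zeros(dim)        # frozen when lr == 0
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)
        self.momentum = momentum

    def forward_train(self, x):
        # Running statistics move toward the batch statistics even if
        # the optimizer takes no step (lr = 0).
        mu, var = x.mean(axis=0), x.var(axis=0)
        m = self.momentum
        self.running_mean = (1 - m) * self.running_mean + m * mu
        self.running_var = (1 - m) * self.running_var + m * var
        return self.gamma * (x - mu) / np.sqrt(var + 1e-5) + self.beta

    def forward_eval(self, x):
        return (self.gamma * (x - self.running_mean)
                / np.sqrt(self.running_var + 1e-5) + self.beta)

rng = np.random.default_rng(0)
bn = BatchNorm1d(4)
x_test = rng.normal(size=(8, 4))

out_before = bn.forward_eval(x_test)
bn.forward_train(rng.normal(loc=2.0, size=(32, 4)))  # one "epoch" with lr = 0
out_after = bn.forward_eval(x_test)

# gamma/beta never changed, yet the test-time output did:
print(np.allclose(out_before, out_after))  # False
```

If this is the cause, the test top-1 after epoch 371 would keep fluctuating slightly even with a zero learning rate, which matches what I observed.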
Could you give me some suggestions about the two methods above? Also, have you compared adaptive learning-rate methods such as RMSprop and Adadelta with SGD?
Thank you very much!