problem in training #885
Is it accuracy or loss? Loss goes down. (Junli Gu, replying to achao2013, Wed, Dec 9, 2015)
The full name is "Train-accuracy"; I don't know how to display the loss yet.
Please be a bit more specific, e.g. what configuration you are using; that context would help others. Usually accuracy goes down when you have bad initialization or too large a learning rate.
Closing due to inactive status; please feel free to reopen.
@tqchen I use the demo from cifar-100.ipynb for a classification task with 146 classes. I don't modify any configuration except the input data (batch size = 32 due to a memory limitation). The accuracy keeps falling from 90% to 54% and then oscillates weakly around 55%. Moreover, when I set up another net (the 34-layer ResNet from MSRA), the same problem happens: the data is ilsvrc2012 and the accuracy decreases from 19% to 1% and keeps decreasing up to now. I have tried many configurations and the result is the same. The current params are as follows:
If you use a smaller batch size, you likely need to re-tune your parameters with a smaller learning rate.
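One common heuristic for the re-tuning mentioned above is to scale the learning rate linearly with the batch size. The sketch below is only an illustration of that rule; the base values (`base_lr`, `base_batch`) are hypothetical and not taken from this thread.

```python
# Linear learning-rate scaling when changing batch size (hypothetical values).
base_lr = 0.1      # learning rate tuned for the original batch size
base_batch = 128   # batch size the demo was tuned with (assumed)
new_batch = 32     # the smaller batch size used due to memory limits

# Scale the learning rate proportionally to the batch-size change.
scaled_lr = base_lr * new_batch / base_batch
print(scaled_lr)  # 0.025
```

This is a starting point, not a guarantee; a small grid search around the scaled value is still advisable.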
@tqchen I tried several learning rates. The speed of the decrease slows slightly and the train accuracy shifts up, but the general trend is still downward. I'm a new user and I haven't found the code that calculates the train accuracy, but I conjecture that the train accuracy includes more and more samples as the batch number increases. I don't know whether this behavior is correct; I have encountered this in Caffe when the data labels were wrong (they are correct here).
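The conjecture above describes a running (cumulative) metric: the reported accuracy averages over all samples seen so far in the epoch, so early batches keep weighing on later readings. A minimal sketch of such a metric, with hypothetical names not taken from the MXNet codebase:

```python
# Minimal sketch of a running accuracy metric, as conjectured above:
# counts accumulate across batches within an epoch, so the reported value
# is the average over all samples seen so far, not just the latest batch.

class RunningAccuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, labels):
        # Accumulate per-batch correct predictions and sample counts.
        self.correct += sum(p == l for p, l in zip(preds, labels))
        self.total += len(labels)

    def value(self):
        return self.correct / self.total if self.total else 0.0

metric = RunningAccuracy()
metric.update([1, 0, 1], [1, 1, 1])  # batch 1: 2/3 correct -> 0.667
metric.update([0, 0, 0], [1, 1, 1])  # batch 2: 0/3 correct
print(metric.value())                # cumulative over both batches: 2/6
```

With a metric like this, an unusually good first batch can make the displayed accuracy "fall" for many subsequent batches even when per-batch performance is stable, which matches the symptom described in the issue title.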
When I train a network, the accuracy keeps falling from the beginning of each epoch. Why?