
Two approaches to improve accuracy #8

Closed
xuyifeng-nwpu opened this issue Aug 27, 2017 · 1 comment

Comments

xuyifeng-nwpu commented Aug 27, 2017

I think there are two approaches that could improve accuracy. Are these methods feasible?

The first method:
Split a validation set off from the training set, then adjust the learning rate adaptively based on validation accuracy.
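A minimal sketch of this first method, written in PyTorch for illustration (this repository is Torch/Lua, so this is not the repo's code; `build_model`, `train_one_epoch`, `evaluate`, and `train_set` are hypothetical names):

```python
import torch
from torch.utils.data import DataLoader, random_split

model = build_model()                                  # hypothetical model constructor
val_size = len(train_set) // 10                        # hold out 10% as validation data
train_subset, val_subset = random_split(train_set, [len(train_set) - val_size, val_size])
train_loader = DataLoader(train_subset, batch_size=128, shuffle=True)
val_loader = DataLoader(val_subset, batch_size=128)

optimizer = torch.optim.SGD(model.parameters(), lr=0.2, momentum=0.9, weight_decay=1e-4)
# Halve the learning rate whenever validation accuracy stops improving for 5 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='max', factor=0.5, patience=5)

for epoch in range(400):
    train_one_epoch(model, train_loader, optimizer)    # hypothetical training step
    val_acc = evaluate(model, val_loader)              # hypothetical: returns accuracy
    scheduler.step(val_acc)                            # adapt lr to validation accuracy
```

The `factor` and `patience` values above are arbitrary choices for illustration, not tuned settings.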

The second method:
During training, once the test top-1 error falls below a fixed threshold, set the learning rate for all subsequent epochs to zero.

For example, running the code with nEpochs = 400 produces a log like the following:

epoch    test top-1    learning rate
370      3.62          0.02
...      ...           ...
400      3.82          0.00

In the log, the best test top-1 error is 3.62, but that result occurs at epoch 370.
If the learning rate between epochs 371 and 400 were set to zero, shouldn't the test top-1 for every epoch from 371 to 400 stay at 3.62?
I experimented with this method and found that the test top-1 after epoch 371 still fluctuates slightly.
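A minimal sketch of this second method, again in PyTorch with the same hypothetical helpers. One likely cause of the residual fluctuation: batch-normalization layers keep updating their running mean and variance while the network is in training mode, even when the learning rate is zero, so the weights can be frozen while test-time behaviour still drifts slightly:

```python
threshold = 3.7        # fixed top-1 error threshold; an arbitrary illustrative value
lr_frozen = False

for epoch in range(400):
    train_one_epoch(model, train_loader, optimizer)    # hypothetical training step
    top1_err = evaluate(model, test_loader)            # hypothetical: returns top-1 error
    if not lr_frozen and top1_err < threshold:
        for group in optimizer.param_groups:
            group['lr'] = 0.0                          # no further weight updates
        lr_frozen = True
        # Note: BatchNorm running statistics are still updated in train() mode,
        # which can change the test result slightly even with lr = 0.
```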

Can you give me some suggestions about the two methods above?
Have you compared adaptive learning-rate methods such as RMSprop or Adadelta against SGD?
Thank you very much!

@xgastaldi (Owner)

Hi, this is not an issue so let's take this offline. Please email me at xgastaldi.mba2011@london.edu.
