
loss didn't get down #13

Open
chutongz opened this issue Oct 9, 2018 · 2 comments

chutongz commented Oct 9, 2018

Hi, I am using your code with both VGG and your original AlexNet networks.
I prepared my data exactly following your instructions, and I also tried the np_utils.to_categorical() function instead, but the loss stays around 8 even after 100+ epochs (and around 16 with the original code).
Do you have any idea what the problem might be?
Any help is appreciated, thanks a lot!
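[For context, not part of the original comment: a softmax classifier that predicts uniformly at random over C classes has a cross-entropy loss of ln(C), so a loss stuck around 8 would be at or below chance level if this is the usual 1000-class ImageNet setup (an assumption here, not stated in the thread). A minimal check:]

```python
import math

# Cross-entropy of a uniform (random-guess) prediction over C classes is ln(C).
# Assuming the 1000 ImageNet classes (an assumption, not confirmed in the thread):
C = 1000
chance_loss = math.log(C)
print(round(chance_loss, 2))  # 6.91 -- a loss of ~8 is no better than guessing
```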

matteo-dunnhofer (Owner) commented

Hi @chutongz,
have you already tried lowering the learning rate, e.g. to 1e-4?
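[To illustrate why this suggestion can matter (a toy sketch, not the repository's training code): on a steep loss surface, a step size of 1e-2 can overshoot the minimum on every update so the loss never decreases, while 1e-4 converges. Plain-Python gradient descent on f(x) = 1000·x², where the function and step counts are illustrative choices:]

```python
def descend(lr, steps=50, x=1.0):
    """Gradient descent on f(x) = 1000 * x**2 (gradient: 2000 * x)."""
    for _ in range(steps):
        x = x - lr * 2000 * x
    return 1000 * x * x  # final loss value

# lr = 1e-2 gives an update factor of 1 - 20 = -19, so the loss explodes;
# lr = 1e-4 gives an update factor of 0.8, so the loss steadily shrinks.
print(descend(1e-4) < descend(1e-2))  # True
```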

briansune commented

@dontfollowmeimcrazy I have the same situation with your posted code. With the learning rate set to 1e-2 the loss drops as expected at first, but after one epoch it saturates and no longer decreases. May I ask what the final settings were for a workable training run?
