there is a bit of glitch about batch_size #2

Closed
jaegerstar opened this issue Nov 17, 2017 · 4 comments

@jaegerstar
When I set batch_size = 128, performance degrades significantly: the accuracy is only about 15%. I wonder if there is any problem with the network implementation.

@laubonghaudoi
Owner

I just replicated your problem. It is weird that the batch size affects performance so significantly; I suspect it is due to #1, yet I have no idea why exactly this happens.

@jaegerstar
Author

Emm… I see. I hadn't noticed the problem mentioned in #1. Looking forward to your fix.

@laubonghaudoi
Owner

laubonghaudoi commented Nov 22, 2017

I just pushed an update to the dev branch; you can clone it and see what happens. I have run a few experiments but the issue still exists, and it seems unrelated to the batch size. Let me summarize the problem (the experiments below all use batch_size=64):

  1. If I set lr=1e-3 as stated in the paper, the reconstruction loss drops quickly and eventually stays below 30. The reconstructed pictures look good, but the margin loss does not decline at all and stays fixed at 3.644, so the classifier remains untrained and the accuracy is about 10%.

  2. If I set lr=1e-5, the margin loss decreases steadily to below 0.01 after a few epochs, and the classification accuracy usually goes above 95%. But the reconstruction loss only drops to about 50 at best, so the reconstructed pictures look like this:
    [image: sample reconstructions]

  3. But in some special cases, even if I set lr=1e-3, I can still get both the margin loss below 0.01 and the reconstruction loss below 30, like the sample run in my repo. But that requires a bit of luck, and you have to run the experiment many times to encounter this situation (I was very lucky to get it in the sample run).

So it seems that the loss surface has more than two local minima, and it is tricky to optimize to the global minimum. I have no idea why this happens; maybe I need to recheck every line of code to see whether anything is implemented incorrectly.
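
For context, the two objectives being juggled here are the margin loss and the down-weighted reconstruction loss from the CapsNet paper. The sketch below shows how they are typically combined; the function and tensor names are illustrative, and the 0.0005 reconstruction weight comes from the paper, so the actual code in this repo may differ:

```python
import torch.nn.functional as F

def margin_loss(v_norm, target_one_hot, m_plus=0.9, m_minus=0.1, lam=0.5):
    # v_norm: (batch, 10) lengths of the digit capsules
    # target_one_hot: (batch, 10) one-hot labels
    present = target_one_hot * F.relu(m_plus - v_norm) ** 2
    absent = lam * (1.0 - target_one_hot) * F.relu(v_norm - m_minus) ** 2
    return (present + absent).sum(dim=1).mean()

def total_loss(v_norm, target_one_hot, reconstruction, images, recon_weight=0.0005):
    # reconstruction: (batch, 784); images: (batch, 1, 28, 28) MNIST inputs
    recon_loss = ((reconstruction - images.view(images.size(0), -1)) ** 2).sum(dim=1).mean()
    return margin_loss(v_norm, target_one_hot) + recon_weight * recon_loss
```

With the reconstruction term weighted this lightly, the optimizer can settle in a region where only one of the two terms is low, which is consistent with the behaviour in points 1 and 2 above.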

@laubonghaudoi
Owner

laubonghaudoi commented Dec 7, 2018

I just found that the problem may be caused by the wrong implementation of Decoder, as I forgot to add ReLU nonlinearity in the network. I have fixed it and it seems everything normal now. Check out if your problem still exists, and feel free to reopen this issue.
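
For anyone checking their own copy, the decoder described in the paper is a three-layer MLP with ReLU after the first two fully connected layers and a sigmoid output. A sketch of that shape (layer sizes are the paper's; the module in this repo may be organized differently):

```python
import torch.nn as nn

# Without the ReLUs, the three Linear layers collapse into a single affine map
# before the sigmoid, which is the kind of bug described above.
decoder = nn.Sequential(
    nn.Linear(16 * 10, 512),   # flattened digit-capsule outputs in
    nn.ReLU(inplace=True),
    nn.Linear(512, 1024),
    nn.ReLU(inplace=True),
    nn.Linear(1024, 784),      # 28x28 MNIST reconstruction out
    nn.Sigmoid(),
)
```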
