
BN on test set #2

Open
hardmaru opened this issue Aug 29, 2016 · 5 comments

@hardmaru
Contributor

Nice blog post!

If you see any performance errors I might have made, I'd love to know!

One comment: when you evaluate on the validation/test set, you should use the statistics saved during training. Looking at the code, I think you are computing the batch moments during the validation/test runs as well.
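The train/test distinction can be sketched outside TF. This is a minimal NumPy illustration of the idea (the function name, momentum value, and in-place running-average update are mine, not taken from the repo):

```python
import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               training, momentum=0.99, eps=1e-5):
    """Normalize x of shape (batch, features). During training, use the
    batch moments and update the running averages; at test time, use the
    stored population statistics instead of recomputing moments."""
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # Exponential moving average of the population statistics.
        running_mean[:] = momentum * running_mean + (1 - momentum) * mean
        running_var[:] = momentum * running_var + (1 - momentum) * var
    else:
        # Test time: no moments are computed from the incoming batch.
        mean, var = running_mean, running_var
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

In TF this corresponds to gating on an `is_training` flag and keeping the moving averages as non-trainable variables.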

@OlavHN
Owner

OlavHN commented Aug 30, 2016

Totally true... I guess the batch size of 100 gives "good enough" statistics for the problem, so I forgot to add it in.

Will try to update with a version that stores population statistics and properly uses those at test time.

@hardmaru
Contributor Author

TF slim has a recurrent batch norm with population statistics that you can check out.

I like your implementation style more, though, since it is elegant and in pure TF.

You might also want to play around with the random-permutation MNIST task, since it's only an extra line of code :)
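The permutation really is about one line: fix a single random permutation of the 784 pixel positions and apply it to every image, destroying the local 2-D structure. A minimal NumPy sketch (the seed and function name are mine):

```python
import numpy as np

# One fixed permutation, shared by the train and test sets.
rng = np.random.RandomState(0)
perm = rng.permutation(784)

def permute_batch(images):
    """images: (batch, 784) flattened MNIST digits."""
    return images[:, perm]
```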


@OlavHN
Owner

OlavHN commented Sep 3, 2016

I've tried running with population statistics a bit now, with really poor results on sequential MNIST. Same results when using slim.batch_norm.

The model seems to be dependent on the batch normalization.

To test this I tried using local batch statistics, but increasing the batch size from 100 to 1000. That works better than full population statistics, but much worse than batch statistics with a batch size of 100.

The graphs in the paper look very much like mine when using local batch statistics; however, they explicitly mention using population statistics for their final results, so I'm not sure what's going on in my code.

@hardmaru
Contributor Author

hardmaru commented Sep 3, 2016

I've been trying the same thing recently, and share your frustrations. I think I found out what's going on, and it is not pretty. Basically, in my implementation, and I think also in slim, the population statistics are recorded once per layer and assumed to be the same for every timestep of the sequence.

But I think in the paper the statistics are actually recorded separately at each timestep, so for MNIST there would be 784 sets of statistics. The paper shows that all the statistics converge over time for certain tasks (I guess for text, since the distribution must be time-invariant), but I suspect that for MNIST the statistics over time will not converge...
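The bookkeeping this implies can be sketched like so, assuming a fixed sequence length (the class name and momentum value are hypothetical, not from either implementation):

```python
import numpy as np

class PerTimestepStats:
    """Separate running mean/variance for every timestep, instead of one
    set of population statistics shared across all timesteps."""
    def __init__(self, seq_len, features, momentum=0.99):
        self.mean = np.zeros((seq_len, features))
        self.var = np.ones((seq_len, features))
        self.momentum = momentum

    def update(self, t, x):
        # x: (batch, features) activations at timestep t, during training.
        m = self.momentum
        self.mean[t] = m * self.mean[t] + (1 - m) * x.mean(axis=0)
        self.var[t] = m * self.var[t] + (1 - m) * x.var(axis=0)

    def normalize(self, t, x, eps=1e-5):
        # Test-time normalization with the statistics stored for timestep t.
        return (x - self.mean[t]) / np.sqrt(self.var[t] + eps)
```

For sequential MNIST this means 784 independent (mean, var) pairs, one per pixel position in the sequence.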

I also got really good results just using a vanilla LSTM but initializing the hidden-to-hidden weight matrix to the exact identity (not 0.95 times the identity).
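Exact-identity initialization of the recurrent weights is trivial to write down; illustrated here on a plain ReLU RNN step rather than an LSTM (function name and the demo values are mine):

```python
import numpy as np

def identity_recurrent_init(hidden_size, scale=1.0):
    """Hidden-to-hidden weights set to scale * identity; scale=1.0 gives
    the exact identity, as opposed to a damped 0.95 * identity."""
    return scale * np.eye(hidden_size)

# With identity recurrence and no input drive, a ReLU RNN step simply
# carries a non-negative hidden state forward unchanged.
W_hh = identity_recurrent_init(4)
h = np.array([0.5, 1.0, 0.0, 2.0])
h_next = np.maximum(0.0, W_hh @ h)
```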


@liangsun-ponyai

@hardmaru Recently I also got worse results on the test set using the population mean and variance. You said you got good results just using a vanilla LSTM; could you please share your code and explain what's going on?

I also got really good results just using vanilla LSTM but initializing the
hidden to hidden layer to the exact identity (not .95 identity)
