Fix batchnormlayer compatibility to TF12 #42
Conversation
Add compatibility with TF12, needed because of the change in the ones_initializer API.
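For context, a minimal sketch of the API difference this PR addresses (the variable name and shape below are only illustrative):

```python
import tensorflow as tf

# TF 0.11 and earlier: tf.ones_initializer is itself an initializer function,
# so the bare symbol can be passed straight to tf.get_variable:
#   gamma = tf.get_variable("gamma", shape=[64], initializer=tf.ones_initializer)

# TF 0.12: tf.ones_initializer must be called to obtain the initializer object:
gamma = tf.get_variable("gamma", shape=[64], initializer=tf.ones_initializer())
```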
@boscotsang Hi, are you using TL with TF12? I found an interesting thing when comparing TF11 and TF12 with TL 1.3 (screenshots below). Thank you in advance.
[Screenshot: TF11 TL1.3]
[Screenshot: TF12 TL1.3]
[Screenshot: Code]
Yes, I'm using TF 0.12r, and I found that when I use BatchNormLayer and share variables between train and test as in your code, in my ResNet-164 on CIFAR-10 the training cost drops normally while the test cost barely changes. Did you have this issue?
@boscotsang Can you show your code? The test accuracy increases in my case.
Can you run your code again under TensorFlow 12, just to see whether your test accuracy increases?
@wagamamaz The following is my code. The images are read with the TensorFlow input pipeline. The data is the CIFAR-10 binary version, placed in the dataset directory.
[code attached]
@boscotsang To evaluate the performance, you need an inference function that is built twice, e.g.
network = inference(x, is_train=True, reuse=False)
network_test = inference(x, is_train=False, reuse=True)
Do not use the …
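A minimal sketch of this pattern, assuming an inference function that wraps TensorLayer layers in a variable scope (the layer names and shapes are only illustrative, not the actual ResNet-164 code):

```python
import tensorflow as tf
import tensorlayer as tl

def inference(x, is_train, reuse):
    # reuse=True makes the test graph share the training graph's variables.
    with tf.variable_scope("model", reuse=reuse):
        tl.layers.set_name_reuse(reuse)
        net = tl.layers.InputLayer(x, name="input")
        net = tl.layers.Conv2dLayer(net, shape=[3, 3, 3, 16], name="conv1")
        # is_train controls whether BatchNormLayer updates its moving statistics.
        net = tl.layers.BatchNormLayer(net, is_train=is_train, name="bn1")
        # ... remaining layers ...
        return net

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
network = inference(x, is_train=True, reuse=False)       # training graph
network_test = inference(x, is_train=False, reuse=True)  # test graph, shared weights
```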
The default value of the gamma_init argument is ones_initializer, and the API changed in TF12 so that it must now be called as ones_initializer(). Although a user can fix this by explicitly passing gamma_init=tf.ones_initializer(), it is easier for newcomers to avoid the issue entirely by fixing the TensorLayer source.
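For illustration, a sketch of the user-side workaround under TF 0.12, assuming the TL 1.3 BatchNormLayer signature with a gamma_init keyword (the PR text writes "gama_init"; the surrounding layers are only illustrative):

```python
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
net = tl.layers.InputLayer(x, name="input")
net = tl.layers.Conv2dLayer(net, shape=[3, 3, 3, 16], name="conv1")
# Without the source fix in this PR, pass the initializer instance explicitly:
net = tl.layers.BatchNormLayer(net, is_train=True,
                               gamma_init=tf.ones_initializer(),
                               name="bn_workaround")
```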