
SVHN normalization issue #11

Closed
balodhi opened this issue Nov 9, 2017 · 8 comments

Comments

@balodhi

balodhi commented Nov 9, 2017

The normalization process for SVHN images has issues. I have changed it, which yields much better results.

elif normalization_type == 'mean_0':
    pixel_depth = 255.0
    # fill an array of the same shape as `images` with the maximum pixel value
    train_n = np.full(images.shape, pixel_depth, dtype=np.float32)
    # subtract 255, then divide by 2 * 255
    images = ((images - train_n) / 2) / pixel_depth
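
A minimal, self-contained sketch (not from the repository; the sample values are assumed) showing the range this 'mean_0' normalization produces for uint8 pixels in [0, 255]:

import numpy as np

# hypothetical sample pixel values
images = np.array([0, 128, 255], dtype=np.float32)
pixel_depth = 255.0
normalized = ((images - pixel_depth) / 2) / pixel_depth
print(normalized)  # [-0.5, -0.2490..., 0.0] -- all outputs lie in [-0.5, 0]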

@ikhlestov
Owner

Hi!
What do you mean by better results? Does this approach provide faster learning/convergence and higher accuracy, or does it just run faster?

@balodhi
Author

balodhi commented Nov 9, 2017

Better results means faster convergence and higher accuracy, which the previous normalization technique was unable to achieve.

@ikhlestov
Owner

OK.
It would be great if you added one more flag to the main script, added this new normalization, and opened a pull request. Are you OK with this?

@balodhi
Author

balodhi commented Nov 10, 2017

Yes, I was asking for permission to open a pull request. Let me first make a graph comparing the first 6 epochs of both methods.

@balodhi
Author

balodhi commented Nov 10, 2017

[result: graph comparing the first 6 epochs of both normalization methods]

balodhi closed this as completed Nov 10, 2017
@ikhlestov
Owner

Hm, such a huge difference.
OK, thank you for your pull request. I will run it during the next few days, and if the results are similar I will merge it.

@ZhenyF

ZhenyF commented May 26, 2018

It seems the image is subtracted by 255 and then divided by 2*255; why does this work better?
I tried the method where the image is only divided by 255 (as in the paper), and I got a 3.4% error rate. Did you get the same accuracy when training with the original normalization method and only on the train dataset (without extra)?
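
For reference, a minimal sketch (not from the repository; names and sample values are assumed) contrasting the two normalizations discussed in this thread, given pixel values in [0, 255]:

import numpy as np

images = np.array([0, 128, 255], dtype=np.float32)

# paper-style normalization: divide by 255 -> values in [0, 1]
divide_255 = images / 255.0

# proposed 'mean_0' normalization: subtract 255, then divide by 2 * 255
# -> values in [-0.5, 0]
mean_0 = (images - 255.0) / (2 * 255.0)

print(divide_255)  # [0.0, 0.5019..., 1.0]
print(mean_0)      # [-0.5, -0.2490..., 0.0]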

@illarion-rl
Contributor

Hi ZhenyF! Actually, I still have not compared the normalization methods, so I have no answer to your question. You may try both approaches and report the difference.
