
Simplify input scaling on LeNet network #976

Merged
merged 2 commits into from
Aug 22, 2016
Conversation

mpbrigham
Copy link
Contributor

Removed scaling from the data layers and kept a single unified power layer. The previous definition was faster due to the multi-threaded data loader, but potentially less straightforward for new users.

The scaling factor 0.0125 corresponds to 1/(standard deviation of the MNIST dataset), which is ~80 per pixel (pixel values range over [0, 255]). The scaling factor comment has been updated to reflect this.
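The arithmetic behind the factor can be sketched as follows. This is a minimal illustration, assuming the ~80 per-pixel standard deviation quoted above; computing the exact value would require loading the MNIST data.

```python
# Assumed: ~80 is the approximate std dev of raw MNIST pixel values in [0, 255],
# as stated in the PR description above.
mnist_pixel_std = 80.0
scale = 1.0 / mnist_pixel_std
print(scale)  # 0.0125

# Applying the scale to a raw pixel value (e.g. the maximum, 255):
raw_pixel = 255
print(raw_pixel * scale)  # 3.1875
```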

@lukeyeager
Copy link
Member

@gheinrich any issue with this? The reasons for a separate power layer aren't particularly compelling (see #733).

@TimZaman
Copy link
Contributor

TimZaman commented Aug 15, 2016

@mpbrigham maybe also reflect this in the Torch implementation by changing https://github.com/NVIDIA/DIGITS/blob/master/digits/standard-networks/torch/lenet.lua#L36. I recall the learning rate should also be updated to yield identical results, though I'm not sure.

Update scaling factor in Torch implementation to 0.0125.
@mpbrigham
Copy link
Contributor Author

@TimZaman I've synced the scaling factor in the Torch implementation (commit 2c40306). The previous value rescaled the range from [0, 255] to roughly [0, 1] by multiplying by 1/256.
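For comparison, a quick sketch of the old 1/256 factor versus the new unified 0.0125 factor. The 3.2x ratio between them is why the earlier comment hedges about whether the learning rate needs adjusting; this is illustrative arithmetic, not part of the patch.

```python
# Old Torch scaling: map raw [0, 255] pixels roughly onto [0, 1].
old_scale = 1.0 / 256.0
# New unified scaling: 1 / (approximate MNIST pixel std dev).
new_scale = 0.0125

print(old_scale)              # 0.00390625
print(new_scale / old_scale)  # 3.2 -- scaled inputs are now 3.2x larger
```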

@gheinrich
Copy link
Contributor

That looks OK to me, thanks.

@lukeyeager
Copy link
Member

Thanks for the improvement @mpbrigham!

@lukeyeager lukeyeager merged commit e743646 into NVIDIA:master Aug 22, 2016
SlipknotTN pushed a commit to cynnyx/DIGITS that referenced this pull request Mar 30, 2017
Simplify input scaling on LeNet network