Add missing named optimizers #229
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here (e.g. "I signed it!") and we'll verify it.
I signed it!

CLAs look good, thanks!
This is great! Just a few small nits. Review status: 0 of 2 files reviewed at latest revision, all discussions resolved, some commit checks failed.

src/optimizers.ts, line 28 at r1 (raw file):
Please keep this list alphabetized; it helps readers when they're looking for a value. Thanks.

src/optimizers_test.ts, line 16 at r1 (raw file):
These should be alphabetized.

src/optimizers_test.ts, line 18 at r1 (raw file):
Based on this added spacing, I think the style preferences in your IDE are conflicting with the style preferences for this project. Can you check? Thanks.

src/optimizers_test.ts, line 63 at r1 (raw file):
Quoted 26 lines of code…
Please move these above the test for "non-existent optimizer", so that the non-existent one is last.
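To illustrate the alphabetization nit, here is a minimal sketch of what a name-to-factory map kept in alphabetical order could look like. The entries, the default arguments, and the `@tensorflow/tfjs` import are assumptions for illustration, not the project's actual code:

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical optimizer registry, alphabetized by key so readers can
// scan straight to the name they are looking for.
const optimizerMap: {[name: string]: () => tf.Optimizer} = {
  'Adadelta': () => tf.train.adadelta(1.0, 0.95, 1e-7),
  'Adagrad': () => tf.train.adagrad(0.01),
  'Adam': () => tf.train.adam(0.001, 0.9, 0.999, 1e-7),
  'Adamax': () => tf.train.adamax(0.002, 0.9, 0.999, 1e-7, 0.0),
  'RMSProp': () => tf.train.rmsprop(0.001),
  'SGD': () => tf.train.sgd(0.01),
};
```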
Thanks for the notes, I am still learning your stack :) Will update the PR. |
@bileschi - entered a new issue for the formatting; it's probably better to have the settings in GitHub so new contributors can start editing right away without misformatting the code. |
src/optimizers.ts
Outdated

'Adam': () => train.adam(0.001, 0.9, 0.999, K.epsilon()),
'Adamax': () => train.adamax(0.002, 0.9, 0.999, K.epsilon(), 0.0),
'Momentum': () => train.momentum(0.01, 0.0, false),
I think we want to remove the Momentum one. All the other ones are cross-compatible between Python and JS using the JSON serialization format with agreed-upon symbol names. Momentum is not in Keras at present, but it is in TensorFlow and TensorFlow.js core. However, the default arguments don't appear to make sense (why create a momentum optimizer with momentum set to 0?).
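To make the zero-default concern concrete, here is a hedged sketch using tfjs core's optimizer constructors (illustrative only, not code from this PR):

```ts
import * as tf from '@tensorflow/tfjs';

const lr = 0.01;

// The momentum update is: velocity = momentum * velocity + gradient,
// followed by variable -= lr * velocity. With momentum = 0, velocity is
// just the gradient, so the optimizer degenerates to vanilla SGD.
const zeroMomentum = tf.train.momentum(lr, 0.0);

// ...making it behaviorally equivalent to:
const plainSgd = tf.train.sgd(lr);
```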
I took the defaults for Momentum from the SGD optimizer at https://keras.io/optimizers/, where momentum is 0.0.
Should I remove Momentum completely?
Yes, please remove momentum. This way users won't be confused if they serialize a model built in tfjs and it then crashes in Python Keras.
Thanks!
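For context, a hedged sketch of the round trip the reviewer is describing. The model and the save destination are hypothetical, and this assumes the saved training config records the optimizer by its Keras symbol name:

```ts
import * as tf from '@tensorflow/tfjs';

async function saveForKeras() {
  // Hypothetical model, for illustration only.
  const model = tf.sequential(
      {layers: [tf.layers.dense({units: 1, inputShape: [4]})]});

  // 'adamax' maps to an agreed-upon Keras symbol name, so Python Keras
  // can rebuild it on load. A 'momentum' entry would have no Keras
  // counterpart and would fail to deserialize there.
  model.compile({optimizer: 'adamax', loss: 'meanSquaredError'});
  await model.save('downloads://my-model');  // hypothetical destination
}
```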
Thanks! Removed Momentum now.
… tfjs models and loading them into keras
Thanks, @atanasster! Review status: complete! 1 of 1 LGTMs obtained |
Review status: complete! 2 of 1 LGTMs obtained |
@atanasster I've pulled master into your PR. As soon as the tests pass, we can merge. |
Description
Added Adadelta (adadelta), Adamax (adamax), and Momentum (momentum) as named optimizers to the Layers API.
Defaults taken from https://keras.io/optimizers/.
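A hedged usage sketch of the new names (lowercase strings per the description above; the model is hypothetical, and Momentum was dropped during review, so only the two surviving names are shown):

```ts
import * as tf from '@tensorflow/tfjs';

const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});

// After this change, the added optimizers can be passed to compile()
// as plain strings instead of constructed instances.
model.compile({optimizer: 'adadelta', loss: 'meanSquaredError'});
model.compile({optimizer: 'adamax', loss: 'meanSquaredError'});
```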
For repository owners only:
PR for issue #401
This change is: FEATURE