This repository has been archived by the owner on Oct 17, 2021. It is now read-only.

Add missing named optimizers #229

Merged
merged 5 commits into from Jun 11, 2018

Conversation

atanasster
Contributor

@atanasster atanasster commented Jun 7, 2018

Description

Added Adadelta (adadelta), Adamax (adamax), and Momentum (momentum) as named optimizers to the Layers API.
Defaults are taken from https://keras.io/optimizers/.
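The lookup pattern behind `getOptimizer` can be sketched as follows. This is a minimal, self-contained sketch: the `Optimizer` classes here are stand-ins, whereas the actual PR returns @tensorflow/tfjs-core optimizers built via factories such as `train.adadelta()` and `train.adamax()`.

```typescript
// Stand-in classes; the real code uses @tensorflow/tfjs-core's Optimizer types.
class Optimizer {}
class AdadeltaOptimizer extends Optimizer {}
class AdamaxOptimizer extends Optimizer {}

// Map from identifier to a factory, so an optimizer instance is only
// constructed when it is actually requested.
const optimizerMap: {[optimizerName: string]: () => Optimizer} = {
  'Adadelta': () => new AdadeltaOptimizer(),
  'Adamax': () => new AdamaxOptimizer(),
};

function getOptimizer(identifier: string): Optimizer {
  // Accept both 'Adadelta' and 'adadelta' spellings, as the tests
  // added in this PR expect.
  const key = Object.keys(optimizerMap)
      .find(k => k === identifier || k.toLowerCase() === identifier);
  if (key == null) {
    throw new Error(`Unknown optimizer: ${identifier}`);
  }
  return optimizerMap[key]();
}
```

Keeping factories (rather than pre-built instances) in the map also avoids constructing every optimizer at module load time.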

For repository owners only:

FEATURE

PR for issue #401


This change is Reviewable

@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here (e.g. I signed it!) and we'll verify it.


What to do if you already signed the CLA

Individual signers
Corporate signers

@atanasster
Contributor Author

I signed it!

@googlebot

CLAs look good, thanks!

@bileschi
Contributor

bileschi commented Jun 8, 2018

This is great! Just a few small nits.


Review status: 0 of 2 files reviewed at latest revision, all discussions resolved, some commit checks failed.


src/optimizers.ts, line 28 at r1 (raw file):

// change based on (de)serialization requirements.
export function getOptimizer(identifier: string): Optimizer {
  const optimizerMap: { [optimizerName: string]: () => Optimizer } = {

Please keep this list alphabetized; it helps readers when they're looking for a value. Thanks!


src/optimizers_test.ts, line 16 at r1 (raw file):

// tslint:disable:max-line-length
import { AdagradOptimizer, AdamOptimizer, RMSPropOptimizer, SGDOptimizer, AdadeltaOptimizer, AdamaxOptimizer, MomentumOptimizer } from '@tensorflow/tfjs-core';

These should be alphabetized


src/optimizers_test.ts, line 18 at r1 (raw file):

import { AdagradOptimizer, AdamOptimizer, RMSPropOptimizer, SGDOptimizer, AdadeltaOptimizer, AdamaxOptimizer, MomentumOptimizer } from '@tensorflow/tfjs-core';

import { getOptimizer } from './optimizers';

Based on this added spacing, I think the style preferences in your IDE are conflicting with the style preferences for this project. Can you check? Thanks.


src/optimizers_test.ts, line 63 at r1 (raw file):

Quoted 26 lines of code…
 
  it(`can instantiate Adadelta`, () => {
    const optimizer = getOptimizer('Adadelta');
    expect(optimizer instanceof AdadeltaOptimizer).toBe(true);
  });
  it(`can instantiate adadelta`, () => {
    const optimizer = getOptimizer('adadelta');
    expect(optimizer instanceof AdadeltaOptimizer).toBe(true);
  });
 
  it(`can instantiate Adamax`, () => {
    const optimizer = getOptimizer('Adamax');
    expect(optimizer instanceof AdamaxOptimizer).toBe(true);
  });
  it(`can instantiate adamax`, () => {
    const optimizer = getOptimizer('adamax');
    expect(optimizer instanceof AdamaxOptimizer).toBe(true);
  });
  it(`can instantiate Momentum`, () => {
    const optimizer = getOptimizer('Momentum');
    expect(optimizer instanceof MomentumOptimizer).toBe(true);
  });
  it(`can instantiate momentum`, () => {
    const optimizer = getOptimizer('momentum');
    expect(optimizer instanceof MomentumOptimizer).toBe(true);
  });

Please move these above the test for "non-existent optimizer", so that the non-existent one is last.


Comments from Reviewable

@atanasster
Contributor Author

Thanks for the notes, I am still learning your stack :) I will update the PR.

@atanasster
Contributor Author

@bileschi - I filed a new issue for the formatting. It would probably be better to have the style settings checked into the GitHub repository, so new contributors can start editing right away without misformatting their code.
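One common way to do this (a hypothetical sketch, not this project's actual configuration) is to check a formatter config into the repository so that editors pick it up automatically, e.g. a `.clang-format` file at the repo root:

```yaml
# .clang-format (hypothetical example)
Language: JavaScript   # clang-format uses this mode for TypeScript too
BasedOnStyle: Google
ColumnLimit: 80
```

With a file like this in place, IDE clang-format integrations format new contributions against the project's style rather than each contributor's local defaults.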

'Adam': () => train.adam(.001, .9, .999, K.epsilon()),
'Adamax': () => train.adamax(0.002, .9, .999, K.epsilon(), 0.0),
'Momentum': () => train.momentum(0.01, 0.0, false),
Member

I think we want to remove the Momentum one. All the others are cross-compatible between Python and JS via the JSON serialization format, with agreed-upon symbol names. Momentum is not in Keras at present, but it is in TensorFlow and TensorFlow.js core. However, the default arguments don't appear to make sense (why create a momentum optimizer with momentum set to 0?).

Contributor Author

I took the defaults for Momentum from the SGD optimizer at https://keras.io/optimizers/, where momentum is 0.0.

Should I remove Momentum completely?

Contributor

Yes, please remove momentum. That way users won't be confused when a model serialized in tfjs then crashes in Python Keras.
Thanks!

Contributor Author

Thanks! Momentum is removed now.

@caisq
Contributor

caisq commented Jun 11, 2018

:lgtm_strong:

Thanks, @atanasster !


Review status: :shipit: complete! 1 of 1 LGTMs obtained


Comments from Reviewable

@bileschi
Contributor

:lgtm_strong:


Review status: :shipit: complete! 2 of 1 LGTMs obtained


Comments from Reviewable

@caisq
Contributor

caisq commented Jun 11, 2018

@atanasster I've pulled master into your PR. As soon as the tests pass, we can merge this PR.

@caisq caisq merged commit 2da2680 into tensorflow:master Jun 11, 2018
@atanasster atanasster deleted the add_missing_named_optimizers branch June 11, 2018 20:15