This repository was archived by the owner on Aug 15, 2019. It is now read-only.

Conversation

@frqc (Contributor) commented Oct 7, 2017

Uses relu for the element-wise "larger than" trick; this could be done in a more elegant way.
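The relu-based "larger than" trick mentioned above relies on the identity max(a, b) = relu(a − b) + b. A minimal sketch on plain arrays (illustrative only; the PR applies the same identity through the library's tensor ops):

```typescript
// Element-wise max via relu: max(a, b) = relu(a - b) + b,
// since relu(a - b) equals (a - b) when a > b and 0 otherwise.
function relu(x: number): number {
  return Math.max(x, 0);
}

function elementwiseMax(a: number[], b: number[]): number[] {
  return a.map((ai, i) => relu(ai - b[i]) + b[i]);
}

console.log(elementwiseMax([1, 5, -2], [3, 4, 0])); // [3, 5, 0]
```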



@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If your company signed a CLA, they designated a Point of Contact who decides which employees are authorized to participate. You may need to contact the Point of Contact for your company and ask to be added to the group of authorized contributors. If you don't know who your Point of Contact is, direct the project maintainer to go/cla#troubleshoot.
  • To pass this check, please resolve the problem above; then have the pull request author add another comment, and the bot will run again.

@frqc (Contributor, Author) commented Oct 7, 2017

Signed the Google CLA.

@googlebot

CLAs look good, thanks!

@nsthorat (Contributor)

Thanks for the contribution; the implementation looks correct.

A couple of comments, and then I'll merge :)


Reviewed 3 of 3 files at r1.
Review status: all files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.


src/graph/optimizers/adamax_optimizer.ts, line 32 at r1 (raw file):

      specifiedVariableList?: Node[]) {
    super(learningRate, specifiedVariableList);
    this.eps = Scalar.new(1e-8);

`this.eps` is unused.


src/graph/optimizers/adamax_optimizer.ts, line 77 at r1 (raw file):

        const ut1 = math.abs(gradient);

        const newWeightedInfNorm = math.add(

Can you add a TODO here saying that we should update this to use an element-wise max?
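For context, the quantity being computed here is AdaMax's exponentially weighted infinity norm, u_t = max(beta2 · u_{t−1}, |g_t|), taken element-wise. A sketch of the update on plain arrays, using the relu identity the PR relies on (hypothetical helper names; `ut0`/`ut1` mirror the variables in the diff):

```typescript
// AdaMax weighted infinity norm: u_t = max(beta2 * u_{t-1}, |g_t|),
// element-wise, via the identity max(x, y) = relu(x - y) + y.
const relu = (v: number): number => Math.max(v, 0);

function updateInfNorm(
    prevNorm: number[], gradient: number[], beta2: number): number[] {
  return prevNorm.map((u, i) => {
    const ut0 = beta2 * u;              // decayed previous norm
    const ut1 = Math.abs(gradient[i]);  // current gradient magnitude
    return relu(ut0 - ut1) + ut1;       // element-wise max of ut0 and ut1
  });
}

console.log(updateInfNorm([1, 0.5], [0.2, -2], 0.999)); // ≈ [0.999, 2]
```

A direct element-wise max op, once available, would replace the `relu`/subtract/add sequence with a single call.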


src/graph/optimizers/adamax_optimizer.ts, line 89 at r1 (raw file):

        this.firstMoment.set(node.output, keep(newFirstMoment));
        this.weightedInfNorm.set(node.output, keep(newWeightedInfNorm));
        

Remove the trailing spaces on this line.


src/graph/optimizers/adamax_optimizer.ts, line 90 at r1 (raw file):

        this.weightedInfNorm.set(node.output, keep(newWeightedInfNorm));
        
        ut0.dispose();

No need to dispose ut0 and ut1; the scope will take care of that (outputs of math operations get cleaned up automatically; the rest has to be disposed manually since it's part of the tensor array map).
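The cleanup rule described here can be sketched with a toy scope: op outputs are tracked by the active scope and disposed on exit, while anything explicitly kept survives. This is an illustration of the idea only, not the library's actual `scope`/`keep` implementation (all names here are made up):

```typescript
// Toy scope-based tensor cleanup: every tracked tensor that is not
// explicitly kept gets disposed when the scope exits.
class FakeTensor {
  disposed = false;
  dispose(): void { this.disposed = true; }
}

type Tag = (t: FakeTensor) => FakeTensor;

function scope<T>(fn: (track: Tag, keep: Tag) => T): T {
  const tracked: FakeTensor[] = [];
  const kept = new Set<FakeTensor>();
  const track: Tag = (t) => { tracked.push(t); return t; };
  const keep: Tag = (t) => { kept.add(t); return t; };
  const result = fn(track, keep);
  // Dispose every tracked tensor that was not explicitly kept.
  for (const t of tracked) { if (!kept.has(t)) { t.dispose(); } }
  return result;
}

// Intermediates like ut0/ut1 are auto-disposed; the value stored back
// into the tensor array map is kept and survives the scope.
const newNorm = scope((track, keep) => {
  const ut0 = track(new FakeTensor()); // intermediate, cleaned up
  const ut1 = track(new FakeTensor()); // intermediate, cleaned up
  return keep(track(new FakeTensor()));
});
console.log(newNorm.disposed); // false
```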


Comments from Reviewable

@nsthorat (Contributor)

LGTM (strong)


Review status: 2 of 3 files reviewed at latest revision, all discussions resolved, some commit checks failed.



@nsthorat (Contributor)

Reviewed 1 of 1 files at r2.
Review status: all files reviewed at latest revision, all discussions resolved, some commit checks failed.



@nsthorat nsthorat merged commit 90c7337 into tensorflow:master Oct 12, 2017
@frqc frqc deleted the dev_tim branch October 12, 2017 15:09
manrajgrover added a commit to manrajgrover/tfjs-core that referenced this pull request Apr 26, 2018