Conversation

@jking-ca

Summary:
This is the second diff in this stack: it contains the DPER3 changes, while the first diff contains the Caffe2 changes.

We want to decay learning parameters properly. Previously this was not done for minibatches in which a parameter was absent. We fix this by keeping track of the missed minibatches and letting the decay catch up accordingly.

The exponential moving averages (EMAs) of the first and second moments used in Adam are currently updated only for the parameters seen in a minibatch. For a parameter that is absent, the gradient is effectively 0, so the correct update would add 0 to its EMAs and still decay them by multiplying by beta1 and beta2 respectively.
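
For illustration only (a sketch, not the Caffe2 operator), the standard Adam moment update below shows why an absent parameter still needs its EMAs decayed: with a zero gradient the update reduces to a pure multiplication by beta1 and beta2.

```python
import math

def adam_moments(m, v, grad, beta1=0.9, beta2=0.999):
    """One step of the standard Adam EMA update for a single parameter."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad * grad
    return m, v

# A parameter absent from the minibatch contributes grad == 0, so the
# update reduces to a pure decay: m -> beta1 * m, v -> beta2 * v.
m, v = adam_moments(m=0.5, v=0.25, grad=0.0)
assert math.isclose(m, 0.9 * 0.5) and math.isclose(v, 0.999 * 0.25)
```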

To avoid the computational overhead of touching every parameter for every minibatch, we:

  • keep track of the last minibatch in which each parameter was seen
  • when a parameter is next seen, decay its EMAs by multiplying by beta1^k and beta2^k instead of beta1 and beta2, where k is the number of minibatches since it was last seen (see the sketch after this list).
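
Below is a minimal sketch of the lazy catch-up scheme, assuming a simple dictionary-based optimizer state; names such as `LazyAdamState` and `last_seen` are illustrative and not taken from the DPER3/Caffe2 code.

```python
class LazyAdamState:
    """Per-parameter Adam moments with lazily applied decay for sparse updates."""

    def __init__(self, beta1=0.9, beta2=0.999):
        self.beta1, self.beta2 = beta1, beta2
        self.m = {}          # first-moment EMA, keyed by parameter id
        self.v = {}          # second-moment EMA, keyed by parameter id
        self.last_seen = {}  # minibatch index at which each id was last updated
        self.step = 0        # global minibatch counter

    def update(self, ids_and_grads):
        """Update only the ids present in this minibatch, catching up skipped decay."""
        self.step += 1
        for pid, grad in ids_and_grads:
            # k minibatches have passed since this id was last seen (k >= 1).
            k = self.step - self.last_seen.get(pid, self.step - 1)
            # Multiplying by beta^k is equivalent to having decayed once per
            # skipped minibatch (each with a zero gradient) plus the current step.
            m = self.m.get(pid, 0.0) * self.beta1 ** k
            v = self.v.get(pid, 0.0) * self.beta2 ** k
            # Add the current minibatch's gradient contribution.
            self.m[pid] = m + (1.0 - self.beta1) * grad
            self.v[pid] = v + (1.0 - self.beta2) * grad * grad
            self.last_seen[pid] = self.step
```

With this scheme only the parameters touched in a minibatch are visited, yet their moments end up equal to what an every-minibatch decay would have produced.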

We hope this will significantly mitigate the inconsistent learning parameter issue we have seen with Adam.

Differential Revision: D29638897

fbshipit-source-id: a8ba73033adf22c9c3224747099352558d391775
@facebook-github-bot (Contributor) commented Jul 22, 2021

💊 CI failures summary and remediations

As of commit 08916b4 (more details on the Dr. CI page and at hud.pytorch.org/pr/62058):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


Preview docs built from this PR

This comment was automatically generated by Dr. CI.

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D29638897

@facebook-github-bot (Contributor)

This pull request has been merged in 812bc1d.
