
[dist_optim] add distributed functional Adadelta optimizer #50623

Closed

wants to merge 6 commits

Conversation

@wanchaol (Contributor) commented Jan 15, 2021

Stack from ghstack:

Add TorchScript compatible Adadelta functional optimizer to distributed optimizer

Differential Revision: [D25932772](https://our.internmc.facebook.com/intern/diff/D25932772)
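For context, Adadelta keeps two per-parameter running averages: one of squared gradients and one of squared updates, and scales each step by the ratio of their RMS values. A minimal plain-Python sketch of that update rule follows; the function name and flat-list-of-floats signature are illustrative only, not the PR's actual TorchScript `torch.Tensor` API:

```python
import math

def adadelta_step(params, grads, square_avgs, acc_deltas,
                  lr=1.0, rho=0.9, eps=1e-6):
    """One functional Adadelta step over flat lists of floats.

    Illustrative sketch of the update rule only; the PR adds a
    TorchScript-compatible implementation over tensor lists.
    """
    for i, (p, g) in enumerate(zip(params, grads)):
        # Decaying average of squared gradients.
        square_avgs[i] = rho * square_avgs[i] + (1 - rho) * g * g
        std = math.sqrt(square_avgs[i] + eps)
        # Scale the gradient by RMS(previous updates) / RMS(gradients).
        delta = math.sqrt(acc_deltas[i] + eps) / std * g
        params[i] = p - lr * delta
        # Decaying average of squared updates.
        acc_deltas[i] = rho * acc_deltas[i] + (1 - rho) * delta * delta
    return params
```

A "functional" optimizer in this sense takes all state explicitly as arguments instead of holding it in `self.state`, which is what makes it straightforward to script and to drive from a distributed optimizer.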

@rohan-varma (Member) left a comment:

Looks like there is a potentially related CI failure; could you confirm?

https://app.circleci.com/pipelines/github/pytorch/pytorch/260823/workflows/c7ded86c-e48d-4482-a8eb-a53c54a412a9/jobs/10202460

```
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 422, in wrapper
    fn(*args, **kwargs)
  File "test_optim.py", line 380, in test_multi_tensor_optimizers
    self.assertEqual(p1, p2)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1179, in assertEqual
    super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
AssertionError: False is not true : Tensors failed to compare as equal! With rtol=1e-07 and atol=1e-07, found 6 element(s) (out of 6) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 0.14733898893893732 (0.020583703180005117 vs. 0.16792269211894245), which occurred at index (2, 1).
```
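The `assertEqual` in this traceback compares the two optimizers' resulting parameters elementwise under relative and absolute tolerances. A minimal sketch of that kind of check (illustrative only, not the actual `common_utils` implementation):

```python
def all_close(a, b, rtol=1e-7, atol=1e-7):
    """Elementwise closeness check over two flat lists of floats:
    |a - b| <= atol + rtol * |b| for every pair."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))
```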

@rohan-varma rohan-varma self-requested a review January 21, 2021 09:24
@rohan-varma (Member) left a comment:

CI looks good now, LGTM

@facebook-github-bot
@wanchaol merged this pull request in 6c81b4d.

@facebook-github-bot facebook-github-bot deleted the gh/wanchaol/155/head branch January 26, 2021 15:21
Labels: cla signed, Merged, oncall: distributed

3 participants