This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Use better coding style in optim.py #79

Closed

Conversation

MarisaKirisame

No description provided.

@facebook-github-bot

Hi @MarisaKirisame!

Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot added the CLA Signed label on Oct 2, 2020. (This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!


@egrefen
Contributor

egrefen commented Mar 1, 2021

Thanks for submitting a PR. I'm afraid this would break how we do book-keeping (i.e. matching the index of a grad target to a particular parameter) when applying gradients in a differentiable optimizer. The way grad_targets was written is intentional: we want to keep tensors that require grad in the list, yet guarantee that a None will come back when taking gradients.
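For context, here is a minimal sketch of the bookkeeping pattern described above. This is not higher's exact code; the parameter list and loss are made-up illustrations. The idea: substitute a placeholder target for any parameter that doesn't require grad, then call `torch.autograd.grad` with `allow_unused=True`, so the returned tuple stays positionally aligned with the parameter list and the placeholder slots come back as `None`.

```python
import torch

# Hypothetical setup for illustration: three "parameters", where the
# middle one does not require grad.
params = [
    torch.randn(3, requires_grad=True),
    torch.randn(3, requires_grad=False),
    torch.randn(3, requires_grad=True),
]
loss = sum((p ** 2).sum() for p in params if p.requires_grad)

# One grad target per parameter, so that index i of the result always
# corresponds to params[i]. A parameter that does not require grad is
# swapped for a throwaway tensor the loss cannot depend on; with
# allow_unused=True, autograd.grad then returns None at that position.
grad_targets = [
    p if p.requires_grad else torch.tensor([], requires_grad=True)
    for p in params
]
grads = torch.autograd.grad(loss, grad_targets, allow_unused=True)

assert len(grads) == len(params)  # positional alignment preserved
assert grads[1] is None           # placeholder slot comes back as None
```

Filtering non-grad parameters out of the list (as this PR proposed) would shorten the gradient tuple and shift every later index, which is why the indirect placeholder approach is used instead.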

@egrefen closed this Mar 1, 2021