@carolineechen (Contributor):
Add a reuse_logits_for_grads option to the RNNT loss. If set to True, the logits tensor is zeroed out and reused as the gradients tensor to save memory. From unit testing, I was able to confirm that CUDA memory usage was lower, but I have not yet been able to demonstrate batch size or sequence length increases in model training (cc @hwangjeff).

Default is set to False to maintain autograd differentiability out of the box: gradcheck repeatedly re-runs the forward pass on perturbed inputs and compares the results against the analytical gradients, so a backward pass that overwrites its own input tensor would naturally cause gradcheck to fail.
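
A minimal usage sketch of the flag, for illustration only: the import path and the rest of the rnnt_loss signature below are assumptions (the prototype API may differ); only the reuse_logits_for_grads option itself comes from this change.

```python
import torch

# Assumed import path for the prototype RNNT loss; may differ in practice.
from torchaudio.prototype.rnnt_loss import rnnt_loss

B, T, U, V = 16, 200, 50, 128  # batch, source frames, target length, vocab size
device = torch.device("cuda")

logits = torch.randn(B, T, U + 1, V, device=device, requires_grad=True)
targets = torch.randint(1, V, (B, U), dtype=torch.int32, device=device)
logit_lengths = torch.full((B,), T, dtype=torch.int32, device=device)
target_lengths = torch.full((B,), U, dtype=torch.int32, device=device)

torch.cuda.reset_peak_memory_stats(device)
loss = rnnt_loss(
    logits,
    targets,
    logit_lengths,
    target_lengths,
    blank=0,
    reuse_logits_for_grads=True,  # opt in: grads are written into the logits buffer
)
loss.sum().backward()  # sum() in case the loss is returned per batch element
print(f"peak CUDA memory: {torch.cuda.max_memory_allocated(device) / 2**20:.1f} MiB")
```

Running the same shapes with reuse_logits_for_grads=False should, in principle, report a higher peak, roughly one extra (B, T, U + 1, V) float32 tensor's worth, since the kernel would otherwise allocate a separate gradients tensor.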

cc @pytorch/team-audio-core

@facebook-github-bot (Contributor):
Hi @carolineechen!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
