Autograd Function Fallback bug fix - moe support #8105
Conversation
…rrespondingly, support "None/Tensor_Grad/None" for backward outputs (illustrated below).
…d_function is enabled.
…raining\python\training\ortmodule\__init__.py (1 issue)
…->input_requires_grads
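For context, here is a minimal PyTorch sketch of the pattern the fix targets: an autograd function whose backward produces a "None/Tensor_Grad/None" output tuple because only one of its forward inputs requires gradients. The class name and arguments are illustrative, not taken from this PR.

```python
import torch

class ScaleByConstant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, flag, x, scale):
        # `flag` and `scale` are non-differentiable arguments.
        ctx.scale = scale
        return x * scale

    @staticmethod
    def backward(ctx, grad_output):
        # One entry per forward input: None for `flag`,
        # a real gradient for `x`, None for `scale`.
        return None, grad_output * ctx.scale, None

x = torch.randn(3, requires_grad=True)
y = ScaleByConstant.apply(True, x, 2.0)
y.sum().backward()
print(x.grad)  # gradients flow only to `x`
```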
A few minor comments on places we could extend assertion checks. Adjacent to some of the changes in this PR there is a use of "static" that looks suspicious -- could you check if that is correct?
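As a hedged analogy for why a "static" local in kernel code can look suspicious (this is not the actual C++ in question): state declared that way is initialized once and then shared across every kernel instance, much like a class-level mutable attribute in Python.

```python
class Kernel:
    # Shared across ALL instances and initialized once -- analogous
    # to a function-local `static` in a C++ kernel. Suspicious if the
    # value is really per-instance (e.g. per-node attributes).
    cache = []

    def compute(self, value):
        self.cache.append(value)  # mutates the shared list
        return self.cache

a, b = Kernel(), Kernel()
a.compute(1)
print(b.compute(2))  # [1, 2] -- b observes a's state
```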
Two resolved (outdated) review comments on orttraining/orttraining/training_ops/cpu/torch/torch_custom_function_kernel_base.cc.
Co-authored-by: Tim Harris <tiharr@microsoft.com>
Refine the schema description
Co-authored-by: Tim Harris <tiharr@microsoft.com>
…nxruntime into pengwa/autograd_moe
…o pengwa/autograd_moe
Thanks @tlh20 for your time reviewing, I have addressed all of the comments. :)
Description: Autograd Function Fallback bug fix - moe support
Attached are the PythonOp/PythonOpGrad schemas (and how those attributes are used) after the change:
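For illustration, a hedged sketch of constructing such a node with onnx.helper. Only the `input_requires_grads` attribute appears in this PR's commits; the other attribute names, inputs, and outputs below are assumptions for illustration, not the authoritative schema.

```python
from onnx import helper

# Assumed shape of a com.microsoft.PythonOp node after the rename.
# `input_requires_grads` is taken from this PR's diff; `func_name`,
# the input/output names, and the context output are assumptions.
python_op = helper.make_node(
    "PythonOp",
    inputs=["flag", "x", "scale"],
    outputs=["ctx", "y"],
    domain="com.microsoft",
    func_name="ScaleByConstant",      # assumed: registered autograd function
    input_requires_grads=[0, 1, 0],   # only `x` needs a gradient
)
print(python_op)
```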
Motivation and Context