Please implement the batching rule for torch.matrix_exp. #115992
Labels
actionable
good first issue
module: functorch
Pertaining to torch.func or pytorch/functorch
triaged
This issue has been looked at by a team member, triaged, and prioritized into an appropriate module
🚀 The feature, motivation and pitch
Matrix exponentials are extremely expensive to compute, yet very important in many scientific-computing and ML problems. It would be great if such computationally expensive functions supported batched implementations. Many thanks in advance to anyone who can help with this issue!
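As a note for whoever picks this up: `torch.matrix_exp` itself already accepts batched input of shape `(*, n, n)`, so a direct call on a stacked batch works today; the missing piece is only the functorch batching rule that would let `vmap` route to that native batched kernel. A minimal sketch of the direct batched call:

```python
import torch

# torch.matrix_exp operates on the last two dimensions, so a batch of
# matrices of shape (*, n, n) can be exponentiated in a single call.
A = torch.randn(8, 4, 4)   # a batch of 8 square matrices
E = torch.matrix_exp(A)    # shape (8, 4, 4), one exponential per matrix

# Sanity check on a known case: exp(0) is the identity matrix.
I_batch = torch.matrix_exp(torch.zeros(2, 3, 3))
```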
Alternatives
No response
Additional context
Below is the warning message from the PyTorch kernel:
:3: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::matrix_exp. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\BatchedFallback.cpp:84.)
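For reference, a small repro that triggers this fallback path (a sketch; the shapes here are arbitrary): wrapping `torch.matrix_exp` in `vmap` currently hits the batched fallback, which loops over the batch and emits the warning above, but still returns the same result as the natively batched call.

```python
import torch
from torch.func import vmap

A = torch.randn(8, 4, 4)

# With no batching rule for aten::matrix_exp, vmap falls back to a
# per-example loop and emits the UserWarning quoted above.
out_vmap = vmap(torch.matrix_exp)(A)

# The fallback is correct, just slower; it matches the direct batched call.
out_direct = torch.matrix_exp(A)
```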
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99