This repository was archived by the owner on Aug 21, 2025. It is now read-only.

Description
Tasks
Add vmap support for the following PyTorch operations. That is, each one needs a batching rule.
Expected behavior
Currently, applying vmap to them raises a warning indicating that the batching rule has not been implemented:
import torch
from functorch import vmap
x = torch.randn(32, 2, 3)
y = vmap(torch.view_copy, (0, None))(x, [6])
# UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::view_copy
We expect not to see a warning.
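For intuition, the computation vmap performs here can be written out by hand: the fallback path loops over the batch dimension, while an efficient batching rule would reduce the whole call to a single batched op. A minimal sketch (the batched call is one plausible rewrite, not necessarily the exact kernel functorch would emit):

```python
import torch

x = torch.randn(32, 2, 3)

# Fallback path (what the warning indicates): loop over the batch dim
looped = torch.stack([torch.view_copy(xi, [6]) for xi in x])

# What a batching rule would reduce this to: one call with the batch
# size prepended to the target shape
batched = torch.view_copy(x, [32, 6])

assert torch.equal(looped, batched)
```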
Read this!
See this note for more context: https://github.com/pytorch/pytorch/blob/master/functorch/writing_batching_rules.md
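As a rough mental model of the contract described in that note: a batching rule receives each input tensor together with its (optional) batch-dimension index and returns the batched output plus the output's batch dimension. A hedged Python sketch (real rules live in C++; the function name here is illustrative):

```python
import torch

def view_copy_batch_rule(x, x_bdim, shape):
    # Illustrative batching rule: move the batch dimension to the front,
    # then prepend the batch size so one reshape covers every batch element.
    x = x.movedim(x_bdim, 0)
    return x.reshape(x.shape[0], *shape), 0  # output tensor, output bdim

out, out_bdim = view_copy_batch_rule(torch.randn(32, 2, 3), 0, [6])
```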
Testing
(probably the most time consuming part of this)
functorch's vmap tests use the PyTorch OpInfo database to auto-generate tests. Unfortunately, the above three operations don't have OpInfos, so we'll have to add some in order to test this. For more details, see https://github.com/pytorch/pytorch/wiki/Writing-tests-in-PyTorch-1.8#opinfos-and-the-future-of-testing-tensor-operations; also read the existing OpInfos in torch/testing/_internal/common_methods_invocations.py
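The core of an OpInfo is a sample-input generator that yields representative argument combinations for the op. A hypothetical sketch in that spirit (the generator name and the yielded format are illustrative, not PyTorch's exact OpInfo API):

```python
import torch

def sample_inputs_view_copy(device='cpu', dtype=torch.float32):
    # Yield (input_tensor, target_shape) pairs exercising view_copy
    yield torch.randn(2, 3, device=device, dtype=dtype), (6,)   # flatten
    yield torch.randn(4, device=device, dtype=dtype), (2, 2)    # split a dim
    yield torch.randn(1, 6, device=device, dtype=dtype), (6,)   # drop a dim

results = [torch.view_copy(t, shape) for t, shape in sample_inputs_view_copy()]
```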