gradgradcheck for torch.repeat and torch.tile is outrageously slow #49962
Labels
module: autograd
Related to torch.autograd, and the autograd engine in general
module: tests
Issues related to tests (not the torch.testing module)
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
torch.repeat and torch.tile (which is implemented using torch.repeat) are relatively fast compared to NumPy's np.tile, but attempting to gradgradcheck them is incredibly slow in some cases. For example:
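The reproduction snippet was not captured in this excerpt. A minimal sketch of such a check might look like the following; the shapes and repeat counts are illustrative assumptions (the 77.93s timing presumably came from larger inputs than these):

```python
import torch
from torch.autograd import gradgradcheck

# Illustrative sketch only: the tensor shape and tile counts below
# are assumptions, not the ones from the original report.
a = torch.randn(2, 2, dtype=torch.double, requires_grad=True)

# gradgradcheck numerically verifies second-order gradients, which is
# where torch.repeat / torch.tile reportedly become very slow.
ok = gradgradcheck(lambda t: t.tile((2, 2)), (a,))
print(ok)
```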
takes 77.93s on my devfair! While not an apples to apples comparison, computing the function's Hessian is relatively fast:
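The Hessian computation referred to above is likewise not shown. A hedged sketch using torch.autograd.functional.hessian could look like this; the function and shapes are assumptions, with tile's output summed to a scalar because hessian expects a scalar-valued function:

```python
import torch
from torch.autograd.functional import hessian

# Illustrative shapes only, not those from the original report.
a = torch.randn(2, 2, dtype=torch.double)

# hessian() requires a scalar-valued function, so sum the tiled output.
h = hessian(lambda t: t.tile((2, 2)).sum(), a)

# tile is linear, so every second derivative is exactly zero.
print(h.shape)  # input.shape + input.shape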
takes only 0.07s to run; that is, roughly 1000x faster than the gradgradcheck.
gradgradcheck being so slow appears to have a real impact. See these two tests on ASAN:
or on the clang build:
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @mruberry @VitalyFedyunin @walterddr