📚 The doc issue
The documentation of torch.gradient() describes the edge_order argument as follows:
Lines 4741 to 4744 in 16c0ccd:

edge_order (``int``, optional): 1 or 2, for `first-order
<https://www.ams.org/journals/mcom/1988-51-184/S0025-5718-1988-0935077-0/S0025-5718-1988-0935077-0.pdf>`_ or
`second-order <https://www.ams.org/journals/mcom/1988-51-184/S0025-5718-1988-0935077-0/S0025-5718-1988-0935077-0.pdf>`_
estimation of the boundary ("edge") values, respectively.
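For context, a minimal 1-D sketch of what edge_order controls; the expected outputs below are what the standard first- and second-order one-sided difference formulas give at the boundaries:

import torch

# gradient of y = x**2 sampled at x = 0, 1, 2, 3, 4; the true derivative is 2*x
x = torch.arange(5, dtype=torch.float64)
y = x ** 2

(g1,) = torch.gradient(y, edge_order=1)  # first-order one-sided differences at the edges
(g2,) = torch.gradient(y, edge_order=2)  # second-order one-sided differences at the edges

print(g1)  # tensor([1., 2., 4., 6., 7.], dtype=torch.float64) - edge values are only first-order accurate
print(g2)  # tensor([0., 2., 4., 6., 8.], dtype=torch.float64) - exact at the edges for a quadratic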
However, the implementation enforces a limitation that is not mentioned in the docs: every dimension being differentiated must have size at least edge_order + 1.
pytorch/aten/src/ATen/native/ReduceOps.cpp
Lines 1043 to 1054 in f5e2de9
if (dim.has_value()) {
  // The following function get called to check whether dim argument satisfies prerequisites.
  // The output of the function is not used for the computation of gradient.
  dim_list_to_bitset(dim.value(), self.dim());
  for (const auto i : c10::irange(dim.value().size())) {
    TORCH_CHECK(self.size(dim.value()[i]) >= edge_order + 1, "torch.gradient expected each dimension size to be at least edge_order+1");
  }
} else {
  for (const auto i : c10::irange(self.dim())) {
    TORCH_CHECK(self.size(i) >= edge_order + 1, "torch.gradient expected each dimension size to be at least edge_order+1");
  }
}
Here is a repro:
import torch

input_data = torch.randn(10, 2, 32, 32)
# dim=None differentiates along every dimension, but dimension 1 has size 2 < edge_order + 1 = 3
gradient = torch.gradient(input_data, spacing=1, dim=None, edge_order=2)

Output:
RuntimeError: torch.gradient expected each dimension size to be at least edge_order+1
Thanks for noting!
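As an aside, a minimal sketch of two ways to satisfy the size check for the input above (restricting dim to the larger dimensions, or lowering edge_order); both follow directly from the check quoted from ReduceOps.cpp:

import torch

input_data = torch.randn(10, 2, 32, 32)

# Option 1: only differentiate along dimensions whose size is >= edge_order + 1 = 3
grads = torch.gradient(input_data, spacing=1, dim=[0, 2, 3], edge_order=2)

# Option 2: keep dim=None but use edge_order=1, which only requires size >= 2
grads_all = torch.gradient(input_data, spacing=1, edge_order=1)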
Suggest a potential alternative/fix
- Fix the doc description of edge_order to note that each dimension being differentiated must have size at least edge_order + 1.
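For illustration, one possible wording for the added note (hypothetical phrasing, just to sketch the idea):

edge_order (``int``, optional): 1 or 2, for first-order or second-order
    estimation of the boundary ("edge") values, respectively. Each dimension
    being differentiated (those in :attr:`dim`, or all dimensions of
    :attr:`input` when :attr:`dim` is ``None``) must have size at least
    ``edge_order + 1``.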