Issue with torch.max() over dim 2 #1310

Closed
bunelr opened this issue Apr 20, 2017 · 4 comments

Comments

bunelr (Contributor) commented Apr 20, 2017

When operating on a tensor with 4 dimensions, the indices returned by torch.max() are wrong and can go beyond the size of the dimension that was reduced over.

Surprisingly, this doesn't happen for other numbers of dimensions.

Code demonstrating the problem:

import torch
from torch.autograd import Variable

# This is fine and works as expected
t_2d = torch.randn(5, 25)
max_val_2d, max_idxs_2d = torch.max(t_2d, 0)
# max_val_2d / max_idxs_2d is of size 1 x 25 -> fine
# The values of max_idxs_2d go from 0 to 4   -> fine
assert(max_idxs_2d.max() < 5)

# This is fine and works as expected
t_3d = torch.randn(26, 5, 25)
max_val_3d, max_idxs_3d = torch.max(t_3d, 1)
# max_val_3d / max_idxs_3d is of size 26 x 1 x 25 -> fine
# The values of max_idxs_3d go from 0 to 4        -> fine
assert(max_idxs_3d.max() < 5)

# This is fine and works as expected
t_5d = torch.randn(1, 1, 26, 5, 25)
max_val_5d, max_idxs_5d = torch.max(t_5d, 3)
# max_val_5d / max_idxs_5d is of size 1 x 1 x 26 x 1 x 25 -> fine
# The values of max_idxs_5d go from 0 to 4                -> fine
assert(max_idxs_5d.max() < 5)

# This is fine and works as expected
t_6d = torch.randn(1, 1, 1, 26, 5, 25)
max_val_6d, max_idxs_6d = torch.max(t_6d, 4)
# max_val_6d / max_idxs_6d is of size 1 x 1 x 1 x 26 x 1 x 25 -> fine
# The values of max_idxs_6d go from 0 to 4                    -> fine
assert(max_idxs_6d.max() < 5)

# This is fine and works as expected
t_7d = torch.randn(1, 1, 1, 1, 26, 5, 25)
max_val_7d, max_idxs_7d = torch.max(t_7d, 5)
# max_val_7d / max_idxs_7d is of size 1 x 1 x 1 x 1 x 26 x 1 x 25 -> fine
# The values of max_idxs_7d go from 0 to 4                        -> fine
assert(max_idxs_7d.max() < 5)



# This is not fine
t_4d = torch.randn(1, 26, 5, 25)
max_val_4d, max_idxs_4d = torch.max(t_4d, 2)
# max_val_4d / max_idxs_4d is of size 1 x 26 x 1 x 25 -> fine
# The values of max_idxs_4d go from 0 to 24          -> ???
print(max_idxs_4d)
assert(max_idxs_4d.max() < 5)

Can you confirm this is a bug and not a misunderstanding on my part of what torch.max() should return?

bunelr (Contributor, Author) commented Apr 20, 2017

Reproduced on:
Python 3.6 with torch.__version__ -> 0.1.10+29f1f27
Python 2.7 with torch.__version__ -> 0.1.11_5

apaszke (Contributor) commented Apr 20, 2017

It seems that the error is only in TH. cc @adamlerer

bunelr (Contributor, Author) commented Apr 20, 2017

@albanD found that it is not actually limited to 4D tensors.

import torch

dims = (3, 4, 5, 6, 7, 8)
a = torch.randn(*dims)
for dim in range(len(dims)):
    # Fill slice i along `dim` with the value i, so the correct argmax
    # along `dim` is always dims[dim] - 1.
    for i in range(a.size(dim)):
        a.select(dim, i).fill_(i)
    val, argmax = a.max(dim)
    print(argmax.max())

gives as output:

2
3
335
5
6
7

It would seem that the problem only happens when the dimension given is 2. In that case the argmax values are not bounded by the size of the reduced dimension; here the largest one is 335 = 6 * 7 * 8 - 1, i.e. the product of the sizes of the trailing dimensions minus one.

Possible interpretation of what happens:
The tensor is flattened, and the argmax that is stored is the index of the max in the flattened version, which is not properly converted back to an index along the reduced dimension.

Probable cause:
https://github.com/pytorch/pytorch/blob/master/torch/lib/TH/generic/THTensorMath.c#L1551
dimOffset is only used by max and min, and both are subject to this bug, so it seems likely that this is where the problem comes from.
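
Until a fix lands, a minimal workaround sketch (the max_over_dim helper is hypothetical; it assumes the observation above that only reducing over dimension 2 is affected, and the current 0.1.x behavior of keeping the reduced dimension with size 1):

import torch

def max_over_dim(t, dim):
    # Hypothetical workaround: reducing over the last dimension appears
    # unaffected, so move the target dimension there, reduce, and move back.
    if dim != 2:
        return torch.max(t, dim)
    last = t.dim() - 1
    tt = t.transpose(dim, last).contiguous()
    val, idx = torch.max(tt, last)
    # Assumes the reduction keeps the dimension with size 1 (0.1.x behavior),
    # so transposing back restores the expected output shapes.
    return val.transpose(dim, last), idx.transpose(dim, last)

Moving the reduced dimension to the last position sidesteps the dim == 2 code path, at the cost of a copy from .contiguous().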

@bunelr bunelr changed the title Issue with torch.max() with tensors of large dimension Issue with torch.max() over dim 2 Apr 20, 2017
soumith (Member) commented Apr 20, 2017

fixed in master. thanks for the report.
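
For reference, a quick regression check one might run against a build that includes the fix, sketched from the reproduction above (on an affected build the assertion fails for dim == 2):

import torch

dims = (3, 4, 5, 6, 7, 8)
a = torch.randn(*dims)
for dim in range(len(dims)):
    _, argmax = a.max(dim)
    # Indices returned along `dim` must stay within that dimension's size.
    assert argmax.max() < dims[dim]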

@soumith soumith closed this as completed Apr 20, 2017
houseroad added a commit to houseroad/pytorch that referenced this issue Aug 23, 2018
…60de87

Summary:
Previous import was 7848f1e0414ba3b2e263609d93d46fd60790b2e9

Included changes:
- **[6146a85](onnx/onnx@6146a85)**: Check pybind version (pytorch#1315) <Changming Sun>
- **[2cbf740](onnx/onnx@2cbf740)**: Domain exists in GraphProto but not in Node (pytorch#1310) <Ryan Hill>
- **[9b874e9](onnx/onnx@9b874e9)**: [Title] Add optimization pass eliminating nop Pad (pytorch#1307) <Tingfan Wu>

Differential Revision: D9485475

fbshipit-source-id: be00d35f92b00c64b55c9f7798a9c142c70ecd93
eqy pushed a commit to eqy/pytorch that referenced this issue Jan 20, 2022
)

* Implement alias_copy operations only for CudaFusionGroup to support fallback path
* Remove alias (a) annotation from alias_copy schema