torch.kthvalue diverges from numpy equivalent for degenerate shape #59201

Closed
pmeier opened this issue May 31, 2021 · 2 comments

Labels: module: numpy, module: reductions, triaged

Comments

pmeier commented May 31, 2021

🐛 Bug

torch.kthvalue diverges from numpy equivalent for degenerate shape

To Reproduce

import torch
import numpy as np

t = torch.empty((2, 0, 4))
a = t.numpy()

# torch.kthvalue reduces the given dim: t_res has shape (2, 0)
t_res = torch.kthvalue(t, k=1, dim=2).values
# np.partition preserves the input shape: a_res has shape (2, 0, 4)
a_res = np.partition(a, 1, axis=2)

assert t_res.shape == a_res.shape  # fails

Expected behavior

The shapes should match.
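
For reference, the shapes observed with the snippet above (a sketch of the divergence as currently observed):

>>> t_res.shape
torch.Size([2, 0])
>>> a_res.shape
(2, 0, 4)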

Environment

PyTorch version: 1.9.0a0+gitc9de3e3
Is debug build: True
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Arch Linux (x86_64)
GCC version: (crosstool-NG 1.24.0.133_b0863d8_dirty) 9.3.0
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.9

Python version: 3.6 (64-bit runtime)
Python platform: Linux-5.12.5-arch1-1-x86_64-with-arch
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 465.31
cuDNN version: Probably one of the following:
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_adv_infer.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_adv_train.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_cnn_infer.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_cnn_train.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_ops_infer.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytest-pytorch==0.0.0
[pip3] torch==1.9.0a0+gitc9de3e3
[conda] magma-cuda111             2.5.2                         1    pytorch
[conda] mkl                       2021.2.0           h726a3e6_389    conda-forge
[conda] mkl-include               2021.2.0           h726a3e6_389    conda-forge
[conda] numpy                     1.19.5           py36h2aa4a07_1    conda-forge
[conda] pytest-pytorch            0.2.0              pyh44b312d_0    conda-forge
[conda] torch                     1.9.0a0+gitc9de3e3           dev_0    <develop>

Additional context

  • This is tested in

    def test_tensor_compare_ops_argmax_argmix_kthvalue_dim_empty(self, device):

    but the test never failed due to a bug in the underlying comparison mechanism. "up the priority of numpy array comparisons in self.assertEqual" (#59067) will disable the failing test case, but it should be reinstated as soon as this issue is fixed.

  • IMO this is just a bug in the test suite, which requires the torch and numpy shapes to match. numpy does not collapse a dimension when it encounters a degenerate input shape; see the sketch below for a numpy call that does mirror torch.kthvalue's reduction.
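
For comparison, here is a hypothetical helper (np_kthvalue is an illustrative name, not an existing numpy or torch API) that mirrors torch.kthvalue's reduction semantics in numpy; note that numpy's kth argument is 0-indexed while torch's k is 1-indexed:

import numpy as np

def np_kthvalue(a, k, axis):
    # Partition so that the (k-1)-th element is in its sorted position,
    # then take it, collapsing `axis` just like torch.kthvalue does.
    return np.partition(a, k - 1, axis=axis).take(k - 1, axis=axis)

With this helper, np_kthvalue(a, k=1, axis=2) on the empty input above has shape (2, 0), matching torch.kthvalue.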

cc @mruberry @rgommers @heitorschueroff @VitalyFedyunin @walterddr

pmeier added the module: reductions and module: tests labels on May 31, 2021
mruberry added the module: numpy and triaged labels and removed the module: tests label on May 31, 2021

mruberry commented May 31, 2021

Edited: see @rgommers's statement below; I didn't realize this wasn't being compared to an exact kthvalue equivalent.

rgommers commented

Isn't this just a function with different semantics? The same happens with non-empty input:

>>> t2 = torch.arange(8).reshape((2, 1, 4))
>>> a2 = t2.numpy()
>>> torch.kthvalue(t2, 1, dim=2).values.shape
torch.Size([2, 1])
>>> np.partition(a2, 1, axis=2).shape
(2, 1, 4)
>>> torch.kthvalue(t2, 1, dim=2).values
tensor([[0],
        [4]])
>>> np.partition(a2, 1, axis=2)
array([[[0, 1, 2, 3]],

       [[4, 5, 6, 7]]])

np.partition preserves the input shape, while torch.kthvalue reduces the given dimension and returns only the k-th smallest element along it.
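
A quick sanity check (a sketch, reusing a2 from above): taking element 0 of the partitioned numpy result reproduces torch.kthvalue's output for k=1:

>>> np.partition(a2, 0, axis=2)[..., 0]
array([[0],
       [4]])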

deniskokarev pushed a commit to deniskokarev/pytorch that referenced this issue Jun 9, 2021
Summary:
Fixes pytorch#59201. Should be merged after pytorch#59067 to ensure this is actually working correctly.

Pull Request resolved: pytorch#59214

Reviewed By: albanD

Differential Revision: D28792363

Pulled By: mruberry

fbshipit-source-id: 0cf613463139352906fb567f1efcc582c2c25de8