
Inconsistent behavior for _refs.fft.* operators when input is an empty tensor #105986

Closed
ekamiti opened this issue Jul 25, 2023 · 1 comment
Labels
module: decompositions Topics related to decomposition (excluding PrimTorch) module: primTorch triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@ekamiti
Contributor

ekamiti commented Jul 25, 2023

🐛 Describe the bug

In eager mode, calling a torch.fft.* function on an empty tensor input is expected to fail, as this test shows:

# Note: NumPy will throw a ValueError for an empty input
@onlyNativeDeviceTypes
@ops(spectral_funcs, allowed_dtypes=(torch.half, torch.float, torch.complex32, torch.cfloat))
def test_empty_fft(self, device, dtype, op):
    t = torch.empty(1, 0, device=device, dtype=dtype)
    match = r"Invalid number of data points \([-\d]*\) specified"
    with self.assertRaisesRegex(RuntimeError, match):
        op(t)

In eager mode this seems to be validated here:

TORCH_CHECK(n >= 1, "Invalid number of data points (", n, ") specified");

When trying to enable this test for the versions of these operators under torch/_refs, the behavior is inconsistent:

FAILED [0.0393s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_fft_cpu_complex64 - AssertionError: RuntimeError not raised
FAILED [0.0237s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_fft_cpu_float32 - AssertionError: RuntimeError not raised
FAILED [0.0109s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_hfft_cpu_complex64 - AssertionError: "Invalid number of data points \([-\...
FAILED [0.0404s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_hfft_cpu_float32 - AssertionError: "Invalid number of data points \([-\d]...
FAILED [0.0115s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_ifft_cpu_complex64 - ZeroDivisionError: division by zero
FAILED [0.0120s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_ifft_cpu_float32 - ZeroDivisionError: division by zero
FAILED [0.0114s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_ihfft_cpu_float32 - ZeroDivisionError: division by zero
FAILED [0.0109s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_irfft_cpu_complex64 - AssertionError: "Invalid number of data points \([-...
FAILED [0.0111s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_irfft_cpu_float32 - AssertionError: "Invalid number of data points \([-\d...
FAILED [0.0114s] test/test_spectral_ops.py::TestFFTCPU::test_empty_fft__refs_fft_rfft_cpu_float32 - AssertionError: RuntimeError not raised

Repro of the difference:

>>> t = torch.empty(1, 0, device='cpu', dtype=torch.float32)
>>> torch.fft.fft(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Invalid number of data points (0) specified
>>> torch._refs.fft.fft(t)
tensor([], size=(1, 0), dtype=torch.complex64)
>>> torch._refs.fft.hfft(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/mnt/d/Programming/pytorch/torch/_prims_common/wrappers.py", line 227, in _fn
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Programming/pytorch/torch/_refs/fft.py", line 238, in hfft
    return _fft_c2r("hfft", input, n, dim, norm, forward=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Programming/pytorch/torch/_refs/fft.py", line 119, in _fft_c2r
    torch._check(
  File "/mnt/d/Programming/pytorch/torch/__init__.py", line 987, in _check
    _check_with(RuntimeError, cond, message)
  File "/mnt/d/Programming/pytorch/torch/__init__.py", line 970, in _check_with
    raise error_type(message_evaluated)
RuntimeError: Invalid number of data points (None) specified

Versions

Collecting environment information...
PyTorch version: 2.1.0a0+git45bcd74
Is debug build: True
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 12.1.0-2ubuntu1~22.04) 12.1.0
Clang version: 15.0.7
CMake version: version 3.26.4
Libc version: glibc-2.35

Python version: 3.11.3 (tags/v3.11.3:f3909b8bc8, May 16 2023, 16:08:28) [Clang 15.0.7 ] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 531.61
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 2700X Eight-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
BogoMIPS: 7386.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr virt_ssbd arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 384 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 8 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0a0+git2563111
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342

cc @ezyang @mruberry @lezcano @peterbell10 @SherlockNoMad

@lezcano
Collaborator

lezcano commented Jul 26, 2023

It looks like the reference is missing that check. We would accept a fix for it.

@lezcano lezcano added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module module: primTorch module: decompositions Topics related to decomposition (excluding PrimTorch) labels Jul 26, 2023
pytorchmergebot pushed a commit that referenced this issue Sep 5, 2023
Fixes #107335.

A few issues have been identified while enabling this test and filed:
#105986
#108204
#108205

Pull Request resolved: #107421
Approved by: https://github.com/ezyang