
torch.is_signed on new uint dtypes raises Unknown ScalarType #125124

Closed
justheuristic opened this issue Apr 28, 2024 · 4 comments
Assignees
Labels
module: python frontend For issues relating to PyTorch's Python frontend triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments


justheuristic commented Apr 28, 2024

🐛 Describe the bug

Torch dtypes have an .is_signed property. As of 2.2.0, it works for all integer dtypes (e.g., torch.uint8.is_signed is False). PyTorch 2.3.0 introduced new unsigned dtypes (uint16/uint32/uint64), which is awesome, but they do not support is_signed yet.

>>> import torch
>>> torch.__version__
'2.3.0'
>>> torch.uint8.is_signed  # works correctly
False
>>> torch.int32.is_signed  # works correctly
True
>>> torch.uint16.is_signed  # fails
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Unknown ScalarType
>>> torch.uint32.is_signed  # fails
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Unknown ScalarType
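Until the fix lands, one possible workaround is a small wrapper that falls back to torch.iinfo when .is_signed raises. This is a hypothetical helper sketched for this issue, not part of PyTorch, and it assumes torch.iinfo accepts the dtype in question:

```python
import torch

def is_signed_dtype(dtype: torch.dtype) -> bool:
    """Report whether an integer dtype is signed, tolerating builds
    where dtype.is_signed raises "Unknown ScalarType" for the new
    uint16/uint32/uint64 dtypes."""
    try:
        return bool(dtype.is_signed)
    except RuntimeError:
        # Fallback for integer dtypes: an unsigned type has min == 0.
        return torch.iinfo(dtype).min < 0

print(is_signed_dtype(torch.int32))   # signed
print(is_signed_dtype(torch.uint8))   # unsigned
```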


Versions

Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~18.04) 9.4.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.27.7
Libc version: glibc-2.27

Python version: 3.9.18 (main, Sep 11 2023, 13:41:44)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.161-26.3-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              255
On-line CPU(s) list: 0-254
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           255
NUMA node(s):        4
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC 7702 64-Core Processor
Stepping:            0
CPU MHz:             1999.917
BogoMIPS:            3999.83
Virtualization:      AMD-V
L1d cache:           64K
L1i cache:           64K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-62
NUMA node1 CPU(s):   63-125
NUMA node2 CPU(s):   126-188
NUMA node3 CPU(s):   189-254
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities

Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] numpydoc==1.5.0
[pip3] onnx==1.15.0
[pip3] torch==2.3.0
[pip3] torchaudio==2.3.0
[pip3] torchvision==0.18.0
[pip3] triton==2.1.0
[conda] blas                      1.0                         mkl  
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] libjpeg-turbo             2.0.0                h9bf148f_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640  
[conda] mkl-service               2.4.0            py39h7f8727e_0  
[conda] mkl_fft                   1.3.1            py39hd3c417c_0  
[conda] mkl_random                1.2.2            py39h51133e4_0  
[conda] numpy                     1.25.0                   pypi_0    pypi
[conda] numpy-base                1.24.3           py39h31eccc5_0  
[conda] numpydoc                  1.5.0            py39h06a4308_0  
[conda] pytorch                   2.3.0           py3.9_cuda11.8_cudnn8.7.0_0    pytorch
[conda] pytorch-cuda              11.8                 h7e8668a_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchaudio                2.3.0                py39_cu118    pytorch
[conda] torchtriton               2.3.0                      py39    pytorch
[conda] torchvision               0.18.0               py39_cu118    pytorch
[conda] triton                    2.1.0                    pypi_0    pypi

cc @albanD

@FFFrog FFFrog self-assigned this Apr 29, 2024
@cpuhrsch
Contributor

Marking this for triage review since it appears relatively straightforward to add, and I'm wondering if we'd accept a PR for this. Also not quite sure about the best module for this.

@malfet
Contributor

malfet commented May 6, 2024

@FFFrog do you have PR ready? If not, I'll create one by EOD

@malfet malfet added the module: python frontend For issues relating to PyTorch's Python frontend label May 6, 2024
@jbschlosser jbschlosser added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module and removed triage review labels May 6, 2024
@jbschlosser jbschlosser assigned malfet and unassigned FFFrog May 6, 2024
@malfet
Contributor

malfet commented May 6, 2024

Actually, I can do it in 30 min or so; it looks like all one needs to do is add more dtypes here:

AT_FORALL_SCALAR_TYPES_AND7(
    Half,
    Bool,
    BFloat16,
    Float8_e5m2,
    Float8_e4m3fn,
    Float8_e5m2fnuz,
    Float8_e4m3fnuz,

malfet added a commit that referenced this issue May 6, 2024
By defining `CASE_ISSIGNED` macros that just return `std::numeric_limits<dtype>::is_signed` for the types where it makes sense, and explicitly coding the types where it does not

Remove `default:` case from the switch to avoid regressions like the one reported in #125124
@FFFrog
Collaborator

FFFrog commented May 7, 2024

@FFFrog do you have PR ready? If not, I'll create one by EOD

Sorry, my colleague is interested in this problem and is working on it, but he is new to PyTorch, so it will take some time to fix.
I see that you have submitted a PR to fix this problem; I will tell my colleague about it.

malfet added a commit that referenced this issue May 7, 2024
5 participants