Labels
module: NaNs and Infs (Problems related to NaN and Inf handling in floating point)
module: complex (Related to complex number support in PyTorch)
triaged (This issue has been looked at by a team member and triaged into an appropriate module)
Description
🐛 Describe the bug
When adding two complex128 tensors with infinite real parts (e.g. (inf+1j) + (inf+2j)), torch.add returns inf+nanj instead of the correct inf+3j.
When subtracting a complex128 tensor with an infinite imaginary part from another (e.g. (3+3j) - (4+infj)), torch.sub returns nan-infj instead of the correct -1-infj.
These errors occur identically on both CPU and CUDA.
import torch

# CPU
a = torch.tensor(complex(float('inf'), 1), dtype=torch.complex128)
b = torch.tensor(complex(float('inf'), 2), dtype=torch.complex128)
c = torch.add(a, b)
print("CPU add result:", c)

if torch.cuda.is_available():
    a_gpu = a.cuda()
    b_gpu = b.cuda()
    c_gpu = torch.add(a_gpu, b_gpu)
    print("GPU add result:", c_gpu)

# CPU
a = torch.tensor(complex(3, 3), dtype=torch.complex128)
b = torch.tensor(complex(4, float('inf')), dtype=torch.complex128)
c = torch.sub(a, b)
print("CPU sub result:", c)

if torch.cuda.is_available():
    a_gpu = a.cuda()
    b_gpu = b.cuda()
    c_gpu = torch.sub(a_gpu, b_gpu)
    print("GPU sub result:", c_gpu)
Output:
CPU add result: tensor(inf+nanj, dtype=torch.complex128)
GPU add result: tensor(inf+nanj, device='cuda:0', dtype=torch.complex128)
CPU sub result: tensor(nan-infj, dtype=torch.complex128)
GPU sub result: tensor(nan-infj, device='cuda:0', dtype=torch.complex128)
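For reference, complex addition and subtraction are defined componentwise, so infinities in one component should never contaminate the other. Python's built-in complex type (and C99 complex arithmetic) produce the expected results on the same inputs, which is a minimal sketch of the behavior torch.add/torch.sub should match:

```python
# Componentwise semantics: (a+bj) + (c+dj) = (a+c) + (b+d)j.
# With a = c = inf: real part is inf + inf = inf, imaginary part is 1 + 2 = 3.
a = complex(float('inf'), 1)
b = complex(float('inf'), 2)
s = a + b
print(s)  # (inf+3j) — no NaN in the imaginary part

# (3+3j) - (4+infj): real part is 3 - 4 = -1, imaginary part is 3 - inf = -inf.
x = complex(3, 3)
y = complex(4, float('inf'))
d = x - y
print(d)  # (-1-infj) — no NaN in the real part
```

This suggests the NaNs come from PyTorch's implementation of the elementwise kernels rather than from IEEE-754 semantics themselves.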
Versions
PyTorch version: 2.5.1
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames