
Norm of complex Tensor #50972

Closed
HamedHojatian opened this issue Jan 23, 2021 · 8 comments
Labels
module: complex Related to complex number support in PyTorch
module: linear algebra Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply (matmul)
triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@HamedHojatian commented Jan 23, 2021

🐛 Bug

The norm function (torch.norm) doesn't work on complex tensors in PyTorch 1.7.1. However, it works in version 1.6.0.

To Reproduce

Steps to reproduce the behavior:

  1. Example: torch.norm(torch.tensor([1+1j, 2+2j]))
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-28-872fa6766b44> in <module>
----> 1 x = torch.norm(torch.tensor([1+1j, 2+2j]))
      2 x

/export/tmp/hojatian/anaconda3/lib/python3.7/site-packages/torch/functional.py in norm(input, p, dim, keepdim, out, dtype)
   1288         if isinstance(p, str):
   1289             if p == "fro":
-> 1290                 return _VF.frobenius_norm(input, dim=(), keepdim=keepdim)  # type: ignore
   1291         if not isinstance(p, str):
   1292             _dim = [i for i in range(ndim)]  # noqa: C416 TODO: rewrite as list(range(m))
   
RuntimeError: frobenius norm not supported for complex tensors
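A possible interim workaround on 1.7.x, sketched here on the assumption that abs() handles complex inputs on that build, is to take the elementwise modulus first and compute the 2-norm from it:

>>> import torch
>>> a = torch.tensor([1+1j, 2+2j])
>>> a.abs().pow(2).sum().sqrt()  # 2-norm via the moduli; avoids frobenius_norm
tensor(3.1623)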

Environment

  • PyTorch Version: 1.7.1
  • OS: Linux
  • How you installed PyTorch: Conda
  • Build command you used (if compiling from source): -
  • Python version: 3.7.6
  • CUDA/cuDNN version: cuda9.2.148_cudnn7.6.3_0
  • GPU models and configuration: Tesla P100
  • Any other relevant information: -

cc @ezyang @anjali411 @dylanbespalko @mruberry @jianyuh @nikitaved @pearu @heitorschueroff @walterddr @IvanYashchuk

@glaringlee glaringlee added the module: complex and triaged labels Jan 25, 2021
@mruberry mruberry added the module: linear algebra label Jan 25, 2021
@glaringlee (Contributor)

@HamedHojatian
It works fine with the latest code built from the master branch, though.

@anjali411 (Contributor)

Duplicate of #47833 (comment). @HamedHojatian, this was fixed in #48284; it works on the latest master and will be included in the 1.8 release. Let us know if you run into any issues with this function on the latest master. Thanks!

@mruberry (Collaborator)

cc @kurtamohler, this is fixed, right?

Thank you for reporting this issue, @HamedHojatian. We've been updating torch.norm's complex behavior recently, and PyTorch 1.7 added the NumPy-compatible torch.linalg.norm.

I'm surprised this worked in PyTorch 1.6. The tensor provided in the snippet is only 1D, and the Frobenius norm only operates on matrices. I don't think the 2-norm supported complex inputs in PyTorch 1.6, either.

What did this produce in PyTorch 1.6?
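To make the distinction above concrete, a minimal sketch, assuming a build where complex norm support has landed (1.8 per this thread): the Frobenius norm is defined for matrices and equals the 2-norm of the flattened elements, while torch.linalg.norm handles the 1D vector case directly.

>>> import torch
>>> a = torch.tensor([1+1j, 2+2j])
>>> torch.linalg.norm(a)             # vector 2-norm, sqrt(sum(|a_i|^2))
tensor(3.1623)
>>> A = torch.tensor([[1+1j, 2+2j], [3+3j, 4+4j]])
>>> torch.linalg.norm(A, ord='fro')  # Frobenius norm, matrices only
tensor(7.7460)
>>> A.flatten().abs().pow(2).sum().sqrt()  # equals the 2-norm of the flattened elements
tensor(7.7460)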

@mruberry mruberry reopened this Jan 25, 2021
@HamedHojatian (Author) commented Jan 25, 2021

> @HamedHojatian
> It works fine with the latest code built from the master branch, though.

Great, thank you! Does CUDA support it?
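A minimal sketch of such a check, assuming a >= 1.8 build with a CUDA device available (CUDA support is not confirmed anywhere in this thread):

>>> import torch
>>> a = torch.tensor([1+1j, 2+2j], device='cuda')
>>> torch.norm(a)  # complex 2-norm computed on the GPU
tensor(3.1623, device='cuda:0')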

@HamedHojatian (Author) commented Jan 25, 2021

> What did this produce in PyTorch 1.6?

Thank you. As far as I can check, the norm function works correctly on 2D tensors in 1.6.

@mruberry (Collaborator)

Interesting. Good thing it's fixed in PyTorch 1.8! Thanks for following up, @HamedHojatian!

@kurtamohler (Collaborator) commented Jan 25, 2021

torch.norm() of order 2 for complex vector inputs was incorrect in 1.6, and inconsistent with the other supported vector norm orders, so it was purposely disabled for 1.7.

According to https://mathworld.wolfram.com/L2-Norm.html and NumPy, the order-2 norm of a complex vector is vector.abs().pow(2).sum().pow(1/2). However, PyTorch 1.6 calculated vector.pow(2).sum().pow(1/2) instead. The absolute value (or modulus) of each element should be calculated first. In PyTorch 1.6 we get the following, showing that the absolute values of the elements are not taken first:

>>> a = torch.tensor([1+1j, 2+2j])
>>> a.norm()
tensor(2.2361+2.2361j)
>>> a.pow(2).sum().pow(1/2)
tensor(2.2361+2.2361j)
>>> a.abs().pow(2).sum().pow(1/2)
tensor(3.1623)

This is what NumPy gives, disagreeing with PyTorch 1.6:

>>> numpy.linalg.norm([1+1j, 2+2j])
3.1622776601683795

And in PyTorch 1.6, the other vector norms do calculate the absolute value of each element first, i.e. vector.abs().pow(order).sum().pow(1/order) rather than vector.pow(order).sum().pow(1/order). For p=3.2, for instance:

>>> a = torch.tensor([1+1j, 2+2j])
>>> a.norm(p=3.2)
tensor(2.9212+0.j)
>>> a.abs().pow(3.2).sum().pow(1/3.2)
tensor(2.9212)
>>> a.pow(3.2).sum().pow(1/3.2)
tensor(2.0656+2.0656j)

PyTorch 1.8 updates torch.norm to use the correct formula for order 2 with a complex vector.
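A quick sketch of the expected behavior on a >= 1.8 build, assuming the fix from #48284 is present:

>>> a = torch.tensor([1+1j, 2+2j])
>>> a.norm()  # now computes a.abs().pow(2).sum().pow(1/2)
tensor(3.1623)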

@HamedHojatian (Author)

> torch.norm() of order 2 for complex vector inputs was incorrect in 1.6 […] PyTorch 1.8 updates torch.norm to use the correct formula for order 2 with a complex vector.

You are right, it was wrong. Thank you!
