
RuntimeError: "log2" "_vml_cpu" not implemented for 'Half' #54774

Open
turian opened this issue Mar 26, 2021 · 2 comments
Labels
module: half - Related to float16 half-precision floats
module: numpy - Related to numpy support, and also numpy compatibility of our operators
triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

turian commented Mar 26, 2021

🐛 Bug

On CPU (haven't tested on GPU), several elementwise ops fail for Half:

RuntimeError: "log2" "_vml_cpu" not implemented for 'Half'

RuntimeError: "log" "_vml_cpu" not implemented for 'Half'

RuntimeError: "exp" "_vml_cpu" not implemented for 'Half'

RuntimeError: "pow" not implemented for 'Half'

RuntimeError: "round" "_vml_cpu" not implemented for 'Half'

To Reproduce

Steps to reproduce the behavior:

torch.log2(torch.rand((3, 3), dtype=torch.float16))
torch.log(torch.rand((3, 3), dtype=torch.float16))
torch.exp(torch.rand((3, 3), dtype=torch.float16))
torch.pow(torch.rand((3, 3), dtype=torch.float16), 2)
torch.round(torch.rand((3, 3), dtype=torch.float16))
RuntimeError: "log2" "_vml_cpu" not implemented for 'Half'

RuntimeError: "log" "_vml_cpu" not implemented for 'Half'

RuntimeError: "exp" "_vml_cpu" not implemented for 'Half'

RuntimeError: "pow" not implemented for 'Half'

RuntimeError: "round" "_vml_cpu" not implemented for 'Half'

Expected behavior

log2, log, exp, pow, and round should all work for Half. exp2 does work, though.
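
A simple workaround (a suggestion, not part of the original report) is to upcast to float32 on CPU, apply the op, and cast the result back to float16:

import torch

x = torch.rand((3, 3), dtype=torch.float16)
# Compute in float32, where the CPU kernels exist, then cast back.
y = torch.log2(x.float()).to(torch.float16)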

Environment

  • PyTorch Version (e.g., 1.0): 1.8.1
  • OS (e.g., Linux): OSX
  • How you installed PyTorch (conda, pip, source): pip3
  • Build command you used (if compiling from source):
  • Python version: 3.8.5
  • CUDA/cuDNN version: n/a
  • GPU models and configuration: CPU
  • Any other relevant information: n/a

Related to #50789

cc @mruberry @rgommers @heitorschueroff

imaginary-person (Contributor) commented Mar 26, 2021

pow will be enabled for Float16 & BFloat16 on CPU via #50999, but log_vml_cpu, log2_vml_cpu, exp_vml_cpu, and round_vml_cpu can't currently be enabled for Float16 because AVX2 vectorization support for Float16 in /aten/src/ATen/cpu/vec256 is incomplete. So even after #50999 lands, pow won't support autograd for Float16 on CPU until more AVX2 support is added for Float16.

However, these ops are currently supported for BFloat16 on CPU, and pow's support for it (including autograd) will be enabled on CPU soon. exp2 hasn't been enabled for BFloat16 on CPU or CUDA yet, but I opened #54794 for it.

These ops are already supported on CUDA for Float16.
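
For reference, a minimal check (a sketch based on the comment above, using stock torch calls) contrasting the two CPU paths:

import torch

x = torch.rand((3, 3), dtype=torch.bfloat16)
torch.log2(x)  # works: BFloat16 is supported on CPU

y = torch.rand((3, 3), dtype=torch.float16)
# torch.log2(y)  # RuntimeError: "log2" "_vml_cpu" not implemented for 'Half'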

@H-Huang added the module: half, module: numpy, and triaged labels Mar 30, 2021
mruberry (Collaborator) commented Apr 5, 2021

It'd be nice if we improved these error messages and better documented which dtypes these operations support.
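
A minimal sketch of what a clearer error could look like (a hypothetical helper, not an actual PyTorch API; the unsupported-dtype set is illustrative only):

import torch

# Hypothetical: dtypes known to fail for log2 on CPU (illustrative only).
_CPU_UNSUPPORTED_LOG2 = {torch.float16}

def checked_log2(x):
    # Surface a documented, readable error instead of the internal
    # '"log2" "_vml_cpu" not implemented' message.
    if x.device.type == "cpu" and x.dtype in _CPU_UNSUPPORTED_LOG2:
        raise TypeError(
            f"torch.log2 does not support {x.dtype} on CPU; supported float "
            "dtypes include torch.float32, torch.float64, and torch.bfloat16. "
            "Upcast with x.float() first."
        )
    return torch.log2(x)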
