Cast int input to fp32 in torch reciprocal converter#2683

Open
john-rocky wants to merge 1 commit into apple:main from john-rocky:fix-reciprocal-int-dtype

Conversation

@john-rocky
Contributor

Summary

  • torch.reciprocal returns a float for int inputs in PyTorch, but mb.inverse only accepts fp16/fp32. Common patterns like 1 / x.shape[0] (which TorchScript traces as reciprocal(prim::NumToTensor(int))) failed to convert with:

    Op (op_type: inverse) Input x expects tensor or scalar of dtype from type domain ['fp16', 'fp32'] but got tensor[1, int32]

  • Insert a fp32 cast before mb.inverse when the input dtype is integer, mirroring the pattern already used by log, sqrt, and other unary ops that share the same MIL constraint.
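The cast-insertion logic can be sketched in plain Python. This is an illustrative stand-in, not the actual coremltools converter code: `emit` below is a hypothetical recorder for MIL ops, used only to show where the fp32 cast is inserted relative to `inverse`.

```python
# Sketch of the fix: promote integer inputs to fp32 before emitting the
# MIL `inverse` op, mirroring what the `log`/`sqrt` converters do.
# `emit` is a stand-in for the MIL builder, not real coremltools API.

INT_DTYPES = {"int8", "int16", "int32", "uint8", "uint16", "uint32"}

def convert_reciprocal(input_dtype, x, emit):
    """Record the ops needed to lower torch.reciprocal for `input_dtype`."""
    if input_dtype in INT_DTYPES:
        x = emit("cast", x=x, dtype="fp32")  # the inserted fp32 cast
    return emit("inverse", x=x)

ops = []
def emit(op, **attrs):
    ops.append(op)
    return f"%{len(ops)}"

convert_reciprocal("int32", "%input", emit)
# ops now records a cast followed by inverse
```

With a float input dtype, the same function would emit `inverse` alone, so the cast is added only where the MIL type constraint requires it.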

Test plan

  • pytest test_torch_ops.py::TestReciprocal::test_reciprocal_int_shape — TorchScript path; the torch.export path folds the shape-derived constant and is skipped.
  • End-to-end mlpackage.predict on the issue repro produces 8.0 (within fp16 reciprocal precision). The same fix unblocks RoPE-style 1 / theta**(2i/d) expressions, verified separately.
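The dtype promotion that triggers the mismatch is easy to observe directly in PyTorch:

```python
import torch

x = torch.arange(1, 9)      # integer (int64) tensor
y = torch.reciprocal(x)     # PyTorch promotes the result to float

# This float result is what the converter must reconcile with
# mb.inverse, which only admits fp16/fp32 inputs.
```

Here `y` comes back as `float32` even though `x` is integral, which is exactly the behavior the converter-side cast reproduces.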

Fixes #2579.



Successfully merging this pull request may close these issues.

Unsupported a / x.shape[0]
