
Add MLX op handler for aten.isnan#18952

Merged
metascroy merged 3 commits into pytorch:main from Jah-yee:add-mlx-op-handler-aten-isnan
Apr 17, 2026

Conversation


@Jah-yee Jah-yee commented Apr 16, 2026

Good day

Summary

This PR adds an MLX op handler for aten.isnan to the PyTorch ExecuTorch MLX delegate, as requested in issue #18920.

Changes

Handler (backends/mlx/ops.py)

Added _isnan_handler registered for torch.ops.aten.isnan.default. The implementation uses the mathematical property that NaN != NaN — NaN is the only floating-point value that is not equal to itself — to decompose the operation as:

isnan(x) = x != x

This uses the existing NotEqualNode by comparing the input with itself, avoiding the need for a dedicated MLX isnan primitive.
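As an illustrative sketch of the identity itself (plain Python, not the actual handler, which emits a NotEqualNode into the MLX graph rather than computing eagerly):

```python
import math

# Illustrative only: demonstrates the IEEE 754 property the handler relies on.
# The real handler registers for torch.ops.aten.isnan.default and builds graph nodes.
def isnan_via_self_compare(x: float) -> bool:
    # NaN is the only floating-point value not equal to itself.
    return x != x

assert isnan_via_self_compare(float("nan"))
assert not isnan_via_self_compare(0.0)
assert not isnan_via_self_compare(float("inf"))
assert isnan_via_self_compare(math.nan) == math.isnan(math.nan)
```

The same comparison works element-wise on tensors, which is why a single NotEqualNode with both inputs wired to `x` suffices.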

Test (backends/mlx/test/test_ops.py)

  • Added _nan_input_fn(nan_frac) helper that generates tensors with a configurable fraction of NaN values.
  • Added isnan entry to _UNARY_OP_TESTS covering _SHAPES_3 and three float dtypes (float32, float16, bfloat16).
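A minimal sketch of what such a helper might look like. The exact signature and call convention in the PR are not shown here, so the structure below is an assumption based on the description:

```python
import torch

# Hypothetical reconstruction of the _nan_input_fn helper; the PR's actual
# helper may differ in signature and how it plugs into the test table.
def _nan_input_fn(nan_frac: float = 0.3):
    def make_input(shape, dtype=torch.float32):
        x = torch.randn(shape, dtype=dtype)
        # Each element is independently replaced by NaN with probability nan_frac.
        mask = torch.rand(shape) > (1.0 - nan_frac)
        x[mask] = float("nan")
        return x
    return make_input
```

Note that with `nan_frac=0.0` the mask is always false (`torch.rand` draws from [0, 1)), so the output is guaranteed NaN-free.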

Testing

The test can be run with:

python -m executorch.backends.mlx.test.run_all_tests -k isnan

The implementation is consistent with the approach described in the linked issue and follows the existing code patterns in the MLX backend.

Thank you for your attention. If there are any issues or suggestions, please leave a comment and I will address them promptly.

Warmly,
RoomWithOutRoof

Add a handler for aten.isnan in the MLX delegate using the mathematical
property that NaN != NaN (NaN is the only value not equal to itself).
This uses the existing NotEqualNode by comparing x with itself.

Also add corresponding test with a custom input function that injects
NaN values at a configurable fraction.

pytorch-bot Bot commented Apr 16, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18952

Note: Links to docs will display an error until the docs builds have been completed.

❌ 14 New Failures, 21 Pending

As of commit 49811c4 with merge base 9576316:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 16, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@kirklandsign kirklandsign requested a review from metascroy April 16, 2026 22:50
@kirklandsign
Contributor

@claude review


claude Bot commented Apr 16, 2026

Claude finished @kirklandsign's task in 1m 21s


Review of PR #18952: Add MLX op handler for aten.isnan

  • Read changed files and gather context
  • Review handler implementation in backends/mlx/ops.py
  • Review test additions in backends/mlx/test/test_ops.py
  • Post review feedback

This is a clean, well-structured PR. The implementation is correct and follows existing patterns in the codebase. A few observations:

Handler (backends/mlx/ops.py)

The approach is sound. Decomposing isnan(x) as x != x via NotEqualNode is the standard IEEE 754 identity — NaN is the only floating-point value where x != x is true. This avoids needing a dedicated MLX isnan primitive.

Nit — the docstring is verbose for this codebase. Per the repo's code style guidelines ("Minimal comments; code should be self-documenting" / "Comments only for non-obvious global context"), the multi-line docstring at ops.py:428-431 could be trimmed. The x != x trick is well-known among anyone working on floating-point ops. The section header comment block ("Numerical checks") is fine as a structural separator, but the docstring body explaining the NaN property could be shortened to a single line or removed. Minor nit — not blocking.

Placement is good. Putting this between the unary ops table and the binary ops table, under its own section header, makes sense since it doesn't fit the simple unary_node(x, out) pattern used by the table-driven unary handlers.

Tests (backends/mlx/test/test_ops.py)

_nan_input_fn is correct. The mask logic torch.rand(shape) > (1.0 - nan_frac) with default nan_frac=0.3 produces ~30% NaN values — verified. Using torch.randn(shape, dtype=dtype) matches the existing _input_fn pattern.

Good dtype coverage. Testing float32, float16, and bfloat16 covers the relevant floating-point types.

One suggestion: consider adding an all-NaN and/or no-NaN edge case. The current _nan_input_fn(0.3) relies on randomness, so there's a small but nonzero chance of generating a tensor with zero NaN values (especially for the (16,) shape). More importantly, there's no explicit test for boundary conditions like an all-NaN input or an all-finite input. This could be done by adding entries with _nan_input_fn(nan_frac=0.0) and _nan_input_fn(nan_frac=1.0), or by adding a dedicated test case. Not blocking, but would improve coverage confidence.
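A possible shape for such a boundary check, as a standalone sketch. The real test would run the op through the MLX delegate rather than eager mode, and these names are illustrative:

```python
import torch

# Sketch of the suggested boundary coverage: compare the x != x decomposition
# against the eager torch.isnan reference on deterministic edge-case inputs.
def check_isnan_decomposition(x: torch.Tensor) -> None:
    assert torch.equal(x != x, torch.isnan(x))

check_isnan_decomposition(torch.full((16,), float("nan")))        # all-NaN input
check_isnan_decomposition(torch.randn(16))                        # all-finite input
check_isnan_decomposition(torch.tensor([], dtype=torch.float32))  # empty tensor
```

Because the inputs are deterministic (all-NaN, all-finite), these checks do not depend on the random mask ever firing, which addresses the flakiness concern above.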

Summary

The PR is ready to merge with the optional suggestions above. The implementation is correct, concise, and consistent with the existing MLX backend patterns.


@metascroy
Contributor

Looks good to me. Re-running tests to see if there are any failures.

@metascroy metascroy merged commit 107eb92 into pytorch:main Apr 17, 2026
166 of 182 checks passed
@nil-is-all nil-is-all added the module: mlx Issues related to MLX Backend: Metal-accelerated inference on Apple Silicon label Apr 20, 2026
@Jah-yee Jah-yee deleted the add-mlx-op-handler-aten-isnan branch April 24, 2026 20:08