
Add coreml_compute_plan.py: report which CoreML ops dispatch to ANE / GPU / CPU #19252

Open
john-rocky wants to merge 1 commit into pytorch:main from john-rocky:coreml/compute-plan-analyzer

Conversation

@john-rocky

Summary

CoreML decides at compile/load time which device each MIL operation will
execute on, and coremltools 9.0+ exposes that decision through MLComputePlan.
A recurring question on the issue tracker is "why isn't my model
running fully on the ANE?" (see, for example, pytorch#4091, pytorch#11541,
and pytorch#8439).

Today the only way for an ExecuTorch user to answer that is to break out
Swift / Xcode. This PR adds a Python wrapper around MLComputePlan so
the answer is one shell command:

$ python coreml_compute_plan.py --model_path my_model.mlpackage \
      --compute_units cpu_and_ne --show_non_ane

=== my_model.mlpackage ===
  ANE:   412 / 480 ( 85.8%)
  CPU:    68 / 480 ( 14.2%)

  Non-ANE op types:
       32  ios17.cast
       18  ios17.gather
       12  ios17.reshape
        6  ios17.constexpr_blockwise_shift_scale
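The aggregate breakdown above is just counting over per-op dispatch rows. A
minimal sketch of that aggregation in plain Python, where the
`(op_type, device)` tuple shape and the `summarize` name are illustrative
stand-ins, not the script's actual data structures:

```python
from collections import Counter

def summarize(rows):
    """Aggregate (op_type, device) rows into per-device counts plus the
    op types that did not land on the ANE.

    Sketch only: the real script derives its rows from MLComputePlan,
    and its row format may differ.
    """
    per_device = Counter(device for _, device in rows)
    non_ane = Counter(op for op, device in rows if device != "ANE")
    return per_device, non_ane

# Tiny illustrative input; real rows come from the compute plan.
rows = [("ios17.matmul", "ANE")] * 3 + [("ios17.cast", "CPU")] * 2
per_device, non_ane = summarize(rows)
```

With the sample input, `per_device` is `{"ANE": 3, "CPU": 2}` and `non_ane`
is `{"ios17.cast": 2}`, which maps directly onto the percentage table and
the `--show_non_ane` listing shown above.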

Inputs supported:

  .pte         Extract every Core ML partition into a tempdir, then analyze each.
  .mlpackage   Compile to .mlmodelc in a tempdir, then analyze.
  .mlmodelc    Analyze directly.

The PTE path reuses the same JSON/named-data extraction logic that
extract_coreml_models.py uses, and is inlined into the script so it can
be run against a plain CoreML model without depending on the executorch
package.
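The suffix-based dispatch described above can be sketched as follows. The
function name `resolve_model` and the handler labels are hypothetical
placeholders for the behavior table, not the script's real functions:

```python
from pathlib import Path

def resolve_model(path):
    """Map an input file to the handling strategy described above.

    Sketch only: the labels returned here stand in for the script's
    actual extraction / compilation / analysis code paths.
    """
    suffix = Path(path).suffix
    if suffix == ".pte":
        return "extract_partitions_then_analyze_each"
    if suffix == ".mlpackage":
        return "compile_to_mlmodelc_then_analyze"
    if suffix == ".mlmodelc":
        return "analyze_directly"
    raise ValueError(f"unsupported input: {path}")

strategy = resolve_model("my_model.mlpackage")
```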

Test plan

Added test_coreml_compute_plan.py covering:

  • _device_name(...) for None and a stub MLNeuralEngineComputeDevice.
  • _COMPUTE_UNIT_CHOICES mapping (cpu_and_ne / all).
  • analyze_one(...) end-to-end on a tiny relu(x @ x.T) + x.sum()
    mlpackage built with coremltools.convert(...): returns rows for
    every dispatched op, with a main function and the expected MIL op
    types (matmul, relu, add, reduce_sum).
$ python -m pytest examples/apple/coreml/scripts/test_coreml_compute_plan.py -v
============================== 7 passed in 3.68s ===============================

I also ran the script against a few hand-built .mlpackage and
.mlmodelc files on macOS 26 with coremltools 9.0 and verified the
output matches what MLComputePlan returns directly.

Authored with Claude.

CoreML decides at compile/load time which device each MIL operation
will execute on; that decision is exposed through MLComputePlan in
coremltools 9.0+.  This script wraps it so users can answer 'why
isn't my model running on the ANE?' without writing Swift, which is
the recurring question behind issues like pytorch#4091, pytorch#11541, and pytorch#8439.

Inputs supported:
  * .pte         — extracts every Core ML partition first.
  * .mlpackage   — compiles to .mlmodelc in a tempdir.
  * .mlmodelc    — analyzed directly.

Reports per-op dispatch (ANE / GPU / CPU), an aggregate breakdown,
and optionally the op types that did not get assigned to the ANE
(--show_non_ane).

Authored with Claude.
@john-rocky john-rocky requested a review from metascroy as a code owner May 1, 2026 05:53
@pytorch-bot

pytorch-bot Bot commented May 1, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19252

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 1, 2026
@github-actions

github-actions Bot commented May 1, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.
