
[CoreML Backend] macOS 26.1 ANE regression: fp16 LLaMA inference produces inf/nan (worked on macOS 15.7) #15833

@seyeong-han

Description

🐛 Describe the bug

Environment

  • macOS Version: macOS-26.1-arm64-arm-64bit
  • Previous Working Version: macOS-15.7.1-arm64-arm-64bit
  • Hardware: Apple Silicon (M-series)
  • ExecuTorch: 1.0.0 (see pip list below)
  • CoreMLTools: [check with pip show coremltools]
  • Python: 3.11
  • Model: LLaMA 3.2 1B

Issue Description

A LLaMA 3.2 1B model exported with fp16 precision and CPU_AND_NE compute units produces inf/nan values during inference on macOS 26.1. The exact same model and code worked correctly on macOS 15.7.

This appears to be a macOS 26.1 Apple Neural Engine (ANE) regression affecting fp16 numerical precision.
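For reference, fp16 tops out at a maximum magnitude of 65504, so any intermediate value that exceeds that range on the ANE path overflows to inf and then propagates as nan through softmax. A minimal illustration of this dynamic-range failure mode (just the numeric behavior, not the actual ANE kernels):

import torch

# fp16 can only represent magnitudes up to 65504; anything larger overflows to inf.
x = torch.tensor([70000.0, 1.0])              # fine in fp32
x_fp16 = x.to(torch.float16)                  # first element overflows to inf
print(x_fp16)                                 # tensor([inf, 1.], dtype=torch.float16)

# Once an inf reaches the logits, softmax turns the whole row into nan,
# which is exactly the state the sampler rejects.
probs = torch.softmax(x_fp16.float(), dim=0)
print(probs)                                  # tensor([nan, nan])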

I encountered a similar fp16 issue with the Metal backend as well (see the linked issue).

Regression Evidence

export.py

import coremltools as ct
import torch

from executorch.backends.apple.coreml.compiler import CoreMLBackend

compile_specs = CoreMLBackend.generate_compile_specs(  # pyre-fixme[16]
    minimum_deployment_target=ct.target.iOS18,
    compute_precision={
        torch.float16: ct.precision.FLOAT16,
        torch.float32: ct.precision.FLOAT32,
    }[float_dtype],  # float_dtype is torch.float16 in the failing configuration
    compute_unit=ct.ComputeUnit.CPU_AND_NE,
    model_type=CoreMLBackend.MODEL_TYPE.MODEL,  # pyre-fixme[16]
)
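For context, these compile specs are handed to the CoreML partitioner during lowering. A rough sketch of the surrounding flow, assuming the standard ExecuTorch CoreML lowering API (names such as exported_program and the output file name are placeholders, not the exact script used here):

from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge_transform_and_lower

# exported_program is the torch.export()'d LLaMA 3.2 1B graph (placeholder name).
partitioner = CoreMLPartitioner(compile_specs=compile_specs)
edge = to_edge_transform_and_lower(exported_program, partitioner=[partitioner])
executorch_program = edge.to_executorch()
with open("llama3_2_1b_coreml_fp16.pte", "wb") as f:
    f.write(executorch_program.buffer)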
Configuration       macOS 15.7   macOS 26.1
fp16 + CPU_AND_NE   ✅ Works     ❌ inf/nan
fp16 + CPU_ONLY     ✅ Works     ✅ Works
fp32 + CPU_AND_NE   ✅ Works     ✅ Works
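
The rows above were checked by running the same prompt under each configuration. A minimal sketch of that kind of check against the compiled CoreML model, assuming direct coremltools execution (the .mlpackage path and input names are placeholders; the actual runs went through the ExecuTorch runtime):

import coremltools as ct
import numpy as np

def has_nonfinite(model_path, compute_units, inputs):
    # Run one forward pass and report whether any output contains inf/nan.
    model = ct.models.MLModel(model_path, compute_units=compute_units)
    outputs = model.predict(inputs)
    return any(not np.isfinite(np.asarray(v, dtype=np.float32)).all() for v in outputs.values())

# Placeholder input shaped like the exported LLaMA graph expects.
dummy_inputs = {"tokens": np.ones((1, 128), dtype=np.int32)}

for cu in (ct.ComputeUnit.CPU_ONLY, ct.ComputeUnit.CPU_AND_NE):
    bad = has_nonfinite("llama3_2_1b_fp16.mlpackage", cu, dummy_inputs)
    print(cu, "-> inf/nan" if bad else "-> all finite")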

Error Message

RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
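
The failure surfaces in the sampling step: once the model's logits contain inf/nan, softmax produces nan probabilities and torch.multinomial raises the error above. A hedged sketch of a guard that localizes the first non-finite logits during generation (function and variable names are placeholders, not the actual runner code):

import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8) -> int:
    # Fail loudly if the backend already produced non-finite logits.
    if not torch.isfinite(logits).all():
        bad = (~torch.isfinite(logits)).nonzero()
        raise RuntimeError(f"non-finite logits at positions {bad[:5].tolist()}")
    probs = torch.softmax(logits.float() / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()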


Versions

PyTorch version: 2.9.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 26.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.3.19.1)
CMake version: version 4.0.3
Libc version: N/A

Python version: 3.11.14 (main, Oct 21 2025, 18:27:30) [Clang 20.1.8 ] (64-bit runtime)
Python platform: macOS-26.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M3 Pro

Versions of relevant libraries:
[pip3] executorch==1.0.0
[pip3] numpy==2.3.4
[pip3] pytorch_tokenizers==1.0.1
[pip3] torch==2.9.1
[pip3] torchao==0.14.0
[pip3] torchdata==0.11.0
[pip3] torchtune==0.6.1
[conda] executorch                1.0.0                    pypi_0    pypi
[conda] numpy                     2.3.4                    pypi_0    pypi
[conda] pytorch-tokenizers        1.0.1                    pypi_0    pypi
[conda] torch                     2.9.1                    pypi_0    pypi
[conda] torchao                   0.14.0                   pypi_0    pypi
[conda] torchdata                 0.11.0                   pypi_0    pypi
[conda] torchtune                 0.6.1                    pypi_0    pypi
