The ExecuTorch Runtime for iOS and macOS is distributed as a collection of prebuilt `.xcframework` binaries, including:
* `backend_coreml` - Core ML backend
* `backend_mps` - MPS backend
* `backend_xnnpack` - XNNPACK backend
* `kernels_custom` - Custom kernels for LLMs
* `kernels_optimized` - Optimized kernels
* `kernels_portable` - Portable kernels (naive implementation used as a reference)
* `kernels_quantized` - Quantized kernels
Link your binary with the ExecuTorch runtime and any backends or kernels used by the exported model.

## Integration

### Swift Package Manager

The prebuilt ExecuTorch runtime, backend, and kernels are available as a [Swift PM](https://www.swift.org/documentation/package-manager/) package.
To add the package in a `Package.swift` manifest, declare it as a dependency (excerpt; unrelated manifest fields are omitted):

```swift
let package = Package(
  // ...
  dependencies: [
    // Use "swiftpm-0.4.0.<year_month_day>" branch name for a nightly build.
    .package(url: "https://github.com/pytorch/executorch.git", .branch("swiftpm-0.4.0"))
  ],
  targets: [
    .target(
      // ...
    )
  ]
)
```
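
Then declare the frameworks your app needs as product dependencies of its target. A minimal sketch, assuming the Swift PM product names match the framework names listed at the top of this page and using a hypothetical `MyApp` target; verify the exact product names against the package:

```swift
.target(
  name: "MyApp", // hypothetical app target
  dependencies: [
    // The core runtime plus the backends and kernels your exported model uses.
    .product(name: "executorch", package: "executorch"),
    .product(name: "backend_xnnpack", package: "executorch"),
    .product(name: "kernels_portable", package: "executorch"),
  ]
)
```
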
### Local Build

1. Install Xcode and the Command Line Tools:

```bash
xcode-select --install
```
2. Clone ExecuTorch:

```bash
git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules && cd executorch
```

3. Set up [Python](https://www.python.org/downloads/macos/) 3.10+ and activate a virtual environment:

```bash
python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip
```

4. Install the required dependencies, including those needed for backends such as [Core ML](build-run-coreml.md) or [MPS](build-run-mps.md), if you plan to build them as well:

```bash
./install_requirements.sh --pybind coreml mps xnnpack

# Optional dependencies for Core ML backend.
./backends/apple/coreml/scripts/install_requirements.sh

# Optional dependencies for MPS backend.
./backends/apple/mps/install_requirements.sh
```

5. Install [CMake](https://cmake.org):

Download the macOS binary distribution from the [CMake website](https://cmake.org/download), open the `.dmg` file, move `CMake.app` to the `/Applications` directory, and then run the following command to install the CMake command-line tools:

```bash
sudo /Applications/CMake.app/Contents/bin/cmake-gui --install
```

6. Use the provided script to build .xcframeworks:

```bash
./build/build_apple_frameworks.sh
```

For example, the following invocation will build the ExecuTorch Runtime and all currently available kernels and backends for the Apple platform:

```bash
./build/build_apple_frameworks.sh --coreml --mps --xnnpack --custom --optimized --portable --quantized
```

Append a `--Debug` flag to the above command to build the binaries with debug symbols if needed.

After the build finishes successfully, the resulting frameworks can be found in the `cmake-out` directory.
Copy them to your project and link them against your targets.

## Linkage

ExecuTorch initializes its backends and kernels (operators) during app startup by registering them in a static dictionary. If you encounter errors like "unregistered kernel" or "unregistered backend" at runtime, you may need to explicitly force-load certain components. Use the `-all_load` or `-force_load` linker flags in your Xcode build configuration to ensure components are registered early.

Here's an example of an Xcode configuration file (`.xcconfig`):

```
OTHER_LDFLAGS[sdk=iphonesimulator*] = $(inherited) \
-force_load $(BUILT_PRODUCTS_DIR)/libexecutorch-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_coreml-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_mps-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_xnnpack-ios-release.a

OTHER_LDFLAGS[sdk=iphoneos*] = $(inherited) \
-force_load $(BUILT_PRODUCTS_DIR)/libexecutorch-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_coreml-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_mps-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_xnnpack-ios-release.a

OTHER_LDFLAGS[sdk=macos*] = $(inherited) \
-force_load $(BUILT_PRODUCTS_DIR)/libexecutorch-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_coreml-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_mps-ios-release.a \
-force_load $(BUILT_PRODUCTS_DIR)/libbackend_xnnpack-ios-release.a
```

Replace `release` with `debug` in the library file names for a Debug-mode configuration. To assign such an `.xcconfig` file to your target in Xcode, add it to your project, navigate to the project's Info tab, and select it as the configuration for the Debug and Release modes.

## Runtime API

Check out the [C++ Runtime API](extension-module.md) and [Tensors](extension-tensor.md) tutorials to learn more about how to load and run an exported model. It is recommended to use the C++ API on macOS and iOS, wrapped in Objective-C++ and Swift code as needed to expose it to other components. See the [Demo App](demo-apps-ios.md) for an example of such a setup.

Once linked against the `executorch` runtime framework, the target can import all ExecuTorch public headers. For example, in Objective-C++:

```objectivecpp
#import <ExecuTorch/ExecuTorch.h>
#import <executorch/extension/module/module.h>
#import <executorch/extension/tensor/tensor.h>
```

Or in Swift:
```swift
import ExecuTorch
```

**Note:** Importing the ExecuTorch umbrella header (or the ExecuTorch module in Swift) provides access to the logging API only. You still need to import the other runtime headers explicitly as needed, e.g., `module.h`. Beyond the logging described below, the runtime APIs have no Objective-C or Swift bindings.

**Note:** Logs are stripped in the release builds of ExecuTorch frameworks. To preserve logging, use debug builds during development.

### Logging

We provide extra APIs for logging in Objective-C and Swift as a lightweight wrapper around the internal ExecuTorch machinery. To use them, import the main framework header in Objective-C (or the ExecuTorch module in Swift), and then use the `ExecuTorchLog` interface (or the `Log` class in Swift) to subscribe your own implementation of the `ExecuTorchLogSink` protocol (or `LogSink` in Swift) to listen for log events.
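
For instance, here is a minimal Swift sketch of such a subscription. It assumes the Swift-facing names `Log`, `LogSink`, and `LogLevel` and the `log(level:timestamp:filename:line:message:)` callback; check the ExecuTorch umbrella header for the exact signatures:

```swift
import ExecuTorch
import Foundation

public class MyClass: NSObject {
  // Subscribe to ExecuTorch log events in Debug builds only.
  public func start() {
    #if DEBUG
    Log.shared.add(sink: self)
    #endif
  }

  // Unsubscribe once the log events are no longer needed.
  public func stop() {
    #if DEBUG
    Log.shared.remove(sink: self)
    #endif
  }
}

extension MyClass: LogSink {
  public func log(level: LogLevel, timestamp: TimeInterval, filename: String, line: UInt, message: String) {
    #if DEBUG
    // Forward ExecuTorch messages to the console.
    print("[ExecuTorch] \(filename):\(line) \(message)")
    #endif
  }
}
```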

**Note:** In the example, the logs are intentionally stripped out when the code is not built for Debug mode, i.e., the `DEBUG` macro is not defined or equals zero.

## Debugging

If you are linking against a Debug build of the ExecuTorch frameworks, configure your debugger to map the source code correctly by using the following LLDB command in the debug session:

```
settings append target.source-map /executorch <path_to_executorch_source_code>
```

## Troubleshooting

### Slow execution

Ensure the exported model is using an appropriate backend, such as XNNPACK, Core ML, or MPS. If the correct backend is invoked but performance issues persist, confirm that you are linking against the Release build of the backend runtime.

For best performance, link against the Release build of the ExecuTorch runtime as well. If you need debugging, you can keep the ExecuTorch runtime in Debug mode to preserve logging and debug symbols at a minimal performance cost.

### Swift PM

If you encounter a checksum mismatch error with Swift PM, clear the package cache using the Xcode menu (`File > Packages > Reset Package Caches`) or the following command:

```bash
rm -rf <YourProjectName>.xcodeproj/project.xcworkspace/xcshareddata/swiftpm \
~/Library/org.swift.swiftpm \
~/Library/Caches/org.swift.swiftpm \
~/Library/Caches/com.apple.dt.Xcode \
~/Library/Developer/Xcode/DerivedData
```
**Note:** Ensure Xcode is fully quit before running the terminal command to avoid conflicts with active processes.