**`docs/source/backend-delegate-advanced.md`** (5 changes: 0 additions & 5 deletions)

```diff
@@ -6,10 +6,6 @@
 
 - {doc}`backend-delegates-integration` — Learn how to integrate a backend delegate into ExecuTorch
 
-## XNNPACK Reference
-
-- {doc}`backend-delegates-xnnpack-reference` — Deep dive into XNNPACK delegate internals and implementation details
-
 ## Dependency Management
 
 - {doc}`backend-delegates-dependencies` — Manage third-party dependencies for backend delegates
@@ -27,7 +23,6 @@
 :maxdepth: 1
 
 backend-delegates-integration
-backend-delegates-xnnpack-reference
 backend-delegates-dependencies
 compiler-delegate-and-partitioner
 debug-backend-delegate
```
**`docs/source/backend-development.md`** (1 change: 0 additions & 1 deletion)

```diff
@@ -4,7 +4,6 @@
 :maxdepth: 1
 backend-delegates-integration
-backend-delegates-xnnpack-reference
 backend-delegates-dependencies
 compiler-delegate-and-partitioner
 debug-backend-delegate
```
**`docs/source/backends-overview.md`** (32 changes: 16 additions & 16 deletions)

```diff
@@ -18,20 +18,20 @@ Backends are the bridge between your exported model and the hardware it runs on.
 
 ## Choosing a Backend
 
-| Backend                                    | Platform(s) | Hardware Type | Typical Use Case                |
-|--------------------------------------------|-------------|---------------|---------------------------------|
-| [XNNPACK](backends-xnnpack)                | All         | CPU           | General-purpose, fallback       |
-| [Core ML](backends-coreml)                 | iOS, macOS  | NPU/GPU       | Apple devices, high performance |
-| [Metal Performance Shaders](backends-mps)  | iOS, macOS  | GPU           | Apple GPU acceleration          |
-| [Vulkan](backends-vulkan)                  | Android     | GPU           | Android GPU acceleration        |
-| [Qualcomm](backends-qualcomm)              | Android     | NPU           | Qualcomm SoCs                   |
-| [MediaTek](backends-mediatek)              | Android     | NPU           | MediaTek SoCs                   |
-| [ARM EthosU](backends-arm-ethos-u)         | Embedded    | NPU           | ARM MCUs                        |
-| [ARM VGF](backends-arm-vgf)                | Android     | NPU           | ARM platforms                   |
-| [OpenVINO](build-run-openvino)             | Embedded    | CPU/GPU/NPU   | Intel SoCs                      |
-| [NXP](backends-nxp)                        | Embedded    | NPU           | NXP SoCs                        |
-| [Cadence](backends-cadence)                | Embedded    | DSP           | DSP-optimized workloads         |
-| [Samsung Exynos](backends-samsung-exynos)  | Android     | NPU           | Samsung SoCs                    |
+| Backend                                      | Platform(s) | Hardware Type | Typical Use Case                |
+|----------------------------------------------|-------------|---------------|---------------------------------|
+| [XNNPACK](backends/xnnpack/xnnpack-overview) | All         | CPU           | General-purpose, fallback       |
+| [Core ML](backends-coreml)                   | iOS, macOS  | NPU/GPU       | Apple devices, high performance |
+| [Metal Performance Shaders](backends-mps)    | iOS, macOS  | GPU           | Apple GPU acceleration          |
+| [Vulkan](backends-vulkan)                    | Android     | GPU           | Android GPU acceleration        |
+| [Qualcomm](backends-qualcomm)                | Android     | NPU           | Qualcomm SoCs                   |
+| [MediaTek](backends-mediatek)                | Android     | NPU           | MediaTek SoCs                   |
+| [ARM EthosU](backends-arm-ethos-u)           | Embedded    | NPU           | ARM MCUs                        |
+| [ARM VGF](backends-arm-vgf)                  | Android     | NPU           | ARM platforms                   |
+| [OpenVINO](build-run-openvino)               | Embedded    | CPU/GPU/NPU   | Intel SoCs                      |
+| [NXP](backends-nxp)                          | Embedded    | NPU           | NXP SoCs                        |
+| [Cadence](backends-cadence)                  | Embedded    | DSP           | DSP-optimized workloads         |
+| [Samsung Exynos](backends-samsung-exynos)    | Android     | NPU           | Samsung SoCs                    |
 
 **Tip:** For best performance, export a `.pte` file for each backend you plan to support.
```
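To illustrate the tip above: a minimal export sketch, assuming a placeholder module, that produces the XNNPACK `.pte`. Repeating the lowering step with each backend's partitioner yields one file per target.

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

# Placeholder model; substitute your own nn.Module.
class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

model = TinyModel().eval()
example_inputs = (torch.randn(1, 8),)

# Export once...
exported = torch.export.export(model, example_inputs)

# ...then lower and serialize a separate .pte per backend (XNNPACK here).
program = to_edge_transform_and_lower(
    exported, partitioner=[XnnpackPartitioner()]
).to_executorch()

with open("model_xnnpack.pte", "wb") as f:
    f.write(program.buffer)
```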

````diff
@@ -46,11 +46,11 @@ Backends are the bridge between your exported model and the hardware it runs on.
 ---
 
 ```{toctree}
-:maxdepth: 1
+:maxdepth: 3
 :hidden:
 :caption: Backend Overview
 
-backends-xnnpack
+backends/xnnpack/xnnpack-overview
 backends-coreml
 backends-mps
 backends-vulkan
````
**`docs/source/backends-xnnpack.md`** (182 changes: 0 additions & 182 deletions)

This file was deleted.
```diff
@@ -1,4 +1,4 @@
-# XNNPACK Delegate Internals
+# Architecture and Internals
 
 This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high-performance delegate aims to reduce CPU inference latency for ExecuTorch models. We provide a brief introduction to the XNNPACK library and explore the delegate’s overall architecture and intended use cases.
 
@@ -9,12 +9,12 @@ XNNPACK is a library of highly-optimized neural network operators for ARM, x86,
 A delegate is an entry point for backends to process and execute parts of the ExecuTorch program. Delegated portions of ExecuTorch models hand off execution to backends. The XNNPACK backend delegate is one of many available in ExecuTorch. It leverages the XNNPACK third-party library to accelerate ExecuTorch programs efficiently across a variety of CPUs. More detailed information on delegates and on developing your own is available [here](compiler-delegate-and-partitioner.md). It is recommended that you get familiar with that content before continuing to the Architecture section.
 
 ## Architecture
-![High Level XNNPACK delegate Architecture](xnnpack-delegate-architecture.png)
+![High Level XNNPACK delegate Architecture](/backends/xnnpack/xnnpack-delegate-architecture.png)
 
 ### Ahead-of-time
 In the ExecuTorch export flow, lowering to the XNNPACK delegate happens at the `to_backend()` stage. In this stage, the model is partitioned by the `XnnpackPartitioner`. Partitioned sections of the graph are converted to an XNNPACK-specific graph representation and then serialized via flatbuffer. The serialized flatbuffer is then ready to be deserialized and executed by the XNNPACK backend at runtime.
 
-![ExecuTorch XNNPACK delegate Export Flow](xnnpack-et-flow-diagram.png)
+![ExecuTorch XNNPACK delegate Export Flow](/backends/xnnpack/xnnpack-et-flow-diagram.png)
 
 #### Partitioner
 The partitioner is implemented by backend delegates to mark nodes suitable for lowering. The `XnnpackPartitioner` lowers using node targets and module metadata. Additional references for partitioners can be found [here](compiler-delegate-and-partitioner.md).
```
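To make the ahead-of-time flow concrete, here is a minimal sketch, assuming a placeholder model and the `to_edge`/`to_backend` APIs from `executorch.exir`:

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge

# Placeholder model for illustration.
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

exported = torch.export.export(MLP().eval(), (torch.randn(1, 16),))

# to_backend() is where the XnnpackPartitioner partitions the graph;
# partitioned subgraphs are converted and serialized via flatbuffer.
edge = to_edge(exported).to_backend(XnnpackPartitioner())

# Delegated sections now appear as call_delegate nodes in the graph.
print(edge.exported_program().graph_module.graph)

# The resulting program is ready for the XNNPACK backend to deserialize
# and execute at runtime.
executorch_program = edge.to_executorch()
```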
```diff
@@ -0,0 +1,8 @@
+# Partitioner API
+
+The XNNPACK partitioner API allows you to configure how the model is delegated to XNNPACK. Passing an `XnnpackPartitioner` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use case. For advanced use cases, the partitioner exposes the following options via the [constructor](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31):
+
+- `configs`: Control which operators are delegated to XNNPACK. By default, all available operators are delegated. See [../config/\_\_init\_\_.py](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66) for an exhaustive list of available operator configs.
+- `config_precisions`: Filter operators by data type. By default, all precisions are delegated. One or more of `ConfigPrecisionType.FP32`, `ConfigPrecisionType.STATIC_QUANT`, or `ConfigPrecisionType.DYNAMIC_QUANT`. See [ConfigPrecisionType](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24).
+- `per_op_mode`: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts, at the cost of a small amount of runtime overhead. Defaults to false.
+- `verbose`: If true, print additional information during lowering.
```
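A short sketch of the options listed above in use (a minimal example; import paths follow the source links in the bullets, and exact constructor defaults may vary by release):

```python
from executorch.backends.xnnpack.partition.config.xnnpack_config import ConfigPrecisionType
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner

# Common case: delegate as much of the model as possible.
default_partitioner = XnnpackPartitioner()

# Advanced: delegate only fp32 operators, emit one delegate call per
# operator, and log extra detail during lowering.
fp32_per_op = XnnpackPartitioner(
    config_precisions=[ConfigPrecisionType.FP32],
    per_op_mode=True,
    verbose=True,
)
```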