diff --git a/docs/source/backend-delegate-advanced.md b/docs/source/backend-delegate-advanced.md index 752bd1cdc02..e82e5ee035d 100644 --- a/docs/source/backend-delegate-advanced.md +++ b/docs/source/backend-delegate-advanced.md @@ -6,10 +6,6 @@ - {doc}`backend-delegates-integration` — Learn how to integrate a backend delegate into ExecuTorch -## XNNPACK Reference - -- {doc}`backend-delegates-xnnpack-reference` — Deep dive into XNNPACK delegate internals and implementation details - ## Dependency Management - {doc}`backend-delegates-dependencies` — Manage third-party dependencies for backend delegates @@ -27,7 +23,6 @@ :maxdepth: 1 backend-delegates-integration -backend-delegates-xnnpack-reference backend-delegates-dependencies compiler-delegate-and-partitioner debug-backend-delegate diff --git a/docs/source/backend-development.md b/docs/source/backend-development.md index ec5ceb3b37a..40c50a8ad11 100644 --- a/docs/source/backend-development.md +++ b/docs/source/backend-development.md @@ -4,7 +4,6 @@ :maxdepth: 1 backend-delegates-integration -backend-delegates-xnnpack-reference backend-delegates-dependencies compiler-delegate-and-partitioner debug-backend-delegate diff --git a/docs/source/backends-overview.md b/docs/source/backends-overview.md index 4a3313964a8..7ea68324005 100644 --- a/docs/source/backends-overview.md +++ b/docs/source/backends-overview.md @@ -18,20 +18,20 @@ Backends are the bridge between your exported model and the hardware it runs on. ## Choosing a Backend -| Backend | Platform(s) | Hardware Type | Typical Use Case | -|------------------------------------------|---------------------|---------------|---------------------------------| -| [XNNPACK](backends-xnnpack) | All | CPU | General-purpose, fallback | -| [Core ML](backends-coreml) | iOS, macOS | NPU/GPU | Apple devices, high performance | -| [Metal Performance Shaders](backends-mps)| iOS, macOS | GPU | Apple GPU acceleration | -| [Vulkan ](backends-vulkan) | Android | GPU | Android GPU acceleration | -| [Qualcomm](backends-qualcomm) | Android | NPU | Qualcomm SoCs | -| [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs | -| [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs | -| [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms | -| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | -| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs | -| [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads | -| [Samsung Exynos](backends-samsung-exynos)| Android | NPU | Samsung SoCs | +| Backend | Platform(s) | Hardware Type | Typical Use Case | +|-----------------------------------------------|---------------------|---------------|---------------------------------| +| [XNNPACK](backends/xnnpack/xnnpack-overview) | All | CPU | General-purpose, fallback | +| [Core ML](backends-coreml) | iOS, macOS | NPU/GPU | Apple devices, high performance | +| [Metal Performance Shaders](backends-mps) | iOS, macOS | GPU | Apple GPU acceleration | +| [Vulkan ](backends-vulkan) | Android | GPU | Android GPU acceleration | +| [Qualcomm](backends-qualcomm) | Android | NPU | Qualcomm SoCs | +| [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs | +| [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs | +| [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms | +| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | +| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs | +| [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized 
workloads | +| [Samsung Exynos](backends-samsung-exynos) | Android | NPU | Samsung Socs | **Tip:** For best performance, export a `.pte` file for each backend you plan to support. @@ -46,11 +46,11 @@ Backends are the bridge between your exported model and the hardware it runs on. --- ```{toctree} -:maxdepth: 1 +:maxdepth: 3 :hidden: :caption: Backend Overview -backends-xnnpack +backends/xnnpack/xnnpack-overview backends-coreml backends-mps backends-vulkan diff --git a/docs/source/backends-xnnpack.md b/docs/source/backends-xnnpack.md deleted file mode 100644 index 42e76741ec8..00000000000 --- a/docs/source/backends-xnnpack.md +++ /dev/null @@ -1,182 +0,0 @@ -# XNNPACK Backend - -The XNNPACK delegate is the ExecuTorch solution for CPU execution on mobile CPUs. [XNNPACK](https://github.com/google/XNNPACK/tree/master) is a library that provides optimized kernels for machine learning operators on Arm and x86 CPUs. - -## Features - -- Wide operator support on Arm and x86 CPUs, available on any modern mobile phone. -- Support for a wide variety of quantization schemes and quantized operators. -- Supports fp32 and fp16 activations. -- Supports 8-bit quantization. - -## Target Requirements - -- ARM64 on Android, iOS, macOS, Linux, and Windows. -- ARMv7 (with NEON) on Android. -- ARMv6 (with VFPv2) on Linux. -- x86 and x86-64 (up to AVX512) on Windows, Linux, Android. - -## Development Requirements - -The XNNPACK delegate does not introduce any development system requirements beyond those required by -the core ExecuTorch runtime. - ----- - -## Using the XNNPACK Backend - -To target the XNNPACK backend during the export and lowering process, pass an instance of the `XnnpackPartitioner` to `to_edge_transform_and_lower`. The example below demonstrates this process using the MobileNet V2 model from torchvision. - -```python -import torch -import torchvision.models as models -from torchvision.models.mobilenetv2 import MobileNet_V2_Weights -from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner -from executorch.exir import to_edge_transform_and_lower - -mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() -sample_inputs = (torch.randn(1, 3, 224, 224), ) - -et_program = to_edge_transform_and_lower( - torch.export.export(mobilenet_v2, sample_inputs), - partitioner=[XnnpackPartitioner()], -).to_executorch() - -with open("mv2_xnnpack.pte", "wb") as file: - et_program.write_to_file(file) -``` - -### Partitioner API - -The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an `XnnpackPartitioner` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the [constructor](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31): - - - `configs`: Control which operators are delegated to XNNPACK. By default, all available operators all delegated. See [../config/\_\_init\_\_.py](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66) for an exhaustive list of available operator configs. - - `config_precisions`: Filter operators by data type. By default, delegate all precisions. One or more of `ConfigPrecisionType.FP32`, `ConfigPrecisionType.STATIC_QUANT`, or `ConfigPrecisionType.DYNAMIC_QUANT`. 
See [ConfigPrecisionType](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24). - - `per_op_mode`: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false. - - `verbose`: If true, print additional information during lowering. - -### Testing the Model - -After generating the XNNPACK-delegated .pte, the model can be tested from Python using the ExecuTorch runtime python bindings. This can be used to sanity check the model and evaluate numerical accuracy. See [Testing the Model](using-executorch-export.md#testing-the-model) for more information. - ----- - -## Quantization - -The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. To quantize a PyTorch model for the XNNPACK backend, use the `XNNPACKQuantizer`. `Quantizers` are backend specific, which means the `XNNPACKQuantizer` is configured to quantize models to leverage the quantized operators offered by the XNNPACK Library. - -### Supported Quantization Schemes -The XNNPACK delegate supports the following quantization schemes: - -- 8-bit symmetric weights with 8-bit asymmetric activations (via the PT2E quantization flow). - - Supports both static and dynamic activations. - - Supports per-channel and per-tensor schemes. - - Supports linear, convolution, add, mul, cat, and adaptive avg pool 2d operators. - -Weight-only quantization is not currently supported on XNNPACK. - -### 8-bit Quantization using the PT2E Flow - -To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model: - -1) Create an instance of the `XnnpackQuantizer` class. Set quantization parameters. -2) Use `torch.export.export` to prepare for quantization. -3) Call `prepare_pt2e` to prepare the model for quantization. -4) For static quantization, run the prepared model with representative samples to calibrate the quantized tensor activation ranges. -5) Call `convert_pt2e` to quantize the model. -6) Export and lower the model using the standard flow. - -The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques. 
- -```python -import torch -import torchvision.models as models -from torchvision.models.mobilenetv2 import MobileNet_V2_Weights -from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import XNNPACKQuantizer, get_symmetric_quantization_config -from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner -from executorch.exir import to_edge_transform_and_lower -from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e - -model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() -sample_inputs = (torch.randn(1, 3, 224, 224), ) - -qparams = get_symmetric_quantization_config(is_per_channel=True) # (1) -quantizer = XNNPACKQuantizer() -quantizer.set_global(qparams) - -training_ep = torch.export.export(model, sample_inputs).module() # (2) -prepared_model = prepare_pt2e(training_ep, quantizer) # (3) - -for cal_sample in [torch.randn(1, 3, 224, 224)]: # Replace with representative model inputs - prepared_model(cal_sample) # (4) Calibrate - -quantized_model = convert_pt2e(prepared_model) # (5) - -et_program = to_edge_transform_and_lower( # (6) - torch.export.export(quantized_model, sample_inputs), - partitioner=[XnnpackPartitioner()], -).to_executorch() -``` - -See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information. - -### LLM quantization with quantize_ - -The XNNPACK backend also supports quantizing models with the [torchao](https://github.com/pytorch/ao) quantize_ API. This is most commonly used for LLMs, requiring more advanced quantization. Since quantize_ is not backend aware, it is important to use a config that is compatible with CPU/XNNPACK: - -* Quantize embeedings with IntxWeightOnlyConfig (with weight_dtype torch.int2, torch.int4, or torch.int8, using PerGroup or PerAxis granularity) -* Quantize linear layers with Int8DynamicActivationIntxWeightConfig (with weight_dtype=torch.int4, using PerGroup or PerAxis granularity) - -Below is a simple example, but a more detailed tutorial including accuracy evaluation on popular LLM benchmarks can be found in the [torchao documentation](https://docs.pytorch.org/ao/main/serving.html#mobile-deployment-with-executorch). - -```python -from torchao.quantization.granularity import PerGroup, PerAxis -from torchao.quantization.quant_api import ( - IntxWeightOnlyConfig, - Int8DynamicActivationIntxWeightConfig, - quantize_, -) - -# Quantize embeddings with 8-bits, per channel -embedding_config = IntxWeightOnlyConfig( - weight_dtype=torch.int8, - granularity=PerAxis(0), -) -qunatize_( - eager_model, - lambda m, fqn: isinstance(m, torch.nn.Embedding), -) - - -# Quatize linear layers with 8-bit dynamic activations and 4-bit weights -linear_config = Int8DynamicActivationIntxWeightConfig( - weight_dtype=torch.int4, - weight_granularity=PerGroup(32), -) -quantize_(eager_model, linear_config) -``` - ----- - -## Runtime Integration - -To run the model on-device, use the standard ExecuTorch runtime APIs. See [Running on Device](getting-started.md#running-on-device) for more information. - -The XNNPACK delegate is included by default in the published Android, iOS, and pip packages. When building from source, pass `-DEXECUTORCH_BUILD_XNNPACK=ON` when configuring the CMake build to compile the XNNPACK backend. - -To link against the backend, add the `xnnpack_backend` CMake target as a build dependency, or link directly against `libxnnpack_backend`. 
Due to the use of static registration, it may be necessary to link with whole-archive. This can typically be done by passing `"$"` to `target_link_libraries`. - -``` -# CMakeLists.txt -add_subdirectory("executorch") -... -target_link_libraries( - my_target - PRIVATE executorch - extension_module_static - extension_tensor - optimized_native_cpu_ops_lib - xnnpack_backend) -``` - -No additional steps are necessary to use the backend beyond linking the target. Any XNNPACK-delegated .pte file will automatically run on the registered backend. diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backends/xnnpack/reference/xnnpack-reference-arch-internals.md similarity index 98% rename from docs/source/backend-delegates-xnnpack-reference.md rename to docs/source/backends/xnnpack/reference/xnnpack-reference-arch-internals.md index 8b4338e703c..39daed1dc4a 100644 --- a/docs/source/backend-delegates-xnnpack-reference.md +++ b/docs/source/backends/xnnpack/reference/xnnpack-reference-arch-internals.md @@ -1,4 +1,4 @@ -# XNNPACK Delegate Internals +# Architecture and Internals This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high performance delegate is aimed to reduce CPU inference latency for ExecuTorch models. We will provide a brief introduction to the XNNPACK library and explore the delegate’s overall architecture and intended use cases. @@ -9,12 +9,12 @@ XNNPACK is a library of highly-optimized neural network operators for ARM, x86, A delegate is an entry point for backends to process and execute parts of the ExecuTorch program. Delegated portions of ExecuTorch models hand off execution to backends. The XNNPACK backend delegate is one of many available in ExecuTorch. It leverages the XNNPACK third-party library to accelerate ExecuTorch programs efficiently across a variety of CPUs. More detailed information on the delegates and developing your own delegates is available [here](compiler-delegate-and-partitioner.md). It is recommended that you get familiar with that content before continuing on to the Architecture section. ## Architecture -![High Level XNNPACK delegate Architecture](xnnpack-delegate-architecture.png) +![High Level XNNPACK delegate Architecture](/backends/xnnpack/xnnpack-delegate-architecture.png) ### Ahead-of-time In the ExecuTorch export flow, lowering to the XNNPACK delegate happens at the `to_backend()` stage. In this stage, the model is partitioned by the `XnnpackPartitioner`. Partitioned sections of the graph are converted to a XNNPACK specific graph represenationed and then serialized via flatbuffer. The serialized flatbuffer is then ready to be deserialized and executed by the XNNPACK backend at runtime. -![ExecuTorch XNNPACK delegate Export Flow](xnnpack-et-flow-diagram.png) +![ExecuTorch XNNPACK delegate Export Flow](/backends/xnnpack/xnnpack-et-flow-diagram.png) #### Partitioner The partitioner is implemented by backend delegates to mark nodes suitable for lowering. The `XnnpackPartitioner` lowers using node targets and module metadata. 
Some more references for partitioners can be found [here](compiler-delegate-and-partitioner.md)
diff --git a/docs/source/backends/xnnpack/reference/xnnpack-reference-partitioner.md b/docs/source/backends/xnnpack/reference/xnnpack-reference-partitioner.md
new file mode 100644
index 00000000000..c8c85ca628c
--- /dev/null
+++ b/docs/source/backends/xnnpack/reference/xnnpack-reference-partitioner.md
@@ -0,0 +1,8 @@
+# Partitioner API
+
+The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an `XnnpackPartitioner` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use case. For advanced use cases, the partitioner exposes the following options via the [constructor](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31):
+
+ - `configs`: Control which operators are delegated to XNNPACK. By default, all available operators are delegated. See [../config/\_\_init\_\_.py](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66) for an exhaustive list of available operator configs.
+ - `config_precisions`: Filter operators by data type. By default, all precisions are delegated. One or more of `ConfigPrecisionType.FP32`, `ConfigPrecisionType.STATIC_QUANT`, or `ConfigPrecisionType.DYNAMIC_QUANT`. See [ConfigPrecisionType](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24).
+ - `per_op_mode`: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false.
+ - `verbose`: If true, print additional information during lowering.
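+
+For example, a partitioner configured to delegate only FP32 and dynamically quantized operators, emitting an individual delegate call per operator, might look like the following. This is a minimal sketch based on the options above; it assumes `exported_program` is the result of `torch.export.export` on your model, and exact constructor signatures may vary between releases.
+
+```python
+from executorch.backends.xnnpack.partition.config.xnnpack_config import ConfigPrecisionType
+from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
+from executorch.exir import to_edge_transform_and_lower
+
+# Restrict delegation to FP32 and dynamically quantized operators,
+# and emit one delegate call per operator.
+partitioner = XnnpackPartitioner(
+    config_precisions=[
+        ConfigPrecisionType.FP32,
+        ConfigPrecisionType.DYNAMIC_QUANT,
+    ],
+    per_op_mode=True,
+)
+
+et_program = to_edge_transform_and_lower(
+    exported_program,  # the output of torch.export.export
+    partitioner=[partitioner],
+).to_executorch()
+```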
diff --git a/docs/source/backends/xnnpack/reference/xnnpack-reference-quantization.md b/docs/source/backends/xnnpack/reference/xnnpack-reference-quantization.md
new file mode 100644
index 00000000000..e3a02d4bffc
--- /dev/null
+++ b/docs/source/backends/xnnpack/reference/xnnpack-reference-quantization.md
@@ -0,0 +1,94 @@
+# Quantization
+
+The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. To quantize a PyTorch model for the XNNPACK backend, use the `XNNPACKQuantizer`. `Quantizers` are backend specific, which means the `XNNPACKQuantizer` is configured to quantize models to leverage the quantized operators offered by the XNNPACK library.
+
+## Supported Quantization Schemes
+The XNNPACK delegate supports the following quantization schemes:
+
+- 8-bit symmetric weights with 8-bit asymmetric activations (via the PT2E quantization flow).
+  - Supports both static and dynamic activations.
+  - Supports per-channel and per-tensor schemes.
+  - Supports linear, convolution, add, mul, cat, and adaptive avg pool 2d operators.
+
+Weight-only quantization is not currently supported on XNNPACK.
+
+## 8-bit Quantization using the PT2E Flow
+
+To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model:
+
+1) Create an instance of the `XNNPACKQuantizer` class. Set quantization parameters.
+2) Use `torch.export.export` to prepare for quantization.
+3) Call `prepare_pt2e` to prepare the model for quantization.
+4) For static quantization, run the prepared model with representative samples to calibrate the quantized tensor activation ranges.
+5) Call `convert_pt2e` to quantize the model.
+6) Export and lower the model using the standard flow.
+
+The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques.
+
+```python
+import torch
+import torchvision.models as models
+from torchvision.models.mobilenetv2 import MobileNet_V2_Weights
+from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import XNNPACKQuantizer, get_symmetric_quantization_config
+from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
+from executorch.exir import to_edge_transform_and_lower
+from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e
+
+model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
+sample_inputs = (torch.randn(1, 3, 224, 224), )
+
+qparams = get_symmetric_quantization_config(is_per_channel=True) # (1)
+quantizer = XNNPACKQuantizer()
+quantizer.set_global(qparams)
+
+training_ep = torch.export.export(model, sample_inputs).module() # (2)
+prepared_model = prepare_pt2e(training_ep, quantizer) # (3)
+
+for cal_sample in [torch.randn(1, 3, 224, 224)]: # Replace with representative model inputs
+    prepared_model(cal_sample) # (4) Calibrate
+
+quantized_model = convert_pt2e(prepared_model) # (5)
+
+et_program = to_edge_transform_and_lower( # (6)
+    torch.export.export(quantized_model, sample_inputs),
+    partitioner=[XnnpackPartitioner()],
+).to_executorch()
+```
+
+See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information.
+
+## LLM quantization with quantize_
+
+The XNNPACK backend also supports quantizing models with the [torchao](https://github.com/pytorch/ao) `quantize_` API. This is most commonly used for LLMs, which require more advanced quantization. Since `quantize_` is not backend aware, it is important to use a config that is compatible with CPU/XNNPACK:
+
+* Quantize embeddings with `IntxWeightOnlyConfig` (with `weight_dtype` torch.int2, torch.int4, or torch.int8, using `PerGroup` or `PerAxis` granularity)
+* Quantize linear layers with `Int8DynamicActivationIntxWeightConfig` (with `weight_dtype=torch.int4`, using `PerGroup` or `PerAxis` granularity)
+
+Below is a simple example, but a more detailed tutorial including accuracy evaluation on popular LLM benchmarks can be found in the [torchao documentation](https://docs.pytorch.org/ao/main/serving.html#mobile-deployment-with-executorch).
+
+```python
+import torch
+from torchao.quantization.granularity import PerGroup, PerAxis
+from torchao.quantization.quant_api import (
+    IntxWeightOnlyConfig,
+    Int8DynamicActivationIntxWeightConfig,
+    quantize_,
+)
+
+# Quantize embeddings to 8 bits, per channel
+embedding_config = IntxWeightOnlyConfig(
+    weight_dtype=torch.int8,
+    granularity=PerAxis(0),
+)
+quantize_(
+    eager_model,
+    embedding_config,
+    lambda m, fqn: isinstance(m, torch.nn.Embedding),
+)
+
+# Quantize linear layers with 8-bit dynamic activations and 4-bit weights
+linear_config = Int8DynamicActivationIntxWeightConfig(
+    weight_dtype=torch.int4,
+    weight_granularity=PerGroup(32),
+)
+quantize_(eager_model, linear_config)
+```
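+
+After `quantize_` has been applied, the model is exported and lowered in the same way as in the PT2E example above. A minimal sketch, assuming `eager_model` and a representative `sample_inputs` tuple are defined as in your own setup:
+
+```python
+import torch
+from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
+from executorch.exir import to_edge_transform_and_lower
+
+# Export the quantize_-transformed model and lower it to the XNNPACK backend.
+et_program = to_edge_transform_and_lower(
+    torch.export.export(eager_model, sample_inputs),
+    partitioner=[XnnpackPartitioner()],
+).to_executorch()
+```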
diff --git a/docs/source/backends/xnnpack/reference/xnnpack-reference.md b/docs/source/backends/xnnpack/reference/xnnpack-reference.md
new file mode 100644
index 00000000000..32eab255f7e
--- /dev/null
+++ b/docs/source/backends/xnnpack/reference/xnnpack-reference.md
@@ -0,0 +1,19 @@
+# Reference
+
+## Features
+
+**→{doc}`xnnpack-reference-partitioner` — Partitioner options.**
+
+**→{doc}`xnnpack-reference-quantization` — Supported quantization schemes.**
+
+## Internals
+
+**→{doc}`xnnpack-reference-arch-internals` — XNNPACK backend internals.**
+
+```{toctree}
+:hidden:
+:maxdepth: 1
+
+xnnpack-reference-arch-internals
+xnnpack-reference-partitioner
+xnnpack-reference-quantization
+```
diff --git a/docs/source/backends/xnnpack/tutorials/xnnpack-basic-tutorial.md b/docs/source/backends/xnnpack/tutorials/xnnpack-basic-tutorial.md
new file mode 100644
index 00000000000..da81b51c2b8
--- /dev/null
+++ b/docs/source/backends/xnnpack/tutorials/xnnpack-basic-tutorial.md
@@ -0,0 +1,94 @@
+# Preparing a Model
+
+This tutorial demonstrates the creation of an ExecuTorch .pte file for the MobileNet V3 Small model using the XNNPACK backend. This .pte file can be run on a variety of devices, including Android, iOS, and desktop.
+
+## Step 1: Environment Setup
+
+This tutorial is intended to be run from a Mac or Linux host and uses Conda for Python environment management. For full setup details and system requirements, see [Getting Started with ExecuTorch](/getting-started).
+
+Create a Conda environment and install the ExecuTorch Python package.
+```bash
+conda create -y --name executorch python=3.12
+conda activate executorch
+conda install executorch
+```
+
+## Step 2: Model Preparation
+
+Create a Python file named `export_mv3.py`. This script will be responsible for loading the MobileNet V3 model from torchvision and creating an XNNPACK-targeted .pte file.
+
+```py
+# export_mv3.py
+from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
+from executorch.exir import to_edge_transform_and_lower
+import torch
+import torchvision
+```
+
+### Model Instantiation and Example Inputs
+
+Instantiate the MobileNet V3 Small model from [torchvision](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v3_small.html#torchvision.models.mobilenet_v3_small). The export process also needs an example model input to trace the model. The model takes a single tensor, so we'll create a single-element tuple with a tensor of size (1,3,224,224), matching the size of the input we'll provide at runtime.
+```py
+model = torchvision.models.mobilenet_v3_small(weights='IMAGENET1K_V1').eval()
+example_inputs = (torch.randn(1,3,224,224),)
+```
+
+### Lower the Model
+
+Next, export and lower the model to ExecuTorch. 
Note that the `XnnpackPartitioner` passed to the `partitioner` parameter tells ExecuTorch to target the XNNPACK backend. +```py +exported_program = torch.export.export(model, example_inputs) + +executorch_program = to_edge_transform_and_lower( + exported_program, + partitioner=[XnnpackPartitioner()], +).to_executorch() + +executorch_program.save("mv3_xnnpack.pte") +``` + +### Run the Script + +Save the above script to export_mv3.py and run the script. You should see a file named `mv3_xnnpack.pte` in the current directory. +```bash +python export_mv3.py +``` + +## Step 3: Running the Model + +The .pte file created in the previous step can be run on a variety of devices, including Android, iOS, and desktop. ExecuTorch provides runtime APIs and language bindings for a variety of platforms. This tutorial will demonstrate running the model on a desktop using the Python runtime. + +### Smoke Test + +First, we'll verify that the model loads and runs correctly by running the model with an all ones input tensor. Create a new script, named `run_mv3.py`, and add the following code. +```py +# run_mv3.py + +from executorch.runtime import Runtime +import torch + +runtime = Runtime.get() + +input_tensor = torch.ones(1, 3, 224, 224) +program = runtime.load_program("mv3_xnnpack.pte") +method = program.load_method("forward") +outputs = method.execute([input_tensor])[0] + +print(outputs) +``` + +When running the script with `python run_mv3.py`, you should see a tensor of size (1, 1000) printed to the console. +``` +tensor([[-2.9747e-02, -1.1634e-01, 2.3453e-01, -1.1516e-01, 2.8407e-01, + 1.3327e+00, -1.2022e+00, -4.1820e-01, -8.6148e-01, 9.6264e-01, + 2.0528e+00, 3.2284e-02, -6.7234e-01, -1.3766e-01, -7.8548e-01, + ... + ]]) +``` + + +# Next Steps + + - See [Edge Platforms](/edge-platforms-section) to deploy the .pte file on Android, iOS, or other platforms. + - See [Model Export and Lowering](/using-executorch-export) for more information on model preparation. + - See [XNNPACK Overview](/backends/xnnpack/xnnpack-overview) for more information about the XNNPACK backend. diff --git a/docs/source/backends/xnnpack/tutorials/xnnpack-tutorials.md b/docs/source/backends/xnnpack/tutorials/xnnpack-tutorials.md new file mode 100644 index 00000000000..ab6bf307c4c --- /dev/null +++ b/docs/source/backends/xnnpack/tutorials/xnnpack-tutorials.md @@ -0,0 +1,9 @@ +# Tutorials + +**→{doc}`xnnpack-basic-tutorial` — Lower and run a model on the XNNPACK backend.** + +```{toctree} +:hidden: +:maxdepth: 1 + +xnnpack-basic-tutorial diff --git a/docs/source/xnnpack-delegate-architecture.png b/docs/source/backends/xnnpack/xnnpack-delegate-architecture.png similarity index 100% rename from docs/source/xnnpack-delegate-architecture.png rename to docs/source/backends/xnnpack/xnnpack-delegate-architecture.png diff --git a/docs/source/xnnpack-et-flow-diagram.png b/docs/source/backends/xnnpack/xnnpack-et-flow-diagram.png similarity index 100% rename from docs/source/xnnpack-et-flow-diagram.png rename to docs/source/backends/xnnpack/xnnpack-et-flow-diagram.png diff --git a/docs/source/backends/xnnpack/xnnpack-overview.md b/docs/source/backends/xnnpack/xnnpack-overview.md new file mode 100644 index 00000000000..46eb687a908 --- /dev/null +++ b/docs/source/backends/xnnpack/xnnpack-overview.md @@ -0,0 +1,87 @@ +# XNNPACK Backend + +The XNNPACK delegate is the ExecuTorch solution for CPU execution on mobile CPUs. 
[XNNPACK](https://github.com/google/XNNPACK/tree/master) is a library that provides optimized kernels for machine learning operators on Arm and x86 CPUs.
+
+## Features
+
+- Wide operator support on Arm and x86 CPUs, available on any modern mobile phone.
+- Support for a wide variety of quantization schemes and quantized operators.
+- Supports fp32 and fp16 activations.
+- Supports 8-bit quantization.
+
+## Target Requirements
+
+- ARM64 on Android, iOS, macOS, Linux, and Windows.
+- ARMv7 (with NEON) on Android.
+- ARMv6 (with VFPv2) on Linux.
+- x86 and x86-64 (up to AVX512) on Windows, Linux, Android.
+
+## Development Requirements
+
+The XNNPACK delegate does not introduce any development system requirements beyond those required by
+the core ExecuTorch runtime.
+
+----
+
+## Using the XNNPACK Backend
+
+To target the XNNPACK backend during the export and lowering process, pass an instance of the `XnnpackPartitioner` to `to_edge_transform_and_lower`. The example below demonstrates this process using the MobileNet V2 model from torchvision.
+
+```python
+import torch
+import torchvision.models as models
+from torchvision.models.mobilenetv2 import MobileNet_V2_Weights
+from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
+from executorch.exir import to_edge_transform_and_lower
+
+mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
+sample_inputs = (torch.randn(1, 3, 224, 224), )
+
+et_program = to_edge_transform_and_lower(
+    torch.export.export(mobilenet_v2, sample_inputs),
+    partitioner=[XnnpackPartitioner()],
+).to_executorch()
+
+with open("mv2_xnnpack.pte", "wb") as file:
+    et_program.write_to_file(file)
+```
+
+See [Partitioner API](/backends/xnnpack/reference/xnnpack-reference-partitioner.md) for a reference on available partitioner options.
+
+----
+
+## Quantization
+
+The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. See [XNNPACK Quantization](/backends/xnnpack/reference/xnnpack-reference-quantization.md) for more information on available quantization schemes and APIs.
+
+----
+
+## Runtime Integration
+
+To run the model on-device, use the standard ExecuTorch runtime APIs.
+
+The XNNPACK delegate is included by default in the published Android, iOS, and pip packages. When building from source, pass `-DEXECUTORCH_BUILD_XNNPACK=ON` when configuring the CMake build to compile the XNNPACK backend. See [Running on Device](/getting-started.md#running-on-device) for more information.
+
+To link against the backend, add the `executorch_backends` CMake target as a build dependency, or link directly against `libxnnpack_backend`. Due to the use of static registration, it may be necessary to link with whole-archive. This can typically be done by passing `"$<LINK_LIBRARY:WHOLE_ARCHIVE,xnnpack_backend>"` to `target_link_libraries`.
+
+```
+# CMakeLists.txt
+add_subdirectory("executorch")
+...
+target_link_libraries(
+    my_target
+    PRIVATE executorch
+            executorch_backends
+            ...
+)
+```
+
+No additional steps are necessary to use the backend beyond linking the target. Any XNNPACK-delegated .pte file will automatically run on the registered backend.
+
+```{toctree}
+:maxdepth: 1
+:hidden:
+:caption: XNNPACK Backend
+
+reference/xnnpack-reference
+tutorials/xnnpack-tutorials
+```
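+
+To confirm how much of a model was actually delegated before serializing, the delegation summary from the ExecuTorch devtools can be printed. This is a minimal sketch building on the MobileNet V2 example above (it assumes the same `mobilenet_v2`, `sample_inputs`, and `XnnpackPartitioner` setup):
+
+```python
+from executorch.devtools.backend_debug import get_delegation_info
+
+edge_manager = to_edge_transform_and_lower(
+    torch.export.export(mobilenet_v2, sample_inputs),
+    partitioner=[XnnpackPartitioner()],
+)
+
+# Summarize which operators were delegated to XNNPACK and which remained undelegated.
+delegation_info = get_delegation_info(edge_manager.exported_program().graph_module)
+print(delegation_info.get_summary())
+
+et_program = edge_manager.to_executorch()
+```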