From f9647a9a8fcfc0afece97e1b54d9e672937d9c2e Mon Sep 17 00:00:00 2001 From: Rob Elliott Date: Fri, 8 Aug 2025 09:40:43 +0000 Subject: [PATCH 1/6] Adding VGF to documentation and revising various out of date parts Signed-off-by: Rob Elliott Change-Id: I303bd9d91c80edd5176f242efecfd616b987860e --- backends/arm/README.md | 150 ++++++++---------- docs/source/backends-arm-ethos-u.md | 2 +- docs/source/index.md | 2 +- ...utorial-arm-ethos-u.md => tutorial-arm.md} | 123 +++++--------- examples/arm/ethos_u_minimal_example.ipynb | 10 +- 5 files changed, 121 insertions(+), 166 deletions(-) rename docs/source/{tutorial-arm-ethos-u.md => tutorial-arm.md} (75%) diff --git a/backends/arm/README.md b/backends/arm/README.md index 9fa8ff8f5be..11b60249729 100644 --- a/backends/arm/README.md +++ b/backends/arm/README.md @@ -5,43 +5,70 @@ This subtree contains the Arm(R) Delegate implementation for ExecuTorch. This delegate is structured to, over time, support a number of different Arm devices through an AoT flow which targets multiple Arm IP using the TOSA standard. -The expected flow is: - * torch.nn.module -> TOSA -> command_stream for fully AoT flows e.g. embedded. - * torch.nn.module -> TOSA for flows supporting a JiT compilation step. - -Current backend support is being developed for TOSA to Ethos(TM)-U55/65/85 via the -ethos-u-vela compilation stack. which follows the fully AoT flow. - -## Layout +For more information on TOSA see https://www.mlplatform.org/tosa/tosa_spec.html + +The expected flows are: +* torch.nn.module -> TOSA for development and validation of model export +* torch.nn.module -> TOSA/VGF for flows supporting a JiT compilation step. +* torch.nn.module -> TOSA -> command_stream for fully AoT flows e.g. embedded. + +Currently device support is for: +* TOSA to Ethos(TM)-U55/65/85 via the ethos-u-vela compilation stack. + * This is cross-compiled to the appropriate target CPU + * There is a seperate arm_executor_runner for bare-metal platforms +* TOSA to VGF via the model-converter for devices supporting the ML SDK for Vulkan(R) + * The VGF graph represents TOSA directly in a SPIR-V(TM) standardized form. + * As the VGF delegate runs on Vulkan, it's required to be built with the Vulkan delegate also present. + +Currently supported development platforms are: +* For ahead of time tooling + * Linux aarch64 + * Linux x86_64 + * macOS with Apple silicon +* Bare metal builds For the Ethos-U target and Cortex-M targets + * Full testing is available in tree for the Corstone(TM) FVPs + * This is a reference implementation for porting to silicon targets +* Linux target support For VGF capable targets + * This flow re-uses the common executor_runner + +## Layout of key components Export: -- `ethosu_backend.py` - Main entrypoint for the EthosUBackend. For more information see the section on -[Arm Backend Architecture](#arm-backend-architecture). For examples of use see `executorch/examples/arm`. -- `tosa_mapping.py` - utilities for mapping edge dialect to TOSA -- `tosa_quant_utils.py` - utilities for mapping quantization information to TOSA encoding +* `tosa_backend.py` - The TOSA conversion flow all other backends rely on. +* `ethosu/backend.py` - Main entrypoint for the EthosUBackend. +* `vgf_backend.py` - Main entrypoint for VgfBackend. + * For more information see the section on Arm Backend Architecture](#arm-backend-architecture). +* `scripts` - For the core scripts which prepare AoT dependencies such as backend compilers. 
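+
+As a quick orientation, the sketch below shows how the entrypoints listed above are typically driven from Python. It is illustrative only: the exact import paths and compile-spec helper names (`EthosUCompileSpec`, `EthosUPartitioner`) are assumptions that may vary between ExecuTorch versions, so treat the in-tree examples as authoritative.
+
+```python
+# Illustrative sketch only -- import paths and helper names below are
+# assumptions and may differ between ExecuTorch versions.
+import torch
+from executorch.exir import to_edge_transform_and_lower
+
+class Add(torch.nn.Module):
+    def forward(self, x, y):
+        return x + y
+
+exported = torch.export.export(Add(), (torch.ones(4), torch.ones(4)))
+
+# Hypothetical names for the Ethos-U entrypoint listed above.
+from executorch.backends.arm.ethosu import EthosUCompileSpec, EthosUPartitioner
+
+compile_spec = EthosUCompileSpec("ethos-u55-128")
+edge = to_edge_transform_and_lower(
+    exported, partitioner=[EthosUPartitioner(compile_spec)]
+)
+executorch_program = edge.to_executorch()  # serializable to a .pte file
+```
+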
-Operators:
-- `node_visitor.py` - Base class for edge operator lowering
-- `op_*.py` - Edge operator lowering/serialization to TOSA
+Passes (which prepare the partitioned graphs for TOSA conversion):
+* `_passes/arm_pass_manager.py` - Pass manager. Will decide which passes need to be applied depending on the compile_spec.
+* `_passes/*_pass.py` - Compiler passes derived from ExportPass
 
-Passes:
-- `arm_pass_manager.py` - Pass manager. Will decide which passes need to be applied depending on the compile_spec.
-- `*_pass.py` - Compiler passes derived from ExportPass
+Operators (which handle mapping of operators to TOSA):
+* `operators/node_visitor.py` - Base class for edge operator lowering
+* `operators/op_*.py` - Edge operator lowering/serialization to TOSA
 
 Quantization:
-- `arm_quantizer.py` - Quantizers for Arm backend. Contains the EthosUQuantizer which inherits from the TOSAQuantizer
-- `arm_quantizer_utils.py` - Utilities for quantization
+* `quantizer/arm_quantizer.py` - Quantizers for Arm backend.
+  * Contains the EthosUQuantizer which inherits from the TOSAQuantizer
+  * Contains the VgfQuantizer which inherits from the TOSAQuantizer
+* `arm_quantizer_utils.py` - Utilities for quantization
 
 Runtime:
-- `runtime/ArmEthosUBackend.cpp` - The Arm backend implementation of the ExecuTorch runtime backend (BackendInterface) for Ethos-U
+- `runtime/ArmEthosUBackend.cpp` - The Arm delegate for Ethos-U targets
+- `runtime/VGFBackend.cpp` - The Arm delegate for VGF capable targets
+- `CMakeLists.txt` - the build configuration for both targets
 
 Other:
-- `third-party/` - Dependencies on other code - in particular the TOSA serialization_lib for compiling to TOSA and the ethos-u-core-driver for the bare-metal backend supporting Ethos-U
+- `third-party/` - Dependencies for runtime builds
 - `test/` - Unit test and test support functions
 
+
 ## Testing
 
-After a setup you can run unit tests with the test_arm_baremetal.sh script.
+The unit tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preperation has been performed as outlined in the guide available here https://docs.pytorch.org/executorch/main/tutorial-arm.html
+
+After setup you can run unit tests with the test_arm_baremetal.sh script.
 
 To run the pytests suite run
 
@@ -62,6 +89,7 @@ backends/arm/test/test_arm_baremetal.sh test_full_ethosu_fvp
 ```
 ## Unit tests
+
 This is the structure of the test directory
 
 ```
@@ -112,89 +140,51 @@ Please note that installing model test dependencies is a standalone process. Whe
 List of models with specific dependencies:
 - Stable Diffusion: [diffusers](https://github.com/huggingface/diffusers/tree/main)
 
-## Passes
-
-With the default passes in the Arm Ethos-U backend, assuming the model lowers fully to the
-Ethos-U, the exported program is composed of a Quantize node, Ethos-U custom delegate
-and a Dequantize node. In some circumstances, you may want to feed quantized input to the Neural
-Network straight away, e.g. if you have a camera sensor outputting (u)int8 data and keep all the
-arithmetic of the application in the int8 domain. For these cases, you can apply the
-`exir/passes/quantize_io_pass.py`. See the unit test in `executorch/backends/arm/
-test/passes/test_ioquantization_pass.py`for an example how to feed quantized inputs and
-obtain quantized outputs.
-
-
-### Code coverage
-
-To get code coverage:
-
-```
-coverage run --source=<SRC> --rcfile=backends/arm/test/.coveragerc -m pytest \
---config-file=/dev/null backends/arm/test/
-```
-
-All files in `SRC` and its child directories will be analysed for code coverage,
-unless explicitly exluded in the .coveragerc file. If using venv this might be
-under `env/lib/python<VERSION>/site-packages/executorch/`. To get the
-absolute path, run:
-
-```
-python -c "import executorch; print(executorch.__path__)"
-```
-
-This contains a list of paths where the source directory is located. Pick the
-one that is located in `env/lib`. If that does not work try the others. Add
-`backends/arm` to the path in `--source` to only get code coverage for the Arm
-backend.
-
-### A note on unit tests
-There are currently 3 ways we unit test our code.
-1. TOSA main inference. These tests are using non-quantized data and ops. Edge IR representation of the module is lowered to a TOSA flatbuffer, which is tested for numerical correcteness using the ```tosa_reference_model``` tool.
-2. TOSA base inference. Same as above, but data and ops are quantized.
-3. Ethos-U55. These tests use quantized data and ops (aka TOSA base inference). Edge IR is lowered to a TOSA flatbuffer, which is fed into the Vela compiler. Theses tests are functional tests and do not test numerical correctness, since that should be guaranteed by TOSA.
+There are currently a number of ways we unit test our code:
+1. TOSA FP. These tests use non-quantized data and ops. Edge IR representation of the module is lowered to a TOSA flatbuffer, which is tested for numerical correctness using the ```tosa_reference_model``` tool.
+2. TOSA INT. Same as above, but data and ops are integer, representing a quantized domain.
+3. Ethos-U. These tests use quantized data and ops (i.e. TOSA INT). Edge IR is lowered to a TOSA flatbuffer, which is fed into the Vela compiler. These tests are functional tests and do not test numerical correctness, since that should be guaranteed by TOSA.
+4. VGF. These tests enable both FP and INT testing for the VGF/SPIR-V representation of TOSA.
 
-In order to distinguise between the different tests, the following suffixes have been added to the respective test case.
-* ```_MI``` for main inference
-* ```_BI``` for base inference
-* ```_U55_BI``` for base inference on U55
+In order to distinguish between general and more targeted tests, you will find suffixes such as FP, INT, U55, VGF, etc.
 
 ## Help & Improvements
 If you have problems or questions, or have suggestions for ways to make
 implementation and testing better, please reach out to the Arm team developing this delegate, or
-create an issue on [github](https://www.github.com/pytorch/executorch/issues).
+create an issue on [github](https://www.github.com/pytorch/executorch/issues) and add the "Partner: Arm" label.
 
 # Arm Backend Architecture
 
 The broad principle with the Arm backend implemention for ExecuTorch is to support multiple Arm devices and device configurations through a largely Homogeneous flow with maximal sharing of class logic.
 
-The EthosUBackend is currently the one user facing API that target the Ethos-U55 and Ethos-U85 hardware IP. It is using the TOSABackend under the hood to share code and functionality, but also to separate testing possibilities to the TOSA flow itself.
+The EthosUBackend and VgfBackend are the user-facing targets available for the Ethos-U55 and Ethos-U85 hardware IP, and VGF targets. Both use the TOSABackend under the hood to share compiler passes and legalisation, along with other code and functionality, and to enable separate testing of the TOSA flow itself.
 
 In practice for compilation, this means that the flow goes via [Arm TOSA](https://www.mlplatform.org/tosa/tosa_spec.html) to produce a common IR and quantization behaviour compatible with our various IP, and typically, device-specific backends to further lower to a device specific binary which can happen ahead of time (within the Python development flow) or at runtime (during a JIT compilation stage).
 
-In practice for the runtime, this means we will share common runtime backend functionality, with the aim for features like debugging to be available through common tooling.
-
 ## Arm Backend Status and Maturity
 
-The Arm EthosU Backend should be considered a prototype quality at this point, likely subject to significant change and improvement, and with a limited coverage of functionality. We are actively developing this codebase.
+The Arm EthosU Backend should be considered of reasonable quality at this point, supporting a large number of operators and major networks.
+The Arm VGF Backend should be considered of alpha quality, likely subject to significant change and improvement, and with a limited coverage of functionality.
+We are actively developing the codebase for both targets.
 
 ## Current flows
 
-The EthosUBackend has a two stage process,
-- Compile to TOSA to rationalise the graph into known hardware support profiles. Currently this is to v1.0 TOSA INT with specific concern to a subset which gives support on Ethos-U55 and Ethos-U85, the target of the initial prototype efforts. This calls into the TOSABackend.
-- Lower via the ethos-u-vela compilation flow which takes TOSA v1.0 as an input and produces a low level commandstream for the hardware which is then passed via the delegate to the ethos-u-core-driver for direct execution.
+The Arm backends have a two-stage process:
+1. Compile to TOSA by applying FX passes and legalizing the graph into supported TOSA profiles. Currently this targets TOSA v1.0 INT/FP, via calls into the TOSABackend.
+2. Lower via the target compilation flow, which takes TOSA v1.0 as an input and produces a lower-level format for the hardware
+   * For Ethos-U this is a hardware command stream that can be executed directly on the device
+   * For VGF this is a SPIR-V representation of TOSA to enable JiT compilation on the target platform
 
-The EthosUPartitioner is currenly used to ensure the operations converted are Ethos-U compatible, but will be extended to offer spec-correct TOSA Base inference and TOSA Main Inference generation in future.
+All targets provide a partitioner to enable the standard partially delegated flow offered by ExecuTorch.
 
-There is also a generic TOSABackend with accompanying TOSAPartitioner and TOSAQuantizer, which are used by the EthosUBackend and friends. The Arm TOSA Backend can be used by it's own to verify the lowering to the TOSA representation of the model (refer to the unit tests in backends/arm/test which uses the TOSA backend in the test suites).
+There is also a generic TOSABackend with accompanying TOSAPartitioner and TOSAQuantizer; these can be used directly to verify the lowering to the TOSA representation of the model (refer to the unit tests in backends/arm/test, which use the TOSA backend in the test suites).
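+
+To make the TOSA-only verification flow concrete, a minimal sketch is shown below. The import paths and helper names here are assumptions for illustration; the unit tests under `backends/arm/test` are the authoritative reference for current usage.
+
+```python
+# Minimal sketch of lowering to the TOSA representation only; import paths
+# and helper names are assumptions -- see backends/arm/test for real usage.
+import torch
+from executorch.exir import to_edge_transform_and_lower
+from executorch.backends.arm.tosa_partitioner import TOSAPartitioner  # assumed path
+from executorch.backends.arm.tosa_specification import TosaSpecification  # assumed path
+
+class MulAdd(torch.nn.Module):
+    def forward(self, x, y):
+        return x * y + y
+
+exported = torch.export.export(MulAdd(), (torch.ones(4), torch.ones(4)))
+
+# An FP profile needs no quantization; an INT profile would require it.
+spec = TosaSpecification.create_from_string("TOSA-1.0+FP")  # assumed helper
+partitioner = TOSAPartitioner(spec)  # assumed constructor; tests may build a compile spec first
+
+edge = to_edge_transform_and_lower(exported, partitioner=[partitioner])
+
+# The delegated payload is a TOSA flatbuffer which the unit tests check for
+# numerical correctness with the tosa_reference_model.
+print(edge.exported_program().graph_module)
+```
+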
 ### Controlling compilation
 
 It is possible to control the compilation flow to aid in development and debug of both networks and the code itself.
 
-Configuration of the EthosUBackend export flow is controlled by CompileSpec information (essentially used as compilation flags) to determine which of these outputs is produced. In particular this allows for use of the tosa_reference_model to run intermediate output to check for correctness and quantization accuracy without a full loop via hardware implemntation.
-
-As this is in active development see the EthosUBackend for accurate information on [compilation flags](https://github.com/pytorch/executorch/blob/29f6dc9353e90951ed3fae3c57ae416de0520067/backends/arm/arm_backend.py#L319-L324)
+Configuration of the export flow is controlled by CompileSpec information (essentially used as compilation flags) to determine which outputs are produced. In particular this allows for capturing intermediate forms during lowering, and for use of the tosa_reference_model to run intermediate outputs to check for correctness and quantization accuracy without a full loop via a hardware implementation.
 
 ## Model specific and optional passes
 The current TOSA version does not support int64. However, int64 is commonly used in many models. In order to lower the operators with int64 inputs and/or outputs to TOSA, a few passes have been developed to handle the int64-related issues. The main idea behind these passes is to replace the uses of int64 with int32 where feasible.
diff --git a/docs/source/backends-arm-ethos-u.md b/docs/source/backends-arm-ethos-u.md
index 71e3be105de..f37319eb828 100644
--- a/docs/source/backends-arm-ethos-u.md
+++ b/docs/source/backends-arm-ethos-u.md
@@ -95,4 +95,4 @@ Finally, run the elf file on FVP using the script
 `executorch/backends/arm/scripts/run_fvp.sh --elf=executorch/mv2_arm_ethos_u55/cmake-out/arm_executor_runner --target=ethos-u55-128`.
 
 ## See Also
-- [Arm Ethos-U Backend Tutorial](tutorial-arm-ethos-u.md)
+- [Arm Ethos-U Backend Tutorial](tutorial-arm.md)
diff --git a/docs/source/index.md b/docs/source/index.md
index f0ec1d2c6b3..7fc4181c511 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -148,7 +148,7 @@ using-executorch-faqs
 Building an ExecuTorch Android Demo App
 Building an ExecuTorch iOS Demo App
 
-tutorial-arm-ethos-u.md
+tutorial-arm.md
 ```
 
 ```{toctree}
diff --git a/docs/source/tutorial-arm-ethos-u.md b/docs/source/tutorial-arm.md
similarity index 75%
rename from docs/source/tutorial-arm-ethos-u.md
rename to docs/source/tutorial-arm.md
index a1442a90fbe..1804054c829 100644
--- a/docs/source/tutorial-arm-ethos-u.md
+++ b/docs/source/tutorial-arm.md
@@ -1,5 +1,4 @@
-
-# Arm Ethos-U Backend Tutorial
+# Arm(R) Backend Tutorial
 
 ::::{grid} 2
 
@@ -13,17 +12,24 @@
 :::{grid-item-card}  What you will learn in this tutorial:
 :class-card: card-prerequisites
 
-In this tutorial you will learn how to export a simple PyTorch model for ExecuTorch Arm Ethos-U backend delegate and run it on a Corstone FVP emulators.
+In this tutorial you will learn how to export a simple PyTorch model for ExecuTorch Arm backends.
 :::
 ::::
 
 ```{warning}
-This ExecuTorch backend delegate is under active development. You may encounter some rough edges and features which may be documented or planned but not implemented.
+This delegate is under active development; to get the best results, please use a recent version.
+The TOSA and Ethos(TM) backend support is reasonably mature and used in production by some users.
+The VGF backend support is in early development and you may encounter issues.
+You may encounter some rough edges and features which may be documented or planned but not yet implemented; please refer to the in-tree documentation for the latest status of features.
 ```
 
 ```{tip}
-If you are already familiar with this delegate, you may want to jump directly to the examples source dir - [https://github.com/pytorch/executorch/tree/main/examples/arm](https://github.com/pytorch/executorch/tree/main/examples/arm)
+If you are already familiar with this delegate, you may want to jump directly to the examples:
+* [https://github.com/pytorch/executorch/tree/main/examples/arm](https://github.com/pytorch/executorch/tree/main/examples/arm)
+* [https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb](Compilation for Ethos-U)
+* [https://github.com/pytorch/executorch/blob/main/examples/arm/vgf_minimal_example.ipynb](Compilation for VGF/ML-SDK)
+* [https://github.com/pytorch/executorch/blob/main/examples/arm/aot_arm_compiler.py](A commandline compiler for example models)
 ```
 
 ## Prerequisites
 
 Let's make sure you have everything you need before you get started.
 
 ### Hardware
 
-To successfully complete this tutorial, you will need a Linux-based host machine with Arm aarch64 or x86_64 processor architecture.
+To successfully complete this tutorial, you will need a Linux or macOS host machine with Arm aarch64 or x86_64 processor architecture.
 
-The target device will be an embedded platform with an Arm Cortex-M CPUs and Ethos-U NPUs (ML processor). This tutorial will show you how to run PyTorch models on both.
+The target device will be an emulated platform to enable development without a specific development board. This tutorial has guidance for both Ethos-U targets and VGF via the ML SDK for Vulkan®.
 
-We will be using a [Fixed Virtual Platform (FVP)](https://www.arm.com/products/development-tools/simulation/fixed-virtual-platforms), simulating [Corstone-300](https://developer.arm.com/Processors/Corstone-300)(cs300) and [Corstone-320](https://developer.arm.com/Processors/Corstone-320)(cs320)systems. Since we will be using the FVP (think of it as virtual hardware), we won't be requiring any real embedded hardware for this tutorial.
+For Ethos-U and Cortex-M, we will be using a [Fixed Virtual Platform (FVP)](https://www.arm.com/products/development-tools/simulation/fixed-virtual-platforms), simulating [Corstone-300](https://developer.arm.com/Processors/Corstone-300)(cs300) and [Corstone-320](https://developer.arm.com/Processors/Corstone-320)(cs320) systems. Since we will be using the FVP (think of it as virtual hardware), we won't be requiring any real embedded hardware for this tutorial.
 
-### Software
+For VGF we will be using the [ML SDK for Vulkan(R)](https://github.com/arm/ai-ml-sdk-for-vulkan/) to emulate the program consumer.
 
-First, you will need to install ExecuTorch. Please follow the recommended tutorials if you haven't already, to set up a working ExecuTorch development environment.
+### Software
 
-To generate software which can be run on an embedded platform (real or virtual), we will need a tool chain for cross-compilation and an Arm Ethos-U software development kit, including the Vela compiler for Ethos-U NPUs.
+First, you will need to install ExecuTorch. Please follow the recommended tutorials if you haven't already, to set up a working ExecuTorch development environment.
For the VGF backend it's recommended you [install from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html), or from a [nightly](https://download.pytorch.org/whl/nightly/executorch/). -In the following sections we will walk through the steps to download each of the dependencies listed above. +In addition to this, you need to install a number of SDK dependencies for generating Ethos-U command streams or VGF files. There are scripts which automate this, which are found in the main [ExecuTorch repository](https://github.com/pytorch/executorch/examples/arm/). ## Set Up the Developer Environment -In this section, we will do a one-time setup, like downloading and installing necessary software, for the platform support files needed to run ExecuTorch programs in this tutorial. +In this section, we will do a one-time setup of the platform support files needed to run ExecuTorch programs in this tutorial. It is recommended to run the script in a conda or venv environment. -For that we will use the `examples/arm/setup.sh` script to pull each item in an automated fashion. It is recommended to run the script in a conda environment. +With a checkout of the ExecuTorch repository, we will use the `examples/arm/setup.sh` script to pull each item in an automated fashion. + +For Ethos-U run: ```bash -examples/arm/setup.sh --i-agree-to-the-contained-eula +./examples/arm/setup.sh --i-agree-to-the-contained-eula ``` -Upon successful execution, you can directly go to [the next step](#convert-the-pytorch-model-to-the-pte-file). - -As mentioned before, we currently support only Linux based platforms with x86_64 or aarch64 processor architecture. Let’s make sure we are indeed on a supported platform. +For VGF run: ```bash -uname -s -# Linux - -uname -m -# x86_64 or aarch64 +./examples/arm/setup.sh --i-agree-to-the-contained-eula --disable-ethos-u-deps --enable-mlsdk-deps ``` +It is possible to install both sets of dependencies if you omit the disable options. -Next we will walk through the steps performed by the `setup.sh` script to better understand the development setup. - -### Download and Set Up the Corstone-300 and Corstone-320 FVP +Upon successful execution, you can directly go to [the next step](#convert-the-pytorch-model-to-the-pte-file). -Fixed Virtual Platforms (FVPs) are pre-configured, functionally accurate simulations of popular system configurations. Here in this tutorial, we are interested in Corstone-300 and Corstone-320 systems. We can download this from the Arm website. +### Notes: -```{note} - By downloading and running the FVP software, you will be agreeing to the FVP [End-user license agreement (EULA)](https://developer.arm.com/downloads/-/arm-ecosystem-fvps/eula). +```{warning} +The `setup.sh` script has generated a `setup_path.sh` script that you need to source whenever you restart your shell. ``` -To download, we can either download `Corstone-300 Ecosystem FVP` and `Corstone-320 Ecosystem FVP`from [here](https://developer.arm.com/downloads/-/arm-ecosystem-fvps). or `setup.sh` script does that for you under `setup_fvp` function. - -### Download and Install the Arm GNU AArch32 Bare-Metal Toolchain - -Similar to the FVP, we would also need a tool-chain to cross-compile ExecuTorch runtime, executor-runner bare-metal application, as well as the rest of the bare-metal stack for Cortex-M55/M85 CPU available on the Corstone-300/Corstone-320 platform. - -These toolchains are available [here](https://developer.arm.com/downloads/-/arm-gnu-toolchain-downloads). 
We will be using GCC 13.3.rel1 targeting `arm-none-eabi` here for our tutorial. Just like FVP, `setup.sh` script will down the toolchain for you. See `setup_toolchain` function.
-
-### Setup the Arm Ethos-U Software Development
-
-This git repository is the root directory for all Arm Ethos-U software. It is to help us download required repositories and place them in a tree structure. See `setup_ethos_u` function of the setup script for more details.
-
-Once this is done, you should have a working FVP simulator, a functioning toolchain for cross compilation, and the Ethos-U software development setup ready for the bare-metal developement.
-
-### Install the Vela Compiler
-Once this is done, the script will finish the setup by installing the Vela compiler for you, details are in `setup_vela` function.
+i.e. run
+`source executorch/examples/arm/ethos-u-scratch/setup_path.sh`
 
-### Install the TOSA reference model
-This is the last step of the setup process, using `setup_tosa_reference_model` function `setup.sh` script will install TOSA reference model for you.
-At the end of the setup, if everything goes well, your top level devlopement dir might look something like this,
 
+To confirm your environment is set up correctly and will enable you to generate `.pte` files for your target:
 
+For Ethos-U run:
 ```bash
-.
-├── arm-gnu-toolchain-13.3.rel1-x86_64-arm-none-eabi # for x86-64 hosts
-├── arm-gnu-toolchain-13.3.rel1-x86_64-arm-none-eabi.tar.xz
-├── ethos-u
-│   ├── core_platform
-│   ├── core_software
-│   ├── fetch_externals.py
-│ └── [...]
-├── FVP-corstone300
-│ ├── FVP_Corstone_SSE-300.sh
-│ └── [...]
-├── FVP-corstone320
-│ ├── FVP_Corstone_SSE-320.sh
-│ └── [...]
-├── FVP_corstone300.tgz
-├── FVP_corstone320.tgz
-└── setup_path.sh
+# Check for Vela, which converts TOSA to Ethos-U command streams.
+which vela
 ```
 
-### Notes:
-
-The `setup.sh` script has generated a `setup_path.sh` script that you need to source everytime you restart you shell.
-
-e.g. run
-`source executorch/examples/arm/ethos-u-scratch/setup_path.sh`
-
-As `setup.sh` will download and setup the needed Arm toolchain make sure it is used by calling
-
-`which arm-none-eabi-gcc`
-
-It should show `arm-none-eabi-gcc` in the `executorch` project and not anything in `/usr/bin` something like:
+For VGF run:
+```bash
+# Check for model-converter, which converts TOSA to ML-SDK VGF format.
+which model-converter
+```
 
-`/examples/arm/ethos-u-scratch/arm-gnu-toolchain-13.3.rel1-aarch64-arm-none-eabi/bin/arm-none-eabi-gcc`
-or
-`/examples/arm/ethos-u-scratch/arm-gnu-toolchain-13.3.rel1-x86_64-arm-none-eabi/bin/arm-none-eabi-gcc`
+To ensure there is no environment pollution, confirm that these binaries reside within your ExecuTorch checkout, under the `examples/arm` tree. Other versions may present compatibility issues; if a different binary is picked up first, correct your environment variables, such as `${PATH}`, appropriately.
 
-If not you might need to uninstall `arm-none-eabi-gcc` or make sure its picked after the one in the project in your $PATH env varable.
+
 ## Convert the PyTorch Model to the `.pte` File
 
diff --git a/examples/arm/ethos_u_minimal_example.ipynb b/examples/arm/ethos_u_minimal_example.ipynb
index 72caed50149..0fa2c9e6f79 100644
--- a/examples/arm/ethos_u_minimal_example.ipynb
+++ b/examples/arm/ethos_u_minimal_example.ipynb
@@ -23,8 +23,10 @@
     "\n",
     "Before you begin:\n",
     "1. (In a clean virtual environment with a compatible Python version) Install executorch using `./install_executorch.sh`\n",
-    "2. Install Arm cross-compilation toolchain and simulators using `examples/arm/setup.sh --i-agree-to-the-contained-eula`\n",
-    "3. Add Arm cross-compilation toolchain and simulators to PATH using `examples/arm/ethos-u-scratch/setup_path.sh` \n",
+    "2. Install Arm cross-compilation toolchain and simulators using `./examples/arm/setup.sh --i-agree-to-the-contained-eula`\n",
+    "3. Add Arm cross-compilation toolchain and simulators to PATH using `./examples/arm/ethos-u-scratch/setup_path.sh` \n",
+    "\n",
+    "For further guidance, refer to https://docs.pytorch.org/executorch/main/tutorial-arm.html",
     "\n",
     "With all commands executed from the base `executorch` folder.\n",
     "\n",
@@ -70,7 +72,9 @@
    "source": [
     "To run on Ethos-U the `graph_module` must be quantized using the `arm_quantizer`. Quantization can be done in multiple ways and it can be customized for different parts of the graph; shown here is the recommended path for the EthosUBackend. Quantization also requires calibrating the module with example inputs.\n",
     "\n",
-    "Again printing the module, it can be seen that the quantization wraps the node in quantization/dequantization nodes which contain the computed quanitzation parameters."
+    "Again printing the module, it can be seen that the quantization wraps the node in quantization/dequantization nodes which contain the computed quantization parameters.",
+    "\n",
+    "With the default passes for the Arm Ethos-U backend, assuming the model lowers fully to the Ethos-U, the exported program is composed of a Quantize node, Ethos-U custom delegate and a Dequantize node. In some circumstances, you may want to feed quantized input to the Neural Network straight away, e.g. if you have a camera sensor outputting (u)int8 data and keep all the arithmetic of the application in the int8 domain. For these cases, you can apply the `exir/passes/quantize_io_pass.py`. See the unit test in `backends/arm/test/passes/test_ioquantization_pass.py` for an example of how to feed quantized inputs and obtain quantized outputs.\n"
   ]
  },
 {
From 21183c23128b0b51cf7389c048dd494870dec158 Mon Sep 17 00:00:00 2001
From: Rob Elliott
Date: Mon, 11 Aug 2025 16:01:53 +0000
Subject: [PATCH 2/6] Fix URL

Signed-off-by: Rob Elliott
---
 docs/source/tutorial-arm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/tutorial-arm.md b/docs/source/tutorial-arm.md
index 1804054c829..6ac2b7cc78f 100644
--- a/docs/source/tutorial-arm.md
+++ b/docs/source/tutorial-arm.md
@@ -50,7 +50,7 @@ First, you will need to install ExecuTorch. Please follow the recommended tutorials if you haven't already, to set up a working ExecuTorch development environment. For the VGF backend it's recommended you [install from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html), or from a [nightly](https://download.pytorch.org/whl/nightly/executorch/).
 
-In addition to this, you need to install a number of SDK dependencies for generating Ethos-U command streams or VGF files. There are scripts which automate this, which are found in the main [ExecuTorch repository](https://github.com/pytorch/executorch/examples/arm/).
+In addition to this, you need to install a number of SDK dependencies for generating Ethos-U command streams or VGF files. There are scripts which automate this, which are found in the main [ExecuTorch repository](https://github.com/pytorch/executorch/tree/main/examples/arm/).
## Set Up the Developer Environment From 2365cd191921715c5832c01443551157617a16e0 Mon Sep 17 00:00:00 2001 From: Rob Elliott Date: Tue, 12 Aug 2025 09:46:49 +0000 Subject: [PATCH 3/6] testing and review fixes Signed-off-by: Rob Elliott --- backends/arm/README.md | 18 ++++---- docs/source/tutorial-arm.md | 82 +++++++++++++++++++++++++++++++------ 2 files changed, 78 insertions(+), 22 deletions(-) diff --git a/backends/arm/README.md b/backends/arm/README.md index 11b60249729..25f56482bdb 100644 --- a/backends/arm/README.md +++ b/backends/arm/README.md @@ -1,4 +1,4 @@ -# ExecuTorch Arm/TOSA Delegate +# ExecuTorch Arm® Delegate for TOSA devices This subtree contains the Arm(R) Delegate implementation for ExecuTorch. @@ -7,26 +7,26 @@ through an AoT flow which targets multiple Arm IP using the TOSA standard. For more information on TOSA see https://www.mlplatform.org/tosa/tosa_spec.html -The expected flows are: +**The expected flows are:** * torch.nn.module -> TOSA for development and validation of model export * torch.nn.module -> TOSA/VGF for flows supporting a JiT compilation step. * torch.nn.module -> TOSA -> command_stream for fully AoT flows e.g. embedded. -Currently device support is for: -* TOSA to Ethos(TM)-U55/65/85 via the ethos-u-vela compilation stack. +**Currently device support is for:** +* TOSA to Ethos™-U55/65/85 via the ethos-u-vela compilation stack. * This is cross-compiled to the appropriate target CPU * There is a seperate arm_executor_runner for bare-metal platforms -* TOSA to VGF via the model-converter for devices supporting the ML SDK for Vulkan(R) - * The VGF graph represents TOSA directly in a SPIR-V(TM) standardized form. +* TOSA to VGF via the model-converter for devices supporting the ML SDK for Vulkan® + * The VGF graph represents TOSA directly in a SPIR-V™ standardized form. * As the VGF delegate runs on Vulkan, it's required to be built with the Vulkan delegate also present. -Currently supported development platforms are: +**Currently supported development platforms are:** * For ahead of time tooling * Linux aarch64 * Linux x86_64 * macOS with Apple silicon * Bare metal builds For the Ethos-U target and Cortex-M targets - * Full testing is available in tree for the Corstone(TM) FVPs + * Full testing is available in tree for the Corstone™ FVPs * This is a reference implementation for porting to silicon targets * Linux target support For VGF capable targets * This flow re-uses the common executor_runner @@ -66,7 +66,7 @@ Other: ## Testing -The unit tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preperation has been performed as outlined in the guide available here https://docs.pytorch.org/executorch/main/tutorial-arm.html +The tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preperation has been performed as outlined in the guide available here https://docs.pytorch.org/executorch/main/tutorial-arm.html After setup you can run unit tests with the test_arm_baremetal.sh script. 
 
diff --git a/docs/source/tutorial-arm.md b/docs/source/tutorial-arm.md
index 6ac2b7cc78f..7c9d70d3bb2 100644
--- a/docs/source/tutorial-arm.md
+++ b/docs/source/tutorial-arm.md
@@ -1,4 +1,4 @@
-# Arm(R) Backend Tutorial
+# Arm® Backend Tutorial
 
 ::::{grid} 2
 
@@ -26,10 +26,10 @@
 
 ```{tip}
 If you are already familiar with this delegate, you may want to jump directly to the examples:
-* [https://github.com/pytorch/executorch/tree/main/examples/arm](https://github.com/pytorch/executorch/tree/main/examples/arm)
-* [https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb](Compilation for Ethos-U)
-* [https://github.com/pytorch/executorch/blob/main/examples/arm/vgf_minimal_example.ipynb](Compilation for VGF/ML-SDK)
-* [https://github.com/pytorch/executorch/blob/main/examples/arm/aot_arm_compiler.py](A commandline compiler for example models)
+* [Examples in the ExecuTorch repository](https://github.com/pytorch/executorch/tree/main/examples/arm)
+* [Compilation for Ethos-U](https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb)
+* [Compilation for VGF/ML-SDK](https://github.com/pytorch/executorch/blob/main/examples/arm/vgf_minimal_example.ipynb)
+* [A commandline compiler for example models](https://github.com/pytorch/executorch/blob/main/examples/arm/aot_arm_compiler.py)
 ```
 
 ## Prerequisites
@@ -69,7 +69,6 @@ For VGF run:
 ```
 It is possible to install both sets of dependencies if you omit the disable options.
 
-Upon successful execution, you can directly go to [the next step](#convert-the-pytorch-model-to-the-pte-file).
 
 ### Notes:
 
@@ -203,27 +202,50 @@ graph_module_edge.exported_program = to_backend(
 
 Similar to the non-delegate flow, the same script will server as a helper utility to help generate the `.pte` file. Notice the `--delegate` option to enable the `to_backend` call.
 
+For Ethos-U targets:
 ```bash
 python3 -m examples.arm.aot_arm_compiler --model_name="add" --delegate
+# This targets the default of ethos-u55-128; see --help for further targets
 # should produce ./add_arm_delegate_ethos-u55-128.pte
 ```
 
-### Delegated Quantized Workflow
-Generating the `.pte` file can be done using the aot_arm_compiler:
+For basic post-training quantization:
 ```bash
 python3 -m examples.arm.aot_arm_compiler --model_name="mv2" --delegate --quantize
+# This targets the default of ethos-u55-128; see --help for further targets
 # should produce ./mv2_arm_delegate_ethos-u55-128.pte
 ```
+
+For VGF targets:
+```bash
+python3 -m examples.arm.aot_arm_compiler --model_name="add" --target=vgf --delegate
+# should produce ./add_arm_delegate_vgf.pte
+```
+
+For basic post-training quantization:
+```bash
+python3 -m examples.arm.aot_arm_compiler --model_name="mv2" --target=vgf --delegate --quantize
+# should produce ./mv2_arm_delegate_vgf.pte
+```
+
+To capture intermediates such as VGF for lower-level integration, invoke with the "-i" option:
+```bash
+python3 -m examples.arm.aot_arm_compiler --model_name="mv2" --target=vgf --delegate --quantize -i ./mv2_output
+# should produce ./mv2_arm_delegate_vgf.pte and intermediates in ./mv2_output/
+```
+
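+The `aot_arm_compiler` script drives the same Python APIs you can call directly. A condensed sketch of the quantized Ethos-U flow is shown below; the import paths and helper names are assumptions that may differ between versions, so prefer the minimal example notebooks for maintained code.
+
+```python
+# Condensed sketch of the quantized lowering flow. Import paths and helper
+# names are assumptions -- the example notebooks are the maintained reference.
+import torch
+from executorch.exir import to_edge_transform_and_lower
+from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
+from executorch.backends.arm.ethosu import EthosUCompileSpec, EthosUPartitioner  # assumed
+from executorch.backends.arm.quantizer.arm_quantizer import (  # assumed path
+    EthosUQuantizer,
+    get_symmetric_quantization_config,
+)
+
+model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
+example_inputs = (torch.randn(1, 3, 32, 32),)
+
+compile_spec = EthosUCompileSpec("ethos-u55-128")  # assumed helper
+quantizer = EthosUQuantizer(compile_spec)
+quantizer.set_global(get_symmetric_quantization_config())
+
+graph_module = torch.export.export(model, example_inputs).module()
+prepared = prepare_pt2e(graph_module, quantizer)  # insert observers
+prepared(*example_inputs)                         # calibrate with sample data
+quantized = convert_pt2e(prepared)                # fold q/dq parameters
+
+lowered = to_edge_transform_and_lower(
+    torch.export.export(quantized, example_inputs),
+    partitioner=[EthosUPartitioner(compile_spec)],
+).to_executorch()
+
+with open("mv2_arm_delegate_ethos-u55-128.pte", "wb") as f:
+    f.write(lowered.buffer)
+```
+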
-At the end of this, you should have three different `.pte` files.
+At the end of this, you should have a number of different `.pte` files.
 
-- The first one contains the [SoftmaxModule](#softmaxmodule), without any backend delegates.
-- The second one contains the [AddModule](#addmodule), with Arm Ethos-U backend delegate enabled.
-- The third one contains the [quantized MV2Model](#mv2module), with the Arm Ethos-U backend delegate enabled as well.
+- the SoftmaxModule, without any backend delegates.
+- the AddModule, targeting the Arm Ethos-U backend.
+- the Quantized MV2Model, targeting the Arm Ethos-U backend.
+- the AddModule, targeting the VGF backend.
+- the Quantized MV2Model, targeting the VGF backend.
 
-Now let's try to run these `.pte` files on a Corstone-300 and Corstone-320 platforms in a bare-metal environment.
+Now let's try to run these `.pte` files on a target.
 
 ## Getting a Bare-Metal Executable
 
@@ -391,6 +413,40 @@ I [executorch:arm_executor_runner.cpp:179] The `run.sh` script provides various
 options to select a particular FVP target, use desired models, select portable kernels and can be explored using the `--help` argument
 ```
 
+## Running on the VGF backend with the standard executor_runner for Linux
+
+Follow the typical [Building ExecuTorch with CMake](using-executorch-building-from-source.md) flow to build the Linux target, ensuring that the VGF delegate is enabled.
+
+```bash
+-DEXECUTORCH_BUILD_VGF=ON
+```
+
+A full example build line is:
+```bash
+cmake \
+    -DCMAKE_INSTALL_PREFIX=cmake-out \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
+    -DEXECUTORCH_BUILD_XNNPACK=OFF \
+    -DEXECUTORCH_BUILD_VULKAN=ON \
+    -DEXECUTORCH_BUILD_VGF=ON \
+    -DEXECUTORCH_ENABLE_LOGGING=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
+    -DPYTHON_EXECUTABLE=python \
+    -Bcmake-out .
+cmake --build cmake-out -j25 --target install --config Release
+```
+
+You can then invoke the executor runner on the host machine, which will use the VGF delegate, and requires the Vulkan layer drivers we installed with `setup.sh`.
+
+```bash
+./cmake-out/executor_runner -model_path add_arm_delegate_vgf.pte
+```
+
+
 ## Takeaways
 
 In this tutorial you have learnt how to use the ExecuTorch software to both export a standard model from PyTorch and to run it on the compact and fully functioned ExecuTorch runtime, enabling a smooth path for offloading models from PyTorch to Arm based platforms.

From f3a08f15d39dfbfe2c6216c6bc0836141fd83092 Mon Sep 17 00:00:00 2001
From: Rob Elliott
Date: Mon, 11 Aug 2025 11:57:07 +0000
Subject: [PATCH 4/6] Arm backend: Adding run_vkml.sh script

Signed-off-by: Rob Elliott
Change-Id: I24b31ec7e31c2230cf7d48bbf03cd5f81dd064ba
---
 backends/arm/scripts/run_vkml.sh | 91 ++++++++++++++++++++++++++++++++
 examples/arm/setup.sh | 10 ++--
 2 files changed, 96 insertions(+), 5 deletions(-)
 create mode 100755 backends/arm/scripts/run_vkml.sh

diff --git a/backends/arm/scripts/run_vkml.sh b/backends/arm/scripts/run_vkml.sh
new file mode 100755
index 00000000000..df5e4fbffd6
--- /dev/null
+++ b/backends/arm/scripts/run_vkml.sh
@@ -0,0 +1,91 @@
+#!/usr/bin/env bash
+# Copyright 2025 Arm Limited and/or its affiliates.
+#
+# This source code is licensed under the BSD-style license found in the
+# LICENSE file in the root directory of this source tree.
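+#
+# Runs a VGF-delegated .pte through the Linux executor_runner built with the
+# VGF and Vulkan delegates enabled (see docs/source/tutorial-arm.md).
+# Example invocation (flags are parsed below):
+#   backends/arm/scripts/run_vkml.sh --model=mv2_arm_delegate_vgf.pte --build_path=cmake-out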
+
+# Optional parameters:
+#   --model=<MODEL>        .pte model file to run
+#   --build_path=<PATH>    Path to a build containing executor_runner. Default: cmake-out
+
+set -eu
+set -o pipefail
+
+script_dir=$(cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd)
+et_root_dir=$(cd ${script_dir}/../../.. && pwd)
+et_root_dir=$(realpath ${et_root_dir})
+setup_path_script=${et_root_dir}/examples/arm/ethos-u-scratch/setup_path.sh
+_setup_msg="please refer to ${et_root_dir}/examples/arm/setup.sh to properly install necessary tools."
+
+
+model=""
+build_path="cmake-out"
+converter="model-converter"
+
+help() {
+    echo "Usage: $(basename $0) [options]"
+    echo "Options:"
+    echo "  --model=<MODEL>        .pte model file to run"
+    echo "  --build_path=<PATH>    Path to a build containing executor_runner. Default: ${build_path}"
+    exit 0
+}
+
+for arg in "$@"; do
+    case $arg in
+      -h|--help) help ;;
+      --model=*) model="${arg#*=}";;
+      --build_path=*) build_path="${arg#*=}";;
+      *)
+      ;;
+    esac
+done
+
+echo ${model}
+if [[ -z ${model} ]]; then "Model name needs to be provided"; exit 1; fi
+
+
+# Source the tools
+# This should be prepared by the setup.sh
+[[ -f ${setup_path_script} ]] \
+    || { echo "Missing ${setup_path_script}. ${_setup_msg}"; exit 1; }
+
+source ${setup_path_script}
+
+# basic checks before we get started
+hash ${converter} \
+    || { echo "Could not find ${converter} on PATH, ${_setup_msg}"; exit 1; }
+
+
+
+runner="${build_path}/executor_runner"
+
+echo "--------------------------------------------------------------------------------"
+echo "Running ${model} with ${runner}"
+echo "WARNING: The VK_ML layer driver will not provide accurate performance information"
+echo "--------------------------------------------------------------------------------"
+
+# Check if stdbuf is installed and use stdbuf -oL together with tee below to make the output
+# go all the way to the console more directly and not be buffered
+
+if hash stdbuf 2>/dev/null; then
+    nobuf="stdbuf -oL"
+else
+    nobuf=""
+fi
+
+log_file=$(mktemp)
+
+
+${runner} -model_path ${model} | tee ${log_file}
+echo "[${BASH_SOURCE[0]}] execution complete, $?"
+
+# Most of these can happen for bare metal or Linux executor_runner runs.
+echo "Checking for problems in log:"
+! grep -E "^(F|E|\\[critical\\]|Hard fault.|Info: Simulation is stopping. Reason: CPU time has been exceeded.).*$" ${log_file}
+if [ $? != 0 ]; then
+    echo "Found ERROR"
+    rm "${log_file}"
+    exit 1
+fi
+echo "No problems found!"
+rm "${log_file}" diff --git a/examples/arm/setup.sh b/examples/arm/setup.sh index 7c9c33b580c..e5dc6d07ba4 100755 --- a/examples/arm/setup.sh +++ b/examples/arm/setup.sh @@ -371,17 +371,17 @@ function create_setup_path(){ cd "${root_dir}" model_vgf_path="$(cd ${mlsdk_manifest_dir}/sw/vgf-lib/deploy && pwd)" echo "export PATH=\${PATH}:${model_vgf_path}/bin" >> ${setup_path_script} - echo "export LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:${model_vgf_path}/lib" >> ${setup_path_script} - echo "export DYLD_LIBRARY_PATH=\${DYLD_LIBRARY_PATH}:${model_vgf_path}/lib" >> ${setup_path_script} + echo "export LD_LIBRARY_PATH=\${LD_LIBRARY_PATH-}:${model_vgf_path}/lib" >> ${setup_path_script} + echo "export DYLD_LIBRARY_PATH=\${DYLD_LIBRARY_PATH-}:${model_vgf_path}/lib" >> ${setup_path_script} fi if [[ "${enable_emulation_layer}" -eq 1 ]]; then cd "${root_dir}" model_emulation_layer_path="$(cd ${mlsdk_manifest_dir}/sw/emulation-layer/ && pwd)" echo "export LD_LIBRARY_PATH=${model_emulation_layer_path}/deploy/lib:\${LD_LIBRARY_PATH}" >> ${setup_path_script} - echo "export DYLD_LIBRARY_PATH=${model_emulation_layer_path}/deploy/lib:\${DYLD_LIBRARY_PATH}" >> ${setup_path_script} - echo "export VK_INSTANCE_LAYERS=VK_LAYER_ML_Graph_Emulation:VK_LAYER_ML_Tensor_Emulation:\${VK_INSTANCE_LAYERS}" >> ${setup_path_script} - echo "export VK_ADD_LAYER_PATH=${model_emulation_layer_path}/deploy/share/vulkan/explicit_layer.d:\${VK_ADD_LAYER_PATH}" >> ${setup_path_script} + echo "export DYLD_LIBRARY_PATH=${model_emulation_layer_path}/deploy/lib:\${DYLD_LIBRARY_PATH-}" >> ${setup_path_script} + echo "export VK_INSTANCE_LAYERS=VK_LAYER_ML_Graph_Emulation:VK_LAYER_ML_Tensor_Emulation:\${VK_INSTANCE_LAYERS-}" >> ${setup_path_script} + echo "export VK_ADD_LAYER_PATH=${model_emulation_layer_path}/deploy/share/vulkan/explicit_layer.d:\${VK_ADD_LAYER_PATH-}" >> ${setup_path_script} fi } From f30c331dcaefecfa81ad090a1db1aba4b811a553 Mon Sep 17 00:00:00 2001 From: Rob Elliott Date: Wed, 13 Aug 2025 16:23:08 +0000 Subject: [PATCH 5/6] review comments Signed-off-by: Rob Elliott --- backends/arm/README.md | 6 +++--- backends/arm/scripts/run_vkml.sh | 5 ++--- 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/backends/arm/README.md b/backends/arm/README.md index 25f56482bdb..7353d30df4d 100644 --- a/backends/arm/README.md +++ b/backends/arm/README.md @@ -15,7 +15,7 @@ For more information on TOSA see https://www.mlplatform.org/tosa/tosa_spec.html **Currently device support is for:** * TOSA to Ethos™-U55/65/85 via the ethos-u-vela compilation stack. * This is cross-compiled to the appropriate target CPU - * There is a seperate arm_executor_runner for bare-metal platforms + * There is a separate arm_executor_runner for bare-metal platforms * TOSA to VGF via the model-converter for devices supporting the ML SDK for Vulkan® * The VGF graph represents TOSA directly in a SPIR-V™ standardized form. * As the VGF delegate runs on Vulkan, it's required to be built with the Vulkan delegate also present. @@ -37,7 +37,7 @@ Export: * `tosa_backend.py` - The TOSA conversion flow all other backends rely on. * `ethosu/backend.py` - Main entrypoint for the EthosUBackend. * `vgf_backend.py` - Main entrypoint for VgfBackend. - * For more information see the section on Arm Backend Architecture](#arm-backend-architecture). + * For more information see the section on [Arm Backend Architecture](#arm-backend-architecture). * `scripts` - For the core scripts which prepare AoT dependencies such as backend compilers. 
 Passes (which prepare the partitioned graphs for TOSA conversion):
@@ -66,7 +66,7 @@ Other:
 
 ## Testing
 
-The tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preperation has been performed as outlined in the guide available here https://docs.pytorch.org/executorch/main/tutorial-arm.html
+The tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preparation has been performed as outlined in the guide available here https://docs.pytorch.org/executorch/main/tutorial-arm.html
 
 After setup you can run unit tests with the test_arm_baremetal.sh script.
 
diff --git a/backends/arm/scripts/run_vkml.sh b/backends/arm/scripts/run_vkml.sh
index df5e4fbffd6..ebbdb7e415f 100755
--- a/backends/arm/scripts/run_vkml.sh
+++ b/backends/arm/scripts/run_vkml.sh
@@ -40,8 +40,7 @@ for arg in "$@"; do
     esac
 done
 
-echo ${model}
-if [[ -z ${model} ]]; then "Model name needs to be provided"; exit 1; fi
+if [[ -z ${model} ]]; then echo "Model name needs to be provided"; exit 1; fi
 
 
 # Source the tools
@@ -76,7 +75,7 @@ fi
 log_file=$(mktemp)
 
 
-${runner} -model_path ${model} | tee ${log_file}
+${nobuf} ${runner} -model_path ${model} | tee ${log_file}
 echo "[${BASH_SOURCE[0]}] execution complete, $?"
 
 # Most of these can happen for bare metal or Linux executor_runner runs.

From bd066a4e41bfd13dbc6973316f5e418a8fe7999b Mon Sep 17 00:00:00 2001
From: Rob Elliott
Date: Thu, 14 Aug 2025 11:05:53 +0000
Subject: [PATCH 6/6] fix for url lint check

Signed-off-by: Rob Elliott
---
 backends/arm/README.md | 2 +-
 docs/source/tutorial-arm.md | 1 -
 examples/arm/ethos_u_minimal_example.ipynb | 2 --
 3 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/backends/arm/README.md b/backends/arm/README.md
index 7353d30df4d..e2e49c0c10f 100644
--- a/backends/arm/README.md
+++ b/backends/arm/README.md
@@ -66,7 +66,7 @@ Other:
 
 ## Testing
 
-The tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preparation has been performed as outlined in the guide available here https://docs.pytorch.org/executorch/main/tutorial-arm.html
+The tests and related support scripts will test TOSA, Ethos-U and VGF behaviour based on the installed tools. It is expected that the relevant environment preparation has been performed as outlined in ./examples/arm/README.md.
 
 After setup you can run unit tests with the test_arm_baremetal.sh script.
diff --git a/docs/source/tutorial-arm.md b/docs/source/tutorial-arm.md index 7c9d70d3bb2..0692b631154 100644 --- a/docs/source/tutorial-arm.md +++ b/docs/source/tutorial-arm.md @@ -28,7 +28,6 @@ You may encounter some rough edges and features which may be documented or plann If you are already familiar with this delegate, you may want to jump directly to the examples: * [Examples in the ExecuTorch repository](https://github.com/pytorch/executorch/tree/main/examples/arm) * [Compilation for Ethos-U](https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb) -* [Compilation for VGF/ML-SDK](https://github.com/pytorch/executorch/blob/main/examples/arm/vgf_minimal_example.ipynb) * [A commandline compiler for example models](https://github.com/pytorch/executorch/blob/main/examples/arm/aot_arm_compiler.py) ``` diff --git a/examples/arm/ethos_u_minimal_example.ipynb b/examples/arm/ethos_u_minimal_example.ipynb index 0fa2c9e6f79..96c75251c3e 100644 --- a/examples/arm/ethos_u_minimal_example.ipynb +++ b/examples/arm/ethos_u_minimal_example.ipynb @@ -26,8 +26,6 @@ "2. Install Arm cross-compilation toolchain and simulators using `./examples/arm/setup.sh --i-agree-to-the-contained-eula`\n", "3. Add Arm cross-compilation toolchain and simulators to PATH using `./examples/arm/ethos-u-scratch/setup_path.sh` \n", "\n", - "For further guidance, refer to https://docs.pytorch.org/executorch/main/tutorial-arm.html", - "\n", "With all commands executed from the base `executorch` folder.\n", "\n", "\n",