diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
index 73b1e614f63..64811ee774f 100644
--- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
@@ -1,17 +1,16 @@
-# Building Llama iOS Demo for XNNPack Backend
+# Building Llama iOS Demo for XNNPACK Backend
-**[UPDATE - 09/25]** We have added support for running [Llama 3.2 models](#for-llama-32-1b-and-3b-models) on the XNNPack backend. We currently support inference on their original data type (BFloat16).
+**[UPDATE - 09/25]** We have added support for running [Llama 3.2 models](#for-llama-32-1b-and-3b-models) on the XNNPACK backend. We currently support inference in their original data type (BFloat16).
-This tutorial covers the end to end workflow for building an iOS demo app using XNNPack backend on device.
+This tutorial covers the end-to-end workflow for building an iOS demo app using the XNNPACK backend on device.
More specifically, it covers:
-1. Export and quantization of Llama models against the XNNPack backend.
-2. Building and linking libraries that are required to inference on-device for iOS platform using XNNPack.
+1. Export and quantization of Llama models against the XNNPACK backend.
+2. Building and linking the libraries required to run inference on-device for the iOS platform using XNNPACK.
3. Building the iOS demo app itself.
## Prerequisites
* [Xcode 15](https://developer.apple.com/xcode)
* [iOS 17 SDK](https://developer.apple.com/ios)
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment:
## Setup ExecuTorch
In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS).
@@ -48,7 +47,7 @@ sh examples/models/llama2/install_requirements.sh
```
### For Llama 3.2 1B and 3B models
-We have supported BFloat16 as a data type on the XNNPack backend for Llama 3.2 1B/3B models.
+We support BFloat16 as a data type on the XNNPACK backend for Llama 3.2 1B/3B models.
* You can download original model weights for Llama through Meta official [website](https://llama.meta.com/).
* For chat use-cases, download the instruct models instead of pretrained.
* Run “examples/models/llama2/install_requirements.sh” to install dependencies.
@@ -59,8 +58,6 @@ We have supported BFloat16 as a data type on the XNNPack backend for Llama 3.2 1
python -m examples.models.llama2.export_llama --checkpoint