From f561e0392d9a2c68fc076a0059ab41180bf6d475 Mon Sep 17 00:00:00 2001 From: Anthony Shoumikhin Date: Thu, 11 Sep 2025 12:07:15 -0700 Subject: [PATCH] Remove LLaMA demo app. (#14195) Summary: Pull Request resolved: https://github.com/pytorch/executorch/pull/14195 It's moved to executorch-examples repo Reviewed By: kirklandsign, jackzhxng Differential Revision: D82183720 --- .github/workflows/apple.yml | 1 - .lintrunner.toml | 1 - README-wheel.md | 2 +- backends/apple/mps/setup.md | 2 +- docs/source/backends-mps.md | 2 +- docs/source/llm/getting-started.md | 2 +- docs/source/llm/run-on-ios.md | 2 +- examples/README.md | 4 +- .../LLaMA/LLaMA.xcodeproj/project.pbxproj | 530 --------------- .../xcshareddata/xcschemes/LLaMA.xcscheme | 76 --- .../xcschemes/LLaMARunner.xcscheme | 67 -- .../LLaMA/LLaMA/Application/App.swift | 18 - .../LLaMA/LLaMA/Application/Constants.swift | 30 - .../LLaMA/LLaMA/Application/ContentView.swift | 619 ------------------ .../LLaMA/LLaMA/Application/ImagePicker.swift | 50 -- .../LLaMA/LLaMA/Application/LogManager.swift | 51 -- .../LLaMA/LLaMA/Application/LogView.swift | 64 -- .../LLaMA/LLaMA/Application/Message.swift | 26 - .../LLaMA/Application/MessageListView.swift | 88 --- .../LLaMA/LLaMA/Application/MessageView.swift | 73 --- .../LLaMA/Application/ResourceManager.swift | 37 -- .../LLaMA/Application/ResourceMonitor.swift | 51 -- .../LLaMA/SupportingFiles/LLaMA-Info.plist | 8 - .../AppIcon.appiconset/Contents.json | 14 - .../AppIcon.appiconset/logo.png | Bin 33036 -> 0 bytes .../LLaMAAssets/Assets.xcassets/Contents.json | 6 - .../LLaMAEntitlements/LLaMA.entitlements | 8 - examples/demo-apps/apple_ios/LLaMA/README.md | 47 -- examples/demo-apps/apple_ios/LLaMA/TARGETS | 0 .../LLaMA/docs/delegates/mps_README.md | 106 --- .../LLaMA/docs/delegates/xnnpack_README.md | 212 ------ .../demo-apps/react-native/rnllama/README.md | 2 +- .../react-native/rnllama/ios/LlamaBridge.h | 4 +- .../react-native/rnllama/ios/LlamaBridge.mm | 11 +- .../ios/rnllama.xcodeproj/project.pbxproj | 151 +++-- examples/models/llama/README.md | 2 +- examples/models/llama/non_cpu_backends.md | 2 +- examples/models/llava/README.md | 6 +- 38 files changed, 101 insertions(+), 2274 deletions(-) delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMA.xcscheme delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMARunner.xcscheme delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/App.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Constants.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ContentView.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ImagePicker.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogManager.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogView.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Message.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageListView.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageView.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceManager.swift delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceMonitor.swift delete mode 100644 
examples/demo-apps/apple_ios/LLaMA/LLaMA/SupportingFiles/LLaMA-Info.plist delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/Contents.json delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/logo.png delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/Contents.json delete mode 100644 examples/demo-apps/apple_ios/LLaMA/LLaMAEntitlements/LLaMA.entitlements delete mode 100644 examples/demo-apps/apple_ios/LLaMA/README.md delete mode 100644 examples/demo-apps/apple_ios/LLaMA/TARGETS delete mode 100644 examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md delete mode 100644 examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md diff --git a/.github/workflows/apple.yml b/.github/workflows/apple.yml index 1443446c674..fb3c04d07fb 100644 --- a/.github/workflows/apple.yml +++ b/.github/workflows/apple.yml @@ -17,7 +17,6 @@ on: - scripts/build_apple_llm_demo.sh - scripts/create_frameworks.sh - .ci/scripts/test_ios_ci.sh - - examples/demo-apps/apple_ios/** - extension/apple/** - extension/benchmark/apple/** - extension/module/** diff --git a/.lintrunner.toml b/.lintrunner.toml index c060836cb72..0b6a6eb8908 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -73,7 +73,6 @@ exclude_patterns = [ '**/third-party/**', # NB: Objective-C is not supported 'examples/apple/**', - 'examples/demo-apps/apple_ios/**', 'examples/demo-apps/react-native/rnllama/ios/**', 'extension/apple/**', 'extension/llm/apple/**', diff --git a/README-wheel.md b/README-wheel.md index 12906bfd382..a59af8ea05f 100644 --- a/README-wheel.md +++ b/README-wheel.md @@ -25,6 +25,6 @@ tutorials and documentation. Here are some starting points: * [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) * Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and optimizing its performance using quantization and hardware delegation. -* Running LLaMA on [iOS](docs/source/llm/llama-demo-ios.md) and [Android](docs/source/llm/llama-demo-android.md) devices. +* Running etLLM on [iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) and [Android](docs/source/llm/llama-demo-android.md) devices. * Build and run LLaMA in a demo mobile app, and learn how to integrate models with your own apps. 
diff --git a/backends/apple/mps/setup.md b/backends/apple/mps/setup.md index f4819c104a5..a82a4ee2eea 100644 --- a/backends/apple/mps/setup.md +++ b/backends/apple/mps/setup.md @@ -16,7 +16,7 @@ The MPS backend device maps machine learning computational graphs and primitives * [Setting up ExecuTorch](../../../docs/source/getting-started-setup.rst) * [Building ExecuTorch with CMake](../../../docs/source/using-executorch-cpp.md#building-with-cmake) * [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) -* [ExecuTorch iOS LLaMA Demo App](../../../docs/source/llm/llama-demo-ios.md) +* [ExecuTorch LLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) ::: :::: diff --git a/docs/source/backends-mps.md b/docs/source/backends-mps.md index c1d8d8eaf1d..184bd88e3a7 100644 --- a/docs/source/backends-mps.md +++ b/docs/source/backends-mps.md @@ -16,7 +16,7 @@ The MPS backend device maps machine learning computational graphs and primitives * [Getting Started](getting-started.md) * [Building ExecuTorch with CMake](using-executorch-building-from-source.md) * [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) -* [ExecuTorch iOS LLaMA Demo App](llm/llama-demo-ios.md) +* [ExecuTorch LLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) ::: :::: diff --git a/docs/source/llm/getting-started.md b/docs/source/llm/getting-started.md index c75d5bbc3f5..849418342b6 100644 --- a/docs/source/llm/getting-started.md +++ b/docs/source/llm/getting-started.md @@ -23,4 +23,4 @@ Deploying LLMs to ExecuTorch can be boiled down to a two-step process: (1) expor - [Running with C++](run-with-c-plus-plus.md) - [Running on Android (XNNPack)](llama-demo-android.md) - [Running on Android (Qualcomm)](build-run-llama3-qualcomm-ai-engine-direct-backend.md) -- [Running on iOS](llama-demo-ios.md) +- [Running on iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) diff --git a/docs/source/llm/run-on-ios.md b/docs/source/llm/run-on-ios.md index c4994dd0e06..88ad94c38d3 100644 --- a/docs/source/llm/run-on-ios.md +++ b/docs/source/llm/run-on-ios.md @@ -123,4 +123,4 @@ runner.stop() ## Demo -Get hands-on with our [LLaMA iOS Demo App](llama-demo-ios.md) to see the LLM runtime APIs in action. +Get hands-on with our [etLLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) to see the LLM runtime APIs in action. diff --git a/examples/README.md b/examples/README.md index 096a9850b29..3af92f8ef90 100644 --- a/examples/README.md +++ b/examples/README.md @@ -21,7 +21,7 @@ examples | └── mps # Contains end-to-end demos of MPS backend ├── arm # Contains demos of the Arm TOSA and Ethos-U NPU flows ├── qualcomm # Contains demos of Qualcomm QNN backend -├── samsung # Contains demos of Samsung Exynos backend +├── samsung # Contains demos of Samsung Exynos backend ├── cadence # Contains demos of exporting and running a simple model on Xtensa DSPs ├── third-party # Third-party libraries required for working on the demos └── README.md # This file @@ -34,7 +34,7 @@ A user's journey may commence by exploring the demos located in the [`portable/` ## Demos Apps -Explore mobile apps with ExecuTorch models integrated and deployable on [Android](demo-apps/android) and [iOS](demo-apps/apple_ios). 
This provides end-to-end instructions on how to export Llama models, load on device, build the app, and run it on device. +Explore mobile apps with ExecuTorch models integrated and deployable on [Android](demo-apps/android) and [iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple). This provides end-to-end instructions on how to export Llama models, load on device, build the app, and run it on device. For specific details related to models and backend, you can explore the various subsections. diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj deleted file mode 100644 index 7197b9be814..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj +++ /dev/null @@ -1,530 +0,0 @@ -// !$*UTF8*$! -{ - archiveVersion = 1; - classes = { - }; - objectVersion = 56; - objects = { - -/* Begin PBXBuildFile section */ - 0324D68B2BAACB6900DEF36F /* App.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6802BAACB6900DEF36F /* App.swift */; }; - 0324D68C2BAACB6900DEF36F /* ContentView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6812BAACB6900DEF36F /* ContentView.swift */; }; - 0324D68D2BAACB6900DEF36F /* LogManager.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6822BAACB6900DEF36F /* LogManager.swift */; }; - 0324D68E2BAACB6900DEF36F /* LogView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6832BAACB6900DEF36F /* LogView.swift */; }; - 0324D68F2BAACB6900DEF36F /* Message.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6842BAACB6900DEF36F /* Message.swift */; }; - 0324D6902BAACB6900DEF36F /* MessageListView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6852BAACB6900DEF36F /* MessageListView.swift */; }; - 0324D6912BAACB6900DEF36F /* MessageView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6862BAACB6900DEF36F /* MessageView.swift */; }; - 0324D6922BAACB6900DEF36F /* ResourceManager.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6872BAACB6900DEF36F /* ResourceManager.swift */; }; - 0324D6932BAACB6900DEF36F /* ResourceMonitor.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0324D6882BAACB6900DEF36F /* ResourceMonitor.swift */; }; - 0324D6962BAACB7000DEF36F /* Assets.xcassets in Resources */ = {isa = PBXBuildFile; fileRef = 0324D6942BAACB7000DEF36F /* Assets.xcassets */; }; - 03F546242E70906E0040AE84 /* backend_coreml in Frameworks */ = {isa = PBXBuildFile; productRef = 03F546232E70906E0040AE84 /* backend_coreml */; }; - 03F546262E70906E0040AE84 /* backend_mps in Frameworks */ = {isa = PBXBuildFile; productRef = 03F546252E70906E0040AE84 /* backend_mps */; }; - 03F546282E70906E0040AE84 /* backend_xnnpack in Frameworks */ = {isa = PBXBuildFile; productRef = 03F546272E70906E0040AE84 /* backend_xnnpack */; }; - 03F5462A2E70906E0040AE84 /* executorch_debug in Frameworks */ = {isa = PBXBuildFile; productRef = 03F546292E70906E0040AE84 /* executorch_debug */; }; - 03F5462C2E70906E0040AE84 /* executorch_llm_debug in Frameworks */ = {isa = PBXBuildFile; productRef = 03F5462B2E70906E0040AE84 /* executorch_llm_debug */; }; - 03F5462E2E70906E0040AE84 /* kernels_llm in Frameworks */ = {isa = PBXBuildFile; productRef = 03F5462D2E70906E0040AE84 /* kernels_llm */; }; - 03F546302E70906E0040AE84 /* kernels_optimized in Frameworks */ = {isa = PBXBuildFile; productRef = 03F5462F2E70906E0040AE84 /* kernels_optimized */; }; - 03F546322E70906E0040AE84 /* kernels_quantized in Frameworks */ = {isa = 
PBXBuildFile; productRef = 03F546312E70906E0040AE84 /* kernels_quantized */; }; - 03F546342E70906E0040AE84 /* kernels_torchao in Frameworks */ = {isa = PBXBuildFile; productRef = 03F546332E70906E0040AE84 /* kernels_torchao */; }; - 26A6A4282C8A3769005A761E /* ImagePicker.swift in Sources */ = {isa = PBXBuildFile; fileRef = 26A6A4272C8A3769005A761E /* ImagePicker.swift */; }; - 3072D5232DC3EA280083FC83 /* Constants.swift in Sources */ = {isa = PBXBuildFile; fileRef = 3072D5222DC3EA280083FC83 /* Constants.swift */; }; -/* End PBXBuildFile section */ - -/* Begin PBXCopyFilesBuildPhase section */ - 03729EE02BB1F8DE00152F2E /* Embed Frameworks */ = { - isa = PBXCopyFilesBuildPhase; - buildActionMask = 2147483647; - dstPath = ""; - dstSubfolderSpec = 10; - files = ( - ); - name = "Embed Frameworks"; - runOnlyForDeploymentPostprocessing = 0; - }; -/* End PBXCopyFilesBuildPhase section */ - -/* Begin PBXFileReference section */ - 0320439D2BB4AC6600050211 /* LLaMA-Info.plist */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.plist.xml; path = "LLaMA-Info.plist"; sourceTree = ""; }; - 0324D6802BAACB6900DEF36F /* App.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = App.swift; sourceTree = ""; }; - 0324D6812BAACB6900DEF36F /* ContentView.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = ContentView.swift; sourceTree = ""; }; - 0324D6822BAACB6900DEF36F /* LogManager.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = LogManager.swift; sourceTree = ""; }; - 0324D6832BAACB6900DEF36F /* LogView.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = LogView.swift; sourceTree = ""; }; - 0324D6842BAACB6900DEF36F /* Message.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = Message.swift; sourceTree = ""; }; - 0324D6852BAACB6900DEF36F /* MessageListView.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = MessageListView.swift; sourceTree = ""; }; - 0324D6862BAACB6900DEF36F /* MessageView.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = MessageView.swift; sourceTree = ""; }; - 0324D6872BAACB6900DEF36F /* ResourceManager.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = ResourceManager.swift; sourceTree = ""; }; - 0324D6882BAACB6900DEF36F /* ResourceMonitor.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = ResourceMonitor.swift; sourceTree = ""; }; - 0324D6942BAACB7000DEF36F /* Assets.xcassets */ = {isa = PBXFileReference; lastKnownFileType = folder.assetcatalog; path = Assets.xcassets; sourceTree = ""; }; - 035A5E942BB4B523001E0553 /* LLaMA.entitlements */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.plist.entitlements; path = LLaMA.entitlements; sourceTree = ""; }; - 036CAF9D2BB1444500D6C2D5 /* LLaMA.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = LLaMA.app; sourceTree = BUILT_PRODUCTS_DIR; }; - 26A6A4272C8A3769005A761E /* ImagePicker.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ImagePicker.swift; sourceTree = ""; }; - 3072D5222DC3EA280083FC83 /* Constants.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = 
Constants.swift; sourceTree = ""; }; -/* End PBXFileReference section */ - -/* Begin PBXFrameworksBuildPhase section */ - 032C016C2AC228E6002955E1 /* Frameworks */ = { - isa = PBXFrameworksBuildPhase; - buildActionMask = 2147483647; - files = ( - 03F5462C2E70906E0040AE84 /* executorch_llm_debug in Frameworks */, - 03F546342E70906E0040AE84 /* kernels_torchao in Frameworks */, - 03F546322E70906E0040AE84 /* kernels_quantized in Frameworks */, - 03F5462E2E70906E0040AE84 /* kernels_llm in Frameworks */, - 03F5462A2E70906E0040AE84 /* executorch_debug in Frameworks */, - 03F546262E70906E0040AE84 /* backend_mps in Frameworks */, - 03F546242E70906E0040AE84 /* backend_coreml in Frameworks */, - 03F546302E70906E0040AE84 /* kernels_optimized in Frameworks */, - 03F546282E70906E0040AE84 /* backend_xnnpack in Frameworks */, - ); - runOnlyForDeploymentPostprocessing = 0; - }; -/* End PBXFrameworksBuildPhase section */ - -/* Begin PBXGroup section */ - 0320439E2BB4AC6600050211 /* SupportingFiles */ = { - isa = PBXGroup; - children = ( - 0320439D2BB4AC6600050211 /* LLaMA-Info.plist */, - ); - path = SupportingFiles; - sourceTree = ""; - }; - 0324D6892BAACB6900DEF36F /* Application */ = { - isa = PBXGroup; - children = ( - 3072D5222DC3EA280083FC83 /* Constants.swift */, - 0324D6802BAACB6900DEF36F /* App.swift */, - 0324D6812BAACB6900DEF36F /* ContentView.swift */, - 0324D6822BAACB6900DEF36F /* LogManager.swift */, - 0324D6832BAACB6900DEF36F /* LogView.swift */, - 0324D6842BAACB6900DEF36F /* Message.swift */, - 0324D6852BAACB6900DEF36F /* MessageListView.swift */, - 0324D6862BAACB6900DEF36F /* MessageView.swift */, - 0324D6872BAACB6900DEF36F /* ResourceManager.swift */, - 0324D6882BAACB6900DEF36F /* ResourceMonitor.swift */, - 26A6A4272C8A3769005A761E /* ImagePicker.swift */, - ); - path = Application; - sourceTree = ""; - }; - 0324D68A2BAACB6900DEF36F /* LLaMA */ = { - isa = PBXGroup; - children = ( - 0324D6892BAACB6900DEF36F /* Application */, - 0320439E2BB4AC6600050211 /* SupportingFiles */, - ); - path = LLaMA; - sourceTree = ""; - }; - 0324D6952BAACB7000DEF36F /* LLaMAAssets */ = { - isa = PBXGroup; - children = ( - 0324D6942BAACB7000DEF36F /* Assets.xcassets */, - ); - path = LLaMAAssets; - sourceTree = ""; - }; - 032C01662AC228E5002955E1 = { - isa = PBXGroup; - children = ( - 0324D68A2BAACB6900DEF36F /* LLaMA */, - 0324D6952BAACB7000DEF36F /* LLaMAAssets */, - 035A5E952BB4B523001E0553 /* LLaMAEntitlements */, - 036CAF9D2BB1444500D6C2D5 /* LLaMA.app */, - ); - sourceTree = ""; - }; - 035A5E952BB4B523001E0553 /* LLaMAEntitlements */ = { - isa = PBXGroup; - children = ( - 035A5E942BB4B523001E0553 /* LLaMA.entitlements */, - ); - path = LLaMAEntitlements; - sourceTree = ""; - }; -/* End PBXGroup section */ - -/* Begin PBXNativeTarget section */ - 032C016E2AC228E6002955E1 /* LLaMA */ = { - isa = PBXNativeTarget; - buildConfigurationList = 032C017D2AC228E7002955E1 /* Build configuration list for PBXNativeTarget "LLaMA" */; - buildPhases = ( - 032C016B2AC228E6002955E1 /* Sources */, - 032C016C2AC228E6002955E1 /* Frameworks */, - 032C016D2AC228E6002955E1 /* Resources */, - 03729EE02BB1F8DE00152F2E /* Embed Frameworks */, - ); - buildRules = ( - ); - dependencies = ( - ); - name = LLaMA; - packageProductDependencies = ( - 03F546232E70906E0040AE84 /* backend_coreml */, - 03F546252E70906E0040AE84 /* backend_mps */, - 03F546272E70906E0040AE84 /* backend_xnnpack */, - 03F546292E70906E0040AE84 /* executorch_debug */, - 03F5462B2E70906E0040AE84 /* executorch_llm_debug */, - 03F5462D2E70906E0040AE84 /* kernels_llm 
*/, - 03F5462F2E70906E0040AE84 /* kernels_optimized */, - 03F546312E70906E0040AE84 /* kernels_quantized */, - 03F546332E70906E0040AE84 /* kernels_torchao */, - ); - productName = LLaMA; - productReference = 036CAF9D2BB1444500D6C2D5 /* LLaMA.app */; - productType = "com.apple.product-type.application"; - }; -/* End PBXNativeTarget section */ - -/* Begin PBXProject section */ - 032C01672AC228E5002955E1 /* Project object */ = { - isa = PBXProject; - attributes = { - BuildIndependentTargetsInParallel = 1; - LastSwiftUpdateCheck = 1540; - LastUpgradeCheck = 1530; - TargetAttributes = { - 032C016E2AC228E6002955E1 = { - CreatedOnToolsVersion = 15.0; - }; - }; - }; - buildConfigurationList = 032C016A2AC228E5002955E1 /* Build configuration list for PBXProject "LLaMA" */; - compatibilityVersion = "Xcode 14.0"; - developmentRegion = en; - hasScannedForEncodings = 0; - knownRegions = ( - en, - Base, - ); - mainGroup = 032C01662AC228E5002955E1; - packageReferences = ( - 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */, - ); - productRefGroup = 032C01662AC228E5002955E1; - projectDirPath = ""; - projectRoot = ""; - targets = ( - 032C016E2AC228E6002955E1 /* LLaMA */, - ); - }; -/* End PBXProject section */ - -/* Begin PBXResourcesBuildPhase section */ - 032C016D2AC228E6002955E1 /* Resources */ = { - isa = PBXResourcesBuildPhase; - buildActionMask = 2147483647; - files = ( - 0324D6962BAACB7000DEF36F /* Assets.xcassets in Resources */, - ); - runOnlyForDeploymentPostprocessing = 0; - }; -/* End PBXResourcesBuildPhase section */ - -/* Begin PBXSourcesBuildPhase section */ - 032C016B2AC228E6002955E1 /* Sources */ = { - isa = PBXSourcesBuildPhase; - buildActionMask = 2147483647; - files = ( - 0324D6932BAACB6900DEF36F /* ResourceMonitor.swift in Sources */, - 3072D5232DC3EA280083FC83 /* Constants.swift in Sources */, - 0324D68D2BAACB6900DEF36F /* LogManager.swift in Sources */, - 0324D68E2BAACB6900DEF36F /* LogView.swift in Sources */, - 0324D68F2BAACB6900DEF36F /* Message.swift in Sources */, - 0324D6922BAACB6900DEF36F /* ResourceManager.swift in Sources */, - 0324D68C2BAACB6900DEF36F /* ContentView.swift in Sources */, - 0324D6902BAACB6900DEF36F /* MessageListView.swift in Sources */, - 26A6A4282C8A3769005A761E /* ImagePicker.swift in Sources */, - 0324D6912BAACB6900DEF36F /* MessageView.swift in Sources */, - 0324D68B2BAACB6900DEF36F /* App.swift in Sources */, - ); - runOnlyForDeploymentPostprocessing = 0; - }; -/* End PBXSourcesBuildPhase section */ - -/* Begin XCBuildConfiguration section */ - 032C017B2AC228E7002955E1 /* Debug */ = { - isa = XCBuildConfiguration; - buildSettings = { - ALWAYS_SEARCH_USER_PATHS = NO; - ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES; - CLANG_ANALYZER_NONNULL = YES; - CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE; - CLANG_CXX_LANGUAGE_STANDARD = "c++17"; - CLANG_ENABLE_MODULES = YES; - CLANG_ENABLE_OBJC_ARC = YES; - CLANG_ENABLE_OBJC_WEAK = YES; - CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; - CLANG_WARN_BOOL_CONVERSION = YES; - CLANG_WARN_COMMA = YES; - CLANG_WARN_CONSTANT_CONVERSION = YES; - CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; - CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; - CLANG_WARN_DOCUMENTATION_COMMENTS = YES; - CLANG_WARN_EMPTY_BODY = YES; - CLANG_WARN_ENUM_CONVERSION = YES; - CLANG_WARN_INFINITE_RECURSION = YES; - CLANG_WARN_INT_CONVERSION = YES; - CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; - CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; - CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; - 
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; - CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES; - CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; - CLANG_WARN_STRICT_PROTOTYPES = YES; - CLANG_WARN_SUSPICIOUS_MOVE = YES; - CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE; - CLANG_WARN_UNREACHABLE_CODE = YES; - CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; - COPY_PHASE_STRIP = NO; - DEBUG_INFORMATION_FORMAT = dwarf; - ENABLE_STRICT_OBJC_MSGSEND = YES; - ENABLE_TESTABILITY = YES; - ENABLE_USER_SCRIPT_SANDBOXING = YES; - "EXCLUDED_ARCHS[sdk=iphonesimulator*]" = x86_64; - GCC_C_LANGUAGE_STANDARD = c17; - GCC_DYNAMIC_NO_PIC = NO; - GCC_NO_COMMON_BLOCKS = YES; - GCC_OPTIMIZATION_LEVEL = 0; - GCC_PREPROCESSOR_DEFINITIONS = ( - "DEBUG=1", - "$(inherited)", - ); - GCC_WARN_64_TO_32_BIT_CONVERSION = YES; - GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; - GCC_WARN_UNDECLARED_SELECTOR = YES; - GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; - GCC_WARN_UNUSED_FUNCTION = YES; - GCC_WARN_UNUSED_VARIABLE = YES; - IPHONEOS_DEPLOYMENT_TARGET = 17.0; - LD_RUNPATH_SEARCH_PATHS = ( - "$(inherited)", - "@executable_path/Frameworks", - "@loader_path/Frameworks", - ); - LOCALIZATION_PREFERS_STRING_CATALOGS = YES; - MTL_ENABLE_DEBUG_INFO = INCLUDE_SOURCE; - MTL_FAST_MATH = YES; - ONLY_ACTIVE_ARCH = YES; - SDKROOT = iphoneos; - SWIFT_ACTIVE_COMPILATION_CONDITIONS = "DEBUG $(inherited)"; - SWIFT_OPTIMIZATION_LEVEL = "-Onone"; - SWIFT_VERSION = 5.0; - }; - name = Debug; - }; - 032C017C2AC228E7002955E1 /* Release */ = { - isa = XCBuildConfiguration; - buildSettings = { - ALWAYS_SEARCH_USER_PATHS = NO; - ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES; - CLANG_ANALYZER_NONNULL = YES; - CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE; - CLANG_CXX_LANGUAGE_STANDARD = "c++17"; - CLANG_ENABLE_MODULES = YES; - CLANG_ENABLE_OBJC_ARC = YES; - CLANG_ENABLE_OBJC_WEAK = YES; - CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; - CLANG_WARN_BOOL_CONVERSION = YES; - CLANG_WARN_COMMA = YES; - CLANG_WARN_CONSTANT_CONVERSION = YES; - CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; - CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; - CLANG_WARN_DOCUMENTATION_COMMENTS = YES; - CLANG_WARN_EMPTY_BODY = YES; - CLANG_WARN_ENUM_CONVERSION = YES; - CLANG_WARN_INFINITE_RECURSION = YES; - CLANG_WARN_INT_CONVERSION = YES; - CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; - CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; - CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; - CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; - CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES; - CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; - CLANG_WARN_STRICT_PROTOTYPES = YES; - CLANG_WARN_SUSPICIOUS_MOVE = YES; - CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE; - CLANG_WARN_UNREACHABLE_CODE = YES; - CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; - COPY_PHASE_STRIP = NO; - DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym"; - ENABLE_NS_ASSERTIONS = NO; - ENABLE_STRICT_OBJC_MSGSEND = YES; - ENABLE_USER_SCRIPT_SANDBOXING = YES; - "EXCLUDED_ARCHS[sdk=iphonesimulator*]" = x86_64; - GCC_C_LANGUAGE_STANDARD = c17; - GCC_NO_COMMON_BLOCKS = YES; - GCC_WARN_64_TO_32_BIT_CONVERSION = YES; - GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; - GCC_WARN_UNDECLARED_SELECTOR = YES; - GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; - GCC_WARN_UNUSED_FUNCTION = YES; - GCC_WARN_UNUSED_VARIABLE = YES; - IPHONEOS_DEPLOYMENT_TARGET = 17.0; - LD_RUNPATH_SEARCH_PATHS = ( - "$(inherited)", - "@executable_path/Frameworks", - "@loader_path/Frameworks", - ); - LOCALIZATION_PREFERS_STRING_CATALOGS = YES; - MTL_ENABLE_DEBUG_INFO = NO; - 
MTL_FAST_MATH = YES; - SDKROOT = iphoneos; - SWIFT_COMPILATION_MODE = wholemodule; - SWIFT_VERSION = 5.0; - VALIDATE_PRODUCT = YES; - }; - name = Release; - }; - 032C017E2AC228E7002955E1 /* Debug */ = { - isa = XCBuildConfiguration; - buildSettings = { - ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; - CODE_SIGN_ENTITLEMENTS = LLaMAEntitlements/LLaMA.entitlements; - CODE_SIGN_IDENTITY = "Apple Development"; - CODE_SIGN_STYLE = Automatic; - CURRENT_PROJECT_VERSION = 1; - DEVELOPMENT_TEAM = ""; - ENABLE_PREVIEWS = YES; - GENERATE_INFOPLIST_FILE = YES; - INFOPLIST_FILE = "LLaMA/SupportingFiles/LLaMA-Info.plist"; - INFOPLIST_KEY_CFBundleDisplayName = iLLaMA; - INFOPLIST_KEY_LSSupportsOpeningDocumentsInPlace = YES; - INFOPLIST_KEY_NSCameraUsageDescription = ""; - INFOPLIST_KEY_UIApplicationSceneManifest_Generation = YES; - INFOPLIST_KEY_UIApplicationSupportsIndirectInputEvents = YES; - INFOPLIST_KEY_UILaunchScreen_Generation = YES; - INFOPLIST_KEY_UIRequiresFullScreen = YES; - INFOPLIST_KEY_UISupportedInterfaceOrientations = UIInterfaceOrientationPortrait; - MARKETING_VERSION = 1.0; - OTHER_LDFLAGS = "-all_load"; - PRODUCT_BUNDLE_IDENTIFIER = org.pytorch.executorch.illama; - PRODUCT_NAME = "$(PROJECT_NAME)"; - PROVISIONING_PROFILE_SPECIFIER = ""; - SUPPORTED_PLATFORMS = "iphoneos iphonesimulator"; - SUPPORTS_MACCATALYST = NO; - SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = NO; - SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = NO; - SWIFT_EMIT_LOC_STRINGS = YES; - TARGETED_DEVICE_FAMILY = "1,2"; - }; - name = Debug; - }; - 032C017F2AC228E7002955E1 /* Release */ = { - isa = XCBuildConfiguration; - buildSettings = { - ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; - CODE_SIGN_ENTITLEMENTS = LLaMAEntitlements/LLaMA.entitlements; - CODE_SIGN_IDENTITY = "Apple Development"; - CODE_SIGN_STYLE = Automatic; - CURRENT_PROJECT_VERSION = 1; - DEVELOPMENT_TEAM = ""; - ENABLE_PREVIEWS = YES; - GENERATE_INFOPLIST_FILE = YES; - INFOPLIST_FILE = "LLaMA/SupportingFiles/LLaMA-Info.plist"; - INFOPLIST_KEY_CFBundleDisplayName = iLLaMA; - INFOPLIST_KEY_LSSupportsOpeningDocumentsInPlace = YES; - INFOPLIST_KEY_NSCameraUsageDescription = ""; - INFOPLIST_KEY_UIApplicationSceneManifest_Generation = YES; - INFOPLIST_KEY_UIApplicationSupportsIndirectInputEvents = YES; - INFOPLIST_KEY_UILaunchScreen_Generation = YES; - INFOPLIST_KEY_UIRequiresFullScreen = YES; - INFOPLIST_KEY_UISupportedInterfaceOrientations = UIInterfaceOrientationPortrait; - MARKETING_VERSION = 1.0; - OTHER_LDFLAGS = "-all_load"; - PRODUCT_BUNDLE_IDENTIFIER = org.pytorch.executorch.illama; - PRODUCT_NAME = "$(PROJECT_NAME)"; - PROVISIONING_PROFILE_SPECIFIER = ""; - SUPPORTED_PLATFORMS = "iphoneos iphonesimulator"; - SUPPORTS_MACCATALYST = NO; - SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = NO; - SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = NO; - SWIFT_EMIT_LOC_STRINGS = YES; - TARGETED_DEVICE_FAMILY = "1,2"; - }; - name = Release; - }; -/* End XCBuildConfiguration section */ - -/* Begin XCConfigurationList section */ - 032C016A2AC228E5002955E1 /* Build configuration list for PBXProject "LLaMA" */ = { - isa = XCConfigurationList; - buildConfigurations = ( - 032C017B2AC228E7002955E1 /* Debug */, - 032C017C2AC228E7002955E1 /* Release */, - ); - defaultConfigurationIsVisible = 0; - defaultConfigurationName = Release; - }; - 032C017D2AC228E7002955E1 /* Build configuration list for PBXNativeTarget "LLaMA" */ = { - isa = XCConfigurationList; - buildConfigurations = ( - 032C017E2AC228E7002955E1 /* Debug */, - 032C017F2AC228E7002955E1 /* Release */, - ); - defaultConfigurationIsVisible = 0; - 
defaultConfigurationName = Release; - }; -/* End XCConfigurationList section */ - -/* Begin XCRemoteSwiftPackageReference section */ - 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */ = { - isa = XCRemoteSwiftPackageReference; - repositoryURL = "https://github.com/pytorch/executorch"; - requirement = { - branch = "swiftpm-0.8.0.20250909"; - kind = branch; - }; - }; -/* End XCRemoteSwiftPackageReference section */ - -/* Begin XCSwiftPackageProductDependency section */ - 03F546232E70906E0040AE84 /* backend_coreml */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = backend_coreml; - }; - 03F546252E70906E0040AE84 /* backend_mps */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = backend_mps; - }; - 03F546272E70906E0040AE84 /* backend_xnnpack */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = backend_xnnpack; - }; - 03F546292E70906E0040AE84 /* executorch_debug */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = executorch_debug; - }; - 03F5462B2E70906E0040AE84 /* executorch_llm_debug */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = executorch_llm_debug; - }; - 03F5462D2E70906E0040AE84 /* kernels_llm */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = kernels_llm; - }; - 03F5462F2E70906E0040AE84 /* kernels_optimized */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = kernels_optimized; - }; - 03F546312E70906E0040AE84 /* kernels_quantized */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = kernels_quantized; - }; - 03F546332E70906E0040AE84 /* kernels_torchao */ = { - isa = XCSwiftPackageProductDependency; - package = 03CF43942CEC5CEC00C7113B /* XCRemoteSwiftPackageReference "executorch" */; - productName = kernels_torchao; - }; -/* End XCSwiftPackageProductDependency section */ - }; - rootObject = 032C01672AC228E5002955E1 /* Project object */; -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMA.xcscheme b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMA.xcscheme deleted file mode 100644 index 84fa8d52802..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMA.xcscheme +++ /dev/null @@ -1,76 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMARunner.xcscheme b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMARunner.xcscheme deleted file mode 100644 index d820e0a5f8a..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/xcshareddata/xcschemes/LLaMARunner.xcscheme +++ /dev/null @@ -1,67 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/App.swift 
b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/App.swift deleted file mode 100644 index ceddbde1e61..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/App.swift +++ /dev/null @@ -1,18 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import SwiftUI - -@main -struct App: SwiftUI.App { - var body: some Scene { - WindowGroup { - ContentView() - } - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Constants.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Constants.swift deleted file mode 100644 index 1c2a9d12b97..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Constants.swift +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import Foundation - -public enum Constants { - public static let qwen3PromptTemplate = """ -<|im_start|>system -You are a helpful assistant. -<|im_end|> -<|im_start|>user -%@<|im_end|> -<|im_start|>assistant - - - - - - -""" - - public static let llama3PromptTemplate = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>%@<|eot_id|><|start_header_id|>assistant<|end_header_id|>" - -public static let phi4PromptTemplate = "<|user|>%@<|end|><|assistant|>" -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ContentView.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ContentView.swift deleted file mode 100644 index 02dfbac18b2..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ContentView.swift +++ /dev/null @@ -1,619 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import ExecuTorchLLM -import SwiftUI -import UniformTypeIdentifiers - -class RunnerHolder: ObservableObject { - var textRunner: TextRunner? - var multimodalRunner: MultimodalRunner? -} - -extension UIImage { - func resized(to newSize: CGSize) -> UIImage { - let format = UIGraphicsImageRendererFormat.default() - format.scale = 1 - return UIGraphicsImageRenderer(size: newSize, format: format).image { - _ in draw(in: CGRect(origin: .zero, size: newSize)) - } - } - - func toRGBArray() -> [UInt8]? { - guard let cgImage = self.cgImage else { return nil } - - let width = Int(cgImage.width), height = Int(cgImage.height) - let totalPixels = width * height, bytesPerPixel = 4, bytesPerRow = bytesPerPixel * width - var rgbValues = [UInt8](repeating: 0, count: totalPixels * 3) - var pixelData = [UInt8](repeating: 0, count: width * height * bytesPerPixel) - - guard let context = CGContext( - data: &pixelData, width: width, height: height, bitsPerComponent: 8, - bytesPerRow: bytesPerRow, space: CGColorSpaceCreateDeviceRGB(), - bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue - ) else { return nil } - - context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height)) - - for y in 0.. 
ModelType { - let filename = (path as NSString).lastPathComponent.lowercased() - if filename.hasPrefix("llama") { - return .llama - } else if filename.hasPrefix("llava") { - return .llava - } else if filename.hasPrefix("qwen3") { - return .qwen3 - } else if filename.hasPrefix("phi4") { - return .phi4 - } - print("Unknown model type in path: \(path). Model filename should start with one of: llama, llava, qwen3, or phi4") - exit(1) - } - } - - private var placeholder: String { - resourceManager.isModelValid ? resourceManager.isTokenizerValid ? "Prompt..." : "Select Tokenizer..." : "Select Model..." - } - - private var title: String { - resourceManager.isModelValid ? resourceManager.isTokenizerValid ? resourceManager.modelName : "Select Tokenizer..." : "Select Model..." - } - - private var modelTitle: String { - resourceManager.isModelValid ? resourceManager.modelName : "Select Model..." - } - - private var tokenizerTitle: String { - resourceManager.isTokenizerValid ? resourceManager.tokenizerName : "Select Tokenizer..." - } - - private var isInputEnabled: Bool { resourceManager.isModelValid && resourceManager.isTokenizerValid } - - var body: some View { - NavigationView { - ZStack { - VStack { - if showingSettings { - VStack(spacing: 20) { - HStack { - VStack(spacing: 10) { - Button(action: { pickerType = .model }) { - Label(modelTitle, systemImage: "doc") - .lineLimit(1) - .truncationMode(.middle) - .frame(maxWidth: 300, alignment: .leading) - } - Button(action: { pickerType = .tokenizer }) { - Label(tokenizerTitle, systemImage: "doc") - .lineLimit(1) - .truncationMode(.middle) - .frame(maxWidth: 300, alignment: .leading) - } - } - .padding() - .background(Color.gray.opacity(0.1)) - .cornerRadius(10) - .fixedSize(horizontal: true, vertical: false) - Spacer() - } - .padding() - } - } - - MessageListView(messages: $messages) - .simultaneousGesture( - DragGesture().onChanged { value in - if value.translation.height > 10 { - hideKeyboard() - } - showingSettings = false - textFieldFocused = false - } - ) - .onTapGesture { - showingSettings = false - textFieldFocused = false - } - - HStack { - Button(action: { - imagePickerSourceType = .photoLibrary - isImagePickerPresented = true - }) { - Image(systemName: "photo.on.rectangle") - .resizable() - .scaledToFit() - .frame(width: 24, height: 24) - } - .background(Color.clear) - .cornerRadius(8) - - Button(action: { - if UIImagePickerController.isSourceTypeAvailable(.camera) { - imagePickerSourceType = .camera - isImagePickerPresented = true - } else { - print("Camera not available") - } - }) { - Image(systemName: "camera") - .resizable() - .scaledToFit() - .frame(width: 24, height: 24) - } - .background(Color.clear) - .cornerRadius(8) - - if resourceManager.isModelValid && ModelType.fromPath(resourceManager.modelPath) == .qwen3 { - Button(action: { - thinkingMode.toggle() - showThinkingModeNotification = true - DispatchQueue.main.asyncAfter(deadline: .now() + 3) { - showThinkingModeNotification = false - } - }) { - Image(systemName: "brain") - .resizable() - .scaledToFit() - .frame(width: 24, height: 24) - .foregroundColor(thinkingMode ? .blue : .gray) - } - .background(Color.clear) - .cornerRadius(8) - } - - TextField(placeholder, text: $prompt, axis: .vertical) - .padding(8) - .background(Color.gray.opacity(0.1)) - .cornerRadius(20) - .lineLimit(1...10) - .overlay( - RoundedRectangle(cornerRadius: 20) - .stroke(isInputEnabled ? 
Color.blue : Color.gray, lineWidth: 1) - ) - .disabled(!isInputEnabled) - .focused($textFieldFocused) - .onAppear { textFieldFocused = false } - .onTapGesture { - showingSettings = false - } - - Button(action: isGenerating ? stop : generate) { - Image(systemName: isGenerating ? "stop.circle" : "arrowshape.up.circle.fill") - .resizable() - .aspectRatio(contentMode: .fit) - .frame(height: 28) - } - .disabled(isGenerating ? shouldStopGenerating : (!isInputEnabled || prompt.isEmpty)) - } - .padding([.leading, .trailing, .bottom], 10) - } - .sheet(isPresented: $isImagePickerPresented, onDismiss: addSelectedImageMessage) { - ImagePicker(selectedImage: $selectedImage, sourceType: imagePickerSourceType) - .id(imagePickerSourceType.rawValue) - } - - if showThinkingModeNotification { - Text(thinkingMode ? "Thinking mode enabled" : "Thinking mode disabled") - .padding(8) - .background(Color(UIColor.secondarySystemBackground)) - .cornerRadius(8) - .transition(.opacity) - .animation(.easeInOut(duration: 0.2), value: showThinkingModeNotification) - } - } - .navigationBarTitle(title, displayMode: .inline) - .navigationBarItems( - leading: - Button(action: { - showingSettings.toggle() - }) { - Image(systemName: "folder") - .imageScale(.large) - }, - trailing: - HStack { - Menu { - Section(header: Text("Memory")) { - Text("Used: \(resourceMonitor.usedMemory) Mb") - Text("Available: \(resourceMonitor.usedMemory) Mb") - } - } label: { - Text("\(resourceMonitor.usedMemory) Mb") - } - .onAppear { - resourceMonitor.start() - } - .onDisappear { - resourceMonitor.stop() - } - Button(action: { showingLogs = true }) { - Image(systemName: "list.bullet.rectangle") - } - } - ) - .sheet(isPresented: $showingLogs) { - NavigationView { - LogView(logManager: logManager) - } - } - .fileImporter( - isPresented: Binding( - get: { pickerType != nil }, - set: { if !$0 { pickerType = nil } } - ), - allowedContentTypes: allowedContentTypes(), - allowsMultipleSelection: false - ) { [pickerType] result in - handleFileImportResult(pickerType, result) - } - .onAppear { - do { - try resourceManager.createDirectoriesIfNeeded() - } catch { - withAnimation { - messages.append(Message(type: .info, text: "Error creating content directories: \(error.localizedDescription)")) - } - } - } - } - .navigationViewStyle(StackNavigationViewStyle()) - } - - private func addSelectedImageMessage() { - if let selectedImage { - messages.append(Message(image: selectedImage)) - } - } - - private func generate() { - guard !prompt.isEmpty else { return } - isGenerating = true - shouldStopGenerating = false - shouldStopShowingToken = false - let text = prompt.trimmingCharacters(in: .whitespacesAndNewlines) - let seq_len = 768 // text: 256, vision: 768 - let modelPath = resourceManager.modelPath - let tokenizerPath = resourceManager.tokenizerPath - let modelType = ModelType.fromPath(modelPath) - - prompt = "" - hideKeyboard() - showingSettings = false - - messages.append(Message(text: text)) - messages.append(Message(type: modelType == .llama ? .llamagenerated : .llavagenerated)) - - runnerQueue.async { - defer { - DispatchQueue.main.async { - isGenerating = false - selectedImage = nil - } - } - - switch modelType { - case .llama, .qwen3, .phi4: - runnerHolder.textRunner = runnerHolder.textRunner ?? 
TextRunner( - modelPath: modelPath, - tokenizerPath: tokenizerPath, - specialTokens: [ - "<|begin_of_text|>", - "<|end_of_text|>", - "<|reserved_special_token_0|>", - "<|reserved_special_token_1|>", - "<|finetune_right_pad_id|>", - "<|step_id|>", - "<|start_header_id|>", - "<|end_header_id|>", - "<|eom_id|>", - "<|eot_id|>", - "<|python_tag|>" - ] + (2..<256).map { "<|reserved_special_token_\($0)|>" } - ) - case .llava: - runnerHolder.multimodalRunner = runnerHolder.multimodalRunner ?? MultimodalRunner( - modelPath: modelPath, - tokenizerPath: tokenizerPath - ) - } - - guard !shouldStopGenerating else { return } - switch modelType { - case .llama, .qwen3, .phi4: - if let runner = runnerHolder.textRunner, !runner.isLoaded() { - var error: Error? - let startLoadTime = Date() - do { - try runner.load() - } catch let loadError { - error = loadError - } - - let loadTime = Date().timeIntervalSince(startLoadTime) - DispatchQueue.main.async { - withAnimation { - var message = messages.removeLast() - message.type = .info - if let error { - message.text = "Model loading failed: error \((error as NSError).code)" - } else { - message.text = "Model loaded in \(String(format: "%.2f", loadTime)) s" - } - messages.append(message) - if error == nil { - messages.append(Message(type: .llamagenerated)) - } - } - } - if error != nil { - return - } - } - case .llava: - if let runner = runnerHolder.multimodalRunner, !runner.isLoaded() { - var error: Error? - let startLoadTime = Date() - do { - try runner.load() - } catch let loadError { - error = loadError - } - - let loadTime = Date().timeIntervalSince(startLoadTime) - DispatchQueue.main.async { - withAnimation { - var message = messages.removeLast() - message.type = .info - if let error { - message.text = "Model loading failed: error \((error as NSError).code)" - } else { - message.text = "Model loaded in \(String(format: "%.2f", loadTime)) s" - } - messages.append(message) - if error == nil { - messages.append(Message(type: .llavagenerated)) - } - } - } - if error != nil { - return - } - } - } - - guard !shouldStopGenerating else { - DispatchQueue.main.async { - withAnimation { - _ = messages.removeLast() - } - } - return - } - do { - var tokens: [String] = [] - - if let img = selectedImage { - let llava_prompt = "\(text) ASSISTANT" - let MAX_WIDTH = 336.0 - let newHeight = MAX_WIDTH * img.size.height / img.size.width - let resizedImage = img.resized(to: CGSize(width: MAX_WIDTH, height: newHeight)) - - try runnerHolder.multimodalRunner?.generate([ - MultimodalInput(Image(data: Data(resizedImage.toRGBArray() ?? []), width: Int(MAX_WIDTH), height: Int(newHeight.rounded()), channels: 3)), - MultimodalInput(llava_prompt), - ], sequenceLength: seq_len) { token in - if token != llava_prompt { - if token == "" { - shouldStopGenerating = true - runnerHolder.multimodalRunner?.stop() - } else { - tokens.append(token) - if tokens.count > 2 { - let text = tokens.joined() - let count = tokens.count - tokens = [] - DispatchQueue.main.async { - var message = messages.removeLast() - message.text += text - message.tokenCount += count - message.dateUpdated = Date() - messages.append(message) - } - } - if shouldStopGenerating { - runnerHolder.multimodalRunner?.stop() - } - } - } - } - } else { - let prompt: String - switch modelType { - case .qwen3: - let basePrompt = String(format: Constants.qwen3PromptTemplate, text) - // If thinking mode is enabled for Qwen, don't skip the special tokens - // and have them be generated. - prompt = thinkingMode ? 
basePrompt.replacingOccurrences(of: "\n\n\n\n\n", with: "") : basePrompt - case .llama: - prompt = String(format: Constants.llama3PromptTemplate, text) - case .llava: - prompt = String(format: Constants.llama3PromptTemplate, text) - case .phi4: - prompt = String(format: Constants.phi4PromptTemplate, text) - } - - try runnerHolder.textRunner?.generate(prompt, sequenceLength: seq_len) { token in - - if token != prompt { - if token == "<|eot_id|>" { - // hack to fix the issue that extension/llm/runner/text_token_generator.h - // keeps generating after <|eot_id|> - shouldStopShowingToken = true - } else if token == "<|im_end|>" { - // Qwen3 specific token. - // Skip. - } else if token == "" { - // Qwen3 specific token. - let textToFlush = tokens.joined() - let flushedTokenCount = tokens.count - tokens = [] - DispatchQueue.main.async { - var message = messages.removeLast() - message.text += textToFlush - message.text += message.text.isEmpty ? "Thinking...\n\n" : "\n\nThinking...\n\n" - message.tokenCount += flushedTokenCount + 1 // + 1 for the start thinking token. - message.dateUpdated = Date() - messages.append(message) - } - } else if token == "" { - // Qwen3 specific token. - let textToFlush = tokens.joined() - let flushedTokenCount = tokens.count - tokens = [] - DispatchQueue.main.async { - var message = messages.removeLast() - message.text += textToFlush - message.text += "\n\nFinished thinking.\n\n" - message.tokenCount += flushedTokenCount + 1 // + 1 for the end thinking token. - message.dateUpdated = Date() - messages.append(message) - } - } else { - tokens.append(token.trimmingCharacters(in: .newlines)) - // Flush tokens in groups of 3 so that it's closer to whole words being generated - // rather than parts of words (tokens). - if tokens.count > 2 { - let text = tokens.joined() - let count = tokens.count - tokens = [] - DispatchQueue.main.async { - var message = messages.removeLast() - message.text += text - message.tokenCount += count - message.dateUpdated = Date() - messages.append(message) - } - } - if shouldStopGenerating { - runnerHolder.textRunner?.stop() - } - } - } - } - } - } catch { - DispatchQueue.main.async { - withAnimation { - var message = messages.removeLast() - message.type = .info - message.text = "Text generation failed: error \((error as NSError).code)" - messages.append(message) - } - } - } - } - } - - private func stop() { - shouldStopGenerating = true - } - - private func allowedContentTypes() -> [UTType] { - guard let pickerType else { return [] } - switch pickerType { - case .model: - return [UTType(filenameExtension: "pte")].compactMap { $0 } - case .tokenizer: - return [UTType(filenameExtension: "bin"), UTType(filenameExtension: "model"), UTType(filenameExtension: "json"), ].compactMap { $0 } - } - } - - private func handleFileImportResult(_ pickerType: PickerType?, _ result: Result<[URL], Error>) { - switch result { - case .success(let urls): - guard let url = urls.first, let pickerType else { - withAnimation { - messages.append(Message(type: .info, text: "Failed to select a file")) - } - return - } - runnerQueue.async { - runnerHolder.textRunner = nil - runnerHolder.multimodalRunner = nil - } - switch pickerType { - case .model: - resourceManager.modelPath = url.path - case .tokenizer: - resourceManager.tokenizerPath = url.path - } - if resourceManager.isModelValid && resourceManager.isTokenizerValid { - showingSettings = false - textFieldFocused = true - } - case .failure(let error): - withAnimation { - messages.append(Message(type: .info, text: "Failed 
to select a file: \(error.localizedDescription)")) - } - } - } -} - -extension View { - func hideKeyboard() { - UIApplication.shared.sendAction(#selector(UIResponder.resignFirstResponder), to: nil, from: nil, for: nil) - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ImagePicker.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ImagePicker.swift deleted file mode 100644 index 57d71f6686d..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ImagePicker.swift +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import SwiftUI -import UIKit - -struct ImagePicker: UIViewControllerRepresentable { - class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate { - let parent: ImagePicker - - init(parent: ImagePicker) { - self.parent = parent - } - - func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) { - if let image = info[.originalImage] as? UIImage { - parent.selectedImage = image - } - - parent.presentationMode.wrappedValue.dismiss() - } - - func imagePickerControllerDidCancel(_ picker: UIImagePickerController) { - parent.selectedImage = nil - parent.presentationMode.wrappedValue.dismiss() - } - } - - @Environment(\.presentationMode) var presentationMode - @Binding var selectedImage: UIImage? - var sourceType: UIImagePickerController.SourceType = .photoLibrary - - func makeCoordinator() -> Coordinator { - Coordinator(parent: self) - } - - func makeUIViewController(context: Context) -> UIImagePickerController { - let picker = UIImagePickerController() - picker.delegate = context.coordinator - picker.sourceType = sourceType - return picker - } - - func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {} -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogManager.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogManager.swift deleted file mode 100644 index 038e2807fed..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogManager.swift +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import SwiftUI - -import ExecuTorch - -struct LogEntry: Identifiable, Codable { - let id: UUID - let level: Int - let timestamp: TimeInterval - let filename: String - let line: UInt - let message: String -} - -class LogManager: ObservableObject, LogSink { - @AppStorage("logs") private var data = Data() - - @Published var logs: [LogEntry] = [] { - didSet { - data = (try? JSONEncoder().encode(logs)) ?? Data() - } - } - - init() { - logs = (try? JSONDecoder().decode([LogEntry].self, from: data)) ?? 
[] - Log.shared.add(sink: self) - } - - deinit { - Log.shared.remove(sink: self) - } - - func log(level: LogLevel, timestamp: TimeInterval, filename: String, line: UInt, message: String) { - let log = LogEntry(id: UUID(), level: level.rawValue, timestamp: timestamp, filename: filename, line: line, message: message) - - DispatchQueue.main.sync { - self.logs.append(log) - } - } - - func clear() { - logs.removeAll() - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogView.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogView.swift deleted file mode 100644 index 1dc6ebac2b3..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/LogView.swift +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import SwiftUI - -import ExecuTorch - -struct LogView: View { - @ObservedObject var logManager: LogManager - - var body: some View { - ScrollView { - VStack(alignment: .leading) { - ForEach(logManager.logs) { log in - Text("\(format(timestamp: log.timestamp)) \(log.filename):\(log.line)") - .padding(.top) - .foregroundColor(.secondary) - .textSelection(.enabled) - Text(log.message) - .padding(.bottom) - .foregroundColor(color(for: log.level)) - .textSelection(.enabled) - } - } - } - .padding() - .defaultScrollAnchor(.bottom) - .navigationBarTitle("Logs", displayMode: .inline) - .navigationBarItems(trailing: - Button(action: { logManager.clear() }) { - Image(systemName: "trash") - } - ) - } - - private func format(timestamp: TimeInterval) -> String { - let totalSeconds = Int(timestamp) - let hours = (totalSeconds / 3600) % 24 - let minutes = (totalSeconds / 60) % 60 - let seconds = totalSeconds % 60 - let microseconds = Int((timestamp - Double(totalSeconds)) * 1000000) - return String(format: "%02d:%02d:%02d.%06d", hours, minutes, seconds, microseconds) - } - - private func color(for level: Int) -> Color { - switch LogLevel(rawValue: level) { - case .debug: - return .blue - case .info: - return .primary - case .error: - return .red - case .fatal: - return .purple - default: - return .secondary - } - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Message.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Message.swift deleted file mode 100644 index 34ed0d7e933..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/Message.swift +++ /dev/null @@ -1,26 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import UIKit - -enum MessageType { - case prompted - case llamagenerated // TODO: change this to to something more general, like "textgenerated". - case llavagenerated - case info -} - -struct Message: Identifiable, Equatable { - let id = UUID() - let dateCreated = Date() - var dateUpdated = Date() - var type: MessageType = .prompted - var text = "" - var tokenCount = 0 - var image: UIImage? 
-} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageListView.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageListView.swift deleted file mode 100644 index bc0e459123f..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageListView.swift +++ /dev/null @@ -1,88 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import SwiftUI - -struct MessageListView: View { - @Binding var messages: [Message] - @State private var showScrollToBottomButton = false - @State private var userHasScrolled = false - @State private var keyboardHeight: CGFloat = 0 - - var body: some View { - ScrollViewReader { value in - ScrollView { - VStack { - ForEach(messages) { message in - MessageView(message: message) - .padding([.leading, .trailing], 20) - } - GeometryReader { geometry -> Color in - DispatchQueue.main.async { - let maxY = geometry.frame(in: .global).maxY - let screenHeight = UIScreen.main.bounds.height - keyboardHeight - let isBeyondBounds = maxY > screenHeight - 50 - if showScrollToBottomButton != isBeyondBounds { - showScrollToBottomButton = isBeyondBounds - userHasScrolled = isBeyondBounds - } - } - return Color.clear - } - .frame(height: 0) - } - } - .onChange(of: messages) { - if !userHasScrolled, let lastMessageId = messages.last?.id { - withAnimation { - value.scrollTo(lastMessageId, anchor: .bottom) - } - } - } - .overlay( - Group { - if showScrollToBottomButton { - Button(action: { - withAnimation { - if let lastMessageId = messages.last?.id { - value.scrollTo(lastMessageId, anchor: .bottom) - } - userHasScrolled = false - } - }) { - ZStack { - Circle() - .fill(Color(UIColor.secondarySystemBackground).opacity(0.9)) - .frame(height: 28) - Image(systemName: "arrow.down.circle") - .resizable() - .aspectRatio(contentMode: .fit) - .frame(height: 28) - } - } - .transition(AnyTransition.opacity.animation(.easeInOut(duration: 0.2))) - } - }, - alignment: .bottom - ) - } - .onAppear { - NotificationCenter.default.addObserver(forName: UIResponder.keyboardWillShowNotification, object: nil, queue: .main) { notification in - let keyboardFrame = notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? CGRect ?? .zero - keyboardHeight = keyboardFrame.height - 40 - } - NotificationCenter.default.addObserver(forName: UIResponder.keyboardWillHideNotification, object: nil, queue: .main) { _ in - keyboardHeight = 0 - } - } - .onDisappear { - NotificationCenter.default.removeObserver(self, name: UIResponder.keyboardWillShowNotification, object: nil) - NotificationCenter.default.removeObserver(self, name: UIResponder.keyboardWillHideNotification, object: nil) - } - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageView.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageView.swift deleted file mode 100644 index 542a88377b7..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/MessageView.swift +++ /dev/null @@ -1,73 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -import SwiftUI - -struct MessageView: View { - let message: Message - - var body: some View { - VStack(alignment: .center) { - if message.type == .info { - Text(message.text) - .font(.caption) - .foregroundColor(.secondary) - .padding([.leading, .trailing], 10) - } else { - VStack(alignment: message.type == .llamagenerated || message.type == .llavagenerated ? .leading : .trailing) { - if message.type == .llamagenerated || message.type == .llavagenerated || message.type == .prompted { - Text(message.type == .llamagenerated ? "Llama" : (message.type == .llavagenerated ? "Llava" : "Prompt")) - .font(.caption) - .foregroundColor(.secondary) - .padding(message.type == .llamagenerated || message.type == .llavagenerated ? .trailing : .leading, 20) - } - HStack { - if message.type != .llamagenerated && message.type != .llavagenerated { Spacer() } - if message.text.isEmpty { - if let img = message.image { - Image(uiImage: img) - .resizable() - .scaledToFit() - .frame(maxWidth: 200, maxHeight: 200) - .padding() - .background(Color.gray.opacity(0.2)) - .cornerRadius(8) - .padding(.vertical, 2) - } else { - ProgressView() - .progressViewStyle(CircularProgressViewStyle()) - } - } else { - Text(message.text) - .padding(10) - .foregroundColor(message.type == .llamagenerated || message.type == .llavagenerated ? .primary : .white) - .background(message.type == .llamagenerated || message.type == .llavagenerated ? Color(UIColor.secondarySystemBackground) : Color.blue) - .cornerRadius(20) - .contextMenu { - Button(action: { - UIPasteboard.general.string = message.text - }) { - Text("Copy") - Image(systemName: "doc.on.doc") - } - } - } - if message.type == .llamagenerated || message.type == .llavagenerated { Spacer() } - } - let elapsedTime = message.dateUpdated.timeIntervalSince(message.dateCreated) - if elapsedTime > 0 && message.type != .info { - Text(String(format: "%.1f t/s", Double(message.tokenCount) / elapsedTime)) - .font(.caption) - .foregroundColor(.secondary) - .padding(message.type == .llamagenerated || message.type == .llavagenerated ? .trailing : .leading, 20) - } - }.padding([.leading, .trailing], message.type == .info ? 0 : 10) - } - } - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceManager.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceManager.swift deleted file mode 100644 index 7d3be7975de..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceManager.swift +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -import SwiftUI - -final class ResourceManager: ObservableObject { - @AppStorage("modelPath") var modelPath = "" - @AppStorage("tokenizerPath") var tokenizerPath = "" - private let fileManager = FileManager.default - - var isModelValid: Bool { - fileManager.fileExists(atPath: modelPath) - } - - var isTokenizerValid: Bool { - fileManager.fileExists(atPath: tokenizerPath) - } - - var modelName: String { - URL(fileURLWithPath: modelPath).deletingPathExtension().lastPathComponent - } - - var tokenizerName: String { - URL(fileURLWithPath: tokenizerPath).deletingPathExtension().lastPathComponent - } - - func createDirectoriesIfNeeded() throws { - guard let documentsDirectory = fileManager.urls(for: .documentDirectory, in: .userDomainMask).first else { return } - try fileManager.createDirectory(at: documentsDirectory.appendingPathComponent("models"), withIntermediateDirectories: true, attributes: nil) - try fileManager.createDirectory(at: documentsDirectory.appendingPathComponent("tokenizers"), withIntermediateDirectories: true, attributes: nil) - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceMonitor.swift b/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceMonitor.swift deleted file mode 100644 index 3ec16463e8a..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/Application/ResourceMonitor.swift +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Copyright (c) Meta Platforms, Inc. and affiliates. - * All rights reserved. - * - * This source code is licensed under the BSD-style license found in the - * LICENSE file in the root directory of this source tree. - */ - -import Foundation - -final class ResourceMonitor: ObservableObject { - @Published var usedMemory = 0 - @Published var availableMemory = 0 - private var memoryUpdateTimer: Timer? 
- - deinit { - stop() - } - - public func start() { - memoryUpdateTimer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { [weak self] _ in - self?.updateMemoryUsage() - } - } - - public func stop() { - memoryUpdateTimer?.invalidate() - } - - private func updateMemoryUsage() { - usedMemory = usedMemoryInMB() - availableMemory = availableMemoryInMB() - } - - private func usedMemoryInMB() -> Int { - var info = task_vm_info_data_t() - var count = mach_msg_type_number_t(MemoryLayout.size) / 4 - - let kerr: kern_return_t = withUnsafeMutablePointer(to: &info) { - $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { - task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), $0, &count) - } - } - guard kerr == KERN_SUCCESS else { return 0 } - return Int(info.phys_footprint / 0x100000) - } - - private func availableMemoryInMB() -> Int { - return Int(os_proc_available_memory() / 0x100000) - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA/SupportingFiles/LLaMA-Info.plist b/examples/demo-apps/apple_ios/LLaMA/LLaMA/SupportingFiles/LLaMA-Info.plist deleted file mode 100644 index ff579a6caff..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA/SupportingFiles/LLaMA-Info.plist +++ /dev/null @@ -1,8 +0,0 @@ - - - - - UIFileSharingEnabled - - - diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/Contents.json b/examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/Contents.json deleted file mode 100644 index f4344003c80..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/Contents.json +++ /dev/null @@ -1,14 +0,0 @@ -{ - "images" : [ - { - "filename" : "logo.png", - "idiom" : "universal", - "platform" : "ios", - "size" : "1024x1024" - } - ], - "info" : { - "author" : "xcode", - "version" : 1 - } -} diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/logo.png b/examples/demo-apps/apple_ios/LLaMA/LLaMAAssets/Assets.xcassets/AppIcon.appiconset/logo.png deleted file mode 100644 index 60e3e5174e9bdec2caf09cd42a9232e1dff65530..0000000000000000000000000000000000000000 GIT binary patch (base85 data for the 33036-byte app icon omitted)
- - - - com.apple.developer.kernel.increased-memory-limit - - - diff --git a/examples/demo-apps/apple_ios/LLaMA/README.md b/examples/demo-apps/apple_ios/LLaMA/README.md deleted file mode 100644 index 70775d63a68..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# ExecuTorch Llama iOS Demo App - -Get hands-on with running LLaMA and LLaVA models — exported via ExecuTorch — natively on your iOS device! - -*Click the image below to see it in action!* - 

- -[Image: iOS app running a LLaMA model]

- -## Requirements -- [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12/) 15.0 or later -- [Cmake](https://cmake.org/download/) 3.19 or later - - Download and open the macOS `.dmg` installer and move the Cmake app to `/Applications` folder. - - Install Cmake command line tools: `sudo /Applications/CMake.app/Contents/bin/cmake-gui --install` -- A development provisioning profile with the [`increased-memory-limit`](https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_kernel_increased-memory-limit) entitlement. - -## Models - -Download already exported LLaMA/LLaVA models along with tokenizers from [HuggingFace](https://huggingface.co/executorch-community) or export your own empowered by [XNNPACK](docs/delegates/xnnpack_README.md) or [MPS](docs/delegates/mps_README.md) backends. - -## Build and Run - -1. Make sure git submodules are up-to-date: - ```bash - git submodule update --init --recursive - ``` - -2. Open the Xcode project: - ```bash - open examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj - ``` - -3. Click the Play button to launch the app in the Simulator. - -4. To run on a device, ensure you have it set up for development and a provisioning profile with the `increased-memory-limit` entitlement. Update the app's bundle identifier to match your provisioning profile with the required capability. - -5. After successfully launching the app, copy the exported ExecuTorch model (`.pte`) and tokenizer (`.model`) files to the iLLaMA folder. Four models are currently supported at the moment - Llama, Qwen3, Phi4-mini, and Llava multimodal. Please ensure that your model `.pte` file starts with `llama`, `qwen3`, `phi4` or `llava` so that the app selects the correct model type. - - - **For the Simulator:** Drag and drop both files onto the Simulator window and save them in the `On My iPhone > iLLaMA` folder. - - **For a Device:** Open a separate Finder window, navigate to the Files tab, drag and drop both files into the iLLaMA folder, and wait for the copying to finish. - -6. Follow the app's UI guidelines to select the model and tokenizer files from the local filesystem and issue a prompt. - -For more details check out the [Using ExecuTorch on iOS](../../../../docs/source/using-executorch-ios.md) page. diff --git a/examples/demo-apps/apple_ios/LLaMA/TARGETS b/examples/demo-apps/apple_ios/LLaMA/TARGETS deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md deleted file mode 100644 index d6bccc0ef47..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md +++ /dev/null @@ -1,106 +0,0 @@ -# Building Llama iOS Demo for MPS Backend - -This tutorial covers the end to end workflow for building an iOS demo app using MPS backend on device. -More specifically, it covers: -1. Export and quantization of Llama models against the MPS backend. -2. Building and linking libraries that are required to inference on-device for iOS platform using MPS. -3. Building the iOS demo app itself. 
- -## Prerequisites -* [Xcode 15](https://developer.apple.com/xcode) -* [iOS 18 SDK](https://developer.apple.com/ios) -* Set up your ExecuTorch repo and dev environment if you haven’t already by following [Setting up ExecuTorch](https://pytorch.org/executorch/main/using-executorch-building-from-source). - -## Setup ExecuTorch -In this section, we will set up the ExecuTorch repo with Conda environment management. Make sure you have Conda available on your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below were run on Linux (CentOS). - -Check out the ExecuTorch repo and sync submodules: - -``` -git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch -``` - -Create either a Python virtual environment: - -``` -python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip -``` - -Or a Conda environment: - -``` -conda create -n et_mps python=3.10.0 && conda activate et_mps -``` - -Install dependencies: - -``` -./install_executorch.sh -``` - -## Prepare Models -In this demo app, we support text-only inference with Llama 3.1, Llama 3, and Llama 2 models. - -Install the required packages to export the model: - -``` -./examples/models/llama/install_requirements.sh -``` - -Export the model: -``` -python -m extension.llm.export.export_llm base.checkpoint="${MODEL_DIR}/consolidated.00.pth" base.params="${MODEL_DIR}/params.json" model.use_kv_cache=True model.use_sdpa_with_kv_cache=True backend.mps.enabled=True model.dtype_override="fp32" model.enable_dynamic_shape=False quantization.qmode="8da4w" quantization.group_size=32 -``` - -## Pushing Model and Tokenizer - -### Copy the model to Simulator -* Drag & drop the model and tokenizer files onto the Simulator window and save them somewhere inside the iLLaMA folder. -* Pick the files in the app dialog, type a prompt and click the arrow-up button. - -### Copy the model to Device -* Wire-connect the device and open the contents in Finder. -* Navigate to the Files tab and drag & drop the model and tokenizer files onto the iLLaMA folder. -* Wait until the files are copied. - -## Configure the Xcode Project - -### Install CMake -Download and open the macOS .dmg installer at https://cmake.org/download and move the CMake app to the /Applications folder. -Install the CMake command line tools: - -``` -sudo /Applications/CMake.app/Contents/bin/cmake-gui --install -``` - - -### Swift Package Manager -The prebuilt ExecuTorch runtime, backends, and kernels are available as a Swift PM package. - -### Xcode -Open the project in Xcode. In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version, e.g., “swiftpm-0.6.0”, or to a dated nightly branch (e.g. "swiftpm-0.7.0-20250401") for a nightly build on a specific date. - -Link your binary with the ExecuTorch runtime and any backends or kernels used by the exported ML model. It is recommended to link the core runtime to the components that use ExecuTorch directly, and to link kernels and backends against the main app target. - -Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
- -For more details on integrating and running ExecuTorch on Apple platforms, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios). -

-[Image: iOS LLaMA App Swift PM]

- -Then select which ExecuTorch framework should link against which target. - -

-[Image: iOS LLaMA App Choosing package]

- -Click “Run” to build the app and run it on your iPhone. If the app runs successfully on your device, you should see something like below: - -

-[Image: iOS LLaMA App mps]
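
As an alternative to the drag-and-drop step described under "Pushing Model and Tokenizer" above, the files can also be copied into the Simulator app's sandbox from the command line. A minimal sketch, assuming the app is installed on the booted Simulator; the bundle identifier and file names are placeholders, and the approach relies on the app exposing its Documents folder (the deleted LLaMA-Info.plist enables `UIFileSharingEnabled`, which is what makes that folder visible to file sharing):

```
# Locate the data container of the demo app on the booted Simulator.
# "org.example.illama" is a placeholder bundle identifier -- use the one set in your Xcode target.
CONTAINER=$(xcrun simctl get_app_container booted org.example.illama data)

# Copy the exported model and tokenizer where the app's file picker can find them.
# The file names are placeholders for whatever your export step produced.
mkdir -p "$CONTAINER/Documents"
cp model.pte tokenizer.model "$CONTAINER/Documents/"
```

On a physical device, the Finder-based copy described above remains the simplest route.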

- - ## Reporting Issues -If you encounter any bugs or issues while following this tutorial, please file a bug/issue on [Github](https://github.com/pytorch/executorch/issues/new). diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md deleted file mode 100644 index 4ec10032c1f..00000000000 --- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md +++ /dev/null @@ -1,212 +0,0 @@ -# Building Llama iOS Demo for XNNPACK Backend - -This tutorial covers the end-to-end workflow for building an iOS demo app using the XNNPACK backend on device. -More specifically, it covers: -1. Export and quantization of Llama models against the XNNPACK backend. -2. Building and linking libraries that are required to inference on-device for iOS platform using XNNPACK. -3. Building the iOS demo app itself. - -## Prerequisites -* [Xcode 15](https://developer.apple.com/xcode) -* [iOS 17 SDK](https://developer.apple.com/ios) - -## Setup ExecuTorch -In this section, we will set up the ExecuTorch repo with Conda environment management. Make sure you have Conda available on your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below were run on Linux (CentOS). - -Check out the ExecuTorch repo and sync submodules: - -``` -git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch -``` - -Create either a Python virtual environment: - -``` -python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip -``` - -Or a Conda environment: - -``` -conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack -``` - -Install dependencies: - -``` -./install_executorch.sh -``` - -## Prepare Models -In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5. -* You can request and download model weights for Llama through Meta's official [website](https://llama.meta.com/). -* For chat use-cases, download the instruct models instead of pretrained. -* Install the required packages to export the model: - -``` -./examples/models/llama/install_requirements.sh -``` - -### For Llama 3.2 1B and 3B SpinQuant models -Meta has released prequantized INT4 SpinQuant Llama 3.2 models that ExecuTorch supports on the XNNPACK backend. -* Export Llama model and generate .pte file as below: -``` -python -m extension.llm.export.export_llm base.model_class="llama3_2" base.checkpoint= base.params= model.use_kv_cache=True model.use_sdpa_with_kv_cache=True backend.xnnpack.enabled=True model.dtype_override="fp32" backend.xnnpack.extended_ops=True base.preq_mode="preq_8da4w_out_8da8w" base.preq_group_size=32 export.max_seq_length=2048 export.max_context_length=2048 base.preq_embedding_quantize=\'8,0\' quantization.use_spin_quant="native" base.metadata='"{\"get_bos_id\":128000, \"get_eos_ids\":[128009, 128001]}"' export.output_name="llama3_2_spinquant.pte" -``` -For convenience, an [exported ExecuTorch SpinQuant model](https://huggingface.co/executorch-community/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8-ET/blob/main/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8.pte) is available on Hugging Face. The export was created using [this detailed recipe notebook](https://huggingface.co/executorch-community/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8-ET/blob/main/Export_Recipe_Llama_3_2_1B_Instruct_SpinQuant_INT4_EO8.ipynb). 
- -### For Llama 3.2 1B and 3B QAT+LoRA models -Meta has released prequantized INT4 QAT+LoRA Llama 3.2 models that ExecuTorch supports on the XNNPACK backend. -* Export Llama model and generate .pte file as below: -``` -python -m extension.llm.export.export_llm base.model_class="llama3_2" base.checkpoint= base.params= quantization.use_qat=True base.use_lora=16 model.use_kv_cache=True model.use_sdpa_with_kv_cache=True backend.xnnpack.enabled=True model.dtype_override="fp32" backend.xnnpack.extended_ops=True base.preq_mode="preq_8da4w_out_8da8w" base.preq_group_size=32 export.max_seq_length=2048 export.max_context_length=2048 base.preq_embedding_quantize=\'8,0\' base.metadata='"{\"get_bos_id\":128000, \"get_eos_ids\":[128009, 128001]}"' export.output_name="llama3_2_qat_lora.pte" -``` -For convenience, an [exported ExecuTorch QAT+LoRA model](https://huggingface.co/executorch-community/Llama-3.2-1B-Instruct-QLORA_INT4_EO8-ET/blob/main/Llama-3.2-1B-Instruct-QLORA_INT4_EO8.pte) is available on Hugging Face. The export was created using [this detailed recipe notebook](https://huggingface.co/executorch-community/Llama-3.2-1B-Instruct-QLORA_INT4_EO8-ET/blob/main/Export_Recipe_Llama_3_2_1B_Instruct_QLORA_INT4_EO8.ipynb). - -### For Llama 3.2 1B and 3B BF16 models -We have supported BF16 as a data type on the XNNPACK backend for Llama 3.2 1B/3B models. -* The 1B model in BF16 format can run on mobile devices with 8GB RAM (iPhone 15 Pro and later). The 3B model will require 12GB+ RAM and hence will not fit on 8GB RAM phones. -* Export Llama model and generate .pte file as below: - -``` -python -m extension.llm.export.export_llm base.model_class="llama3_2" base.checkpoint= base.params= model.use_kv_cache=True model.use_sdpa_with_kv_cache=True backend.xnnpack.enabled=True model.dtype_override="bf16" base.metadata='"{\"get_bos_id\":128000, \"get_eos_ids\":[128009, 128001]}"' export.output_name="llama3_2_bf16.pte" -``` -For convenience, an [exported ExecuTorch bf16 model](https://huggingface.co/executorch-community/Llama-3.2-1B-ET/blob/main/llama3_2-1B.pte) is available on Hugging Face. The export was created using [this detailed recipe notebook](https://huggingface.co/executorch-community/Llama-3.2-1B-ET/blob/main/ExportRecipe_1B.ipynb). - -For more detail using Llama 3.2 lightweight models including prompt template, please go to our official [website](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/#-llama-3.2-lightweight-models-(1b/3b)-). - -### For Llama 3.1 and Llama 2 models - -Export the model -``` -python -m extension.llm.export.export_llm base.checkpoint= base.params= model.use_kv_cache=True model.use_sdpa_with_kv_cache=True backend.xnnpack.enabled=True quantization.qmode="8da4w" quantization.group_size=128 model.dtype_override="fp32" base.metadata='"{\"get_bos_id\":128000, \"get_eos_ids\":[128009, 128001]}"' quantization.embedding_quantize=\'4,32\' export.output_name="llama3_kv_sdpa_xnn_qe_4_32.pte" -``` - -### For LLaVA model -* For the Llava 1.5 model, you can get it from Huggingface [here](https://huggingface.co/llava-hf/llava-1.5-7b-hf). -* Run `examples/models/llava/install_requirements.sh` to install dependencies. -* Run the following command to generate llava.pte, tokenizer.bin and download an image basketball.jpg. - -``` -python -m executorch.examples.models.llava.export_llava --pte-name llava.pte --with-artifacts -``` -* You can find more information [here](https://github.com/pytorch/executorch/tree/main/examples/models/llava). 
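
The export invocations in this section leave the checkpoint and params arguments for you to fill in. As a reference only, here is the BF16 export from above written out as a small script; `MODEL_DIR` is a hypothetical local path following the `${MODEL_DIR}` convention used in the MPS instructions, and every other option is copied verbatim from the command above.

```
# Sketch: the Llama 3.2 BF16 export with placeholder paths filled in.
# MODEL_DIR is hypothetical -- point it at your downloaded Llama 3.2 1B checkpoint.
MODEL_DIR="$HOME/models/Llama-3.2-1B-Instruct"

python -m extension.llm.export.export_llm \
  base.model_class="llama3_2" \
  base.checkpoint="${MODEL_DIR}/consolidated.00.pth" \
  base.params="${MODEL_DIR}/params.json" \
  model.use_kv_cache=True \
  model.use_sdpa_with_kv_cache=True \
  backend.xnnpack.enabled=True \
  model.dtype_override="bf16" \
  base.metadata='"{\"get_bos_id\":128000, \"get_eos_ids\":[128009, 128001]}"' \
  export.output_name="llama3_2_bf16.pte"
```

The SpinQuant and QAT+LoRA commands above follow the same pattern; only the quantization-related options differ.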
- - -## Configure the XCode Project - -### 1. Install CMake -Download and open the macOS .dmg installer at https://cmake.org/download and move the Cmake app to /Applications folder. -Install Cmake command line tools: - -``` -sudo /Applications/CMake.app/Contents/bin/cmake-gui --install -``` - -### 2. Add ExecuTorch Runtime Package - -There are two options to add ExecuTorch runtime package into your XCode project: - -- [Recommended] Prebuilt package (via Swift Package Manager) -- Manually build the package locally and link them - - -### 2.1 [Recommended] Prebuilt package (via Swift Package Manager) - -The current XCode project is pre-configured to automatically download and link the latest prebuilt package via Swift Package Manager. - -#### (Optional) Changing the prebuilt package version -While we recommended using the latest prebuilt package pre-configured with the XCode project, you can also change the package version manually to your desired version. - -Go to Project Navigator, click on LLaMA. `Project --> LLaMA --> Package Dependencies`, and update the package dependencies to any of the available options below: - -- Branch --> swiftpm-0.7.0.20250401 (amend to match the latest nightly build) -- Branch --> swiftpm-0.6.0 - -### 2.2 Manually build the package locally and link them - -Note: You should only use this step if the prebuilt package doesn't work for your usecase (For example, you require the latest PRs from main, where there are no pre-built package yet) - -If you need to manually build the package, run the following command in your terminal: -``` -# Install a compatible version of Buck2 -BUCK2_RELEASE_DATE="2024-12-16" -BUCK2_ARCHIVE="buck2-aarch64-apple-darwin.zst" -BUCK2=".venv/bin/buck2" - -curl -LO "https://github.com/facebook/buck2/releases/download/${BUCK2_RELEASE_DATE}/${BUCK2_ARCHIVE}" -zstd -cdq "$BUCK2_ARCHIVE" > "$BUCK2" && chmod +x "$BUCK2" -rm "$BUCK2_ARCHIVE" - -./scripts/build_apple_frameworks.sh -``` - - After the build finishes successfully, the resulting frameworks can be found in the `cmake-out` directory. Copy them to your project and link them against your targets. - -The following packages should be linked in your app target `LLaMA` (left side, LLaMA --> General --> select LLaMA under "TARGETS" --> scroll down to "Frameworks, Libraries, and Embedded Content") -- backend_coreml -- backend_mps -- backend_xnnpack -- kernels_llm -- kernels_optimized -- kernels_portable -- kernels_quantized - -The following package should be linked in your target app `LLaMARunner` (left side, LLaMA --> General --> select LLaMARunner under "TARGETS" --> scroll down to "Frameworks and Libraries") -- executorch - -

-[Image: iOS LLaMA App Choosing package]

- -If you cannot add the package into your app target (it's greyed out), it might have been linked before. You can verify it by checking it from your target app `(LLaMA / LLaMARunner) --> Build Phases --> Link Binary With Libraries`. - - - - More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios#local-build). - -### 3. Configure Build Schemes - -The project has two build configurations: -- Debug -- [Recommended] Release - -Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration` and update the configuration to "Release". - -We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. Debug build has logging overhead and will impact inferencing performance, while release build has compiler optimizations enabled and all logging overhead removed. - -For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios). - -### 4. Build and Run the project - -Click the "play" button on top right of your XCode app, or navigate to `Product --> Run` to build and run the app on your phone. - -### 5. Pushing Model and Tokenizer - -There are two options to copy the model (.pte) and tokenizer files (.model) to your app, depending on whether you are running it on a simulator or device. - -#### 5.1 Copy the model and tokenizer to Simulator -* Drag&drop the model and tokenizer files onto the Simulator window and save them somewhere inside the iLLaMA folder. -* Pick the files in the app dialog, type a prompt and click the arrow-up button. - -#### 5.2 Copy the model and tokenizer to Device -* Plug the device into your Mac and open the contents in Finder. -* Navigate to the Files tab and drag & drop the model and tokenizer files onto the iLLaMA folder. -* Wait until the files are copied. - -### 6. Try out the app -Open the iLLaMA app, click the settings button at the top left of the app to select the model and tokenizer files. When the app successfully runs on your device, you should see something like below: - -

-[Image: iOS LLaMA App]

- - -For Llava 1.5 models, you can select an image (via the image/camera selector button) before typing a prompt and tapping the send button. -

-[Image: iOS LLaMA App]

- -## Reporting Issues -If you encountered any bugs or issues following this tutorial please file a bug/issue here on [Github](https://github.com/pytorch/executorch/issues/new). diff --git a/examples/demo-apps/react-native/rnllama/README.md b/examples/demo-apps/react-native/rnllama/README.md index 7729f7a153a..46e1c66dc13 100644 --- a/examples/demo-apps/react-native/rnllama/README.md +++ b/examples/demo-apps/react-native/rnllama/README.md @@ -10,7 +10,7 @@ A React Native mobile application for running LLaMA language models using ExecuT - Run LLaMA models directly on device, build the UI using React Native - Tested using Llama 3.2 SpinQuant 1B on iPhone 12 Pro -- The setup is heavily inspired by the [LLaMA iOS app example](https://github.com/pytorch/executorch/tree/main/examples/demo-apps/apple_ios/LLaMA) +- The setup is heavily inspired by the [etLLM app](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) ## Prerequisites diff --git a/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.h b/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.h index 5aaf4bc5724..993b6501a9d 100644 --- a/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.h +++ b/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.h @@ -1,14 +1,14 @@ #ifndef LLaMABridge_h #define LLaMABridge_h -#import +#import #import #import NS_ASSUME_NONNULL_BEGIN @interface LLaMABridge : RCTEventEmitter -@property (nonatomic, strong) LLaMARunner *runner; +@property (nonatomic, strong) ExecuTorchLLMTextRunner *runner; @end NS_ASSUME_NONNULL_END diff --git a/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.mm b/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.mm index 65a3e003f7c..dc5c5c1325c 100644 --- a/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.mm +++ b/examples/demo-apps/react-native/rnllama/ios/LlamaBridge.mm @@ -17,8 +17,9 @@ + (BOOL)requiresMainQueueSetup { resolver:(RCTPromiseResolveBlock)resolve rejecter:(RCTPromiseRejectBlock)reject) { dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ - self.runner = [[LLaMARunner alloc] initWithModelPath:modelPath tokenizerPath:tokenizerPath]; - + self.runner = [[ExecuTorchLLMTextRunner alloc] initWithModelPath:modelPath + tokenizerPath:tokenizerPath]; + NSError *error = nil; if (![self.runner loadWithError:&error]) { reject(@"load_error", error.localizedDescription, error); @@ -36,8 +37,8 @@ + (BOOL)requiresMainQueueSetup { dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ NSError *error = nil; BOOL success = [self.runner generate:prompt - sequenceLength:[seqLen integerValue] - withTokenCallback:^(NSString *token) { + sequenceLength:[seqLen integerValue] + withTokenCallback:^(NSString *token) { [self sendEventWithName:@"onToken" body:token]; } error:&error]; @@ -56,4 +57,4 @@ + (BOOL)requiresMainQueueSetup { }); } -@end \ No newline at end of file +@end diff --git a/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj b/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj index 68d8ed3e955..490269d8a4d 100644 --- a/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj +++ b/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj @@ -7,8 +7,15 @@ objects = { /* Begin PBXBuildFile section */ - 036509DE2E1F7CA700C1BC1B /* LLaMARunner.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 036509D32E1F7C0800C1BC1B /* LLaMARunner.framework */; }; - 036509DF2E1F7CB100C1BC1B /* 
LLaMARunner.framework in Embed Frameworks */ = {isa = PBXBuildFile; fileRef = 036509D32E1F7C0800C1BC1B /* LLaMARunner.framework */; settings = {ATTRIBUTES = (CodeSignOnCopy, RemoveHeadersOnCopy, ); }; }; + 03F814712E729261002D91CC /* backend_coreml in Frameworks */ = {isa = PBXBuildFile; productRef = 03F814702E729261002D91CC /* backend_coreml */; }; + 03F814732E729261002D91CC /* backend_mps in Frameworks */ = {isa = PBXBuildFile; productRef = 03F814722E729261002D91CC /* backend_mps */; }; + 03F814752E729261002D91CC /* backend_xnnpack in Frameworks */ = {isa = PBXBuildFile; productRef = 03F814742E729261002D91CC /* backend_xnnpack */; }; + 03F814772E729261002D91CC /* executorch_debug in Frameworks */ = {isa = PBXBuildFile; productRef = 03F814762E729261002D91CC /* executorch_debug */; }; + 03F814792E729261002D91CC /* executorch_llm_debug in Frameworks */ = {isa = PBXBuildFile; productRef = 03F814782E729261002D91CC /* executorch_llm_debug */; }; + 03F8147B2E729261002D91CC /* kernels_llm in Frameworks */ = {isa = PBXBuildFile; productRef = 03F8147A2E729261002D91CC /* kernels_llm */; }; + 03F8147D2E729261002D91CC /* kernels_optimized in Frameworks */ = {isa = PBXBuildFile; productRef = 03F8147C2E729261002D91CC /* kernels_optimized */; }; + 03F8147F2E729261002D91CC /* kernels_quantized in Frameworks */ = {isa = PBXBuildFile; productRef = 03F8147E2E729261002D91CC /* kernels_quantized */; }; + 03F814812E729261002D91CC /* kernels_torchao in Frameworks */ = {isa = PBXBuildFile; productRef = 03F814802E729261002D91CC /* kernels_torchao */; }; 13B07FBC1A68108700A75B9A /* AppDelegate.mm in Sources */ = {isa = PBXBuildFile; fileRef = 13B07FB01A68108700A75B9A /* AppDelegate.mm */; }; 13B07FBF1A68108700A75B9A /* Images.xcassets in Resources */ = {isa = PBXBuildFile; fileRef = 13B07FB51A68108700A75B9A /* Images.xcassets */; }; 13B07FC11A68108700A75B9A /* main.m in Sources */ = {isa = PBXBuildFile; fileRef = 13B07FB71A68108700A75B9A /* main.m */; }; @@ -21,30 +28,6 @@ E931C67F2CFAF17500DA599B /* LlamaBridge.mm in Sources */ = {isa = PBXBuildFile; fileRef = E931C67E2CFAF17500DA599B /* LlamaBridge.mm */; }; /* End PBXBuildFile section */ -/* Begin PBXContainerItemProxy section */ - 036509D22E1F7C0800C1BC1B /* PBXContainerItemProxy */ = { - isa = PBXContainerItemProxy; - containerPortal = 036509C92E1F7C0800C1BC1B /* LLaMA.xcodeproj */; - proxyType = 2; - remoteGlobalIDString = 03729ED52BB1F8DE00152F2E; - remoteInfo = LLaMARunner; - }; - 036509DC2E1F7C9B00C1BC1B /* PBXContainerItemProxy */ = { - isa = PBXContainerItemProxy; - containerPortal = 036509C92E1F7C0800C1BC1B /* LLaMA.xcodeproj */; - proxyType = 1; - remoteGlobalIDString = 03729ED42BB1F8DE00152F2E; - remoteInfo = LLaMARunner; - }; - 036509E32E1F983A00C1BC1B /* PBXContainerItemProxy */ = { - isa = PBXContainerItemProxy; - containerPortal = 036509C92E1F7C0800C1BC1B /* LLaMA.xcodeproj */; - proxyType = 2; - remoteGlobalIDString = 036CAF9D2BB1444500D6C2D5; - remoteInfo = LLaMA; - }; -/* End PBXContainerItemProxy section */ - /* Begin PBXCopyFilesBuildPhase section */ E931C64C2CFAF07E00DA599B /* Embed Frameworks */ = { isa = PBXCopyFilesBuildPhase; @@ -52,7 +35,6 @@ dstPath = ""; dstSubfolderSpec = 10; files = ( - 036509DF2E1F7CB100C1BC1B /* LLaMARunner.framework in Embed Frameworks */, ); name = "Embed Frameworks"; runOnlyForDeploymentPostprocessing = 0; @@ -60,7 +42,6 @@ /* End PBXCopyFilesBuildPhase section */ /* Begin PBXFileReference section */ - 036509C92E1F7C0800C1BC1B /* LLaMA.xcodeproj */ = {isa = PBXFileReference; lastKnownFileType = 
"wrapper.pb-project"; name = LLaMA.xcodeproj; path = "/Users/shoumikhin/executorch/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj"; sourceTree = ""; }; 13B07F961A680F5B00A75B9A /* rnllama.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = rnllama.app; sourceTree = BUILT_PRODUCTS_DIR; }; 13B07FAF1A68108700A75B9A /* AppDelegate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AppDelegate.h; path = rnllama/AppDelegate.h; sourceTree = ""; }; 13B07FB01A68108700A75B9A /* AppDelegate.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; name = AppDelegate.mm; path = rnllama/AppDelegate.mm; sourceTree = ""; }; @@ -86,23 +67,22 @@ isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( - 036509DE2E1F7CA700C1BC1B /* LLaMARunner.framework in Frameworks */, + 03F814792E729261002D91CC /* executorch_llm_debug in Frameworks */, + 03F8147F2E729261002D91CC /* kernels_quantized in Frameworks */, + 03F8147D2E729261002D91CC /* kernels_optimized in Frameworks */, + 03F814772E729261002D91CC /* executorch_debug in Frameworks */, 96905EF65AED1B983A6B3ABC /* libPods-rnllama.a in Frameworks */, + 03F814752E729261002D91CC /* backend_xnnpack in Frameworks */, + 03F814812E729261002D91CC /* kernels_torchao in Frameworks */, + 03F814732E729261002D91CC /* backend_mps in Frameworks */, + 03F8147B2E729261002D91CC /* kernels_llm in Frameworks */, + 03F814712E729261002D91CC /* backend_coreml in Frameworks */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXFrameworksBuildPhase section */ /* Begin PBXGroup section */ - 036509CC2E1F7C0800C1BC1B /* Products */ = { - isa = PBXGroup; - children = ( - 036509E42E1F983A00C1BC1B /* LLaMA.app */, - 036509D32E1F7C0800C1BC1B /* LLaMARunner.framework */, - ); - name = Products; - sourceTree = ""; - }; 13B07FAE1A68108700A75B9A /* rnllama */ = { isa = PBXGroup; children = ( @@ -147,7 +127,6 @@ 2D16E6871FA4F8E400B85C8A /* Frameworks */, D65327D7A22EEC0BE12398D9 /* Pods */, D7E4C46ADA2E9064B798F356 /* ExpoModulesProviders */, - 036509C92E1F7C0800C1BC1B /* LLaMA.xcodeproj */, ); indentWidth = 2; sourceTree = ""; @@ -216,7 +195,6 @@ buildRules = ( ); dependencies = ( - 036509DD2E1F7C9B00C1BC1B /* PBXTargetDependency */, ); name = rnllama; productName = rnllama; @@ -246,14 +224,11 @@ Base, ); mainGroup = 83CBB9F61A601CBA00E9B192; + packageReferences = ( + 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */, + ); productRefGroup = 83CBBA001A601CBA00E9B192 /* Products */; projectDirPath = ""; - projectReferences = ( - { - ProductGroup = 036509CC2E1F7C0800C1BC1B /* Products */; - ProjectRef = 036509C92E1F7C0800C1BC1B /* LLaMA.xcodeproj */; - }, - ); projectRoot = ""; targets = ( 13B07F861A680F5B00A75B9A /* rnllama */, @@ -261,23 +236,6 @@ }; /* End PBXProject section */ -/* Begin PBXReferenceProxy section */ - 036509D32E1F7C0800C1BC1B /* LLaMARunner.framework */ = { - isa = PBXReferenceProxy; - fileType = wrapper.framework; - path = LLaMARunner.framework; - remoteRef = 036509D22E1F7C0800C1BC1B /* PBXContainerItemProxy */; - sourceTree = BUILT_PRODUCTS_DIR; - }; - 036509E42E1F983A00C1BC1B /* LLaMA.app */ = { - isa = PBXReferenceProxy; - fileType = wrapper.application; - path = LLaMA.app; - remoteRef = 036509E32E1F983A00C1BC1B /* PBXContainerItemProxy */; - sourceTree = BUILT_PRODUCTS_DIR; - }; -/* End PBXReferenceProxy section */ - /* Begin PBXResourcesBuildPhase section */ 13B07F8E1A680F5B00A75B9A /* Resources */ = { 
isa = PBXResourcesBuildPhase; @@ -418,14 +376,6 @@ }; /* End PBXSourcesBuildPhase section */ -/* Begin PBXTargetDependency section */ - 036509DD2E1F7C9B00C1BC1B /* PBXTargetDependency */ = { - isa = PBXTargetDependency; - name = LLaMARunner; - targetProxy = 036509DC2E1F7C9B00C1BC1B /* PBXContainerItemProxy */; - }; -/* End PBXTargetDependency section */ - /* Begin XCBuildConfiguration section */ 13B07F941A680F5B00A75B9A /* Debug */ = { isa = XCBuildConfiguration; @@ -640,6 +590,65 @@ defaultConfigurationName = Release; }; /* End XCConfigurationList section */ + +/* Begin XCRemoteSwiftPackageReference section */ + 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */ = { + isa = XCRemoteSwiftPackageReference; + repositoryURL = "https://github.com/pytorch/executorch"; + requirement = { + branch = "swiftpm-0.8.0.20250909"; + kind = branch; + }; + }; +/* End XCRemoteSwiftPackageReference section */ + +/* Begin XCSwiftPackageProductDependency section */ + 03F814702E729261002D91CC /* backend_coreml */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = backend_coreml; + }; + 03F814722E729261002D91CC /* backend_mps */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = backend_mps; + }; + 03F814742E729261002D91CC /* backend_xnnpack */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = backend_xnnpack; + }; + 03F814762E729261002D91CC /* executorch_debug */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = executorch_debug; + }; + 03F814782E729261002D91CC /* executorch_llm_debug */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = executorch_llm_debug; + }; + 03F8147A2E729261002D91CC /* kernels_llm */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = kernels_llm; + }; + 03F8147C2E729261002D91CC /* kernels_optimized */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = kernels_optimized; + }; + 03F8147E2E729261002D91CC /* kernels_quantized */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = kernels_quantized; + }; + 03F814802E729261002D91CC /* kernels_torchao */ = { + isa = XCSwiftPackageProductDependency; + package = 03F8146F2E729261002D91CC /* XCRemoteSwiftPackageReference "executorch" */; + productName = kernels_torchao; + }; +/* End XCSwiftPackageProductDependency section */ }; rootObject = 83CBB9F71A601CBA00E9B192 /* Project object */; } diff --git a/examples/models/llama/README.md b/examples/models/llama/README.md index 784142b61f1..dba0cf8d8a8 100644 --- a/examples/models/llama/README.md +++ b/examples/models/llama/README.md @@ -333,7 +333,7 @@ adb shell "cd /data/local/tmp/llama && ./llama_main --model_path --t ### iOS -Please refer to [this tutorial](https://pytorch.org/executorch/main/llm/llama-demo-ios) to for full instructions on building the iOS LLAMA Demo App. 
Rename `tokenizer.model` file to `tokenizer.bin` because the demo app looks for the tokenizer file with .bin extension.
+Please refer to [this tutorial](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) for full instructions on building the iOS etLLM Demo App.
 
 ### Android
 Please refer to [this tutorial](https://pytorch.org/executorch/main/llm/llama-demo-android) to for full instructions on building the Android LLAMA Demo App.
diff --git a/examples/models/llama/non_cpu_backends.md b/examples/models/llama/non_cpu_backends.md
index f414582a3c1..6a4c5e16cc3 100644
--- a/examples/models/llama/non_cpu_backends.md
+++ b/examples/models/llama/non_cpu_backends.md
@@ -10,7 +10,7 @@ Export:
 python -m examples.models.llama2.export_llama --checkpoint llama3.pt --params params.json -kv --disable_dynamic_shape --mps --use_sdpa_with_kv_cache -d fp32 -qmode 8da4w -G 32 --embedding-quantize 4,32
 ```
 
-After exporting the MPS model .pte file, the [iOS LLAMA](https://pytorch.org/executorch/main/llm/llama-demo-ios) app can support running the model. ` --embedding-quantize 4,32` is an optional args for quantizing embedding to reduce the model size.
+After exporting the MPS model .pte file, the [iOS etLLM](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) app can support running the model. ` --embedding-quantize 4,32` is an optional argument for quantizing the embedding to reduce the model size.
 
 ### CoreML
 Export:
diff --git a/examples/models/llava/README.md b/examples/models/llava/README.md
index 86d522862f0..e7b8ba523fd 100644
--- a/examples/models/llava/README.md
+++ b/examples/models/llava/README.md
@@ -86,9 +86,9 @@ to for full instructions on building the Android LLAMA Demo App.
 
 #### iOS
 
-We can run LLAVA using the LLAMA Demo Apps. Please refer to [this
-tutorial](https://github.com/pytorch/executorch/tree/main/examples/demo-apps/apple_ios/LLaMA)
-to for full instructions on building the iOS LLAMA Demo App.
+We can run LLAVA using the etLLM Demo Apps. Please refer to [this
+tutorial](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple)
+for full instructions on building the iOS etLLM Demo App.
 
 ### Running LLaVA