1 change: 0 additions & 1 deletion .github/workflows/apple.yml
@@ -17,7 +17,6 @@ on:
- scripts/build_apple_llm_demo.sh
- scripts/create_frameworks.sh
- .ci/scripts/test_ios_ci.sh
- examples/demo-apps/apple_ios/**
- extension/apple/**
- extension/benchmark/apple/**
- extension/module/**
1 change: 0 additions & 1 deletion .lintrunner.toml
@@ -73,7 +73,6 @@ exclude_patterns = [
'**/third-party/**',
# NB: Objective-C is not supported
'examples/apple/**',
'examples/demo-apps/apple_ios/**',
'examples/demo-apps/react-native/rnllama/ios/**',
'extension/apple/**',
'extension/llm/apple/**',
2 changes: 1 addition & 1 deletion README-wheel.md
@@ -25,6 +25,6 @@ tutorials and documentation. Here are some starting points:
* [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial)
* Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and
optimizing its performance using quantization and hardware delegation.
* Running LLaMA on [iOS](docs/source/llm/llama-demo-ios.md) and [Android](docs/source/llm/llama-demo-android.md) devices.
* Running etLLM on [iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) and [Android](docs/source/llm/llama-demo-android.md) devices.
* Build and run LLaMA in a demo mobile app, and learn how to integrate models
with your own apps.
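For readers landing on these bullets, here is a minimal sketch of the export-plus-delegation flow they describe. It assumes the `executorch.exir` and XNNPACK partitioner APIs of recent ExecuTorch releases; the model, shapes, and output filename are illustrative, and quantization is omitted for brevity.

```python
import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner


class TinyModel(torch.nn.Module):
    """Illustrative stand-in for a real model."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = TinyModel().eval()
example_inputs = (torch.randn(1, 8),)

# Step 1: capture the graph with torch.export.
exported_program = torch.export.export(model, example_inputs)

# Step 2: lower to ExecuTorch, delegating supported subgraphs to XNNPACK.
et_program = to_edge_transform_and_lower(
    exported_program,
    partitioner=[XnnpackPartitioner()],
).to_executorch()

# Serialize the program for the on-device runtime (filename is a placeholder).
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)
```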
2 changes: 1 addition & 1 deletion backends/apple/mps/setup.md
@@ -16,7 +16,7 @@ The MPS backend device maps machine learning computational graphs and primitives
* [Setting up ExecuTorch](../../../docs/source/getting-started-setup.rst)
* [Building ExecuTorch with CMake](../../../docs/source/using-executorch-cpp.md#building-with-cmake)
* [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
* [ExecuTorch iOS LLaMA Demo App](../../../docs/source/llm/llama-demo-ios.md)
* [ExecuTorch LLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple)
:::
::::
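As a rough orientation for the lowering flow these prerequisites feed into, the sketch below delegates a toy module to the MPS backend. The `MPSPartitioner` import path, its `compile_specs` argument, and the `use_fp16` spec key are assumptions drawn from the MPS example flow and may differ across releases.

```python
import torch
from executorch.exir import to_edge
from executorch.exir.backend.backend_details import CompileSpec

# Assumed import path; check backends/apple/mps in your ExecuTorch checkout.
from executorch.backends.apple.mps.partition import MPSPartitioner


class SmallNet(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x)


example_inputs = (torch.randn(1, 16),)
exported = torch.export.export(SmallNet().eval(), example_inputs)

# Assumed compile spec: run delegated ops in fp16 on the GPU.
compile_specs = [CompileSpec("use_fp16", bytes([True]))]

edge = to_edge(exported)
# Delegate supported subgraphs to the MPS backend.
edge = edge.to_backend(MPSPartitioner(compile_specs=compile_specs))

# Serialize for the on-device runtime (filename is a placeholder).
with open("small_net_mps.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```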

2 changes: 1 addition & 1 deletion docs/source/backends-mps.md
@@ -16,7 +16,7 @@ The MPS backend device maps machine learning computational graphs and primitives
* [Getting Started](getting-started.md)
* [Building ExecuTorch with CMake](using-executorch-building-from-source.md)
* [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
* [ExecuTorch iOS LLaMA Demo App](llm/llama-demo-ios.md)
* [ExecuTorch LLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple)
:::
::::

2 changes: 1 addition & 1 deletion docs/source/llm/getting-started.md
@@ -23,4 +23,4 @@ Deploying LLMs to ExecuTorch can be boiled down to a two-step process: (1) expor
- [Running with C++](run-with-c-plus-plus.md)
- [Running on Android (XNNPack)](llama-demo-android.md)
- [Running on Android (Qualcomm)](build-run-llama3-qualcomm-ai-engine-direct-backend.md)
- [Running on iOS](llama-demo-ios.md)
- [Running on iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple)
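Before reaching for one of the device targets above, a quick way to sanity-check the program produced by step (1) is the host-side Python runtime. This sketch assumes the `executorch.runtime` module of recent releases; the `model.pte` path and input shape are placeholders.

```python
import torch
from executorch.runtime import Runtime

# Load the serialized program produced by the export step.
runtime = Runtime.get()
program = runtime.load_program("model.pte")  # placeholder path

# Each .pte exposes one or more methods; "forward" is the default for nn.Modules.
method = program.load_method("forward")

# Placeholder input; match the example inputs used at export time.
outputs = method.execute([torch.randn(1, 8)])
print(outputs)
```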
2 changes: 1 addition & 1 deletion docs/source/llm/run-on-ios.md
@@ -123,4 +123,4 @@ runner.stop()

## Demo

Get hands-on with our [LLaMA iOS Demo App](llama-demo-ios.md) to see the LLM runtime APIs in action.
Get hands-on with our [etLLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) to see the LLM runtime APIs in action.
4 changes: 2 additions & 2 deletions examples/README.md
@@ -21,7 +21,7 @@ examples
│ └── mps # Contains end-to-end demos of MPS backend
├── arm # Contains demos of the Arm TOSA and Ethos-U NPU flows
├── qualcomm # Contains demos of Qualcomm QNN backend
├── samsung # Contains demos of Samsung Exynos backend
├── cadence # Contains demos of exporting and running a simple model on Xtensa DSPs
├── third-party # Third-party libraries required for working on the demos
└── README.md # This file
@@ -34,7 +34,7 @@ A user's journey may commence by exploring the demos located in the [`portable/`

## Demo Apps

Explore mobile apps with ExecuTorch models integrated and deployable on [Android](demo-apps/android) and [iOS](demo-apps/apple_ios). This provides end-to-end instructions on how to export Llama models, load on device, build the app, and run it on device.
Explore mobile apps with ExecuTorch models integrated and deployable on [Android](demo-apps/android) and [iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple). This provides end-to-end instructions on how to export Llama models, load on device, build the app, and run it on device.

For specific details related to models and backend, you can explore the various subsections.
