diff --git a/docs/source/conf.py b/docs/source/conf.py
index 7128e34ed8d..65845c03868 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -202,7 +202,6 @@
     "export-overview": "using-executorch-export.html",
     "runtime-build-and-cross-compilation": "using-executorch-building-from-source.html",
     "tutorials/export-to-executorch-tutorial": "../using-executorch-export.html",
-    "running-a-model-cpp-tutorial": "using-executorch-cpp.html",
     "build-run-vulkan": "backends-vulkan.html",
     "executorch-arm-delegate-tutorial": "backends-arm-ethos-u.html",
     "build-run-coreml": "backends-coreml.html",
diff --git a/docs/source/executorch-runtime-api-reference.rst b/docs/source/executorch-runtime-api-reference.rst
index 2b4239271c1..8853e5444eb 100644
--- a/docs/source/executorch-runtime-api-reference.rst
+++ b/docs/source/executorch-runtime-api-reference.rst
@@ -4,7 +4,7 @@ Runtime API Reference
 The ExecuTorch C++ API provides an on-device execution framework for exported PyTorch models.
 
 For a tutorial style introduction to the runtime API, check out the
-`runtime tutorial <running-a-model-cpp-tutorial.html>`__ and its `simplified <extension-module.html>`__ version.
+`using executorch with cpp tutorial <using-executorch-cpp.html>`__ and its `simplified <extension-module.html>`__ version.
 
 For detailed information on how APIs evolve and the deprecation process, please refer to the `ExecuTorch API Life Cycle and Deprecation Policy <api-life-cycle.html>`__.
diff --git a/docs/source/extension-module.md b/docs/source/extension-module.md
index 24f16aa8a3a..29aa6712d37 100644
--- a/docs/source/extension-module.md
+++ b/docs/source/extension-module.md
@@ -2,7 +2,7 @@
 
 **Author:** [Anthony Shoumikhin](https://github.com/shoumikhin)
 
-In the [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md), we explored the lower-level ExecuTorch APIs for running an exported model. While these APIs offer zero overhead, great flexibility, and control, they can be verbose and complex for regular use. To simplify this and resemble PyTorch's eager mode in Python, we introduce the `Module` facade APIs over the regular ExecuTorch runtime APIs. The `Module` APIs provide the same flexibility but default to commonly used components like `DataLoader` and `MemoryAllocator`, hiding most intricate details.
+In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we explored the lower-level ExecuTorch APIs for running an exported model. While these APIs offer zero overhead, great flexibility, and control, they can be verbose and complex for regular use. To simplify this and resemble PyTorch's eager mode in Python, we introduce the `Module` facade APIs over the regular ExecuTorch runtime APIs. The `Module` APIs provide the same flexibility but default to commonly used components like `DataLoader` and `MemoryAllocator`, hiding most intricate details.
 
 ## Example
diff --git a/docs/source/index.md b/docs/source/index.md
index ff3eefec7f5..d0c9142cf4a 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -71,7 +71,7 @@ ExecuTorch provides support for:
 - [Overview](runtime-overview)
 - [Extension Module](extension-module)
 - [Extension Tensor](extension-tensor)
-- [Running a Model (C++ Tutorial)](running-a-model-cpp-tutorial)
+- [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial)
 - [Backend Delegate Implementation and Linking](runtime-backend-delegate-implementation-and-linking)
 - [Platform Abstraction Layer](runtime-platform-abstraction-layer)
 #### Portable C++ Programming
diff --git a/docs/source/running-a-model-cpp-tutorial.md b/docs/source/running-a-model-cpp-tutorial.md
index 43692f49a1b..a12ef122bc8 100644
--- a/docs/source/running-a-model-cpp-tutorial.md
+++ b/docs/source/running-a-model-cpp-tutorial.md
@@ -1,8 +1,8 @@
-# Running an ExecuTorch Model in C++ Tutorial
+# Detailed C++ Runtime APIs Tutorial
 
 **Author:** [Jacob Szwejbka](https://github.com/JacobSzwejbka)
 
-In this tutorial, we will cover how to run an ExecuTorch model in C++ using the more detailed, lower-level APIs: prepare the `MemoryManager`, set inputs, execute the model, and retrieve outputs. However, if you’re looking for a simpler interface that works out of the box, consider trying the [Module Extension Tutorial](extension-module.md).
+In this tutorial, we will cover how to run an ExecuTorch model in C++ using the more detailed, lower-level APIs: prepare the `MemoryManager`, set inputs, execute the model, and retrieve outputs. However, if you’re looking for a simpler interface that works out of the box, consider trying the [Module Extension Tutorial](extension-module.md) and [Using ExecuTorch with C++](using-executorch-cpp.md).
 
 For a high level overview of the ExecuTorch Runtime please see [Runtime Overview](runtime-overview.md), and for more in-depth documentation on each API please see the [Runtime API Reference](executorch-runtime-api-reference.rst).
diff --git a/docs/source/runtime-overview.md b/docs/source/runtime-overview.md
index b1aa3870dd6..96a618a2a41 100644
--- a/docs/source/runtime-overview.md
+++ b/docs/source/runtime-overview.md
@@ -155,6 +155,7 @@ However, please note:
 
 For more details about the ExecuTorch runtime, please see:
 
+* [Using ExecuTorch with C++](using-executorch-cpp.md)
 * [Detailed Runtime APIs Tutorial](running-a-model-cpp-tutorial.md)
 * [Simplified Runtime APIs Tutorial](extension-module.md)
 * [Building from Source](using-executorch-building-from-source.md)
diff --git a/docs/source/using-executorch-cpp.md b/docs/source/using-executorch-cpp.md
index f68f412943c..3736226bc06 100644
--- a/docs/source/using-executorch-cpp.md
+++ b/docs/source/using-executorch-cpp.md
@@ -36,7 +36,7 @@ For complete examples of building and running a C++ application using the Module
 
 ## Low-Level APIs
 
-Running a model using the low-level runtime APIs allows for a high-degree of control over memory allocation, placement, and loading. This allows for advanced use cases, such as placing allocations in specific memory banks or loading a model without a file system. For an end to end example using the low-level runtime APIs, see [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md).
+Running a model using the low-level runtime APIs allows for a high-degree of control over memory allocation, placement, and loading. This allows for advanced use cases, such as placing allocations in specific memory banks or loading a model without a file system. For an end to end example using the low-level runtime APIs, see [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md).
 
 ## Building with CMake
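For reviewers unfamiliar with the two APIs these pages now cross-link: the `Module` facade is the "simplified" path that `extension-module.md` documents. A minimal sketch of that usage, assuming a placeholder model at `/path/to/model.pte` and a purely illustrative input shape; header paths and namespaces can vary between ExecuTorch releases.

```cpp
// Hypothetical sketch of the simplified Module API; the model path, input
// shape, and values are placeholders, not taken from this diff.
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using namespace ::executorch::extension;

int main() {
  // Module supplies default DataLoader and MemoryAllocator components.
  Module module("/path/to/model.pte");

  // Wrap existing input data in a tensor without copying it.
  float input[1 * 3] = {1.0f, 2.0f, 3.0f};
  auto tensor = from_blob(input, {1, 3});

  // Run the "forward" method and read the first output on success.
  const auto result = module.forward(tensor);
  if (result.ok()) {
    const float* output = result->at(0).toTensor().const_data_ptr<float>();
    (void)output; // consume the output data here
  }
  return 0;
}
```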
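The renamed "Detailed C++ Runtime APIs Tutorial" covers the lower-level flow instead: load a `Program`, build a `MemoryManager`, load a `Method`, then set inputs, execute, and read outputs. A condensed sketch of that flow follows; the model path and the 4 KB method allocator pool are placeholder values, and error checks on each `Result` are omitted for brevity (the tutorial itself shows the complete version).

```cpp
// Hypothetical condensed sketch of the low-level flow the tutorial describes:
// Program -> MemoryManager -> Method -> set inputs -> execute -> read outputs.
#include <executorch/extension/data_loader/file_data_loader.h>
#include <executorch/runtime/core/hierarchical_allocator.h>
#include <executorch/runtime/core/memory_allocator.h>
#include <executorch/runtime/core/span.h>
#include <executorch/runtime/executor/memory_manager.h>
#include <executorch/runtime/executor/method.h>
#include <executorch/runtime/executor/program.h>
#include <executorch/runtime/platform/runtime.h>

#include <cstdint>
#include <memory>
#include <vector>

using executorch::extension::FileDataLoader;
using executorch::runtime::Error;
using executorch::runtime::HierarchicalAllocator;
using executorch::runtime::MemoryAllocator;
using executorch::runtime::MemoryManager;
using executorch::runtime::Method;
using executorch::runtime::MethodMeta;
using executorch::runtime::Program;
using executorch::runtime::Result;
using executorch::runtime::Span;

int main() {
  executorch::runtime::runtime_init();

  // Load the serialized program from a .pte file (placeholder path).
  Result<FileDataLoader> loader = FileDataLoader::from("/path/to/model.pte");
  Result<Program> program = Program::load(&loader.get());

  // Size the memory-planned buffers from the method's metadata.
  Result<MethodMeta> meta = program->method_meta("forward");
  std::vector<std::unique_ptr<uint8_t[]>> buffers;
  std::vector<Span<uint8_t>> arenas;
  for (size_t i = 0; i < meta->num_memory_planned_buffers(); ++i) {
    const size_t size = meta->memory_planned_buffer_size(i).get();
    buffers.push_back(std::make_unique<uint8_t[]>(size));
    arenas.push_back({buffers.back().get(), size});
  }
  HierarchicalAllocator planned_memory({arenas.data(), arenas.size()});

  // Allocator for runtime structures created while loading the method.
  static uint8_t method_pool[4 * 1024];
  MemoryAllocator method_allocator(sizeof(method_pool), method_pool);
  MemoryManager memory_manager(&method_allocator, &planned_memory);

  // Load the method, then set inputs, execute, and read outputs.
  Result<Method> method = program->load_method("forward", &memory_manager);
  // ... populate inputs via method->set_input(...) ...
  if (method->execute() == Error::Ok) {
    // ... read results via method->get_output(0) ...
  }
  return 0;
}
```

The split mirrors the two documentation paths this diff cross-links: the `Module` API for everyday use, and the low-level APIs when an application needs to control where allocations live or load a program without a file system.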