Merged
2 changes: 1 addition & 1 deletion docs/source/backend-delegates-xnnpack-reference.md
@@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre
When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors.

#### **Profiling**
Basic profiling for the XNNPACK delegate can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional detail). With ExecuTorch's Developer Tools integration, you can also use the Developer Tools to profile the model. Follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) to profile ExecuTorch models and use the Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see the [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
Basic profiling for the XNNPACK delegate can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional detail). With ExecuTorch's Developer Tools integration, you can also use the Developer Tools to profile the model. Follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> to profile ExecuTorch models and use the Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see the [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).


[comment]: <> (TODO: Refactor quantizer to a more official quantization doc)
2 changes: 1 addition & 1 deletion docs/source/bundled-io.md
@@ -17,7 +17,7 @@ This stage mainly focuses on the creation of a `BundledProgram` and dumping it o

### Step 1: Create a Model and Emit its ExecuTorch Program.

An ExecuTorch program can be emitted from a user's model using ExecuTorch APIs. Follow [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
An ExecuTorch program can be emitted from a user's model using ExecuTorch APIs. Follow [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.

### Step 2: Construct `List[MethodTestSuite]` to hold test info

2 changes: 1 addition & 1 deletion docs/source/devtools-tutorial.md
@@ -1,3 +1,3 @@
## Developer Tools Usage Tutorial

Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools.
Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools.
2 changes: 1 addition & 1 deletion docs/source/export-overview.md
@@ -11,5 +11,5 @@ program, making it easier for you to understand and implement the process.

To learn more about exporting your model:

* Complete the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.
* Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html).
2 changes: 1 addition & 1 deletion docs/source/extension-module.md
@@ -6,7 +6,7 @@ In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we

## Example

Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs:
Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore --> using the `Module` and [`TensorPtr`](extension-tensor.md) APIs:

```cpp
#include <executorch/extension/module/module.h>
4 changes: 2 additions & 2 deletions docs/source/llm/export-custom-llm.md
@@ -81,7 +81,7 @@ with open("nanogpt.pte", "wb") as file:

To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory.

For more information, see [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) and
For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) <!-- @lint-ignore --> and
[torch.export](https://pytorch.org/docs/stable/export.html).

## Backend delegation
@@ -143,7 +143,7 @@ example_inputs = (
# long as they adhere to the rules specified in the dynamic shape configuration.
# Here we set the range of 0th model input's 1st dimension as
# [0, model.config.block_size].
# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes
# See ../concepts.html#dynamic-shapes
# for details about creating dynamic shapes.
dynamic_shape = (
{1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},
2 changes: 1 addition & 1 deletion docs/source/running-a-model-cpp-tutorial.md
@@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference
## Prerequisites

You will need an ExecuTorch model to follow along. We will be using
the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.

## Model Loading

2 changes: 1 addition & 1 deletion docs/source/runtime-overview.md
@@ -11,7 +11,7 @@ Works](intro-how-it-works.md).
At the highest level, the ExecuTorch runtime is responsible for:

* Loading binary `.pte` program files that were generated by the
[`to_executorch()`](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) step of the
[`to_executorch()`](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore --> step of the
model-lowering process.
* Executing the series of instructions that implement a lowered model.

2 changes: 1 addition & 1 deletion docs/source/runtime-profiling.md
@@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model
- Through the Inspector API, users can perform a wide range of analyses, from printing out performance details to doing finer-grained, module-level calculations.


Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.
Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> for a step-by-step walkthrough of the above process on a sample model.
2 changes: 1 addition & 1 deletion docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -11,7 +11,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run
:::{grid-item-card} Before you begin it is recommended you go through the following:
:class-card: card-prerequisites
* [Setting up ExecuTorch](getting-started-setup.rst)
* [Model Lowering Tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial)
* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->
* [ExecuTorch XNNPACK Delegate](backends-xnnpack.md)
:::
::::
2 changes: 1 addition & 1 deletion docs/source/using-executorch-troubleshooting.md
@@ -16,5 +16,5 @@ The ExecuTorch developer tools, or devtools, are a collection of tooling for tro

- [Frequently Asked Questions](using-executorch-faqs.md) for solutions to commonly encountered questions and issues.
- [Introduction to the ExecuTorch Developer Tools](runtime-profiling.md) for a high-level introduction to available developer tooling.
- [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for information on runtime performance profiling.
- [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> for information on runtime performance profiling.
- [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs.