diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backend-delegates-xnnpack-reference.md
index cfb915aca59..8b4338e703c 100644
--- a/docs/source/backend-delegates-xnnpack-reference.md
+++ b/docs/source/backend-delegates-xnnpack-reference.md
@@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre
 When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors.
 
 #### **Profiling**
-We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
+We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
 
 [comment]: <> (TODO: Refactor quantizer to a more official quantization doc)
 
diff --git a/docs/source/bundled-io.md b/docs/source/bundled-io.md
index 6c45f09c542..c0b03938374 100644
--- a/docs/source/bundled-io.md
+++ b/docs/source/bundled-io.md
@@ -17,7 +17,7 @@ This stage mainly focuses on the creation of a `BundledProgram` and dumping it o
 
 ### Step 1: Create a Model and Emit its ExecuTorch Program.
 
-ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial).
 
 ### Step 2: Construct `List[MethodTestSuite]` to hold test info
 
diff --git a/docs/source/devtools-tutorial.md b/docs/source/devtools-tutorial.md
index 7c6cedc311b..6d540dc7f35 100644
--- a/docs/source/devtools-tutorial.md
+++ b/docs/source/devtools-tutorial.md
@@ -1,3 +1,3 @@
 ## Developer Tools Usage Tutorial
 
-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools.
+Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools.
diff --git a/docs/source/export-overview.md b/docs/source/export-overview.md
index d07701d06cd..c96716a0949 100644
--- a/docs/source/export-overview.md
+++ b/docs/source/export-overview.md
@@ -11,5 +11,5 @@ program, making it easier for you to understand and implement the process.
 
 To learn more about exporting your model:
 
-* Complete the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial).
 * Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html).
diff --git a/docs/source/extension-module.md b/docs/source/extension-module.md
index 29aa6712d37..690256fecbb 100644
--- a/docs/source/extension-module.md
+++ b/docs/source/extension-module.md
@@ -6,7 +6,7 @@ In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we
 
 ## Example
 
-Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs:
+Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs:
 
 ```cpp
 #include
diff --git a/docs/source/llm/export-custom-llm.md b/docs/source/llm/export-custom-llm.md
index 57537ba31d8..4797f773fa3 100644
--- a/docs/source/llm/export-custom-llm.md
+++ b/docs/source/llm/export-custom-llm.md
@@ -81,7 +81,7 @@ with open("nanogpt.pte", "wb") as file:
 
 To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory.
 
-For more information, see [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) and
+For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and
 [torch.export](https://pytorch.org/docs/stable/export.html).
 
 ## Backend delegation
 
@@ -143,7 +143,7 @@ example_inputs = (
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes
+# See ../concepts.html#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},
diff --git a/docs/source/running-a-model-cpp-tutorial.md b/docs/source/running-a-model-cpp-tutorial.md
index a993eba6b40..5ae4235995d 100644
--- a/docs/source/running-a-model-cpp-tutorial.md
+++ b/docs/source/running-a-model-cpp-tutorial.md
@@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference
 ## Prerequisites
 
 You will need an ExecuTorch model to follow along. We will be using
-the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial).
 
 ## Model Loading
 
diff --git a/docs/source/runtime-overview.md b/docs/source/runtime-overview.md
index 96a618a2a41..1df3da40478 100644
--- a/docs/source/runtime-overview.md
+++ b/docs/source/runtime-overview.md
@@ -11,7 +11,7 @@ Works](intro-how-it-works.md).
 At the highest level, the ExecuTorch runtime is responsible for:
 
 * Loading binary `.pte` program files that were generated by the
-  [`to_executorch()`](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) step of the
+  [`to_executorch()`](tutorials/export-to-executorch-tutorial) step of the
   model-lowering process.
 * Executing the series of instructions that implement a lowered model.
diff --git a/docs/source/runtime-profiling.md b/docs/source/runtime-profiling.md
index 120d31954fd..56b62de599d 100644
--- a/docs/source/runtime-profiling.md
+++ b/docs/source/runtime-profiling.md
@@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model
 
 - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level.
 
-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.
+Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.
diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md
index 5471b39052b..3fb079f24d6 100644
--- a/docs/source/tutorial-xnnpack-delegate-lowering.md
+++ b/docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -11,7 +11,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run
 :::{grid-item-card} Before you begin it is recommended you go through the following:
 :class-card: card-prerequisites
 * [Setting up ExecuTorch](getting-started-setup.rst)
-* [Model Lowering Tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial)
+* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial)
 * [ExecuTorch XNNPACK Delegate](backends-xnnpack.md)
 :::
 ::::
diff --git a/docs/source/using-executorch-troubleshooting.md b/docs/source/using-executorch-troubleshooting.md
index 1abc5ed999e..75648dc5b46 100644
--- a/docs/source/using-executorch-troubleshooting.md
+++ b/docs/source/using-executorch-troubleshooting.md
@@ -16,5 +16,5 @@ The ExecuTorch developer tools, or devtools, are a collection of tooling for tro
 - [Frequently Asked Questions](using-executorch-faqs.md) for solutions to commonly encountered questions and issues.
 - [Introduction to the ExecuTorch Developer Tools](runtime-profiling.md) for a high-level introduction to available developer tooling.
-- [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for information on runtime performance profiling.
+- [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) for information on runtime performance profiling.
 - [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs.
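Every hunk in this patch applies the same mechanical rewrite: an absolute `https://pytorch.org/executorch/main/...` URL becomes a source-relative link. If more stragglers turn up later, the substitution could be scripted; the sketch below is a hypothetical helper (not part of this patch) using only Python's standard `re` module, and it deliberately does not compute `../` prefixes for nested sources such as `docs/source/llm/`:

```python
import re

# Matches absolute links into the hosted ExecuTorch docs, e.g.
# https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial
# and captures the path (plus any #anchor) after the site prefix.
ABS_DOC_URL = re.compile(r"https://pytorch\.org/executorch/main/([\w./#-]+)")

def relativize(markdown: str) -> str:
    """Rewrite absolute ExecuTorch doc URLs to source-relative links.

    Mirrors the top-level edits in this patch; files in subdirectories
    (e.g. docs/source/llm/) would additionally need a '../' prefix.
    """
    return ABS_DOC_URL.sub(r"\1", markdown)

before = "[Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial)"
print(relativize(before))
# [Developer Tools tutorial](tutorials/devtools-integration-tutorial)
```

Other absolute links, such as the `pytorch.org/docs` ones left untouched by the patch, do not match the pattern and pass through unchanged.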