From 4983ce5f589dfeb98fa8edea00c893b35cbdc567 Mon Sep 17 00:00:00 2001 From: gasoonjia Date: Fri, 10 Oct 2025 10:55:26 -0700 Subject: [PATCH 1/4] use reference link for html doc Differential Revision: [D84367515](https://our.internmc.facebook.com/intern/diff/D84367515/) [ghstack-poisoned] --- docs/source/api-section.md | 2 +- docs/source/backend-delegates-xnnpack-reference.md | 2 +- docs/source/bundled-io.md | 2 +- docs/source/devtools-tutorial.md | 2 +- docs/source/export-overview.md | 2 +- docs/source/extension-module.md | 2 +- docs/source/llm/export-custom-llm.md | 6 +++--- docs/source/running-a-model-cpp-tutorial.md | 2 +- docs/source/runtime-overview.md | 2 +- docs/source/runtime-profiling.md | 2 +- docs/source/tutorial-xnnpack-delegate-lowering.md | 2 +- docs/source/using-executorch-android.md | 2 +- docs/source/using-executorch-troubleshooting.md | 2 +- 13 files changed, 15 insertions(+), 15 deletions(-) diff --git a/docs/source/api-section.md b/docs/source/api-section.md index f5725a063d4..d41c9a972cd 100644 --- a/docs/source/api-section.md +++ b/docs/source/api-section.md @@ -7,7 +7,7 @@ In this section, find complete API documentation for ExecuTorch's export, runtim - {doc}`executorch-runtime-api-reference` — ExecuTorch Runtime API Reference - {doc}`runtime-python-api-reference` — Runtime Python API Reference - {doc}`api-life-cycle` — API Life Cycle -- [Android doc →](https://pytorch.org/executorch/main/javadoc/)** — Android API Documentation +- [Android doc →](javadoc/)** — Android API Documentation - {doc}`extension-module` — Extension Module - {doc}`extension-tensor` — Extension Tensor - {doc}`running-a-model-cpp-tutorial` — Detailed C++ Runtime APIs Tutorial diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backend-delegates-xnnpack-reference.md index cfb915aca59..fcfb17c5c1b 100644 --- a/docs/source/backend-delegates-xnnpack-reference.md +++ b/docs/source/backend-delegates-xnnpack-reference.md @@ -70,7 +70,7 @@ Since 
weight packing creates an extra copy of the weights inside XNNPACK, We fre When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors. #### **Profiling** -We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). +We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). 
[comment]: <> (TODO: Refactor quantizer to a more official quantization doc) diff --git a/docs/source/bundled-io.md b/docs/source/bundled-io.md index 79897737268..b58e4550e5a 100644 --- a/docs/source/bundled-io.md +++ b/docs/source/bundled-io.md @@ -17,7 +17,7 @@ This stage mainly focuses on the creation of a `BundledProgram` and dumping it o ### Step 1: Create a Model and Emit its ExecuTorch Program. -ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial). +ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial). ### Step 2: Construct `List[MethodTestSuite]` to hold test info diff --git a/docs/source/devtools-tutorial.md b/docs/source/devtools-tutorial.md index 7c6cedc311b..4b230b0d38a 100644 --- a/docs/source/devtools-tutorial.md +++ b/docs/source/devtools-tutorial.md @@ -1,3 +1,3 @@ ## Developer Tools Usage Tutorial -Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools. +Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools. diff --git a/docs/source/export-overview.md b/docs/source/export-overview.md index d07701d06cd..0e7bd344ca2 100644 --- a/docs/source/export-overview.md +++ b/docs/source/export-overview.md @@ -11,5 +11,5 @@ program, making it easier for you to understand and implement the process. 
To learn more about exporting your model: -* Complete the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial). +* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial). * Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html). diff --git a/docs/source/extension-module.md b/docs/source/extension-module.md index 29aa6712d37..92185bc1dea 100644 --- a/docs/source/extension-module.md +++ b/docs/source/extension-module.md @@ -6,7 +6,7 @@ In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we ## Example -Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs: +Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs: ```cpp #include diff --git a/docs/source/llm/export-custom-llm.md b/docs/source/llm/export-custom-llm.md index 57537ba31d8..244e4f1eff6 100644 --- a/docs/source/llm/export-custom-llm.md +++ b/docs/source/llm/export-custom-llm.md @@ -57,7 +57,7 @@ example_inputs = (torch.randint(0, 100, (1, model.config.block_size), dtype=torc # long as they adhere to the rules specified in the dynamic shape configuration. # Here we set the range of 0th model input's 1st dimension as # [0, model.config.block_size]. -# See https://pytorch.org/executorch/main/concepts#dynamic-shapes +# See ../concepts.html#dynamic-shapes # for details about creating dynamic shapes. 
dynamic_shape = ( {1: torch.export.Dim("token_dim", max=model.config.block_size)}, @@ -81,7 +81,7 @@ with open("nanogpt.pte", "wb") as file: To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory. -For more information, see [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) and +For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and [torch.export](https://pytorch.org/docs/stable/export.html). ## Backend delegation @@ -143,7 +143,7 @@ example_inputs = ( # long as they adhere to the rules specified in the dynamic shape configuration. # Here we set the range of 0th model input's 1st dimension as # [0, model.config.block_size]. -# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes +# See ../concepts.html#dynamic-shapes # for details about creating dynamic shapes. dynamic_shape = ( {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)}, diff --git a/docs/source/running-a-model-cpp-tutorial.md b/docs/source/running-a-model-cpp-tutorial.md index f7bc3773949..1e0e83e6b35 100644 --- a/docs/source/running-a-model-cpp-tutorial.md +++ b/docs/source/running-a-model-cpp-tutorial.md @@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference ## Prerequisites You will need an ExecuTorch model to follow along. We will be using -the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial). +the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial). 
## Model Loading diff --git a/docs/source/runtime-overview.md b/docs/source/runtime-overview.md index 96a618a2a41..a82d8e46cfc 100644 --- a/docs/source/runtime-overview.md +++ b/docs/source/runtime-overview.md @@ -11,7 +11,7 @@ Works](intro-how-it-works.md). At the highest level, the ExecuTorch runtime is responsible for: * Loading binary `.pte` program files that were generated by the - [`to_executorch()`](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) step of the + [`to_executorch()`](tutorials/export-to-executorch-tutorial) step of the model-lowering process. * Executing the series of instructions that implement a lowered model. diff --git a/docs/source/runtime-profiling.md b/docs/source/runtime-profiling.md index 120d31954fd..a55425704e9 100644 --- a/docs/source/runtime-profiling.md +++ b/docs/source/runtime-profiling.md @@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level. -Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model. +Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model. 
diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md index bccd4e4add3..5f3f7361cf4 100644 --- a/docs/source/tutorial-xnnpack-delegate-lowering.md +++ b/docs/source/tutorial-xnnpack-delegate-lowering.md @@ -11,7 +11,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run :::{grid-item-card} Before you begin it is recommended you go through the following: :class-card: card-prerequisites * [Setting up ExecuTorch](getting-started-setup.rst) -* [Model Lowering Tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) +* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial) * [ExecuTorch XNNPACK Delegate](backends-xnnpack.md) ::: :::: diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index ce9977218a1..375f325d6e0 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -207,4 +207,4 @@ using ExecuTorch AAR package. ## Java API reference -Please see [Java API reference](https://pytorch.org/executorch/main/javadoc/). +Please see [Java API reference](javadoc/). diff --git a/docs/source/using-executorch-troubleshooting.md b/docs/source/using-executorch-troubleshooting.md index 1abc5ed999e..3813838d600 100644 --- a/docs/source/using-executorch-troubleshooting.md +++ b/docs/source/using-executorch-troubleshooting.md @@ -16,5 +16,5 @@ The ExecuTorch developer tools, or devtools, are a collection of tooling for tro - [Frequently Asked Questions](using-executorch-faqs.md) for solutions to commonly encountered questions and issues. - [Introduction to the ExecuTorch Developer Tools](runtime-profiling.md) for a high-level introduction to available developer tooling. -- [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for information on runtime performance profiling. 
+- [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) for information on runtime performance profiling. - [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs. From f3bdee2979e88d2a3cd6f203e1aeba34372c4660 Mon Sep 17 00:00:00 2001 From: gasoonjia Date: Fri, 10 Oct 2025 11:22:01 -0700 Subject: [PATCH 2/4] Update on "use reference link for html doc" Differential Revision: [D84367515](https://our.internmc.facebook.com/intern/diff/D84367515/) [ghstack-poisoned] --- docs/source/backend-delegates-xnnpack-reference.md | 2 +- docs/source/bundled-io.md | 2 +- docs/source/devtools-tutorial.md | 2 +- docs/source/export-overview.md | 2 +- docs/source/extension-module.md | 2 +- docs/source/llm/export-custom-llm.md | 2 +- docs/source/running-a-model-cpp-tutorial.md | 2 +- docs/source/runtime-overview.md | 2 +- docs/source/runtime-profiling.md | 2 +- docs/source/tutorial-xnnpack-delegate-lowering.md | 2 +- docs/source/using-executorch-troubleshooting.md | 2 +- 11 files changed, 11 insertions(+), 11 deletions(-) diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backend-delegates-xnnpack-reference.md index fcfb17c5c1b..8b4338e703c 100644 --- a/docs/source/backend-delegates-xnnpack-reference.md +++ b/docs/source/backend-delegates-xnnpack-reference.md @@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors. #### **Profiling** -We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). 
With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). +We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). [comment]: <> (TODO: Refactor quantizer to a more official quantization doc) diff --git a/docs/source/bundled-io.md b/docs/source/bundled-io.md index b58e4550e5a..27fb230a546 100644 --- a/docs/source/bundled-io.md +++ b/docs/source/bundled-io.md @@ -17,7 +17,7 @@ This stage mainly focuses on the creation of a `BundledProgram` and dumping it o ### Step 1: Create a Model and Emit its ExecuTorch Program. -ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial). +ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. 
Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) . ### Step 2: Construct `List[MethodTestSuite]` to hold test info diff --git a/docs/source/devtools-tutorial.md b/docs/source/devtools-tutorial.md index 4b230b0d38a..6d540dc7f35 100644 --- a/docs/source/devtools-tutorial.md +++ b/docs/source/devtools-tutorial.md @@ -1,3 +1,3 @@ ## Developer Tools Usage Tutorial -Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools. +Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools. diff --git a/docs/source/export-overview.md b/docs/source/export-overview.md index 0e7bd344ca2..c96716a0949 100644 --- a/docs/source/export-overview.md +++ b/docs/source/export-overview.md @@ -11,5 +11,5 @@ program, making it easier for you to understand and implement the process. To learn more about exporting your model: -* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial). +* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) . * Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html). 
diff --git a/docs/source/extension-module.md b/docs/source/extension-module.md index 92185bc1dea..690256fecbb 100644 --- a/docs/source/extension-module.md +++ b/docs/source/extension-module.md @@ -6,7 +6,7 @@ In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we ## Example -Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs: +Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs: ```cpp #include diff --git a/docs/source/llm/export-custom-llm.md b/docs/source/llm/export-custom-llm.md index 244e4f1eff6..bce73a8faf8 100644 --- a/docs/source/llm/export-custom-llm.md +++ b/docs/source/llm/export-custom-llm.md @@ -81,7 +81,7 @@ with open("nanogpt.pte", "wb") as file: To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory. -For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and +For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and [torch.export](https://pytorch.org/docs/stable/export.html). ## Backend delegation diff --git a/docs/source/running-a-model-cpp-tutorial.md b/docs/source/running-a-model-cpp-tutorial.md index 1e0e83e6b35..9d4adc58fb1 100644 --- a/docs/source/running-a-model-cpp-tutorial.md +++ b/docs/source/running-a-model-cpp-tutorial.md @@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference ## Prerequisites You will need an ExecuTorch model to follow along. 
We will be using -the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial). +the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) . ## Model Loading diff --git a/docs/source/runtime-overview.md b/docs/source/runtime-overview.md index a82d8e46cfc..1df3da40478 100644 --- a/docs/source/runtime-overview.md +++ b/docs/source/runtime-overview.md @@ -11,7 +11,7 @@ Works](intro-how-it-works.md). At the highest level, the ExecuTorch runtime is responsible for: * Loading binary `.pte` program files that were generated by the - [`to_executorch()`](tutorials/export-to-executorch-tutorial) step of the + [`to_executorch()`](tutorials/export-to-executorch-tutorial) step of the model-lowering process. * Executing the series of instructions that implement a lowered model. diff --git a/docs/source/runtime-profiling.md b/docs/source/runtime-profiling.md index a55425704e9..56b62de599d 100644 --- a/docs/source/runtime-profiling.md +++ b/docs/source/runtime-profiling.md @@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level. -Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model. +Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model. 
diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md index 5f3f7361cf4..ee0ef95ff08 100644 --- a/docs/source/tutorial-xnnpack-delegate-lowering.md +++ b/docs/source/tutorial-xnnpack-delegate-lowering.md @@ -11,7 +11,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run :::{grid-item-card} Before you begin it is recommended you go through the following: :class-card: card-prerequisites * [Setting up ExecuTorch](getting-started-setup.rst) -* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial) +* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial) * [ExecuTorch XNNPACK Delegate](backends-xnnpack.md) ::: :::: diff --git a/docs/source/using-executorch-troubleshooting.md b/docs/source/using-executorch-troubleshooting.md index 3813838d600..75648dc5b46 100644 --- a/docs/source/using-executorch-troubleshooting.md +++ b/docs/source/using-executorch-troubleshooting.md @@ -16,5 +16,5 @@ The ExecuTorch developer tools, or devtools, are a collection of tooling for tro - [Frequently Asked Questions](using-executorch-faqs.md) for solutions to commonly encountered questions and issues. - [Introduction to the ExecuTorch Developer Tools](runtime-profiling.md) for a high-level introduction to available developer tooling. -- [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) for information on runtime performance profiling. +- [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) for information on runtime performance profiling. - [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs. 
From e42ddc6ce5009233bfc14bc6c5ca75a8a92181db Mon Sep 17 00:00:00 2001 From: gasoonjia Date: Fri, 10 Oct 2025 11:36:47 -0700 Subject: [PATCH 3/4] Update on "use reference link for html doc" Differential Revision: [D84367515](https://our.internmc.facebook.com/intern/diff/D84367515/) [ghstack-poisoned] --- docs/source/api-section.md | 2 +- docs/source/llm/export-custom-llm.md | 2 +- docs/source/using-executorch-android.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/source/api-section.md b/docs/source/api-section.md index d41c9a972cd..f5725a063d4 100644 --- a/docs/source/api-section.md +++ b/docs/source/api-section.md @@ -7,7 +7,7 @@ In this section, find complete API documentation for ExecuTorch's export, runtim - {doc}`executorch-runtime-api-reference` — ExecuTorch Runtime API Reference - {doc}`runtime-python-api-reference` — Runtime Python API Reference - {doc}`api-life-cycle` — API Life Cycle -- [Android doc →](javadoc/)** — Android API Documentation +- [Android doc →](https://pytorch.org/executorch/main/javadoc/)** — Android API Documentation - {doc}`extension-module` — Extension Module - {doc}`extension-tensor` — Extension Tensor - {doc}`running-a-model-cpp-tutorial` — Detailed C++ Runtime APIs Tutorial diff --git a/docs/source/llm/export-custom-llm.md b/docs/source/llm/export-custom-llm.md index bce73a8faf8..4797f773fa3 100644 --- a/docs/source/llm/export-custom-llm.md +++ b/docs/source/llm/export-custom-llm.md @@ -57,7 +57,7 @@ example_inputs = (torch.randint(0, 100, (1, model.config.block_size), dtype=torc # long as they adhere to the rules specified in the dynamic shape configuration. # Here we set the range of 0th model input's 1st dimension as # [0, model.config.block_size]. -# See ../concepts.html#dynamic-shapes +# See https://pytorch.org/executorch/main/concepts#dynamic-shapes # for details about creating dynamic shapes. 
dynamic_shape = ( {1: torch.export.Dim("token_dim", max=model.config.block_size)}, diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index 375f325d6e0..ce9977218a1 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -207,4 +207,4 @@ using ExecuTorch AAR package. ## Java API reference -Please see [Java API reference](javadoc/). +Please see [Java API reference](https://pytorch.org/executorch/main/javadoc/). From dabcbc8af8049d403712b5e33d2fcbeb1faa604b Mon Sep 17 00:00:00 2001 From: gasoonjia Date: Fri, 10 Oct 2025 13:52:39 -0700 Subject: [PATCH 4/4] Update on "use reference link for html doc" Differential Revision: [D84367515](https://our.internmc.facebook.com/intern/diff/D84367515/) [ghstack-poisoned] --- docs/source/backend-delegates-xnnpack-reference.md | 2 +- docs/source/bundled-io.md | 2 +- docs/source/devtools-tutorial.md | 2 +- docs/source/export-overview.md | 2 +- docs/source/extension-module.md | 2 +- docs/source/llm/export-custom-llm.md | 4 ++-- docs/source/running-a-model-cpp-tutorial.md | 2 +- docs/source/runtime-overview.md | 2 +- docs/source/runtime-profiling.md | 2 +- docs/source/tutorial-xnnpack-delegate-lowering.md | 2 +- docs/source/using-executorch-troubleshooting.md | 2 +- 11 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backend-delegates-xnnpack-reference.md index 8b4338e703c..dd6ab2f3515 100644 --- a/docs/source/backend-delegates-xnnpack-reference.md +++ b/docs/source/backend-delegates-xnnpack-reference.md @@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors. 
#### **Profiling** -We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). +We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](/tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). [comment]: <> (TODO: Refactor quantizer to a more official quantization doc) diff --git a/docs/source/bundled-io.md b/docs/source/bundled-io.md index 27fb230a546..0eff599fddb 100644 --- a/docs/source/bundled-io.md +++ b/docs/source/bundled-io.md @@ -17,7 +17,7 @@ This stage mainly focuses on the creation of a `BundledProgram` and dumping it o ### Step 1: Create a Model and Emit its ExecuTorch Program. -ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. 
Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) . +ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](/tutorials/export-to-executorch-tutorial). ### Step 2: Construct `List[MethodTestSuite]` to hold test info diff --git a/docs/source/devtools-tutorial.md b/docs/source/devtools-tutorial.md index 6d540dc7f35..902d106af2a 100644 --- a/docs/source/devtools-tutorial.md +++ b/docs/source/devtools-tutorial.md @@ -1,3 +1,3 @@ ## Developer Tools Usage Tutorial -Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools. +Please refer to the [Developer Tools tutorial](/tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools. diff --git a/docs/source/export-overview.md b/docs/source/export-overview.md index c96716a0949..18cbb5778de 100644 --- a/docs/source/export-overview.md +++ b/docs/source/export-overview.md @@ -11,5 +11,5 @@ program, making it easier for you to understand and implement the process. To learn more about exporting your model: -* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) . +* Complete the [Exporting to ExecuTorch tutorial](/tutorials/export-to-executorch-tutorial). * Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html). 
diff --git a/docs/source/extension-module.md b/docs/source/extension-module.md index 690256fecbb..31a538b56c8 100644 --- a/docs/source/extension-module.md +++ b/docs/source/extension-module.md @@ -6,7 +6,7 @@ In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we ## Example -Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs: +Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](/tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs: ```cpp #include diff --git a/docs/source/llm/export-custom-llm.md b/docs/source/llm/export-custom-llm.md index 4797f773fa3..476524cab28 100644 --- a/docs/source/llm/export-custom-llm.md +++ b/docs/source/llm/export-custom-llm.md @@ -81,7 +81,7 @@ with open("nanogpt.pte", "wb") as file: To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory. -For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and +For more information, see [Exporting to ExecuTorch](/tutorials/export-to-executorch-tutorial) and [torch.export](https://pytorch.org/docs/stable/export.html). ## Backend delegation @@ -143,7 +143,7 @@ example_inputs = ( # long as they adhere to the rules specified in the dynamic shape configuration. # Here we set the range of 0th model input's 1st dimension as # [0, model.config.block_size]. -# See ../concepts.html#dynamic-shapes +# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes # for details about creating dynamic shapes. 
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},
diff --git a/docs/source/running-a-model-cpp-tutorial.md b/docs/source/running-a-model-cpp-tutorial.md
index 9d4adc58fb1..b7bfe094a65 100644
--- a/docs/source/running-a-model-cpp-tutorial.md
+++ b/docs/source/running-a-model-cpp-tutorial.md
@@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference
 
 ## Prerequisites
 
 You will need an ExecuTorch model to follow along. We will be using
-the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) .
+the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial).
 
 ## Model Loading
diff --git a/docs/source/runtime-overview.md b/docs/source/runtime-overview.md
index 1df3da40478..a2d07042542 100644
--- a/docs/source/runtime-overview.md
+++ b/docs/source/runtime-overview.md
@@ -11,7 +11,7 @@ Works](intro-how-it-works.md).
 
 At the highest level, the ExecuTorch runtime is responsible for:
 
 * Loading binary `.pte` program files that were generated by the
-  [`to_executorch()`](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) step of the
+  [`to_executorch()`](tutorials/export-to-executorch-tutorial) step of the
   model-lowering process.
 * Executing the series of instructions that implement a lowered model.
diff --git a/docs/source/runtime-profiling.md b/docs/source/runtime-profiling.md
index 56b62de599d..bff7430f95f 100644
--- a/docs/source/runtime-profiling.md
+++ b/docs/source/runtime-profiling.md
@@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model
 - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level.
 
-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.
+Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.
diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md
index ee0ef95ff08..aa8de92afbb 100644
--- a/docs/source/tutorial-xnnpack-delegate-lowering.md
+++ b/docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -11,7 +11,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run
 :::{grid-item-card} Before you begin it is recommended you go through the following:
 :class-card: card-prerequisites
 * [Setting up ExecuTorch](getting-started-setup.rst)
-* [Model Lowering Tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial)
+* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial)
 * [ExecuTorch XNNPACK Delegate](backends-xnnpack.md)
 :::
 ::::
diff --git a/docs/source/using-executorch-troubleshooting.md b/docs/source/using-executorch-troubleshooting.md
index 75648dc5b46..fc8662ad6dd 100644
--- a/docs/source/using-executorch-troubleshooting.md
+++ b/docs/source/using-executorch-troubleshooting.md
@@ -16,5 +16,5 @@ The ExecuTorch developer tools, or devtools, are a collection of tooling for tro
 
 - [Frequently Asked Questions](using-executorch-faqs.md) for solutions to commonly encountered questions and issues.
 - [Introduction to the ExecuTorch Developer Tools](runtime-profiling.md) for a high-level introduction to available developer tooling.
-- [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for information on runtime performance profiling.
+- [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) for information on runtime performance profiling.
 - [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs.
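As a side note on why the relative form used throughout this patch is the safer one: relative references resolve against the directory of the page being viewed, while a root-relative reference (with a leading `/`) resolves against the host root and would escape the docs prefix entirely. A minimal sketch of this resolution behavior using Python's stdlib `urljoin` (RFC 3986 reference resolution); the base URL below is purely illustrative and not taken from this patch:

```python
from urllib.parse import urljoin

# Hypothetical published location of one of the pages touched by this patch.
base = "https://pytorch.org/executorch/main/running-a-model-cpp-tutorial.html"

# Relative reference: resolves next to the current page, so it keeps working
# wherever the site is hosted (e.g. under /executorch/main/ or a local build).
print(urljoin(base, "tutorials/export-to-executorch-tutorial"))
# → https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial

# Root-relative reference: resolves against the host root, losing the
# /executorch/main/ prefix, which is why the leading "/" form is avoided.
print(urljoin(base, "/tutorials/export-to-executorch-tutorial"))
# → https://pytorch.org/tutorials/export-to-executorch-tutorial
```

The same reasoning explains the `../tutorials/...` form used in `docs/source/llm/export-custom-llm.md`: that page lives one directory deeper, so its relative references step up one level first.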