From 194af2a1a2d053c254f14595e0824ab49d23d22a Mon Sep 17 00:00:00 2001
From: Olivia Liu
Date: Wed, 2 Oct 2024 18:09:22 -0700
Subject: [PATCH] New URL for the Profiling page (#5819)

Summary:
Pull Request resolved: https://github.com/pytorch/executorch/pull/5819

This diff renames the "sdk-profiling" documentation page to just "profiling".

Old URL: https://pytorch.org/executorch/main/sdk-profiling.html
New URL: https://pytorch.org/executorch/main/profiling.html

Design doc: https://docs.google.com/document/d/1l6DYTq9Kq6VrPohruRFP-qScZDj01W_g4zlKyvqKGF4/edit?usp=sharing

Reviewed By: dbort

Differential Revision: D63771297

fbshipit-source-id: 452fd105d9beca35242a2d60a9869b4ebbc54df1
(cherry picked from commit 79b78966ccb22c049d26a7b5bcd4c6b25aa7b1c8)
---
 docs/source/index.rst            | 12 ++++++------
 docs/source/runtime-overview.md  |  4 ++--
 docs/source/runtime-profiling.md | 23 +++++++++++++++++++++++
 docs/source/sdk-profiling.md     | 22 +---------------------
 4 files changed, 32 insertions(+), 29 deletions(-)
 create mode 100644 docs/source/runtime-profiling.md

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 20f0c944820..22bbccff015 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -201,12 +201,12 @@ Topics in this section will help you get started with ExecuTorch.
    :hidden:

    devtools-overview
-   sdk-bundled-io
-   sdk-etrecord
-   sdk-etdump
-   sdk-profiling
-   sdk-debugging
-   sdk-inspector
+   bundled-io
+   etrecord
+   etdump
+   runtime-profiling
+   model-debugging
+   model-inspector
    memory-planning-inspection
    sdk-delegate-integration
    devtools-tutorial

diff --git a/docs/source/runtime-overview.md b/docs/source/runtime-overview.md
index 6766e678e0e..1a421fdcc0a 100644
--- a/docs/source/runtime-overview.md
+++ b/docs/source/runtime-overview.md
@@ -33,7 +33,7 @@ The runtime is also responsible for:
   semantics of those operators.
 * Dispatching predetermined sections of the model to
   [backend delegates](compiler-delegate-and-partitioner.md) for acceleration.
-* Optionally gathering [profiling data](sdk-profiling.md) during load and
+* Optionally gathering [profiling data](runtime-profiling.md) during load and
   execution.

 ## Design Goals
@@ -159,7 +159,7 @@ For more details about the ExecuTorch runtime, please see:
 * [Simplified Runtime APIs Tutorial](extension-module.md)
 * [Runtime Build and Cross Compilation](runtime-build-and-cross-compilation.md)
 * [Runtime Platform Abstraction Layer](runtime-platform-abstraction-layer.md)
-* [Runtime Profiling](sdk-profiling.md)
+* [Runtime Profiling](runtime-profiling.md)
 * [Backends and Delegates](compiler-delegate-and-partitioner.md)
 * [Backend Delegate Implementation](runtime-backend-delegate-implementation-and-linking.md)
 * [Kernel Library Overview](kernel-library-overview.md)

diff --git a/docs/source/runtime-profiling.md b/docs/source/runtime-profiling.md
new file mode 100644
index 00000000000..c228971d28c
--- /dev/null
+++ b/docs/source/runtime-profiling.md
@@ -0,0 +1,23 @@
+# Profiling Models in ExecuTorch
+
+Profiling in ExecuTorch gives users access to these runtime metrics:
+- Model Load Time.
+- Operator Level Execution Time.
+- Delegate Execution Time.
+  - If the delegate that the user is calling into has been integrated with the [Developer Tools](./delegate-debugging.md), then users will also be able to access delegated operator execution time.
+- End-to-end Inference Execution Time.
+
+One unique aspect of ExecuTorch profiling is the ability to link every operator executed at runtime back to the exact line of Python code from which it originated. This capability enables users to easily identify hotspots in their model, trace them back to the source, and optimize them if desired.
+
+We provide access to all the profiling data via the Python [Inspector API](./model-inspector.rst). The data mentioned above can be accessed through these interfaces, allowing users to perform any post-run analysis of their choice.
+
+## Steps to Profile a Model in ExecuTorch
+
+1. [Optional] Generate an [ETRecord](./etrecord.rst) while exporting your model. If provided, this will enable users to link profiling details back to the eager model source code (with stack traces and module hierarchy).
+2. Build the runtime with the pre-processor flags that enable profiling, as detailed in the [ETDump documentation](./etdump.md).
+3. Run your Program on the ExecuTorch runtime and generate an [ETDump](./etdump.md).
+4. Create an instance of the [Inspector API](./model-inspector.rst) by passing in the ETDump generated by the runtime, along with the ETRecord optionally generated in step 1.
+   - Through the Inspector API, users can perform a wide range of analysis, from printing out performance details to doing finer-grained calculations at the module level.
+
+
+Please refer to the [Developer Tools tutorial](./tutorials/devtools-integration-tutorial.rst) for a step-by-step walkthrough of the above process on a sample model.

diff --git a/docs/source/sdk-profiling.md b/docs/source/sdk-profiling.md
index e17fb1ae48e..9c99a979757 100644
--- a/docs/source/sdk-profiling.md
+++ b/docs/source/sdk-profiling.md
@@ -1,23 +1,3 @@
 # Profiling Models in ExecuTorch
-Profiling in ExecuTorch gives users access to these runtime metrics:
-- Model Load Time.
-- Operator Level Execution Time.
-- Delegate Execution Time.
-  - If the delegate that the user is calling into has been integrated with the [Developer Tools](./sdk-delegate-integration.md), then users will also be able to access delegated operator execution time.
-- End-to-end Inference Execution Time.
-
-One uniqe aspect of ExecuTorch Profiling is the ability to link every runtime executed operator back to the exact line of python code from which this operator originated. This capability enables users to easily identify hotspots in their model, source them back to the exact line of Python code, and optimize if chosen to.
-
-We provide access to all the profiling data via the Python [Inspector API](./sdk-inspector.rst). The data mentioned above can be accessed through these interfaces, allowing users to perform any post-run analysis of their choice.
-
-## Steps to Profile a Model in ExecuTorch
-
-1. [Optional] Generate an [ETRecord](./sdk-etrecord.rst) while you're exporting your model. If provided this will enable users to link back profiling details to eager model source code (with stack traces and module hierarchy).
-2. Build the runtime with the pre-processor flags that enable profiling. Detailed in the [ETDump documentation](./sdk-etdump.md).
-3. Run your Program on the ExecuTorch runtime and generate an [ETDump](./sdk-etdump.md).
-4. Create an instance of the [Inspector API](./sdk-inspector.rst) by passing in the ETDump you have sourced from the runtime along with the optionally generated ETRecord from step 1.
-   - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level.
-
-
-Please refer to the [Developer Tools tutorial](./tutorials/devtools-integration-tutorial.rst) for a step-by-step walkthrough of the above process on a sample model.
+Please update your link to . This URL will be deleted after v0.4.0.
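The Inspector workflow described in steps 1–4 of the new `runtime-profiling.md` page can be sketched in Python. This is a minimal, hypothetical sketch, not part of the patch: the file names `etdump.etdp` and `etrecord.bin` are placeholder assumptions, and the `Inspector` import path may differ between ExecuTorch versions.

```python
# Hypothetical sketch of the Inspector workflow (steps 1-4 above).
# Paths are placeholders: an ETDump must be produced by a runtime built with
# profiling enabled (step 3), and an ETRecord optionally at export time (step 1).
from pathlib import Path

ETDUMP_PATH = "etdump.etdp"     # assumed output of step 3
ETRECORD_PATH = "etrecord.bin"  # assumed output of step 1 (optional)


def analyze(etdump_path, etrecord_path=None):
    # Import lazily so the sketch is inert without an ExecuTorch install.
    from executorch.devtools import Inspector  # import path may vary by version

    inspector = Inspector(
        etdump_path=etdump_path,
        etrecord=etrecord_path,  # optional: links events back to source code
    )
    # Print per-event performance data (operator-level timings, delegate
    # events, and so on) as a table for post-run analysis.
    inspector.print_data_tabular()


# Only run the analysis if an ETDump was actually generated.
if Path(ETDUMP_PATH).exists():
    analyze(
        ETDUMP_PATH,
        ETRECORD_PATH if Path(ETRECORD_PATH).exists() else None,
    )
```

The lazy import and the existence check keep the sketch runnable as-is; in a real session you would substitute the paths your runtime actually wrote.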