From 5a686cf77edba59e1a4f6612068fa079caf28e72 Mon Sep 17 00:00:00 2001
From: Kimish Patel
Date: Wed, 9 Apr 2025 20:22:02 -0700
Subject: [PATCH] [pytorch/executorch][diff_train] Fix LLM getting-started.md
 (#10028)

Remove unnecessary line
Fix ETDump part

Internal:
<< DO NOT EDIT BELOW THIS LINE >>

**GitHub Author**: Hansong <107070759+kirklandsign@users.noreply.github.com> (Meta Employee)
**GitHub Repo**: [pytorch/executorch](https://github.com/pytorch/executorch)
**GitHub Pull Request**: [#10028](https://github.com/pytorch/executorch/pull/10028)

Initially generated by: https://www.internalfb.com/intern/sandcastle/job/31525199173943941/

This was imported as part of a Diff Train. Please review this as soon as possible.
Since it is a direct copy of a commit on GitHub, there shouldn't be much to do.

diff-train-source-id: 7e8eb2b0e688f55c830ca0cf370440ab987e3bdc

Differential Revision: [D72765239](https://our.internmc.facebook.com/intern/diff/D72765239/)

[ghstack-poisoned]
---
 docs/source/llm/getting-started.md | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/docs/source/llm/getting-started.md b/docs/source/llm/getting-started.md
index 035da31f119..f19982660ae 100644
--- a/docs/source/llm/getting-started.md
+++ b/docs/source/llm/getting-started.md
@@ -395,7 +395,6 @@ At this point, the working directory should contain the following files:
 If all of these are present, you can now build and run:
 
 ```bash
-./install_executorch.sh --clean
 (mkdir cmake-out && cd cmake-out && cmake ..)
 cmake --build cmake-out -j10
 ./cmake-out/nanogpt_runner
@@ -661,19 +660,15 @@ edge_config = get_xnnpack_edge_compile_config()
 
 # Convert to edge dialect and lower to XNNPack.
 edge_manager = to_edge_transform_and_lower(traced_model, partitioner = [XnnpackPartitioner()], compile_config = edge_config)
 et_program = edge_manager.to_executorch()
-```
-
-Finally, ensure that the runner links against the `xnnpack_backend` target in CMakeLists.txt.
+with open("nanogpt.pte", "wb") as file:
+    file.write(et_program.buffer)
 ```
-add_executable(nanogpt_runner main.cpp)
-target_link_libraries(
-    nanogpt_runner
-    PRIVATE
-    executorch
-    extension_module_static # Provides the Module class
-    optimized_native_cpu_ops_lib # Provides baseline cross-platform kernels
-    xnnpack_backend) # Provides the XNNPACK CPU acceleration backend
+
+Then run:
+```bash
+python export_nanogpt.py
+./cmake-out/nanogpt_runner
 ```
 
 For more information, see [Quantization in ExecuTorch](../quantization-overview.md).
@@ -782,11 +777,14 @@ Run the export script and the ETRecord will be generated as `etrecord.bin`.
 
 An ETDump is an artifact generated at runtime containing a trace of the model
 execution. For more information, see [the ETDump docs](../etdump.md).
 
-Include the ETDump header in your code.
+Include the ETDump header and namespace in your code.
 ```cpp
 // main.cpp
 #include <executorch/devtools/etdump/etdump_flatcc.h>
+
+using executorch::etdump::ETDumpGen;
+using torch::executor::etdump_result;
 ```
 
 Create an Instance of the ETDumpGen class and pass it to the Module constructor.
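
For readers following the doc change above, here is a minimal sketch of the step the patch's context line describes ("Create an Instance of the ETDumpGen class and pass it to the Module constructor"). It assumes the `executorch::extension::Module` constructor overload that accepts an event tracer; file names such as `nanogpt.pte` are illustrative, not part of this patch.

```cpp
// Sketch only: wiring an ETDumpGen event tracer into the Module so that
// later inference calls are recorded into an ETDump buffer.
#include <memory>

#include <executorch/devtools/etdump/etdump_flatcc.h>
#include <executorch/extension/module/module.h>

using executorch::etdump::ETDumpGen;
using executorch::extension::Module;

int main() {
  // The Module takes ownership of the tracer; everything it executes from
  // here on is captured by the tracer without further plumbing.
  Module model("nanogpt.pte", Module::LoadMode::MmapUseMlock,
               std::make_unique<ETDumpGen>());

  // ... load the tokenizer and run generation as in the tutorial's main.cpp ...
  return 0;
}
```

Passing the tracer in at construction time is the design choice the tutorial relies on: no per-call instrumentation is needed in the generation loop.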
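
A companion sketch, under the same assumptions, of reading the trace back after generation. `Module::event_tracer()`, `ETDumpGen::get_etdump_data()`, and the `etdump.etdp` output name follow the ExecuTorch devtools usage as I understand it and should be checked against the ETDump docs referenced in the diff.

```cpp
// Sketch only: after inference, fetch the serialized ETDump from the tracer
// and write it to disk so it can be inspected with the devtools Inspector.
#include <cstdio>
#include <cstdlib>

#include <executorch/devtools/etdump/etdump_flatcc.h>
#include <executorch/extension/module/module.h>

using executorch::etdump::ETDumpGen;
using executorch::extension::Module;
using torch::executor::etdump_result;

void write_etdump(Module& model) {
  // The Module still owns the tracer; we only borrow it to read the data.
  auto* etdump_gen = static_cast<ETDumpGen*>(model.event_tracer());
  etdump_result result = etdump_gen->get_etdump_data();
  if (result.buf != nullptr && result.size > 0) {
    FILE* f = fopen("etdump.etdp", "w+");
    fwrite(result.buf, 1, result.size, f);
    fclose(f);
    free(result.buf);  // the caller is responsible for freeing the buffer
  }
}
```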