Conversation

pytorchbot (Collaborator)
Test Plan:
examples/raspberry_pi/rpi_setup.sh pi5
...
[100%] Linking CXX executable llama_main
[100%] Built target llama_main
[SUCCESS] LLaMA runner built successfully

==== Extracting Bundled Libraries ====
[INFO] Extracting GLIBC libraries from toolchain...
[WARNING] Use bundled GLIBC script on RPI device ONLY if you encounter a GLIBC mismatch error when running llama_main.
[SUCCESS] Bundled libraries prepared in: /home/sidart/working/executorch/cmake-out/bundled-libs
[INFO] On Raspberry Pi, run: sudo ./install_libs.sh
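For context, the extraction step above amounts to copying the C runtime out of the cross-toolchain's sysroot so it can be shipped alongside the binary. The script internals are not shown in this log, so the sysroot layout and file names below are assumptions, not the actual implementation:

    # Hypothetical sketch only -- rpi_setup.sh's real logic is not shown in this log.
    TOOLCHAIN=/home/sidart/working/executorch/arm-toolchain/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-linux-gnu
    mkdir -p cmake-out/bundled-libs
    # Assumed Arm GNU toolchain sysroot layout:
    cp "$TOOLCHAIN"/aarch64-none-linux-gnu/libc/lib64/libc.so.6 cmake-out/bundled-libs/
    cp "$TOOLCHAIN"/aarch64-none-linux-gnu/libc/lib64/ld-linux-aarch64.so.1 cmake-out/bundled-libs/
    cp "$TOOLCHAIN"/aarch64-none-linux-gnu/lib64/libstdc++.so.6* cmake-out/bundled-libs/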

==== Verifying Build Outputs ====
[INFO] Checking required binaries...
[SUCCESS] ✓ llama_main (6.1M)
[SUCCESS] ✓ libllama_runner.so (4.0M)
[SUCCESS] ✓ libextension_module.a (89K) - static library
[SUCCESS] All required binaries built successfully!

==== Setup Complete! ====

✓ ExecuTorch cross-compilation setup completed successfully!

📦 Built binaries:
• llama_main: /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main
• libllama_runner.so: /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so
• libextension_module.a: Statically linked into llama_main ✅
• Bundled libraries: /home/sidart/working/executorch/cmake-out/bundled-libs/

📋 Next steps:

  1. Copy binaries to your Raspberry Pi pi5:
     scp /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main pi@<rpi-ip>:~/
     scp /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so pi@<rpi-ip>:~/
     scp -r /home/sidart/working/executorch/cmake-out/bundled-libs/ pi@<rpi-ip>:~/

  2. Copy shared libraries to system location:
     sudo cp libllama_runner.so /lib/   # Only this one needed!
     sudo ldconfig

  3. Dry run to check for GLIBC or other issues:
     ./llama_main --help   # Ensure there are no GLIBC or other errors before proceeding.

  4. If you see GLIBC errors (a quick way to diagnose them is sketched after this list), install bundled libraries:
     cd ~/bundled-libs && sudo ./install_libs.sh
     source setup_env.sh   # Only do this if you encounter a GLIBC version mismatch or similar error.

  5. Download your model and tokenizer:
     # Refer to the official documentation for exact details.

  6. Run ExecuTorch with your model:
     ./llama_main --model_path ./model.pte --tokenizer_path ./tokenizer.model --seq_len 128 --prompt "What is the meaning of life ?"
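If the dry run in step 3 fails with a "version GLIBC_x.yy not found" error, it helps to compare what the binary requires against what the Pi provides before installing the bundled libraries. This check uses standard binutils/glibc tools and is not part of the setup script:

    # On the Raspberry Pi:
    ldd --version                                                 # GLIBC version the Pi provides
    objdump -T llama_main | grep -o 'GLIBC_[0-9.]*' | sort -Vu    # GLIBC versions the binary requires
    ldd ./llama_main                                              # 'not found' lines indicate missing shared libraries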

🎯 Deployment Summary:
📁 Files to copy: 2 (llama_main + libllama_runner.so)
🏗️ Extension module: Built-in (no separate .so needed)

🔧 Toolchain saved at: /home/sidart/working/executorch/arm-toolchain/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-linux-gnu
🔧 CMake toolchain file: /home/sidart/working/executorch/arm-toolchain-pi5.cmake
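The saved toolchain file can be reused to cross-compile other CMake projects against the same sysroot. A hedged example follows; the exact configure options rpi_setup.sh passes are not shown in this log, so only the generic CMake flags below are assumed:

    cmake -DCMAKE_TOOLCHAIN_FILE=/home/sidart/working/executorch/arm-toolchain-pi5.cmake \
          -S . -B cmake-out
    cmake --build cmake-out -j"$(nproc)"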

Happy inferencing! 🚀
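Taken together, the next steps above can be replayed as one short session. This is just the commands from steps 1-6 collected in order; replace <rpi-ip> with your device's address, and note that the model/tokenizer download in step 5 is left out here, as in the original:

    # On the host machine:
    scp /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main pi@<rpi-ip>:~/
    scp /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so pi@<rpi-ip>:~/
    scp -r /home/sidart/working/executorch/cmake-out/bundled-libs/ pi@<rpi-ip>:~/

    # On the Raspberry Pi:
    sudo cp libllama_runner.so /lib/ && sudo ldconfig
    ./llama_main --help    # dry run; only continue if no GLIBC errors appear
    # Only if GLIBC errors appear: cd ~/bundled-libs && sudo ./install_libs.sh && source setup_env.sh
    ./llama_main --model_path ./model.pte --tokenizer_path ./tokenizer.model --seq_len 128 --prompt "What is the meaning of life ?"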


…ost machine (#15014)

(cherry picked from commit a0e5280)

pytorch-bot bot commented Oct 15, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15151

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 Cancelled Jobs

As of commit b046b20 with merge base e0dda90:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Oct 15, 2025
psiddh merged commit e302938 into release/1.0 on Oct 15, 2025
121 of 124 checks passed
psiddh deleted the cherry-pick-15014-by-pytorch_bot_bot_ branch on October 15, 2025 at 21:55