Summary: Add a cross-compilation script for RPi (4 & 5) on a Linux host machine (#15151)
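Usage sketch (an assumption based on the title and the test plan below: the script appears to take the target board as its only argument, with pi4 and pi5 as the supported values):
   examples/raspberry_pi/rpi_setup.sh pi4   # cross-compile for Raspberry Pi 4 (assumed argument name)
   examples/raspberry_pi/rpi_setup.sh pi5   # cross-compile for Raspberry Pi 5, as exercised in the test plan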
Test Plan:
examples/raspberry_pi/rpi_setup.sh pi5
...
[100%] Linking CXX executable llama_main
[100%] Built target llama_main
[SUCCESS] LLaMA runner built successfully
==== Extracting Bundled Libraries ====
[INFO] Extracting GLIBC libraries from toolchain...
[WARNING] Use bundled GLIBC script on RPI device ONLY if you encounter a GLIBC mismatch error when running llama_main.
[SUCCESS] Bundled libraries prepared in: /home/sidart/working/executorch/cmake-out/bundled-libs
[INFO] On Raspberry Pi, run: sudo ./install_libs.sh
==== Verifying Build Outputs ====
[INFO] Checking required binaries...
[SUCCESS] ✓ llama_main (6.1M)
[SUCCESS] ✓ libllama_runner.so (4.0M)
[SUCCESS] ✓ libextension_module.a (89K) - static library
[SUCCESS] All required binaries built successfully!
==== Setup Complete! ====
✓ ExecuTorch cross-compilation setup completed successfully!
📦 Built binaries:
• llama_main: /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main
• libllama_runner.so: /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so
• libextension_module.a: Statically linked into llama_main ✅
• Bundled libraries: /home/sidart/working/executorch/cmake-out/bundled-libs/
📋 Next steps:
Copy binaries to your Raspberry Pi pi5:
   scp /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main pi@<PI_IP>:~/
   scp /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so pi@<PI_IP>:~/
   scp -r /home/sidart/working/executorch/cmake-out/bundled-libs/ pi@<PI_IP>:~/
Copy shared libraries to system location:
   sudo cp libllama_runner.so /lib/  # Only this one needed!
   sudo ldconfig
Dry run to check for GLIBC or other issues:
   ./llama_main --help  # Ensure there are no GLIBC or other errors before proceeding.
If you see GLIBC errors, install bundled libraries:
   cd ~/bundled-libs && sudo ./install_libs.sh
   source setup_env.sh  # Only do this if you encounter a GLIBC version mismatch or similar error.
Download your model and tokenizer:
   # Refer to the official documentation for exact details.
Run ExecuTorch with your model:
   ./llama_main --model_path ./model.pte --tokenizer_path ./tokenizer.model --seq_len 128 --prompt "What is the meaning of life ?"
🎯 Deployment Summary:
📁 Files to copy: 2 (llama_main + libllama_runner.so)
🏗️ Extension module: Built-in (no separate .so needed)
🔧 Toolchain saved at: /home/sidart/working/executorch/arm-toolchain/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-linux-gnu
🔧 CMake toolchain file: /home/sidart/working/executorch/arm-toolchain-pi5.cmake
Happy inferencing! 🚀
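For convenience, the "Next steps" printed by the script above can be collected into a single deployment sketch. The Pi address (<PI_IP>) and the model/tokenizer filenames are placeholders, and the host-side paths are written relative to the ExecuTorch checkout rather than the absolute paths from this log:
   # On the Linux host: copy the built artifacts to the Pi
   scp cmake-out/examples/models/llama/llama_main pi@<PI_IP>:~/
   scp cmake-out/examples/models/llama/runner/libllama_runner.so pi@<PI_IP>:~/
   scp -r cmake-out/bundled-libs/ pi@<PI_IP>:~/

   # On the Pi: install the shared runner library and refresh the linker cache
   sudo cp ~/libllama_runner.so /lib/ && sudo ldconfig
   # Dry run first; continue only if no GLIBC or other errors are reported
   cd ~ && ./llama_main --help
   # Only if a GLIBC mismatch is reported: install the bundled libraries produced by the script
   cd ~/bundled-libs && sudo ./install_libs.sh && source setup_env.sh && cd ~
   # Run inference (model.pte and tokenizer.model are placeholder filenames)
   ./llama_main --model_path ./model.pte --tokenizer_path ./tokenizer.model --seq_len 128 --prompt "What is the meaning of life?"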