Conversation

@pareenaverma
Contributor

Before submitting a pull request for a new Learning Path, please review Create a Learning Path

  • [x] I have reviewed Create a Learning Path

Please do not include any confidential information in your contribution. This includes confidential microarchitecture details and unannounced product information.

  • [x] I have checked my contribution for confidential information

By submitting this pull request, I confirm that this contribution may be used, modified, copied, and redistributed under the terms of the Creative Commons Attribution 4.0 International License.

annietllnd and others added 30 commits September 9, 2025 18:48
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Review top-down methodology Learning Path
… integration

- Updated sections to clarify the setup and integration of Streamline annotations in llama.cpp.
- Improved explanations of performance analysis for Prefill and Decode stages.
- Added detailed steps for implementing Annotation Channels to monitor operator execution times.
- Included instructions for analyzing multi-threaded performance and configuring thread affinity.
- Revised titles and summaries for clarity and consistency.
- Updated further reading resources to include relevant links for KleidiAI and llama.cpp.
Update ExecuTorch LPs for Android and RPi 5
Add Learning Path about tracking resource usage on WoA
Put Windows resource usage in draft mode for tech review
@pareenaverma pareenaverma merged commit 6f7d319 into production Oct 10, 2025
3 checks passed
7 participants