Add benchmark-pr skill for PR performance comparison #328
ChaoWao merged 1 commit into hw-native-sys:main
Conversation
Code Review
This pull request introduces a new benchmark-pr skill, which is well-documented in the SKILL.md file. The workflow is comprehensive and thoughtfully designed. My review focuses on enhancing the robustness and reliability of the shell script snippets within the documentation. I've pointed out a few areas where the implementation could be simplified and made more resilient, specifically concerning dependency management, file parsing, and handling of temporary files.
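The review's points about dependency checks and temporary-file handling can be illustrated with a short sketch. This is not code from the PR, just the standard robustness pattern the review is asking the documented snippets to follow (fail fast on missing tools, create scratch space with mktemp, and guarantee cleanup via trap):

```shell
#!/usr/bin/env sh
# Illustrative robustness pattern, not the actual SKILL.md snippets.
set -eu

# Fail fast if a required dependency is missing, rather than erroring midway.
command -v git >/dev/null 2>&1 || { echo "git is required" >&2; exit 1; }

# Create a scratch directory and guarantee cleanup on normal exit or
# interrupt, instead of hard-coding paths under /tmp.
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT INT TERM

echo "working in $tmpdir"
```

The `trap … EXIT` line is the key piece: it removes the temporary directory even when a later command fails under `set -e`.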
Summary

Add a new Claude Code skill (/benchmark-pr) and two supporting slash commands (/perf-example-device, /perf-runtime-device) that measure the performance impact of GitHub PRs and individual examples on Ascend hardware.
What's included
- /benchmark-pr skill (.claude/skills/benchmark-pr/SKILL.md)
  Given a PR number (e.g. /benchmark-pr #123 -d 4 -n 20), the skill runs tools/benchmark_rounds.sh at the merge-base (baseline) and compares the results against the PR branch.
- /perf-example-device command (.claude/commands/perf-example-device.md)
  Benchmarks a single example on hardware. Auto-detects an idle device via npu-smi, runs 10 rounds, parses device logs for timing data, and reports per-round latency with trimmed averages.
- /perf-runtime-device command (.claude/commands/perf-runtime-device.md)
  Benchmarks all examples under a given runtime directory on hardware. Enumerates test cases, runs each sequentially on an idle device, and produces a summary table with per-example average and trimmed-average latencies.
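Both commands report trimmed averages over the per-round latencies. As a rough sketch of that reporting step (the input format and function name are assumptions, not the commands' actual parser): given one latency value per line, drop the single slowest and fastest rounds and average the rest, so one outlier round does not skew the result.

```shell
# Hypothetical trimmed-average helper; reads one latency (ms) per line.
trimmed_avg() {
  sort -n | awk '
    { v[NR] = $1; sum += $1 }
    END {
      if (NR <= 2) { printf "%.3f\n", sum / NR; exit }
      # Input is sorted, so rounds 1 and NR are the min and max.
      printf "%.3f\n", (sum - v[1] - v[NR]) / (NR - 2)
    }'
}

# Example: rounds 10, 9, 50, 11 -> average of 10 and 11.
printf '10\n9\n50\n11\n' | trimmed_avg   # prints 10.500
```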
Why
Previously there was no automated way to quantify the latency impact of a PR.
Developers had to manually check out commits, run benchmarks, and compare
numbers by hand. These additions automate the full workflow and standardize
the reporting format.