Add structured stats reporting and GPU memory tracking to Qwen3.5 MoE runner #19228
Merged
Conversation
Adds INT8 tensor core variants of the batched MoE GEMM kernels that dynamically quantize bf16 activations to INT8 per-row per-tile and dequantize INT4 weights directly to INT8 (skipping the bf16 conversion). Uses tl.dot(int8, int8) → int32 accumulation with a per-tile float32 rescale. 1.7× MoE speedup on A100 at M=1024 with 0.9998 cosine similarity vs the bf16 baseline.

Co-authored-by: Claude <noreply@anthropic.com>
ghstack-source-id: 809c2cc
Pull Request resolved: #19187
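The arithmetic is easier to see outside the kernel. Below is a minimal C++ reference sketch of per-row dynamic INT8 quantization with INT8×INT8→INT32 accumulation and a float32 rescale. It simplifies to one scale per whole row (the kernel works per-row per-tile), and the names, shapes, and interface are illustrative, not the Triton kernel's actual API.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Reference of the dynamic-quantization scheme described above (not the
// Triton kernel itself): quantize one activation row to INT8 with a
// per-row scale, multiply against INT8 weights with INT32 accumulation,
// then rescale the result back to float.
void int8_row_gemm_reference(
    const std::vector<float>& act_row,   // [K] activations (bf16 held as float here)
    const std::vector<int8_t>& w_int8,   // [N * K] weights already dequantized to INT8
    const std::vector<float>& w_scale,   // [N] per-output-channel weight scales
    std::vector<float>& out) {           // [N]
  const size_t K = act_row.size();
  const size_t N = w_scale.size();

  // Per-row dynamic quantization: map the max-abs value to 127.
  float max_abs = 1e-8f;
  for (float v : act_row) max_abs = std::max(max_abs, std::fabs(v));
  const float a_scale = max_abs / 127.0f;

  std::vector<int8_t> a_int8(K);
  for (size_t k = 0; k < K; ++k)
    a_int8[k] = static_cast<int8_t>(std::lrintf(act_row[k] / a_scale));

  // INT8 x INT8 dot with INT32 accumulation, then a float32 rescale,
  // mirroring tl.dot(int8, int8) -> int32 plus the per-tile rescale.
  out.assign(N, 0.0f);
  for (size_t n = 0; n < N; ++n) {
    int32_t acc = 0;
    for (size_t k = 0; k < K; ++k)
      acc += static_cast<int32_t>(a_int8[k]) * static_cast<int32_t>(w_int8[n * K + k]);
    out[n] = static_cast<float>(acc) * a_scale * w_scale[n];
  }
}
```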
Add three new Triton kernels for dense W4A16 linear projections that replace tinygemm's tiled INT4 format with simple [N, K//2] packed weights (the same format as the MoE experts):

- int4_matmul: fused dequant + tl.dot GEMM for medium M (prefill crossover)
- int4_matvec: bandwidth-optimized vec-mat for M=1 decode
- dequant_w4_to_bf16: weight dequant for large-M prefill via Inductor mm

W4DequantLinear wraps these with dual decode/prefill dispatch:

- Decode (M=1): int4_matvec (73 tok/s, ~35% slower than tinygemm)
- Prefill (M>1): dequant + F.linear via Inductor (3400 tok/s at 3K tokens, +67% over the tinygemm baseline)

Single 18GB weight blob (no duplication). The decode perf regression is a known trade-off for a uniform weight format, to be revisited with a CUDA C++ matvec kernel. Also adds INT8 dynamic-activation MoE tests and comprehensive correctness tests (48 tests, all passing at rtol=0.01).

Co-authored-by: Claude <noreply@anthropic.com>
ghstack-source-id: 89acc9b
Pull Request resolved: #19188
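A minimal sketch of what the [N, K//2] packed layout implies, assuming two 4-bit values per byte with the even K index in the low nibble and a symmetric per-output-channel scale; the actual nibble order and quantization parameters used by the kernels may differ.

```cpp
#include <cstdint>
#include <vector>

// Illustrative unpacking of a [N, K/2] INT4 weight buffer into dequantized
// floats. Assumes the even K index sits in the low nibble and a symmetric
// per-output-channel scale; the real kernels' packing convention may differ.
void dequant_w4_reference(
    const std::vector<uint8_t>& packed,  // [N * K/2], two int4 values per byte
    const std::vector<float>& scale,     // [N] per-output-channel scales
    size_t N, size_t K,
    std::vector<float>& out) {           // [N * K]
  out.resize(N * K);
  for (size_t n = 0; n < N; ++n) {
    for (size_t k = 0; k < K; k += 2) {
      uint8_t byte = packed[n * (K / 2) + k / 2];
      // Unpack two unsigned 4-bit values and shift to the signed range [-8, 7].
      int lo = static_cast<int>(byte & 0x0F) - 8;
      int hi = static_cast<int>(byte >> 4) - 8;
      out[n * K + k] = static_cast<float>(lo) * scale[n];
      out[n * K + k + 1] = static_cast<float>(hi) * scale[n];
    }
  }
}
```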
… runner

Runner now uses llm::Stats with proper timestamps for model load, prefill, decode, and GPU memory (via cudaMemGetInfo). Output matches the stats.h print_report format: a PyTorchObserver JSON line plus a human-readable table.

This commit was authored with the assistance of Claude Code.

ghstack-source-id: 9227519
Pull Request resolved: #19190
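As a rough illustration of what the runner now records, the sketch below times each phase and samples free GPU memory with cudaMemGetInfo. It is a hypothetical standalone example: the timing helpers and struct are invented for this sketch (the runner itself uses llm::Stats from stats.h), while the gpu_free_*_bytes field names come from the follow-up fix's description.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical bookkeeping sketch: timestamps around each phase plus
// cudaMemGetInfo samples before/after load and after generation.
struct RunnerStatsSketch {
  long load_start_ms = 0, load_end_ms = 0;
  long prefill_end_ms = 0, decode_end_ms = 0;
  int64_t gpu_free_before_load_bytes = -1;  // -1 = not sampled
  int64_t gpu_free_after_load_bytes = -1;
  int64_t gpu_free_after_generate_bytes = -1;
};

static long now_ms() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

static int64_t free_gpu_bytes() {
  size_t free_b = 0, total_b = 0;
  return cudaMemGetInfo(&free_b, &total_b) == cudaSuccess
      ? static_cast<int64_t>(free_b)
      : -1;  // keep the sentinel if the query fails
}

void report(const RunnerStatsSketch& s) {
  // Approximate peak usage from the drop in free memory across the run,
  // only when both samples were actually taken.
  double peak_mb = -1.0;
  if (s.gpu_free_before_load_bytes >= 0 && s.gpu_free_after_generate_bytes >= 0) {
    peak_mb = (s.gpu_free_before_load_bytes - s.gpu_free_after_generate_bytes) /
        (1024.0 * 1024.0);
  }
  std::printf("model load: %ld ms, prefill: %ld ms, decode: %ld ms, ~%.1f MB GPU\n",
              s.load_end_ms - s.load_start_ms,
              s.prefill_end_ms - s.load_end_ms,
              s.decode_end_ms - s.prefill_end_ms,
              peak_mb);
}
```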
Gasoonjia approved these changes on Apr 30, 2026
This PR needs a
rascani added a commit that referenced this pull request on May 1, 2026
…CUDA (#19265)

### Summary

#19228 added structured GPU memory tracking to the qwen3_5_moe runner but did not wrap the new cudaMemGetInfo blocks in the existing EXECUTORCH_BUILD_CUDA guard that the rest of the file uses for CUDA-only APIs. The same main.cpp is built for the Metal target, where the CUDA runtime headers are not available, so the new blocks failed to compile on macOS:

    error: use of undeclared identifier 'cudaMemGetInfo'
      if (cudaMemGetInfo(&free, &total) == cudaSuccess) {

Wrap the three new scoped blocks in #ifdef EXECUTORCH_BUILD_CUDA, matching the existing guard pattern at lines 27, 68, 113, 168, and 184. The stats struct fields they would have populated (gpu_free_before_load_bytes, gpu_free_after_load_bytes, gpu_free_after_generate_bytes, gpu_peak_usage_mb) default to their sentinel values on non-CUDA builds, so the rest of the runner's stats reporting tolerates their absence.

Authored with Claude Code.

### Test plan

CI
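For reference, a minimal sketch of the guard pattern the fix describes. The surrounding function is hypothetical, but cudaMemGetInfo, the EXECUTORCH_BUILD_CUDA macro, and the field name come from the commit message above rather than from main.cpp itself.

```cpp
// Guard CUDA-only headers and calls so the same source still builds for
// the Metal target, where the CUDA runtime is unavailable.
#ifdef EXECUTORCH_BUILD_CUDA
#include <cuda_runtime.h>
#endif

#include <cstdint>

void sample_gpu_free_before_load(int64_t& gpu_free_before_load_bytes) {
#ifdef EXECUTORCH_BUILD_CUDA
  size_t free = 0, total = 0;
  if (cudaMemGetInfo(&free, &total) == cudaSuccess) {
    gpu_free_before_load_bytes = static_cast<int64_t>(free);
  }
#endif
  // On non-CUDA builds the field keeps its sentinel value and the stats
  // report simply omits GPU memory.
}
```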
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #19190 by @digantdesai
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/digantdesai/53/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/digantdesai/53/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/digantdesai/51/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/digantdesai/53/orig
@diff-train-skip-merge