A few usability improvements for the dynamo benchmarks. #92713
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/92713
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 new failure as of commit 1f456bb.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Example output:

```
$ python benchmarks/dynamo/torchbench.py --quiet --performance --backend inductor --float16 --batch-size-file $(realpath benchmarks/dynamo/torchbench_models_list.txt) --filter 'alexnet|vgg16' --progress --diff viable/strict
Running model 1/2
batch size: 1024
cuda eval  alexnet   dynamo_bench_diff_branch  1.251x p=0.00
cuda eval  alexnet   viable/strict             1.251x p=0.00
Running model 2/2
batch size: 128
cuda eval  vgg16     dynamo_bench_diff_branch  1.344x p=0.00
cuda eval  vgg16     viable/strict             1.342x p=0.00

Summary for tag=dynamo_bench_diff_branch:
  speedup gmean=1.30x mean=1.30x
  abs_latency gmean=24.09x mean=25.26x
  compilation_latency mean=2.0 seconds
  compression_ratio mean=0.9x

Summary for tag=viable/strict:
  speedup gmean=1.30x mean=1.30x
  abs_latency gmean=24.11x mean=25.29x
  compilation_latency mean=0.5 seconds
  compression_ratio mean=1.0x
```
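For readers comparing the two summaries, here is a minimal sketch, using nothing beyond the numbers printed above, of how the per-tag `speedup gmean/mean` lines follow from the individual model speedups. The helper below is illustrative only and is not the benchmark harness's own code.

```python
import math

# Per-model speedups copied from the example run above, keyed by tag (branch label).
speedups = {
    "dynamo_bench_diff_branch": [1.251, 1.344],
    "viable/strict": [1.251, 1.342],
}

def gmean(values):
    """Geometric mean: exp of the average log, i.e. the n-th root of the product."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

for tag, values in speedups.items():
    # Both tags work out to roughly 1.30x, matching the summary lines above.
    print(f"Summary for tag={tag}: speedup gmean={gmean(values):.2f}x "
          f"mean={sum(values) / len(values):.2f}x")
```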
@pytorchbot merge -f 'Failures are unrelated'

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
- `--diff_main` renamed to `--diff-branch BRANCH`, and it now works again.
- The summary table splits results per branch.
- CSV output now has a column with the branch name when run in this mode (see the sketch after this list).
- Added a `--progress` flag so you can track how many models are going to be run.
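As a usage note, here is a hypothetical sketch of how the per-branch CSV could be consumed after a `--diff-branch` run. The column names in the sample (`dev`, `name`, `branch`, `speedup`) are assumptions made for illustration, not the scripts' actual headers; the speedup values are copied from the example run above.

```python
import csv
import io
from collections import defaultdict

# Stand-in for a CSV produced by a --diff-branch run: one row per model per branch.
# Header names here are illustrative assumptions, not the benchmark's real output schema.
sample_csv = """\
dev,name,branch,speedup
cuda,alexnet,dynamo_bench_diff_branch,1.251
cuda,alexnet,viable/strict,1.251
cuda,vgg16,dynamo_bench_diff_branch,1.344
cuda,vgg16,viable/strict,1.342
"""

# Group speedups by branch, then by model name.
per_branch = defaultdict(dict)
for row in csv.DictReader(io.StringIO(sample_csv)):
    per_branch[row["branch"]][row["name"]] = float(row["speedup"])

# Print a side-by-side comparison of the branches for each model.
branches = sorted(per_branch)
for name in sorted({n for d in per_branch.values() for n in d}):
    cols = "  ".join(f"{b}={per_branch[b].get(name, float('nan')):.3f}x" for b in branches)
    print(f"{name:10s} {cols}")
```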
Example output: see the benchmark run posted in the comment above.
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire