[Back to CM documentation]

# Run and customize MLPerf benchmarks using the MLCommons CM automation framework

This documentation explains how to compose, run, customize, and extend MLPerf benchmarks in a unified way across diverse models, datasets, software, and hardware from different vendors using MLCommons Collective Mind (CM) automation recipes.
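
As a rough illustration, the sketch below shows how a CM-based MLPerf run is typically invoked from the command line. The repository name and script tags are taken from public CM examples and may differ from the current documentation, so treat them as assumptions rather than the canonical invocation:

```bash
# Install the CM (Collective Mind) framework from PyPI.
pip install cmind

# Pull an MLCommons repository containing CM automation recipes
# (repository name assumed from public CM examples; check the docs for the current one).
cm pull repo mlcommons@cm4mlops

# Run an MLPerf inference benchmark via a CM script selected by tags.
# The tags below are illustrative; actual values depend on the benchmark,
# model, and hardware you target.
cm run script --tags=run-mlperf,inference --quiet
```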

Note that the MLCommons Task Force on Automation and Reproducibility is preparing a GUI to make it easier to run, customize, reproduce, and compare MLPerf benchmarks, so please stay tuned for more details!

Don't hesitate to get in touch via the public Discord server if you have questions or feedback!