MLCommons

Better ML for everyone

The mission of MLCommons™ is to make machine learning better for everyone. Together with its 50+ founding Members and Affiliates, including startups, leading companies, academics, and non-profits from around the globe, MLCommons will help grow machine learning from a research field into a mature industry through benchmarks, public datasets, and best practices. MLCommons firmly believes in the power of open source and open data: our software projects are generally available under the Apache 2.0 license, and our datasets generally use CC-BY 4.0.

Visit the MLCommons website for more information, or head straight to our Community page if you want to join our Working Groups.

Individuals, companies, and other entities can become members and/or affiliates.

Policies, License and Code of Conduct

Pinned repositories

  1. training Public

    Reference implementations of MLPerf™ training benchmarks

    Python · 1.7k stars · 575 forks

  2. inference Public

    Reference implementations of MLPerf™ inference benchmarks

    Python · 1.4k stars · 568 forks

  3. training_results_v5.0 Public

    This repository contains the results and code for the MLPerf™ Training v5.0 benchmark.

    Python · 4 stars · 3 forks

  4. inference_results_v5.0 Public

    This repository contains the results and code for the MLPerf™ Inference v5.0 benchmark.

    HTML · 7 stars · 9 forks

  5. ailuminate Public

    The AILuminate v1.1 benchmark suite is an AI risk assessment benchmark developed with broad involvement from leading AI companies, academia, and civil society.

    21 stars · 8 forks

  6. modelbench Public

    Run safety benchmarks against AI models and view detailed reports showing how well they performed.

    Python · 97 stars · 24 forks

Repositories

A selection of MLCommons' 111 public repositories:
  • modelplane Public
    Python · 0 stars · Apache-2.0 · 0 forks · 17 issues · 0 PRs · Updated Aug 1, 2025
  • mlperf-automations Public

    This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.

    Python · 7 stars · Apache-2.0 · 16 forks · 76 issues (4 need help) · 4 PRs · Updated Aug 1, 2025
  • inference Public

    Reference implementations of MLPerf™ inference benchmarks

    Python · 1,426 stars · Apache-2.0 · 568 forks · 228 issues · 20 PRs · Updated Aug 1, 2025
  • modelbench Public

    Run safety benchmarks against AI models and view detailed reports showing how well they performed.

    Python · 97 stars · Apache-2.0 · 24 forks · 385 issues · 5 PRs · Updated Aug 1, 2025
  • algorithmic-efficiency Public

    MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.

    Python · 389 stars · Apache-2.0 · 72 forks · 28 issues (4 need help) · 6 PRs · Updated Jul 31, 2025
  • training_policies Public

    Issues related to MLPerf™ training policies, including rules and suggested changes

    Python · 95 stars · Apache-2.0 · 67 forks · 110 issues · 1 PR · Updated Jul 31, 2025
  • logging Public

    MLPerf™ logging library

    Python · 37 stars · Apache-2.0 · 49 forks · 42 issues · 2 PRs · Updated Jul 31, 2025
  • tiny Public

    MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers

    C · 422 stars · Apache-2.0 · 102 forks · 14 issues · 7 PRs · Updated Jul 31, 2025