
Releases: mlcommons/ck

V1.2.1

26 May 17:29

V1.2.0: Major update: transparent indexing of all artifacts to speed up search and scripts by roughly 10-50x.

Indexing is currently OFF by default pending further testing.
You can turn it on by setting the environment variable CM_INDEX to "yes", "on", or "true".
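For example, in a POSIX shell (a minimal sketch; the search command and tags below are illustrative, not part of this release note):

```bash
# Enable the experimental artifact index (any of yes/on/true is accepted)
export CM_INDEX=yes

# Subsequent artifact lookups should now go through the index
cm find script --tags=detect,os
```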

V1.2.1: Fixed a bug in indexing that returned duplicate artifacts when wildcards were used.

MLCommons CM v1.1.6

16 May 11:05

MLCommons CM aka CK2 v1.1.3

07 Dec 14:44

Stable release of the MLCommons CM automation meta-framework from the MLCommons taskforce on education and reproducibility:

  • improved removal of CM entries on Windows
  • fixed #574
  • improved detection of CM entries with "." in their names
  • added the --yaml option to "cm add" to save meta descriptions in YAML (see the sketch after this list)
  • added --save_to_json to save command output to JSON (useful for web services)
  • extended "cm info {automation} {artifact}" (copy to clipboard)
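A hedged sketch of the new options (the artifact names, the output path, and the exact form of --save_to_json are illustrative and may differ from the actual CLI):

```bash
# Store the new entry's meta description in YAML instead of JSON
cm add script my-script --yaml

# Save the command output to a JSON file, e.g. for a web service to consume
cm info script my-script --save_to_json=info.json
```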

Stable release of MLCommons CM v1.1.1

15 Nov 14:48
f9abc76

Stable release of MLCommons CM v1.1.0

15 Nov 14:44
fe95527

cm-v1.0.5

09 Nov 09:02
d5d1773

Stable release from the MLCommons taskforce on education and reproducibility to test the MLPerf inference benchmark automation for the Student Cluster Competition at SC'22.

MLCommons CM v1.0.0 - the next generation of the MLCommons Collective Knowledge framework

13 Sep 12:46

This is the stable release of the MLCommons Collective Mind framework v1.0.1 with reusable and portable MLOps components. It is the next generation of the MLCommons Collective Knowledge framework, developed to modularize AI/ML systems and automate their benchmarking, optimization, and design-space exploration based on the mature MLPerf methodology.

After donating the CK framework to MLCommons, we have been developing this portable workflow automation technology as a community effort within the open education workgroup to modularize MLPerf and make it easier to plug in real-world tasks, models, data sets, software, and hardware from the cloud to the edge.

We are very glad that more than 80% of all performance results and more than 95% of all power results in the latest MLPerf inference round were automated with MLCommons CK v2.6.1, thanks to submissions from Qualcomm, Krai, Dell, HPE, and Lenovo!

We invite you to join our public workgroup to continue developing this portable workflow framework and reusable automation for MLOps and DevOps as a community effort to:

  • develop an open-source educational toolkit to make it easier to plug any real-world ML & AI tasks, models, data sets, software and hardware into the MLPerf benchmarking infrastructure;
  • automate design space exploration of diverse ML/SW/HW stacks to trade off performance, accuracy, energy, size and costs;
  • help end-users reproduce MLPerf results and deploy the most suitable ML/SW/HW stacks in production;
  • support collaborative and reproducible research.

Copyright MLCommons 2022


MLCommons CM toolkit v0.7.24 - the first stable release to modularize and automate MLPerf inference v2.1

04 Sep 14:47

A fix to support Python 3.9+

05 Jan 09:40

This release includes a fix for issue #184.

Stable release of MLCommons CK v2.6.0

03 Jan 21:59

This is a stable release of the MLCommons CK framework with a few minor fixes to automate MLPerf inference benchmark v2.0+ submissions.