A Collective Knowledge extension that lets users implement customizable, portable, multi-dimensional and multi-objective SW/HW autotuning workflows using the Collective Knowledge Framework. It serves as a universal engine for collaborative AI crowd-benchmarking and crowd-tuning across different compilers, libraries, run-time systems and platforms.


Universal, customizable and multi-objective software and hardware autotuning

This is a stable repository for universal, customizable, multi-dimensional, multi-objective SW/HW autotuning with a JSON API across Linux, Android, MacOS and Windows-based machines, using the Collective Knowledge Framework.
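The JSON API convention can be illustrated with a minimal, hypothetical sketch: every call takes a dictionary and returns a dictionary whose `return` key is 0 on success (non-zero plus an `error` string otherwise). The `autotune` action and the keys used below are illustrative placeholders, not CK's actual schema.

```python
import json

def access(request):
    """Minimal mock of a CK-style JSON API entry point.

    The real framework exposes a single dict-in/dict-out entry point;
    the 'autotune' action and result keys here are made up for
    illustration only.
    """
    action = request.get('action')
    if action == 'autotune':
        # Pretend we explored a few compiler-flag combinations.
        return {'return': 0,
                'best_flags': '-O3',
                'explored_points': 3}
    return {'return': 1, 'error': 'unknown action: %s' % action}

r = access({'action': 'autotune', 'module_uoa': 'program',
            'data_uoa': 'demo-program'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'error'))
print(json.dumps(r, indent=2))
```

The dict-in/dict-out style makes every module action scriptable from the command line, other tools, or web services without a per-module binding.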


Please check out the examples in the demo directory and the notes about portable and customizable CK workflows.

These reusable and customizable modules are now used in various common experimental scenarios, including universal, customizable, multi-dimensional, multi-objective DNN crowd-benchmarking and compiler crowd-tuning.

See continuously aggregated public results and unexpected behavior in the CK live repository!

Also check out our related Android apps, which let you participate in our experiment crowdsourcing using spare Android phones, tablets and other devices:

Further details are available on the CK wiki, the open research challenges wiki, and the pages of reproducible, CK-powered AI/SW/HW co-design competitions at ACM/IEEE conferences.



During many years of research on machine-learning-based autotuning, we spent more time on data management than on innovation. In the end, we decided to provide a complete solution in CK, where our plugin-based autotuning tools are combined with our repository and Python- or R-based machine learning plugins.
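The idea of combining a repository of past experiments with machine learning plugins can be sketched as follows. This is a hypothetical toy example, not CK's actual data schema: a tiny in-memory "repository" maps static program features to the best compiler flags found by earlier autotuning, and a nearest-neighbour lookup predicts flags for a new program.

```python
# Hypothetical repository of past autotuning results:
# (feature vector, best flags found by autotuning for that program).
# Features here are illustrative, e.g. (loop count, branch density).
repository = [
    ((120.0, 0.8), '-O3 -funroll-loops'),   # loop-heavy kernel
    ((15.0, 0.1), '-Os'),                   # small, branchy code
    ((60.0, 0.5), '-O2'),
]

def predict_flags(features):
    """Return the flags of the closest past experiment (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, flags = min(repository, key=lambda entry: dist(entry[0], features))
    return flags

# A new program whose features resemble the loop-heavy kernel:
print(predict_flags((100.0, 0.7)))  # -> -O3 -funroll-loops
```

In practice the repository holds many more experiments and features, and the simple nearest-neighbour lookup is replaced by trained predictive models, but the data flow is the same: collected results drive the predictions.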

We are gradually moving, simplifying and extending autotuning from Collective Mind into the new CK format! Since design and optimization spaces are very large, we are trying to make their exploration practical and scalable by combining autotuning, crowdsourcing, predictive analytics and run-time adaptation.

Modules from this repository will be used to unify:

  • program compilation and execution (with multiple data sets)
  • benchmarking
  • statistical analysis
  • plugin-based autotuning
  • automatic performance modeling
  • static and dynamic feature extraction
  • machine learning to predict optimizations and run-time adaptation
  • reproducibility of experimental results
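The multi-objective side of the scenarios above can be sketched as: measure several objectives for each explored design point and keep only the Pareto-optimal points. The numbers below are randomly generated stand-ins for real measurements.

```python
import random

def dominates(a, b):
    """True if point a is no worse than b in every objective and strictly
    better in at least one. Objectives here are (time, energy), minimized."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

random.seed(0)
# Hypothetical design points: (time_ms, energy_mj) for random flag choices.
points = [(random.uniform(1, 10), random.uniform(1, 10)) for _ in range(50)]
front = pareto_front(points)
print('%d of %d points are Pareto-optimal' % (len(front), len(points)))
```

An autotuner can then report the whole front instead of a single "best" configuration, letting the user trade execution time against energy (or code size, accuracy, and so on).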



  • BSD 3-clause


 $ ck pull repo:ck-autotuning


Please refer to the CK online guides, including the CK portable workflows and the autotuning example.


  • Issues with GLIBCXX_3.4.20/3.4.21 when using LLVM installed via CK: these sometimes occur on older Ubuntu versions (e.g. 14.04) on ARM/x86. They can be fixed by upgrading to a later version of Ubuntu, or sometimes by:
 $ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
 $ sudo apt-get update
 $ sudo apt-get upgrade
 $ sudo apt-get dist-upgrade
  • Issues with (not found) on some older machines: these can be fixed by compiling and installing lib-ncurses with support for wide characters, which can be done automatically via CK:
 $ ck install package:lib-ncurses-6.0-root


The concepts have been described in the following publications:

@article{fursin:hal-01054763,
    hal_id = {hal-01054763},
    url = {},
    title = {{Collective Mind}: Towards practical and collaborative auto-tuning},
    author = {Fursin, Grigori and Miceli, Renato and Lokhmotov, Anton and Gerndt, Michael and Baboulin, Marc and Malony, Allen D. and Chamski, Zbigniew and Novillo, Diego and Del Vento, Davide},
    abstract = {{Empirical auto-tuning and machine learning techniques have been showing high potential to improve execution time, power consumption, code size, reliability and other important metrics of various applications for more than two decades. However, they are still far from widespread production use due to lack of native support for auto-tuning in an ever changing and complex software and hardware stack, large and multi-dimensional optimization spaces, excessively long exploration times, and lack of unified mechanisms for preserving and sharing of optimization knowledge and research material. We present a possible collaborative approach to solve above problems using Collective Mind knowledge management system. In contrast with previous cTuning framework, this modular infrastructure allows to preserve and share through the Internet the whole auto-tuning setups with all related artifacts and their software and hardware dependencies besides just performance data. It also allows to gradually structure, systematize and describe all available research material including tools, benchmarks, data sets, search strategies and machine learning models. Researchers can take advantage of shared components and data with extensible meta-description to quickly and collaboratively validate and improve existing auto-tuning and benchmarking techniques or prototype new ones. The community can now gradually learn and improve complex behavior of all existing computer systems while exposing behavior anomalies or model mispredictions to an interdisciplinary community in a reproducible way for further analysis. We present several practical, collaborative and model-driven auto-tuning scenarios. We also decided to release all material at to set up an example for a collaborative and reproducible research as well as our new publication model in computer engineering where experimental results are continuously shared and validated by the community.}},
    keywords = {High performance computing; systematic auto-tuning; systematic benchmarking; big data driven optimization; modeling of computer behavior; performance prediction; predictive analytics; feature selection; collaborative knowledge management; NoSQL repository; code and data sharing; specification sharing; collaborative experimentation; machine learning; data mining; multi-objective optimization; model driven optimization; agile development; plugin-based auto-tuning; performance tracking buildbot; performance regression buildbot; performance tuning buildbot; open access publication model; collective intelligence; reproducible research},
    language = {English},
    affiliation = {POSTALE - INRIA Saclay - Ile de France , cTuning foundation , University of Rennes 1 , ICHEC , ARM [Cambridge] , Technical University of Munich - TUM , Computer Science Department [Oregon] , Infrasoft IT Solutions , Google Inc , National Center for Atmospheric Research - NCAR},
    booktitle = {{Automatic Application Tuning for HPC Architectures}},
    publisher = {IOS Press},
    pages = {309-329},
    journal = {Scientific Programming},
    volume = {22},
    number = {4},
    audience = {internationale},
    doi = {10.3233/SPR-140396},
    year = {2014},
    month = jul,
    pdf = {}
}

@inproceedings{Fursin2016ck,
    title = {{Collective Knowledge}: towards {R\&D} sustainability},
    author = {Fursin, Grigori and Lokhmotov, Anton and Plowman, Ed},
    booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'16)},
    year = {2016},
    month = {March},
    url = {}
}

@inproceedings{Fursin2009ctuning,
  author =    {Grigori Fursin},
  title =     {{Collective Tuning Initiative}: automating and accelerating development and optimization of computing systems},
  booktitle = {Proceedings of the GCC Developers' Summit},
  year =      {2009},
  month =     {June},
  location =  {Montreal, Canada},
  keys =      {},
  url  =      {}
}


If you have problems, questions or suggestions, do not hesitate to get in touch via our public mailing lists.