Universal, customizable and multi-objective software and hardware autotuning

This is a stable repository for universal, customizable, multi-dimensional, multi-objective SW/HW autotuning with a unified JSON API across Linux, Android, macOS and Windows-based machines, using the Collective Knowledge Framework.

Please check out the examples in this demo directory and the notes about CK portable and customizable workflows.

These reusable and customizable modules are now used in various common experimental scenarios, including universal, customizable, multi-dimensional, multi-objective DNN crowd-benchmarking and compiler crowd-tuning.

See the continuously aggregated public results and unexpected behavior in the CK live repository!

Also check out our related Android apps, which let you participate in experiment crowdsourcing using spare Android mobile phones, tablets and other devices.

Further details are available on the CK wiki, the open research challenges wiki, and via the reproducible, CK-powered AI/SW/HW co-design competitions at ACM/IEEE conferences.

Prerequisites

  • Collective Knowledge Framework (see the installation instructions below)

Description

During many years of research on machine-learning-based autotuning, we spent more time on data management than on innovation. In the end, we decided to provide a complete solution in CK, where our plugin-based autotuning tools are combined with our repository and Python- or R-based machine learning plugins.

We are gradually moving, simplifying and extending the autotuning functionality from Collective Mind to the new CK format. Since design and optimization spaces are very large, we are trying to make their exploration practical and scalable by combining autotuning, crowdsourcing, predictive analytics and run-time adaptation.

Modules from this repository will be used to unify the following (see the usage sketch after this list):

  • program compilation and execution (with multiple data sets)
  • benchmarking
  • statistical analysis
  • plugin-based autotuning
  • automatic performance modeling
  • static and dynamic feature extraction
  • machine learning to predict optimizations and run-time adaptation
  • reproducibility of experimental results
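
For example, compiling and running one of the shared CK benchmarks looks roughly as follows (a minimal sketch: it assumes the shared ctuning-programs repository has been pulled and uses its cbench-automotive-susan benchmark; the compile and run actions are implemented by the program module from this repository):

 $ ck pull repo:ctuning-programs
 $ ck compile program:cbench-automotive-susan --speed
 $ ck run program:cbench-automotive-susan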

Authors

License

  • BSD, 3-clause

Installation

ck pull repo:ck-autotuning
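
If the CK framework itself is not installed yet, it can first be obtained from PyPI (a minimal sketch, assuming Python and pip are available):

 $ sudo pip install ck

Pulling this repository should then also fetch its dependencies (such as the ck-env repository) automatically.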

Usage

Please refer to the CK online guides, including CK portable workflows and the autotuning example.
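
The same actions can also be invoked programmatically via the unified JSON API of the CK Python kernel. Below is a minimal sketch (it assumes the ctuning-programs repository has been pulled so that the cbench-automotive-susan entry exists; dictionary keys mirror the CLI flags):

 import ck.kernel as ck

 # Invoke the 'compile' action of the 'program' module (provided by this repository).
 r = ck.access({'action': 'compile',
                'module_uoa': 'program',
                'data_uoa': 'cbench-automotive-susan',
                'speed': 'yes'})
 if r['return'] > 0:
     ck.err(r)  # print the error message and exit

 # Then run the compiled program via the 'run' action.
 r = ck.access({'action': 'run',
                'module_uoa': 'program',
                'data_uoa': 'cbench-automotive-susan'})
 if r['return'] > 0:
     ck.err(r)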

Troubleshooting

  • Issues with GLIBCXX_3.4.20/3.4.21 when using LLVM installed via CK: these sometimes occur on earlier Ubuntu versions (14.04) on ARM/x86 and can be fixed by upgrading to a later Ubuntu release, or sometimes by:
 $ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
 $ sudo apt-get update
 $ sudo apt-get upgrade
 $ sudo apt-get dist-upgrade
  • Issues with libncursesw.so.6 (not found) on some older machines: these can be fixed by compiling and installing lib-ncurses with support for wide characters, which can be done automatically via CK:
 $ ck install package:lib-ncurses-6.0-root

Publications

The concepts have been described in the following publications:

@article{fursin:hal-01054763,
    hal_id = {hal-01054763},
    url = {http://hal.inria.fr/hal-01054763},
    title = {{Collective Mind}: Towards practical and collaborative auto-tuning},
    author = {Fursin, Grigori and Miceli, Renato and Lokhmotov, Anton and Gerndt, Michael and Baboulin, Marc and Malony, Allen D. and Chamski, Zbigniew and Novillo, Diego and Del Vento, Davide},
    abstract = {{Empirical auto-tuning and machine learning techniques have been showing high potential to improve execution time, power consumption, code size, reliability and other important metrics of various applications for more than two decades. However, they are still far from widespread production use due to lack of native support for auto-tuning in an ever changing and complex software and hardware stack, large and multi-dimensional optimization spaces, excessively long exploration times, and lack of unified mechanisms for preserving and sharing of optimization knowledge and research material. We present a possible collaborative approach to solve above problems using Collective Mind knowledge management system. In contrast with previous cTuning framework, this modular infrastructure allows to preserve and share through the Internet the whole auto-tuning setups with all related artifacts and their software and hardware dependencies besides just performance data. It also allows to gradually structure, systematize and describe all available research material including tools, benchmarks, data sets, search strategies and machine learning models. Researchers can take advantage of shared components and data with extensible meta-description to quickly and collaboratively validate and improve existing auto-tuning and benchmarking techniques or prototype new ones. The community can now gradually learn and improve complex behavior of all existing computer systems while exposing behavior anomalies or model mispredictions to an interdisciplinary community in a reproducible way for further analysis. We present several practical, collaborative and model-driven auto-tuning scenarios. We also decided to release all material at http://c-mind.org/repo to set up an example for a collaborative and reproducible research as well as our new publication model in computer engineering where experimental results are continuously shared and validated by the community.}},
    keywords = {High performance computing; systematic auto-tuning; systematic benchmarking; big data driven optimization; modeling of computer behavior; performance prediction; predictive analytics; feature selection; collaborative knowledge management; NoSQL repository; code and data sharing; specification sharing; collaborative experimentation; machine learning; data mining; multi-objective optimization; model driven optimization; agile development; plugin-based auto-tuning; performance tracking buildbot; performance regression buildbot; performance tuning buildbot; open access publication model; collective intelligence; reproducible research},
    language = {English},
    affiliation = {POSTALE - INRIA Saclay - Ile de France , cTuning foundation , University of Rennes 1 , ICHEC , ARM [Cambridge] , Technical University of Munich - TUM , Computer Science Department [Oregon] , Infrasoft IT Solutions , Google Inc , National Center for Atmospheric Research - NCAR},
    booktitle = {{Automatic Application Tuning for HPC Architectures}},
    publisher = {IOS Press},
    pages = {309-329},
    journal = {Scientific Programming},
    volume = {22},
    number = {4},
    audience = {international},
    doi = {10.3233/SPR-140396},
    year = {2014},
    month = Jul,
    pdf = {http://hal.inria.fr/hal-01054763/PDF/paper.pdf},
}

@inproceedings{ck-date16,
    title = {{Collective Knowledge}: towards {R\&D} sustainability},
    author = {Fursin, Grigori and Lokhmotov, Anton and Plowman, Ed},
    booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'16)},
    year = {2016},
    month = {March},
    url = {https://www.researchgate.net/publication/304010295_Collective_Knowledge_Towards_RD_Sustainability}
}

@inproceedings{Fur2009,
  author =    {Grigori Fursin},
  title =     {{Collective Tuning Initiative}: automating and accelerating development and optimization of computing systems},
  booktitle = {Proceedings of the GCC Developers' Summit},
  year =      {2009},
  month =     {June},
  location =  {Montreal, Canada},
  keys =      {http://www.gccsummit.org/2009},
  url =       {https://scholar.google.com/citations?view_op=view_citation&hl=en&user=IwcnpkwAAAAJ&cstart=20&citation_for_view=IwcnpkwAAAAJ:8k81kl-MbHgC}
}

Feedback

If you have problems, questions or suggestions, do not hesitate to get in touch via the public CK mailing lists.
