Research and development challenges



This page requires an update.

Table of Contents

  • Our events with open challenges
  • Introduction to open challenges
  • Who uses our collaborative R&D approach
  • CK R&D challenges as open-source development
  • Unsorted challenges (2007-2014)
  • Archive
  • Questions and comments

Introduction to open challenges

Having an interdisciplinary background in physics, electronics and AI, we have been trying for more than a decade to bring the methodology for systematic, collaborative and reproducible experimentation from the natural sciences to computer engineering (new publication model, cTuning history).

Our idea was to enable continuous sharing of artifacts and results (such as multi-objective optimization of realistic workloads across diverse hardware on a Pareto frontier; see the sketch below) along with publications, so that they could be validated, improved, reused and built upon by the community. This should enable practical open science while letting the community focus on innovation rather than waste time rebuilding experimental setups from numerous articles.
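
For example, a shared result could be a set of measured (execution time, energy) points for different optimization choices, reduced to its Pareto frontier. Below is a minimal, purely illustrative Python sketch of such a filter; it is not code from any of our frameworks, and all numbers are made up.

```python
# Illustrative sketch: keep only Pareto-optimal trade-offs when
# minimizing both execution time and energy consumption.

def pareto_frontier(points):
    """Return the points not dominated by any other point.

    A point q dominates p when q is no worse in both objectives
    and strictly better in at least one."""
    frontier = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            frontier.append(p)
    return frontier

# Hypothetical measurements per optimization: (time in s, energy in J).
measurements = [(1.0, 5.0), (0.8, 6.0), (1.2, 4.5), (1.1, 5.6)]
print(pareto_frontier(measurements))  # (1.1, 5.6) is dominated and dropped
```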

However, we faced numerous problems, including an ever-changing hardware and software stack, the stochastic behavior of computer systems, a lack of large, diverse and representative workloads, and a lack of mechanisms for efficient knowledge sharing and reuse.

Eventually, we decided to develop a common experimental framework and repository to let researchers share all their artifacts and workflows in a unified format, crowdsource empirical experiments, and exchange results with the community.

The first framework and repository, called cTuning, was released in 2008 (cTuning.org, article) and immediately helped enable the world's first machine-learning-based compiler (IBM's press release, project website, paper).

Importantly, having a common infrastructure and a repository of optimization knowledge, combined with statistical analysis and predictive analytics, enabled crowdsourcing of empirical experiments similar to SETI@home, as well as open challenges in computer systems research similar to Kaggle.

Furthermore, the new portable version of the cTuning technology (Collective Knowledge aka CK, CK motivation, ACM ReQuEST tournaments) considerably simplifies open challenges by letting users share their artifacts as reusable Python components with a JSON API; quickly prototype experimental workflows (such as multi-objective autotuning); automate, crowdsource and reproduce experiments; unify predictive analytics (scikit-learn, R, DNN); and enable interactive articles. A minimal sketch of this JSON API follows.
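
Every CK component follows the same convention: an action takes a JSON-compatible dictionary and returns one, with a non-zero 'return' code signalling an error. Here is a minimal sketch using the CK kernel; the particular program name is only an assumed example of a shared workload.

```python
# Minimal sketch of the CK JSON API: every action takes a
# JSON-compatible dict and returns one.
import ck.kernel as ck

r = ck.access({'action': 'load',
               'module_uoa': 'program',
               'data_uoa': 'cbench-automotive-susan'})  # assumed example workload
if r['return'] > 0:
    print('CK error: ' + r['error'])   # non-zero 'return' signals an error
else:
    meta = r['dict']                   # the component's meta description
    print(sorted(meta.keys()))
```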

Thus, researchers can easily reuse components, build upon others' work, quickly prototype new ideas, validate them with the help of the community (see public results), and share back improved artifacts via GitHub!

Finally, our approach has been backed by ACM and helped initiate and improve Artifact Evaluation at leading computer systems conferences including CGO, PPoPP, SuperComputing and SysML.

Feel free to reuse and extend CK components shared by the community!

Who uses our collaborative R&D approach

See real use cases from our partners.

CK R&D challenges as open-source development

Just a few ongoing challenges:

Co-designing efficient SW/HW stack for emerging workloads including deep learning

Enabling AI- and machine-learning-based self-learning, self-tuning and self-adapting systems

Sharing realistic workloads (programs/kernels, data sets and tools) in the CK format

Making real algorithms more efficient and sharing them in the CK format

Improving ARM's Workload Automation

Improving GCC/LLVM high-level compiler flag predictions

  • Public CK repo: http://cknowledge.org/repo, https://github.com/ctuning/reproduce-milepost-project
  • Notes: The community continuously shares top-performing GCC and LLVM optimizations for various shared benchmarks and applications with different data sets at the CK live repo. We would like to use this information to train predictive models which can successfully predict optimizations for new programs based on semantic, data set, hardware and dynamic features. We would also like to use unexpected behavior and mispredictions to improve model accuracy or find missing features (see the sketch after this list).
  • Related info: P1, P2, P3, P4
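
The sketch below shows, in a heavily simplified and hypothetical form, the kind of model we have in mind: a scikit-learn decision tree mapping static program features to the index of the best-found flag combination. All feature values and labels are made up for illustration.

```python
# Illustrative sketch (not CK code): train a simple model to predict
# a promising optimization class from program features, in the spirit
# of MILEPOST-style machine-learning-based compilation.
from sklearn.tree import DecisionTreeClassifier

# Each row: hypothetical static program features, e.g. numbers of
# basic blocks, instructions, branches and memory accesses.
features = [[12,  340,  45,  80],
            [ 3,   50,   5,  10],
            [25, 1200, 160, 300],
            [ 7,  210,  30,  55]]

# Labels: index of the best-found flag combination per program,
# e.g. 0 -> '-O3', 1 -> '-O2 -funroll-loops', 2 -> '-Os'.
best_flags = [0, 2, 1, 0]

model = DecisionTreeClassifier(random_state=0).fit(features, best_flags)

# Predict a flag combination for a previously unseen program;
# mispredictions would be fed back to refine features or the model.
print(model.predict([[10, 400, 50, 90]]))
```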

Participating in GCC/LLVM high-level crowd-tuning

Improving TensorFlow crowd-benchmarking and crowd-learning

Improving Caffe crowd-benchmarking

Speeding up BLAS (CUDA, OpenCL, OpenMP)

Enabling fine-grain auto-tuning support in GCC

Enabling fine-grain auto-tuning support in LLVM

Enabling universal run-time adaptation

Unifying SW/HW bug detection

Enabling P2P knowledge exchange

  • Notes: At the moment, we use a relatively centralized approach to preserving and processing knowledge via the public CK server. However, our long-term vision is to process data first on participating machines and to share only important knowledge or unexpected behavior in a distributed P2P fashion. This would also help us avoid or mitigate the big-data problem. CK was designed with an integrated web service especially for this purpose, but we have not yet had time to implement P2P communication and processing - help would be appreciated! A hypothetical sketch of such local filtering follows this list.
  • Related info: P1, P2, P3, P4
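
One possible shape for such local filtering, sketched below with invented names: each machine keeps a history of its own measurements and shares a new result with peers only when it deviates noticeably from expected behavior.

```python
# Hypothetical sketch of the envisioned P2P filtering (all names are
# made up): process measurements locally and share only results that
# deviate noticeably from what local history already predicts.
import statistics

def share_if_unexpected(history, new_time_s, threshold=3.0):
    """Return True if new_time_s deviates from local history by more
    than `threshold` standard deviations, i.e. is worth sharing."""
    if len(history) < 2:
        return True  # not enough local knowledge yet; share everything
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_time_s != mean
    return abs(new_time_s - mean) / stdev > threshold

# Example: a normally stable kernel suddenly runs much slower here,
# so only that unexpected point would be pushed to peers.
history = [1.00, 1.02, 0.99, 1.01, 1.00]
print(share_if_unexpected(history, 1.01))  # False: expected, keep local
print(share_if_unexpected(history, 2.50))  # True: unexpected, share
```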

Unsorted challenges (2007-2014)

Open challenges based on the previous cTuning technology (cTuning1, Collective Mind); we are gradually converting them to the CK format (help is appreciated!):

  • Improving Interactive Compilation Interface for GCC (fine-grain tuning, function cloning for adaptive applications and program instrumentation): main wiki, function cloning to statically enable run-time adaptation, fine-grain tuning, fine-grain tuning 2, instrumentation, instrumentation C++ test.
  • Improving cTuning prediction web-service and optimization database: wiki
  • Improving collaborative benchmark and public datasets: wiki
  • Enhancing Continuous Collective Compilation Framework: wiki
  • Enhancing cTuningCC - universal wrapper around any compiler to enable machine-learning based optimization: wiki
  • Improving UNIDAPT framework - universal run-time adaptation based on decision trees and several function clones pre-optimized for representative data sets (see the sketch after this list): wiki
  • Connecting JIT to cTuning: wiki
  • Connecting Hardware Simulators to cTuning database: wiki
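
To make the UNIDAPT idea above concrete, here is a hypothetical sketch (all names invented) of run-time adaptation: a trivial decision-tree-like rule dispatches between function clones pre-optimized for different representative data sets.

```python
# Hypothetical sketch of UNIDAPT-style run-time adaptation (all names
# invented): pick one of several function clones, each pre-optimized
# for representative data sets, via a tiny decision-tree-like rule.

def kernel_clone_small(data):
    # Stand-in for a clone compiled (hypothetically) with small-input
    # optimizations such as full loop unrolling.
    return sum(data)

def kernel_clone_large(data):
    # Stand-in for a clone compiled (hypothetically) with large-input
    # optimizations such as tiling or different vectorization flags.
    return sum(data)

def kernel_adaptive(data):
    # A single learned split on input size stands in for a decision
    # tree trained on representative runs.
    if len(data) < 1024:
        return kernel_clone_small(data)
    return kernel_clone_large(data)

print(kernel_adaptive(list(range(100))))  # small input -> small clone
```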

Archive

  • 2012 - Google Summer of Code project: developing cTuning plugins to autotune GCC and enable run-time adaptation in statically compiled programs: wiki. As a result, the first version of a universal, multi-objective and multi-dimensional autotuner and crowd-tuner was developed within the cTuning framework!
  • 2010 - final version of the public MILEPOST GCC (including the ICI/GCC pass manager and plugin framework; function cloning; a semantic feature extractor; a connector to the cTuning predictive-analytics web services): project wiki; reproduced in the CK framework in 2016 (CK GitHub repo)
  • 2009 - Google Summer of Code project: enabling Interactive Compilation Interface in GCC for fine-grain tuning, function cloning for adaptive applications and program instrumentation (link, Wiki1, Wiki2, W3, W4, W5, comparison). As a result, a plugin framework was added to mainline GCC 4.6+!

Questions and comments

You are welcome to get in touch with the CK community if you have questions or comments, or if you would like to sponsor these activities!
