Crowdsourcing optimization



News

See the motivation behind our approach to crowdsourcing benchmarking, optimization and co-design of the SW/HW stack for emerging workloads.

Open repositories with results

Details of participating platforms

You may reuse and extend the JSON meta-descriptions of the participating platforms via this CK repository on GitHub!
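
As an illustration only, here is a hedged sketch of how the shared platform descriptions could be reused from the CK command line; the repository name ck-crowdtuning-platforms, the platform.cpu module and the entry placeholder are assumptions, so please check the linked repository for the exact names:

$ ck pull repo:ck-crowdtuning-platforms    # assumed name of the CK repository with shared platform meta
$ ck list platform.cpu                     # list shared CPU descriptions (assumed CK module name)
$ ck load platform.cpu:{entry name}        # print the JSON meta of a chosen entry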

How to participate

Using open ReQuEST tournaments

See the ReQuEST website for more details.

Using Android-based mobile devices

If you have an Android-based mobile device (mobile phone, tablet, etc.), you can participate in collaborative optimization and machine learning using the following apps from Google Play:

You can see all public results from various collaborative CK-powered experimental scenarios here.

Though not obligatory, we suggest connecting your device to a power supply to avoid frequency changes. We also strongly suggest using an unlimited Internet plan, since the app downloads various randomly optimized workloads that may be several megabytes in size.

Using Laptops, Desktops, Servers and Data Centers

If you use Linux, Windows or macOS, you can participate in basic experiment crowdsourcing (GCC/LLVM compiler flag crowd-tuning) in a few simple steps in your OS shell (a combined command sketch follows the list):

  1. Check that you have Python >= 2.7 and Git installed.
  2. Install CK globally via pip (requires sudo on Linux): (sudo) pip install ck
  3. Alternatively, you can install CK locally without root access as described here.
  4. Pull all repositories needed for crowd-tuning (one example of collaborative program optimization and machine learning): $ ck pull repo:ck-crowdtuning
  5. Start interactive experiment crowdsourcing: $ ck crowdsource experiment
  6. Start non-interactive crowd-tuning for LLVM compilers: $ ck crowdtune program --quiet --llvm
  7. Start non-interactive crowd-tuning for GCC compilers: $ ck crowdtune program --quiet --gcc
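
For convenience, here is a minimal sketch combining the steps above on a Linux machine, assuming Python >= 2.7, pip and Git are already installed (drop sudo and use the local installation if you do not have root access):

$ sudo pip install ck                    # install CK globally (or install it locally instead)
$ ck pull repo:ck-crowdtuning            # pull the crowd-tuning repositories
$ ck crowdsource experiment              # interactive experiment crowdsourcing
$ ck crowdtune program --quiet --llvm    # non-interactive crowd-tuning for LLVM
$ ck crowdtune program --quiet --gcc     # non-interactive crowd-tuning for GCC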

You can also participate in more complex collaborative experiments such as Deep Learning Engine optimization across diverse models and hardware. See installation and usage details here.
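
Purely as an illustration, a hedged sketch of what joining such a deep learning scenario might look like; the repository URL and the crowdbench command are assumptions based on the ck-caffe project and may have changed, so please follow the installation and usage details linked above for the authoritative steps:

$ ck pull repo --url=https://github.com/dividiti/ck-caffe   # assumed location of the DNN crowd-benchmarking scenario
$ ck crowdbench caffe --env.BATCH_SIZE=2                    # assumed command to crowd-benchmark Caffe across shared models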

You can see public results for compiler crowd-tuning shared by volunteers here:

These public results support our collaborative research on machine-learning based auto-tuning (1, 2).

However, if you do not want to share results, you can also autotune any shared program locally while still reproducing results, plotting graphs and so on, simply via:

$ ck list program
$ ck autotune program:{above program name}
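
For example, a minimal sketch of local autotuning, assuming the shared benchmark cbench-automotive-susan from the ctuning-programs repository is available (both names are illustrative; pick any program returned by ck list program):

$ ck pull repo:ctuning-programs                  # assumed repository with shared benchmarks
$ ck list program                                # pick any shared program from this list
$ ck autotune program:cbench-automotive-susan    # autotune it locally without sharing results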

Don't hesitate to provide feedback or ask questions using our public CK mailing list!

Extra crowd-tuning and crowd-benchmarking scenarios are available in third-party repositories. Please check their README files to find out how to participate:

Plans

Our collaborative approach opens up many interesting R&D opportunities. Please read the ReQuEST report and the machine-learning and crowd-tuning report to learn more about our future R&D plans, and do not hesitate to get in touch if you are interested in joining our effort!

Sponsorships

This is a community effort based on voluntary participation to help researchers collaboratively solve complex problems. We would like to thank dividiti (UK) and the cTuning foundation (France) for providing prizes to the most active users (see CGO'17, PPoPP'17, CGO'16, PPoPP'16, ADAPT'16). If you are interested in sponsoring this activity, please get in touch!

Related publications (long-term vision)

See all related publications.

Previous work

Collective Mind (2012-2014)

As a proof of concept, we crowdsourced compiler flag tuning, predictive modeling and detection of unexpected behavior in the earlier Collective Mind framework, using a small client for Android mobile phones (Collective Mind Node, available in the Google Play Store), as described in these papers: 1, 2.

Collective Tuning (2006-2010)

We started developing the cTuning framework and open repository in 2006 to crowdsource experiments and share results in a reproducible way, in support of the MILEPOST project (machine-learning-based multi-objective autotuning). This first practical framework exposed many issues in recording, sharing, reusing and reproducing computer systems' experiments, and became the basis for all our further developments, including Collective Knowledge! It also motivated us to initiate artifact evaluation at computer systems conferences. See GCC Summit'09 and IJPP'11.

Questions and comments

You are welcome to get in touch with the CK community if you have questions or comments!



        